[ { "msg_contents": "I have updated /HISTORY for 7.3beta2. Looking at the open items list, I\nthink we are ready for beta2 now.\n\n---------------------------------------------------------------------------\n\n P O S T G R E S Q L\n\n 7 . 3 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nSchema handling - ready? interfaces? client apps?\nDrop column handling - ready for all clients, apps?\nFix BeOS, QNX4 ports\nFix AIX large file compile failure of 2002-09-11 (Andreas)\nGet bison upgrade on postgresql.org for ecpg only (Marc)\nFix vacuum btree bug (Tom)\nFix client apps for autocommit = off\nFix clusterdb to be schema-aware\nChange log_min_error_statement to be off by default (Gavin)\nFix return tuple counts/oid/tag for rules\nLoading 7.2 pg_dumps\n\tfunctions no longer public executable\n\tlanguages no longer public usable\nAdd schema dump option to pg_dump\nAdd param for length check for char()/varchar()\nFix $libdir in loaded functions?\nMake SET not start a transaction with autocommit off, document it\nAdd GRANT EXECUTE to all /contrib functions\n\nOn Going\n--------\nSecurity audit\n\nDocumentation Changes\n---------------------\nDocument need to add permissions to loaded functions and languages\nMove documation to gborg for moved projects\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 23 Sep 2002 01:26:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "HISTORY updated for 7.3beta2" }, { "msg_contents": "En Mon, 23 Sep 2002 01:26:25 -0400 (EDT)\nBruce Momjian <pgman@candle.pha.pa.us> escribi�:\n\n> 7 . 
3 O P E N I T E M S\n\n> Fix clusterdb to be schema-aware\n\nPlease apply the patch attached and this should be solved.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"I think my standards have lowered enough that now I think 'good design'\nis when the page doesn't irritate the living fuck out of me.\" (JWZ)", "msg_date": "Mon, 23 Sep 2002 04:24:47 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] HISTORY updated for 7.3beta2" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> 7 . 3 O P E N I T E M S\n>\n> Loading 7.2 pg_dumps\n> \tfunctions no longer public executable\n> \tlanguages no longer public usable\n\n\nAlthough it's reasonably easy to fix no-privileges problems for\nfunctions after you load a dump, it occurs to me that the same does not\nhold for PL languages. If a newly created language doesn't have USAGE\navailable to public, then any function definitions in your dump are\ngoing to fail, if they belong to non-superusers.\n\nI am thinking that the better course might be to have newly created\nlanguages default to USAGE PUBLIC, at least for a release or two.\n\nWe might also consider letting newly created functions default to\nEXECUTE PUBLIC. I think this is less essential, but a case could still\nbe made for it on backwards-compatibility grounds.\n\nIf you don't want to hard-wire that behavior, what about a GUC variable\nthat could be turned on while loading old dumps?\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Sep 2002 11:13:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Default privileges for 7.3" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > 7 . 
3 O P E N I T E M S\n> >\n> > Loading 7.2 pg_dumps\n> > \tfunctions no longer public executable\n> > \tlanguages no longer public usable\n> \n> \n> Although it's reasonably easy to fix no-privileges problems for\n> functions after you load a dump, it occurs to me that the same does not\n> hold for PL languages. If a newly created language doesn't have USAGE\n> available to public, then any function definitions in your dump are\n> going to fail, if they belong to non-superusers.\n> \n> I am thinking that the better course might be to have newly created\n> languages default to USAGE PUBLIC, at least for a release or two.\n> \n> We might also consider letting newly created functions default to\n> EXECUTE PUBLIC. I think this is less essential, but a case could still\n> be made for it on backwards-compatibility grounds.\n\nYes, I am wondering if we should go one release with them open to give\npeople a chance to adjust, but actually, I don't understand how we could\ndo that effectively. Do we tell them to add GRANTs in 7.3 and tighten\nit down in 7.4, and if we do that, will the GRANTs be recorded in\npg_dump properly?\n\nTo me a table contains data, while a function usually just causes an\naction, and I don't see why an action has to be restricted (same with\nlanguage). I realize we have some actions that must be limited, like\nclearing the stat collector, but the majority seem benign. Does the\nstandard require us to restrict their executability?\n\n> If you don't want to hard-wire that behavior, what about a GUC variable\n> that could be turned on while loading old dumps?\n\nI think GUC is going to be confusing. Let's see if we can decide on a\ngood course first.\n\nWell, we better decide something before we do beta2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 23 Sep 2002 11:28:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Default privileges for 7.3" }, { "msg_contents": "Tom Lane writes:\n\n> I am thinking that the better course might be to have newly created\n> languages default to USAGE PUBLIC, at least for a release or two.\n\nThat seems reasonable. Since everyone is supposed to use createlang,\nthat's the effective default anyway.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 23 Sep 2002 23:15:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Default privileges for 7.3" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> I am thinking that the better course might be to have newly created\n>> languages default to USAGE PUBLIC, at least for a release or two.\n\n> That seems reasonable. Since everyone is supposed to use createlang,\n> that's the effective default anyway.\n\nGood point. I shall make it happen.\n\nHow do you feel about allowing functions to default to EXECUTE PUBLIC?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Sep 2002 17:42:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for 7.3 " }, { "msg_contents": "Hello!\n\nOn Mon, 23 Sep 2002, Tom Lane wrote:\n\n> I am thinking that the better course might be to have newly created\n> languages default to USAGE PUBLIC, at least for a release or two.\n>\n> We might also consider letting newly created functions default to\n> EXECUTE PUBLIC. I think this is less essential, but a case could still\n> be made for it on backwards-compatibility grounds.\n\nHm...M$ had proven this way is false. 
See BUGTRAQ about sp_* stories every\nquarter.;)\n\n> If you don't want to hard-wire that behavior, what about a GUC variable\n> that could be turned on while loading old dumps?\n> Comments?\nThat seems to be more reasonable.\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n", "msg_date": "Tue, 24 Sep 2002 22:10:22 +0700 (NOVST)", "msg_from": "Yury Bokhoncovich <byg@center-f1.ru>", "msg_from_op": false, "msg_subject": "Re: Default privileges for 7.3" }, { "msg_contents": "Tom Lane writes:\n\n> How do you feel about allowing functions to default to EXECUTE PUBLIC?\n\nLess excited, but if it gets us to the point of no known problems during\nupgrade we might as well do it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 24 Sep 2002 20:07:27 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Default privileges for 7.3 " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> How do you feel about allowing functions to default to EXECUTE PUBLIC?\n\n> Less excited, but if it gets us to the point of no known problems during\n> upgrade we might as well do it.\n\nOkay, I've changed the defaults for both languages and functions; if we\nthink of something better we can change it again ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Sep 2002 19:15:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default privileges for 7.3 " } ]
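The remedy this thread converges on for pre-7.3 dumps amounts to issuing ordinary GRANT statements after the restore. A minimal sketch of generating them, using the GRANT EXECUTE ON FUNCTION and GRANT USAGE ON LANGUAGE forms discussed above; the function signature and language name are placeholders, not taken from the thread:

```python
# Generate the GRANT statements that restore the old wide-open defaults
# after loading a 7.2 dump into 7.3. The inputs here are placeholders;
# in practice they would come from the dump or from pg_proc/pg_language.

def public_grants(functions, languages):
    """Build GRANT EXECUTE / GRANT USAGE statements for PUBLIC."""
    stmts = ["GRANT EXECUTE ON FUNCTION %s TO PUBLIC;" % f for f in functions]
    stmts += ["GRANT USAGE ON LANGUAGE %s TO PUBLIC;" % l for l in languages]
    return stmts

print("\n".join(public_grants(["lower_all(text)"], ["plpgsql"])))
```

Since Tom's final change above makes both default to PUBLIC anyway, this is only needed when an administrator later tightens and then wants to reopen privileges.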
[ { "msg_contents": "Hi All,\n\nWhen I try 2 or 3 consecutive select count(*) on my database I've the \nproblem shown below.\nHere is a psql session log:\n\n[root@foradada root]# psql -d database\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ndatabase=# select version();\n version\n-------------------------------------------------------------\n PostgreSQL 7.2.2 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\ndatabase=# select count(*) from detail;\n count\n--------\n 181661\n(1 row)\n\ndatabase=# select count(*) from detail;\n count\n--------\n 181660\n(1 row)\n\ndatabase=# select count(*) from detail;\nFATAL 2: open of /var/lib/pgsql/data/pg_clog/0303 failed: No such file or \ndirecto\nry\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Succeeded.\ndatabase=#\n\nRoberto Fichera.\n\n", "msg_date": "Mon, 23 Sep 2002 12:27:47 +0200", "msg_from": "Roberto Fichera <kernel@tekno-soft.it>", "msg_from_op": true, "msg_subject": "Problem on PG7.2.2" }, { "msg_contents": "Roberto Fichera <kernel@tekno-soft.it> writes:\n> database=# select count(*) from detail;\n> count\n> --------\n> 181661\n> (1 row)\n\n> database=# select count(*) from detail;\n> count\n> --------\n> 181660\n> (1 row)\n\n> database=# select count(*) from detail;\n> FATAL 2: open of /var/lib/pgsql/data/pg_clog/0303 failed: No such file or \n> directory\n\n[ blinks... ] That's with no one else modifying the table meanwhile?\n\nI think you've got *serious* hardware problems. 
Hard to tell if it's\ndisk or memory, but get out those diagnostic programs now ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Sep 2002 10:40:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem on PG7.2.2 " }, { "msg_contents": "At 10.40 23/09/02 -0400, Tom Lane wrote:\n\n>Roberto Fichera <kernel@tekno-soft.it> writes:\n> > database=# select count(*) from detail;\n> > count\n> > --------\n> > 181661\n> > (1 row)\n>\n> > database=# select count(*) from detail;\n> > count\n> > --------\n> > 181660\n> > (1 row)\n>\n> > database=# select count(*) from detail;\n> > FATAL 2: open of /var/lib/pgsql/data/pg_clog/0303 failed: No such file or\n> > directory\n>\n>[ blinks... ] That's with no one else modifying the table meanwhile?\n\nThis table is used to hold all the logs from our Radius servers,\nso we have only INSERT from the radiusd server.\n\n\n>I think you've got *serious* hardware problems. Hard to tell if it's\n>disk or memory, but get out those diagnostic programs now ...\n\nWhat diagnostic programs do you suggest ?\n\n\nRoberto Fichera.\n\n", "msg_date": "Mon, 23 Sep 2002 17:08:45 +0200", "msg_from": "Roberto Fichera <kernel@tekno-soft.it>", "msg_from_op": true, "msg_subject": "Re: Problem on PG7.2.2 " }, { "msg_contents": "At 10.40 23/09/02 -0400, you wrote:\n\n>Roberto Fichera <kernel@tekno-soft.it> writes:\n> > database=# select count(*) from detail;\n> > count\n> > --------\n> > 181661\n> > (1 row)\n>\n> > database=# select count(*) from detail;\n> > count\n> > --------\n> > 181660\n> > (1 row)\n>\n> > database=# select count(*) from detail;\n> > FATAL 2: open of /var/lib/pgsql/data/pg_clog/0303 failed: No such file or\n> > directory\n>\n>[ blinks... ] That's with no one else modifying the table meanwhile?\n>\n>I think you've got *serious* hardware problems. 
Hard to tell if it's\n>disk or memory, but get out those diagnostic programs now ...\n\nThis table is used to hold all the logs for our Radius authentication & \nstatistics,\nso we have only INSERT from the radiusd server.\nI had no problem at all. No crash no panic, nothing.\n\ndatabase=# \\d ts\n Table \"ts\"\n Column | Type | Modifiers\n--------+---------------+-----------\n name | character(15) |\n ip_int | cidr | not null\nPrimary key: ts_pkey\n\ndatabase=# DROP TABLE TS;\nERROR: cannot find attribute 1 of relation ts_pkey\ndatabase=# DROP INDEX TS_PKEY;\nERROR: cannot find attribute 1 of relation ts_pkey\ndatabase=#\n\nand again\n\n[root@foradada pgsql]# pg_dump -d -f detail.sql -t detail database\npg_dump: dumpClasses(): SQL command failed\npg_dump: Error message from server: FATAL 2: open of \n/var/lib/pgsql/data/pg_clog/0202 failed: No such file or directory\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\npg_dump: The command was: FETCH 100 FROM _pg_dump_cursor\n[root@foradada pgsql]# ls -al\ntotale 10464\ndrwx------ 4 postgres postgres 4096 set 23 17:44 .\ndrwxr-xr-x 14 root root 4096 set 23 13:06 ..\ndrwx------ 2 postgres postgres 4096 ago 26 20:13 backups\n-rw------- 1 postgres postgres 5519 set 5 00:53 .bash_history\n-rw-r--r-- 1 postgres postgres 107 ago 26 20:13 .bash_profile\ndrwx------ 6 postgres postgres 4096 set 23 18:18 data\n-rw-r--r-- 1 root root 6221242 set 24 12:04 detail.sql\n-rw-r--r-- 1 root root 157 giu 25 14:43 initdb.i18n\n-rw------- 1 postgres postgres 10088 set 5 00:14 .psql_history\n[root@foradada pgsql]#\n\n\nRoberto Fichera.\n\n", "msg_date": "Tue, 24 Sep 2002 12:07:32 +0200", "msg_from": "Roberto Fichera <kernel@tekno-soft.it>", "msg_from_op": true, "msg_subject": "Re: Problem on PG7.2.2 " }, { "msg_contents": "Roberto Fichera <kernel@tekno-soft.it> writes:\n> At 10.40 23/09/02 -0400, you wrote:\n>> I think you've got *serious* hardware 
problems. Hard to tell if it's\n>> disk or memory, but get out those diagnostic programs now ...\n\n> database=# DROP TABLE TS;\n> ERROR: cannot find attribute 1 of relation ts_pkey\n> database=# DROP INDEX TS_PKEY;\n> ERROR: cannot find attribute 1 of relation ts_pkey\n\nNow you've got corrupted system indexes (IIRC this is a symptom of\nproblems in one of the indexes for pg_attribute).\n\nYou might be able to recover from this using a REINDEX DATABASE\noperation (read the man page carefully, it's a bit tricky), but\nI am convinced that you've got hardware problems. I would suggest\nthat you first shut down the database and then find and fix your\nhardware problem --- otherwise, things will just get worse and worse.\nAfter you have a stable platform again, you can try to restore\nconsistency to the database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Sep 2002 09:31:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem on PG7.2.2 " }, { "msg_contents": "At 09.31 24/09/02 -0400, Tom Lane wrote:\n\n>Roberto Fichera <kernel@tekno-soft.it> writes:\n> > At 10.40 23/09/02 -0400, you wrote:\n> >> I think you've got *serious* hardware problems. Hard to tell if it's\n> >> disk or memory, but get out those diagnostic programs now ...\n>\n> > database=# DROP TABLE TS;\n> > ERROR: cannot find attribute 1 of relation ts_pkey\n> > database=# DROP INDEX TS_PKEY;\n> > ERROR: cannot find attribute 1 of relation ts_pkey\n>\n>Now you've got corrupted system indexes (IIRC this is a symptom of\n>problems in one of the indexes for pg_attribute).\n\nI'll run some memory checker.\n\n>You might be able to recover from this using a REINDEX DATABASE\n>operation (read the man page carefully, it's a bit tricky), but\n>I am convinced that you've got hardware problems. 
I would suggest\n>that you first shut down the database and then find and fix your\n>hardware problem --- otherwise, things will just get worse and worse.\n>After you have a stable platform again, you can try to restore\n>consistency to the database.\n\nI'll try it.\n\n\nRoberto Fichera.\n\n", "msg_date": "Tue, 24 Sep 2002 18:27:19 +0200", "msg_from": "Roberto Fichera <kernel@tekno-soft.it>", "msg_from_op": true, "msg_subject": "Re: Problem on PG7.2.2 " }, { "msg_contents": "At 09.31 24/09/02 -0400, Tom Lane wrote:\n>Roberto Fichera <kernel@tekno-soft.it> writes:\n> > At 10.40 23/09/02 -0400, you wrote:\n> >> I think you've got *serious* hardware problems. Hard to tell if it's\n> >> disk or memory, but get out those diagnostic programs now ...\n>\n> > database=# DROP TABLE TS;\n> > ERROR: cannot find attribute 1 of relation ts_pkey\n> > database=# DROP INDEX TS_PKEY;\n> > ERROR: cannot find attribute 1 of relation ts_pkey\n>\n>Now you've got corrupted system indexes (IIRC this is a symptom of\n>problems in one of the indexes for pg_attribute).\n>\n>You might be able to recover from this using a REINDEX DATABASE\n>operation (read the man page carefully, it's a bit tricky), but\n>I am convinced that you've got hardware problems. 
I would suggest\n>that you first shut down the database and then find and fix your\n>hardware problem --- otherwise, things will just get worse and worse.\n>After you have a stable platform again, you can try to restore\n>consistency to the database.\n\nBelow there is the first try session and as you can see there is the same \nproblem :-(!\n\nbash-2.05a$ postgres -D /var/lib/pgsql/data -O -P database\nDEBUG: database system was shut down at 2002-09-24 18:39:24 CEST\nDEBUG: checkpoint record is at 0/2AE97110\nDEBUG: redo record is at 0/2AE97110; undo record is at 0/0; shutdown TRUE\nDEBUG: next transaction id: 366635; next oid: 1723171\nDEBUG: database system is ready\n\nPOSTGRES backend interactive interface\n$Revision: 1.245.2.2 $ $Date: 2002/02/27 23:17:01 $\n\nbackend> reindex table detail;\nFATAL 2: open of /var/lib/pgsql/data/pg_clog/0504 failed: No such file or \ndirectory\nDEBUG: shutting down\nDEBUG: database system is shut down\nbash-2.05a$\n\n\nRoberto Fichera.\n\n", "msg_date": "Tue, 24 Sep 2002 18:43:36 +0200", "msg_from": "Roberto Fichera <kernel@tekno-soft.it>", "msg_from_op": true, "msg_subject": "Re: Problem on PG7.2.2 " } ]
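For context on Tom's diagnosis: the pg_clog file a lookup touches is a simple function of the transaction ID, so a request for a far-off segment implies a garbage xid in some tuple header. A sketch of that mapping, assuming the 7.2-era defaults (2 status bits per transaction, 8 kB pages, 32 pages per SLRU segment; these constants are an assumption, not stated in the thread):

```python
# Map a transaction ID to the pg_clog segment file holding its commit
# status. With 2 bits per xact, one 8 kB page covers 32768 xacts and a
# 32-page segment covers 1,048,576 xacts; file names are 4 hex digits.

XACTS_PER_SEGMENT = 4 * 8192 * 32  # 1,048,576 transactions per file

def clog_segment(xid):
    """Return the pg_clog file name that would store xid's status."""
    return "%04X" % (xid // XACTS_PER_SEGMENT)

# The startup log says "next transaction id: 366635", which lives in
# segment 0000 -- so a read of file 0303 implies a bogus xid around
# 800 million in some tuple header, consistent with data corruption.
print(clog_segment(366635))  # -> 0000
```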
[ { "msg_contents": "Hello All,\n\nI have written a small daemon that can automatically vacuum a PostgreSQL \ndatabase, depending upon activity per table.\n\nIt sits on top of the postgres statistics collector. The postgres installation \nshould have per-row statistics collection enabled.\n\nFeatures are,\n\n* Vacuuming based on activity on the table\n* Per table vacuum. So only heavily updated tables are vacuumed.\n* Multiple databases supported\n* Performs 'vacuum analyze' only, so it will not block the database\n\n\nThe project location is \nhttp://gborg.postgresql.org/project/pgavd/projdisplay.php\n\nLet me know of bugs/improvements and comments.. \n\nI am sure real world postgres installations have some sort of scripts doing a \nsimilar thing. This is an attempt to provide a generic interface to periodic \nvacuum.\n\n\nBye\n Shridhar\n\n--\nThe Abrams' Principle:\tThe shortest distance between two points is off the \nwall.\n\n", "msg_date": "Mon, 23 Sep 2002 19:13:44 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Postgresql Automatic vacuum" }, { "msg_contents": "On 23 Sep 2002 at 14:50, Lee Kindness wrote:\n\n> Shridhar,\n> \n> Might be useful to add a .tar.gz to the downloads, so people do not\n> have to use CVS to take a look.\n\nThere is a development snapshot..\n\n\nBye\n Shridhar\n\n--\nIn most countries selling harmful things like drugs is punishable. Then how come \npeople can sell Microsoft software and go unpunished? (By hasku@rost.abo.fi, \nHasse Skrifvars)\n\n", "msg_date": "Mon, 23 Sep 2002 19:26:57 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Postgresql Automatic vacuum" }, { "msg_contents": "On 23 Sep 2002 at 13:28, Matthew T. O'Connor wrote:\n\n> On Monday 23 September 2002 09:43 am, Shridhar Daithankar wrote:\n> > Hello All,\n> >\n> > I have written a small daemon that can automatically vacuum a PostgreSQL\n> > database, depending upon activity per table.\n> \n> Hello Shridhar, sorry I didn't respond to the email you sent me a while back. \n> Anyway, I saw this post, and just started taking a look at it. I wasn't \n> thinking of doing this as a totally separate executable / code base, but \n> perhaps that has advantages; I need to think more. \n> \n> A couple of quick questions: you are using C++, but all postgres source code \n> is in C; do you want this to eventually be included as part of the postgres \n> distribution? If so, I think that C might be a better choice.\n\nWell, I wrote it in C++ because I like it. I have lost the habit of writing pure C \ncode. Nothing else.\n\nAs far as getting into the base postgresql distro goes, I don't mind rewriting it, but I \nhave some reservations.\n\n1) As it is, the postgresql source code is huge. Adding functions to it which \ndirectly tap into its nervous system, e.g. the cache, would take far more time to \nperfect in all conditions.\n\nMy application as it is is an external client app. It enjoys all the isolation \nprovided by postgresql. Besides, this is low priority functionality at \nruntime, unlike real time replication. It would rarely matter if vacuum is \ntriggered after 6 seconds instead of the configured 5 seconds, for example.\n\nLess code, less bugs is my thinking. \n\nI wanted this functionality out fast. I didn't want to invest in learning \nthe postgresql source code because I didn't have time. So I wrote a separate app. \nBesides, it would run on all previous postgresql versions which support \nstatistics collection. That's a huge plus if you ask me.\n\n2) Consider this. No other database offers a built-in tool to clean things up. \nIs it that nobody needs it? No, everybody needs it. And then you end up cleaning the \ndatabase by taking it down.\n\nIf people take for granted that postgresql does not need manual cleaning, by \ndeploying apps like pgavd, vacuum will be a big feature of postgres. Clean the \ndatabase without taking it down..\n\n\n> I will play with it more and give you some more feedback.\n\nAwaiting that.\n\nI am Cc'ing this to Hackers because I am sure some people might have the same \ndoubts.\nBye\n Shridhar\n\n--\nintoxicated, adj.:\tWhen you feel sophisticated without being able to pronounce \nit.\n\n", "msg_date": "Tue, 24 Sep 2002 11:46:58 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Postgresql Automatic vacuum" }, { "msg_contents": "On Tuesday, 24 September 2002 08:16, Shridhar Daithankar wrote:\n>\n> > I will play with it more and give you some more feedback.\n>\n> Awaiting that.\n>\n\nIMO there are still several problems with that approach, namely:\n* every database will get \"polluted\" with the autovacuum table, which is undesired \n* the biggest problem is the ~/.pgavrc file. I think it should work like other postgres utils do, e.g. supporting -U, -d, ....\n* it's not possible to use without actively administering the config file. It should be able to work without\n administrator assistance.\n\nWhen this is a daemon, why not store the data in memory? Even with several thousands of tables the memory footprint would\n still be small. And it should be possible to use for all databases without modifying a config file.\n\nTwo weeks ago I began writing a similar daemon, but had no time yet to finish it. I've tried to avoid using fixed numbers (namely \"vacuum table\nafter 1000 updates\") and tried to make my own heuristic based on the statistics data and the size of the table. The reason is, for a large table 1000 entries might be \na small percentage and vacuum is not necessary, while for small tables 10 updates might be sufficient.\n\nBest regards,\n\tMario Weilguni\n\n", "msg_date": "Tue, 24 Sep 2002 08:42:06 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: Postgresql Automatic vacuum" }, { "msg_contents": "On 24 Sep 2002 at 8:42, Mario Weilguni wrote:\n\n> On Tuesday, 24 September 2002 08:16, Shridhar Daithankar wrote:\n> IMO there are still several problems with that approach, namely:\n> * every database will get \"polluted\" with the autovacuum table, which is undesired \n\nI agree. But that was the best alternative I could see; explanation \nfollows.. Besides, I didn't want to touch PG meta data..\n\n> * the biggest problem is the ~/.pgavrc file. I think it should work like other postgres utils do, e.g. supporting -U, -d, ....\n\nShouldn't be a problem. The config stuff is working and I can add that. I would \nrather term it a minor issue. As a personal preference, I would just fire it \nwithout any arguments. It's not a thing that you change daily. Configure it in \nthe config file and done..\n\n> * it's not possible to use without actively administering the config file. It should be able to work without\n> administrator assistance.\n\nWell, I would call that tuning. Each admin can tune it. Yes, it's an effort, but \ncertainly not active administration.\n \n> When this is a daemon, why not store the data in memory? Even with several thousands of tables the memory footprint would\n> still be small. And it should be possible to use for all databases without modifying a config file.\n\nWell, since postgresql has the ability to deal with an arbitrary number of rows, it \nseemed redundant to me to duplicate all that functionality. Why write lists \nand arrays again and again? Let postgresql do it.\n\n\n> Two weeks ago I began writing a similar daemon, but had no time yet to finish it. I've tried to avoid using fixed numbers (namely \"vacuum table\n> after 1000 updates\") and tried to make my own heuristic based on the statistics data and the size of the table. The reason is, for a large table 1000 entries might be \n> a small percentage and vacuum is not necessary, while for small tables 10 updates might be sufficient.\n\nWell, that fixed number is not really fixed but admin tunable, and that too per \ndatabase. These are just defaults. Tune it to suit your needs.\n\nThe objective of the whole exercise is to get rid of periodic vacuum, as this app \nshifts the threshold to activity rather than time.\n\nBesides, a table should be vacuumed when it starts affecting performance. On an \ninstallation, if a table has 1M rows and changing 1K rows affects performance, there \nwill be a similar performance hit for a 100K row table for a 1K row update, \nbecause the overhead involved would be almost the same. (Not disk space; pgavd does not \ntarget vacuum full, but tuple size should matter.)\n\nAt least me thinks so..\n\nI plan to implement per table thresholds in addition to per database thresholds. \nBut right now, it seems like overhead to me. Besides, there is an item in TODO \nto shift the unit of work from rows to blocks affected. I guess that takes care of \nsome of your points..\nBye\n Shridhar\n\n--\nJones' Second Law:\tThe man who smiles when things go wrong has thought of \nsomeone\tto blame it on.\n\n", "msg_date": "Tue, 24 Sep 2002 12:32:43 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Postgresql Automatic vacuum" } ]
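Mario's objection to fixed counts can be captured in a single inequality: vacuum when the number of changed tuples exceeds a base count plus a fraction of the table size. A sketch with illustrative numbers (neither pgavd's nor Mario's actual values):

```python
# Decide whether a table needs VACUUM from its statistics-collector
# counters. The base/scale defaults below are illustrative only.

def needs_vacuum(n_changed, reltuples, base=1000, scale=0.1):
    """True when updates+deletes outgrow base + scale * table size."""
    return n_changed > base + scale * reltuples

# The same 2000 changed rows trigger a vacuum on a 5000-row table but
# not on a million-row table, addressing the percentage argument above.
print(needs_vacuum(2000, 5000), needs_vacuum(2000, 1000000))  # -> True False
```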
[ { "msg_contents": "Hello.\n\nI'm just curious as to the 7.3 status of a couple of things:\n\n1. Back in Feb. I wrote (in regards to Oracle behavior):\n\n\"Unlike normal queries where blocks are added to the MRU end of \nan LRU list, full table scans add the blocks to the LRU end of \nthe LRU list. I was wondering, in the light of the discussion of \nusing LRU-K, if PostgreSQL does, or if anyone has tried, this \ntechnique?\"\n\nBruce wrote:\n\n\"Yes, someone from India has a project to test LRU-K and MRU for \nlarge table scans and report back the results. He will \nimplement whichever is best.\"\n\nDid this make it into 7.3?\n\n2. Gavin Sherry had worked up a patch so that temporary \nrelations could be dropped automatically upon transaction \ncommit. Did any of those patches make it? I notice that \nwhenever I create a temporary table in a transaction, my HD \nlight blinks. Is this a forced fsync() caused by the fact that \nthe SQL standard defines temporary relations as surviving across \ntransactions? If so, I'd bet those of us who use \ntransaction-local temporary tables could get a few drops more of \nperformance from an ON COMMIT drop patch w/o fsync.\n\nAny thoughts?\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n", "msg_date": "Mon, 23 Sep 2002 11:57:14 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "Temp tables and LRU-K caching" }, { "msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n> Bruce wrote:\n> \"Yes, someone from India has a project to test LRU-K and MRU for \n> large table scans and report back the results. He will \n> implement whichever is best.\"\n> Did this make it into 7.3?\n\nNo, we never heard back from that guy. It is still a live topic though.\nOne of the Red Hat people was looking at it over the summer, and I think\nNeil Conway is experimenting with LRU-2 code right now.\n\n> 2. Gavin Sherry had worked up a patch so that temporary \n> relations could be dropped automatically upon transaction \n> commit. Did any of those patches make it?\n\nNo they didn't; I forget whether there was any objection to his last try\nor it was just too late to get reviewed before feature freeze.\n\n> I notice that \n> whenever I create a temporary table in a transaction, my HD \n> light blinks. Is this a forced fsync() caused by the fact that \n> the SQL standard defines temporary relations as surviving across \n> transactions?\n\nA completely-in-memory temp table is not really practical in Postgres,\nfor two reasons: one being that its schema information is stored in\nthe definitely-not-temp system catalogs, and the other being that we\nrequest allocation of disk space for each page of the table, even if\nit's temp. It might be possible to work around the latter issue (at\nthe cost of quite unfriendly behavior should you run out of disk space)\nbut short of a really major rewrite there isn't any way to avoid keeping\ntemp table catalog info in the regular catalogs. So you are certainly\ngoing to get a disk hit when you create or drop a temp table.\n\n7.3 should be considerably better than 7.1 or 7.2 for temp table access\nbecause it doesn't WAL-log operations on the data within temp tables,\nthough.\n\nAnother thing I'd like to see in the near future is a configurable\nsetting for the amount of memory space that can be used for temp-table\nbuffers. The current setting is ridiculously small (64*8K IIRC), but\nthere's not much point in increasing it until we also have a smarter\nmanagement algorithm for the temp buffers. I've asked Neil to look at\nmaking the improved LRU-K buffer management algorithm apply to temp\nbuffers as well as regular shared buffers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Sep 2002 12:24:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Temp tables and LRU-K caching " }, { "msg_contents": "Mike Mascari wrote:\n> Hello.\n> \n> I'm just curious as to the 7.3 status of a couple of things:\n> \n> 1. Back in Feb. I wrote (in regards to Oracle behavior):\n> \n> \"Unlike normal queries where blocks are added to the MRU end of \n> an LRU list, full table scans add the blocks to the LRU end of \n> the LRU list. I was wondering, in the light of the discussion of \n> using LRU-K, if PostgreSQL does, or if anyone has tried, this \n> technique?\"\n> \n> Bruce wrote:\n> \n> \"Yes, someone from India has a project to test LRU-K and MRU for \n> large table scans and report back the results. He will \n> implement whichever is best.\"\n> \n> Did this make it into 7.3?\n\nThat person stopped working on it. It is still on the TODO list.\n\n> 2. Gavin Sherry had worked up a patch so that temporary \n> relations could be dropped automatically upon transaction \n> commit. Did any of those patches make it? I notice that \n> whenever I create a temporary table in a transaction, my HD \n> light blinks. Is this a forced fsync() caused by the fact that \n> the SQL standard defines temporary relations as surviving across \n> transactions? If so, I'd bet those of us who use \n> transaction-local temporary tables could get a few drops more of \n> performance from an ON COMMIT drop patch w/o fsync.\n\nThis has me confused. There was an exchange with Gavin August 27/28\nwhich resulted in a patch:\n\n\thttp://archives.postgresql.org/pgsql-patches/2002-08/msg00475.php\n\nand my adding it to the patches list:\n\n\thttp://archives.postgresql.org/pgsql-patches/2002-08/msg00502.php\n\nHowever, it was never applied. I don't see any discussion refuting the\npatch or any email removing it from the queue. The only thing I can\nthink of is that somehow I didn't apply it. \n\nMy only guess is that I said I was putting it in the queue, but didn't. I\nam concerned that there are other patches I missed. I see the cube\npatch being added to the queue 40 seconds later, and I know that was in\nthere because I see the message removing it from the queue. I must have\nmade a mistake on that one.\n\nWhat do we do now? The author clearly got it in before beta, but we are\nin beta now. I think we should apply it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 23 Sep 2002 12:34:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Temp tables and LRU-K caching" }, { "msg_contents": "Tom Lane wrote:\n> Another thing I'd like to see in the near future is a configurable\n> setting for the amount of memory space that can be used for temp-table\n> buffers. The current setting is ridiculously small (64*8K IIRC), but\n> there's not much point in increasing it until we also have a smarter\n> management algorithm for the temp buffers. I've asked Neil to look at\n> making the improved LRU-K buffer management algorithm apply to temp\n> buffers as well as regular shared buffers.\n\nSpeaking of sizing, I wonder if we should query the amount of RAM\nin the machine, either during initdb or later, and size based on that.\n\nIn other words, if we add a GUC variable that shows the amount of RAM,\nwe could size things based on that value.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 23 Sep 2002 12:36:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Temp tables and LRU-K caching" }, { "msg_contents": "Tom Lane wrote:\n> Mike Mascari <mascarm@mascari.com> writes:\n> > Bruce wrote:\n> > \"Yes, someone from India has a project to test LRU-K and MRU for \n> > large table scans and report back the results. He will \n> > implement whichever is best.\"\n> > Did this make it into 7.3?\n> \n> No, we never heard back from that guy. It is still a live topic though.\n> One of the Red Hat people was looking at it over the summer, and I think\n> Neil Conway is experimenting with LRU-2 code right now.\n> \n> > 2. Gavin Sherry had worked up a patch so that temporary \n> > relations could be dropped automatically upon transaction \n> > commit. Did any of those patches make it?\n> \n> No they didn't; I forget whether there was any objection to his last try\n> or it was just too late to get reviewed before feature freeze.\n\nI see it going into the patch queue. Here is the full thread:\n\n\thttp://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=200208272124.g7RLO1L20172%40candle.pha.pa.us&rnum=1&prev=/groups%3Fq%3Dcreate%2Btemp%2Btable%2Bon%2Bcommit%2Bgroup:comp.databases.postgresql.*%26hl%3Den%26lr%3D%26ie%3DUTF-8%26scoring%3Dd%26selm%3D200208272124.g7RLO1L20172%2540candle.pha.pa.us%26rnum%3D1\n\nI don't see why it wasn't applied.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 23 Sep 2002 12:39:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Temp tables and LRU-K caching" }, { "msg_contents": "Tom Lane wrote:\n> Mike Mascari <mascarm@mascari.com> writes:\n> \n>>Bruce wrote:\n>>\"Yes, someone from India has a project to test LRU-K and MRU for \n>>large table scans and report back the results. He will \n>>implement whichever is best.\"\n>>Did this make it into 7.3?\n> \n> No, we never heard back from that guy. It is still a live topic though.\n> One of the Red Hat people was looking at it over the summer, and I think\n> Neil Conway is experimenting with LRU-2 code right now.\n\nOkay.\n\n> \n>>2. Gavin Sherry had worked up a patch so that temporary \n>>relations could be dropped automatically upon transaction \n>>commit. Did any of those patches it make it?\n> \n> \n> No they didn't; I forget whether there was any objection to his last try\n> or it was just too late to get reviewed before feature freeze.\n\nNuts. Oh well. Hopefully for 7.4...\n\n> \n>>I notice that \n>>whenever I create a temporary table in a transaction, my HD \n>>light blinks. Is this a forced fsync() causes by the fact that \n>>the SQL standard defines temporary relations as surviving across \n>>transactions?\n> \n> \n> A completely-in-memory temp table is not really practical in Postgres,\n> for two reasons: one being that its schema information is stored in\n> the definitely-not-temp system catalogs, and the other being that we\n> request allocation of disk space for each page of the table, even if\n> it's temp. \n\nI knew what I was asking made no sense two seconds after \nclicking 'Send'. 
Unfortunately, there's no undo on my mail \nclient ;-).\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n", "msg_date": "Mon, 23 Sep 2002 12:40:54 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "Re: Temp tables and LRU-K caching" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> What do we do now? The author clearly got it in before beta, but we are\n> in beta now. I think we should apply it.\n\nNo. It's a feature addition and we are in feature freeze. Moreover,\nit's an unreviewed feature addition (I certainly never had time to look\nat the last version of the patch). Hold it for 7.4.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 23 Sep 2002 12:52:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Temp tables and LRU-K caching " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> No, we never heard back from that guy. It is still a live topic though.\n> One of the Red Hat people was looking at it over the summer, and I think\n> Neil Conway is experimenting with LRU-2 code right now.\n\nJust to confirm that, I'm working on this, and hope to have something\nready for public consumption soon. Tom was kind enough to send me some\nold code of his that implemented an LRU-2 replacement scheme, and I've\nused that as the guide for my new implementation. I just got a really\nbasic version working yesterday -- I'll post a patch once I get\nsomething I'm satisfied with. I also still need to look into the local\nbuffer management stuff suggested by Tom.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "23 Sep 2002 14:50:09 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Temp tables and LRU-K caching" } ]
[ { "msg_contents": "\nOK, I will save this for 7.4. Sorry, Gavin. I missed this one for 7.3.\n\n---------------------------------------------------------------------------\n\npgman wrote:\n> Tom Lane wrote:\n> > Mike Mascari <mascarm@mascari.com> writes:\n> > > Bruce wrote:\n> > > \"Yes, someone from India has a project to test LRU-K and MRU for \n> > > large table scans and report back the results. He will \n> > > implement whichever is best.\"\n> > > Did this make it into 7.3?\n> > \n> > No, we never heard back from that guy. It is still a live topic though.\n> > One of the Red Hat people was looking at it over the summer, and I think\n> > Neil Conway is experimenting with LRU-2 code right now.\n> > \n> > > 2. Gavin Sherry had worked up a patch so that temporary \n> > > relations could be dropped automatically upon transaction \n> > > commit. Did any of those patches it make it?\n> > \n> > No they didn't; I forget whether there was any objection to his last try\n> > or it was just too late to get reviewed before feature freeze.\n> \n> I see it going into the patch queue. Here is the full thread:\n> \n> \thttp://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=200208272124.g7RLO1L20172%40candle.pha.pa.us&rnum=1&prev=/groups%3Fq%3Dcreate%2Btemp%2Btable%2Bon%2Bcommit%2Bgroup:comp.databases.postgresql.*%26hl%3Den%26lr%3D%26ie%3DUTF-8%26scoring%3Dd%26selm%3D200208272124.g7RLO1L20172%2540candle.pha.pa.us%26rnum%3D1\n> \n> I don't see why it wasn't applied.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 23 Sep 2002 16:31:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Temp tables and LRU-K caching" }, { "msg_contents": "On Mon, 23 Sep 2002, Bruce Momjian wrote:\n\n> \n> OK, I will save this for 7.4. Sorry, Gavin. I missed this one for 7.3.\n\nSuch is life.\n\nGavin\n\n\n", "msg_date": "Tue, 24 Sep 2002 09:16:58 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Temp tables and LRU-K caching" } ]
[ { "msg_contents": "Oleg Lebedev wrote:\n> Ok, here are all the files.\n> \n\nI'm now seeing the problem you reported. It is a bug in the new table function \ncode. Basically, you are trying to do this:\n\nDELETE FROM tablea\nWHERE NOT EXISTS\n(\n SELECT remoteid\n FROM\n (\n SELECT remoteid\n FROM dblink('hostaddr=1.23.45.6 port=5432 dbname=webspec user=user\n password=pass',\n 'SELECT objectid FROM tablea WHERE objectid = ' ||\n tablea.objectid)\n AS dblink_rec(remoteid int8)\n ) AS t1\n);\n\nBut if you try:\n\nSELECT remoteid\nFROM\n(\n SELECT remoteid\n FROM dblink('hostaddr=1.23.45.6 port=5432 dbname=webspec user=user\n password=pass',\n 'SELECT objectid FROM tablea WHERE objectid = ' ||\n tablea.objectid)\n AS dblink_rec(remoteid int8)\n) AS t1;\n\nyou'll get:\n\nERROR: FROM function expression may not refer to other relations of same \nquery level\n\nwhich is what you're supposed to get. Apparently the error is not getting \ngenerated as it should when this query is run as a subquery.\n\nWhat you should actually be doing is:\n\nDELETE FROM tablea\nWHERE NOT EXISTS\n(\n SELECT remoteid\n FROM dblink('hostaddr=1.23.45.6 port=5432 dbname=webspec user=user\n password=pass',\n 'SELECT objectid FROM tablea WHERE objectid = ' ||\n tablea.objectid)\n AS dblink_rec(remoteid int8)\n);\nDELETE 0\n\nThis should make your function work on 7.3beta, but I still need to track down \na fix for the bug. 
Thanks for the report!\n\nJoe\n\n", "msg_date": "Mon, 23 Sep 2002 20:36:35 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: DBLink: interesting issue" }, { "msg_contents": "Joe Conway wrote:\n > Oleg Lebedev wrote:\n >\n >> Ok, here are all the files.\n >>\n\nThis dblink thread on GENERAL led me to a bug in the planner subselect code.\nHere is an example query that triggers it (independent of dblink and/or table\nfunctions):\n\nreplica=# create table foo(f1 int);\nCREATE TABLE\nreplica=# SELECT * FROM foo t WHERE NOT EXISTS (SELECT remoteid FROM (SELECT\nf1 as remoteid FROM foo WHERE f1 = t.f1) AS t1);\nserver closed the connection unexpectedly\n This probably means the server terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. Attempting reset: Failed.\n\nIt doesn't matter how foo is defined.\n\nI'm just starting to dig in to this, but was hoping for any thoughts or\nguidance I can get.\n\nThanks,\n\nJoe\n\np.s. 
Below is a backtrace:\n\n#3 0x081797a1 in ExceptionalCondition () at assert.c:46\n#4 0x0810e102 in replace_var (var=0x82f73a8) at subselect.c:81\n#5 0x0811293c in expression_tree_mutator (node=0x82f7438, mutator=0x810e96c\n<replace_correlation_vars_mutator>,\n context=0x0) at clauses.c:2314\n#6 0x0810e9a5 in replace_correlation_vars_mutator (node=0x82f7438,\ncontext=0x0) at subselect.c:540\n#7 0x08112718 in expression_tree_mutator (node=0x82f7454, mutator=0x810e96c\n<replace_correlation_vars_mutator>,\n context=0x0) at clauses.c:2179\n#8 0x0810e9a5 in replace_correlation_vars_mutator (node=0x82f7454,\ncontext=0x0) at subselect.c:540\n#9 0x0811293c in expression_tree_mutator (node=0x82f7480, mutator=0x810e96c\n<replace_correlation_vars_mutator>,\n context=0x0) at clauses.c:2314\n#10 0x0810e9a5 in replace_correlation_vars_mutator (node=0x82f7480,\ncontext=0x0) at subselect.c:540\n#11 0x0810e968 in SS_replace_correlation_vars (expr=0x82f7480) at subselect.c:525\n#12 0x0810cef5 in preprocess_expression (parse=0x82f6830, expr=0x82f7064,\nkind=1) at planner.c:725\n#13 0x0810cf7e in preprocess_qual_conditions (parse=0x82f6830,\njtnode=0x82f6d70) at planner.c:775\n#14 0x0810c75c in subquery_planner (parse=0x82f6830, tuple_fraction=1) at\nplanner.c:168\n#15 0x0810e260 in make_subplan (slink=0x82f6698) at subselect.c:185\n#16 0x0811293c in expression_tree_mutator (node=0x82f6780, mutator=0x810e9bc\n<process_sublinks_mutator>, context=0x0)\n at clauses.c:2314\n#17 0x0810ea35 in process_sublinks_mutator (node=0x82f6780, context=0x0) at\nsubselect.c:586\n#18 0x08112718 in expression_tree_mutator (node=0x82f6754, mutator=0x810e9bc\n<process_sublinks_mutator>, context=0x0)\n at clauses.c:2179\n#19 0x0810ea35 in process_sublinks_mutator (node=0x82f6754, context=0x0) at\nsubselect.c:586\n#20 0x0811293c in expression_tree_mutator (node=0x82f679c, mutator=0x810e9bc\n<process_sublinks_mutator>, context=0x0)\n at clauses.c:2314\n#21 0x0810ea35 in process_sublinks_mutator 
(node=0x82f679c, context=0x0) at\nsubselect.c:586\n#22 0x0810e9b8 in SS_process_sublinks (expr=0x82f679c) at subselect.c:553\n#23 0x0810cede in preprocess_expression (parse=0x82f46d4, expr=0x82fc164,\nkind=1) at planner.c:721\n#24 0x0810cf7e in preprocess_qual_conditions (parse=0x82f46d4,\njtnode=0x82fc36c) at planner.c:775\n#25 0x0810c75c in subquery_planner (parse=0x82f46d4, tuple_fraction=-1) at\nplanner.c:168\n#26 0x0810c68c in planner (parse=0x82f46d4) at planner.c:96\n\n\n", "msg_date": "Mon, 23 Sep 2002 22:33:51 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "subselect bug (was Re: [GENERAL] DBLink: interesting issue)" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> replica=# create table foo(f1 int);\n> CREATE TABLE\n> replica=# SELECT * FROM foo t WHERE NOT EXISTS (SELECT remoteid FROM (SELECT\n> f1 as remoteid FROM foo WHERE f1 = t.f1) AS t1);\n> server closed the connection unexpectedly\n\nIck.\n\n> I'm just starting to dig in to this, but was hoping for any thoughts or\n> guidance I can get.\n\nI can look at this, unless you really want to solve it yourself ...\n\n> p.s. Below is a backtrace:\n\nThe debug output:\n\nTRAP: FailedAssertion(\"!(var->varlevelsup > 0 && var->varlevelsup < PlannerQueryLevel)\", File: \"subselect.c\", Line: 81)\n\nsuggests that the problem is with variable depth --- I'm guessing that\nwe're not adjusting varlevelsup correctly at some step of the planning\nprocess. 
Offhand I'd expect the innermost \"select\" to be pulled up into\nthe parent select (the argument of EXISTS) and probably something is\ngoing wrong with that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Sep 2002 12:17:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: subselect bug (was Re: [GENERAL] DBLink: interesting issue) " }, { "msg_contents": "Tom Lane wrote:\n>>I'm just starting to dig in to this, but was hoping for any thoughts or\n>>guidance I can get.\n> \n> I can look at this, unless you really want to solve it yourself ...\n> \n\nI'll look into it a bit for my own edification, but if you have the time to \nsolve it, I wouldn't want to get in the way. In any case, if you think it \nshould be fixed before beta2, I'd give you better odds than me ;-)\n\nJoe\n\n", "msg_date": "Tue, 24 Sep 2002 09:48:02 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: subselect bug (was Re: [GENERAL] DBLink: interesting" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> replica=# create table foo(f1 int);\n> CREATE TABLE\n> replica=# SELECT * FROM foo t WHERE NOT EXISTS (SELECT remoteid FROM (SELECT\n> f1 as remoteid FROM foo WHERE f1 = t.f1) AS t1);\n> server closed the connection unexpectedly\n\nGot it --- this bug has been there awhile :-(, ever since we had the\npull-up-subquery logic, which was in 7.1 IIRC. The pullup code\nneglected to adjust references to uplevel Vars. Surprising that no one\nreported this sooner.\n\nThe attached patch is against CVS tip. 
It will not apply cleanly to\n7.2 because pull_up_subqueries() has been modified since then, but if\nanyone's desperate for a fix in 7.2 it could probably be adapted.\n\n\t\t\tregards, tom lane\n\n*** src/backend/optimizer/plan/planner.c.orig\tWed Sep 4 17:30:30 2002\n--- src/backend/optimizer/plan/planner.c\tTue Sep 24 14:02:54 2002\n***************\n*** 337,352 ****\n \n \t\t\t/*\n \t\t\t * Now make a modifiable copy of the subquery that we can run\n! \t\t\t * OffsetVarNodes on.\n \t\t\t */\n \t\t\tsubquery = copyObject(subquery);\n \n \t\t\t/*\n! \t\t\t * Adjust varnos in subquery so that we can append its\n \t\t\t * rangetable to upper query's.\n \t\t\t */\n \t\t\trtoffset = length(parse->rtable);\n \t\t\tOffsetVarNodes((Node *) subquery, rtoffset, 0);\n \n \t\t\t/*\n \t\t\t * Replace all of the top query's references to the subquery's\n--- 337,358 ----\n \n \t\t\t/*\n \t\t\t * Now make a modifiable copy of the subquery that we can run\n! \t\t\t * OffsetVarNodes and IncrementVarSublevelsUp on.\n \t\t\t */\n \t\t\tsubquery = copyObject(subquery);\n \n \t\t\t/*\n! \t\t\t * Adjust level-0 varnos in subquery so that we can append its\n \t\t\t * rangetable to upper query's.\n \t\t\t */\n \t\t\trtoffset = length(parse->rtable);\n \t\t\tOffsetVarNodes((Node *) subquery, rtoffset, 0);\n+ \n+ \t\t\t/*\n+ \t\t\t * Upper-level vars in subquery are now one level closer to their\n+ \t\t\t * parent than before.\n+ \t\t\t */\n+ \t\t\tIncrementVarSublevelsUp((Node *) subquery, -1, 1);\n \n \t\t\t/*\n \t\t\t * Replace all of the top query's references to the subquery's\n", "msg_date": "Tue, 24 Sep 2002 14:48:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: subselect bug (was Re: [GENERAL] DBLink: interesting issue) " } ]
[ { "msg_contents": "Hi all,\n\nIt occurs to me that opening web page on www.postgresql.org, asking the\nuser to select the mirror, is rather unprofessional. I am sure this has\nbeen discussed before but I thought I would bring it up again anyway.\n\nSo, why not just redirect people to one of the mirrors listed? This could\nbe done based on IP (yes it is inaccurate but it is close enough and has\nthe same net effect: pushing people off the main web server) or it could\nbe done by simply redirecting to a random mirror.\n\n From a quick look, there is nothing of any real size on the site\n(excluding developer.postgresql.org, which is not the issue) to warrant\npeople wanting to access a geographically local server anyway. (Unlike the\ncase of FTP, for which the list of mirrors is very useful).\n\nGavin\n\n", "msg_date": "Tue, 24 Sep 2002 13:49:46 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Web site" }, { "msg_contents": "On Tue, 24 Sep 2002, Gavin Sherry wrote:\n\n> Hi all,\n>\n> It occurs to me that opening web page on www.postgresql.org, asking the\n> user to select the mirror, is rather unprofessional. I am sure this has\n> been discussed before but I thought I would bring it up again anyway.\n\nAlready being worked on ...\n\n> So, why not just redirect people to one of the mirrors listed? This\n> could be done based on IP (yes it is inaccurate but it is close enough\n> and has the same net effect: pushing people off the main web server) or\n> it could be done by simply redirecting to a random mirror.\n\nHave tried both in the past with disastrous results ...\n\n\n", "msg_date": "Tue, 24 Sep 2002 01:50:07 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Web site" }, { "msg_contents": "> > could be done based on IP (yes it is inaccurate but it is close enough\n> > and has the same net effect: pushing people off the main web server) or\n> > it could be done by simply redirecting to a random mirror.\n> \n> Have tried both in the past with disastrous results ...\n\nWhat method will be employed instead?\n\nGavin\n\n", "msg_date": "Tue, 24 Sep 2002 15:13:49 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Re: Web site" }, { "msg_contents": "On Tue, 24 Sep 2002, Gavin Sherry wrote:\n\n> Hi all,\n>\n> It occurs to me that opening web page on www.postgresql.org, asking the\n> user to select the mirror, is rather unprofessional. I am sure this has\n> been discussed before but I thought I would bring it up again anyway.\n\nYour point?\n\n> So, why not just redirect people to one of the mirrors listed? This could\n> be done based on IP (yes it is inaccurate but it is close enough and has\n> the same net effect: pushing people off the main web server) or it could\n> be done by simply redirecting to a random mirror.\n\nBeen there, done that, didn't work. Too much of a job to keep track of\nthat many IP blocks too.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 24 Sep 2002 03:59:33 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Web site" }, { "msg_contents": "Hi,\n\n>>So, why not just redirect people to one of the mirrors listed? 
This\n>>could be done based on IP (yes it is inaccurate but it is close enough\n>>and has the same net effect: pushing people off the main web server) or\n>>it could be done by simply redirecting to a random mirror.\nI think it would be stupid, I am, who wants to decide where to go. If I \nfeel that .co.uk is better than others I'll chose that, and bookmark if \nI want.\n(random??? brbrbrbrbr) :)\n\nC.\n\n", "msg_date": "Tue, 24 Sep 2002 14:23:21 +0200", "msg_from": "CoL <col@mportal.hu>", "msg_from_op": false, "msg_subject": "Re: Web site" }, { "msg_contents": "On Tue, Sep 24, 2002 at 03:59:33AM -0400, Vince Vielhaber wrote:\n> On Tue, 24 Sep 2002, Gavin Sherry wrote:\n> \n> > Hi all,\n> >\n> > It occurs to me that opening web page on www.postgresql.org, asking the\n> > user to select the mirror, is rather unprofessional. I am sure this has\n> > been discussed before but I thought I would bring it up again anyway.\n> \n> Your point?\n> \n> > So, why not just redirect people to one of the mirrors listed? This could\n> > be done based on IP (yes it is inaccurate but it is close enough and has\n> > the same net effect: pushing people off the main web server) or it could\n> > be done by simply redirecting to a random mirror.\n> \n> Been there, done that, didn't work. Too much of a job to keep track of\n> that many IP blocks too.\n \n\nI'd suggest setting a cookie, so I only see the 'pick a mirror' the\nfirst time. And provide a link to 'pick a different mirror' that resets\nor ignores the cookie.\n\nRoss\n", "msg_date": "Tue, 24 Sep 2002 10:24:01 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Web site" }, { "msg_contents": "On Tue, 24 Sep 2002, Ross J. 
Reedstrom wrote:\n\n> On Tue, Sep 24, 2002 at 03:59:33AM -0400, Vince Vielhaber wrote:\n> > On Tue, 24 Sep 2002, Gavin Sherry wrote:\n> >\n> > > Hi all,\n> > >\n> > > It occurs to me that opening web page on www.postgresql.org, asking the\n> > > user to select the mirror, is rather unprofessional. I am sure this has\n> > > been discussed before but I thought I would bring it up again anyway.\n> >\n> > Your point?\n> >\n> > > So, why not just redirect people to one of the mirrors listed? This could\n> > > be done based on IP (yes it is inaccurate but it is close enough and has\n> > > the same net effect: pushing people off the main web server) or it could\n> > > be done by simply redirecting to a random mirror.\n> >\n> > Been there, done that, didn't work. Too much of a job to keep track of\n> > that many IP blocks too.\n>\n>\n> I'd suggest setting a cookie, so I only see the 'pick a mirror' the\n> first time. And provide a link to 'pick a different mirror' that resets\n> or ignores the cookie.\n\nOr choose the mirror that works best for you and bookmark it.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 24 Sep 2002 11:26:55 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Web site" }, { "msg_contents": "On Tue, Sep 24, 2002 at 11:26:55AM -0400, Vince Vielhaber wrote:\n> On Tue, 24 Sep 2002, Ross J. Reedstrom wrote:\n> >\n> > I'd suggest setting a cookie, so I only see the 'pick a mirror' the\n> > first time. 
And provide a link to 'pick a different mirror' that resets\n> > or ignores the cookie.\n> \n> Or choose the mirror that works best for you and bookmark it.\n\nOf course, that's what _I_ do, but the dicussion was how to make the\nfrontpage 'user friendly' and 'professional'. Lots of global corps. do\nthe 'pick your geographical region' thing, but mainly for sales reasons,\nso it must be professional, right?\n\nRoss\n", "msg_date": "Tue, 24 Sep 2002 10:38:46 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: Web site" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> It occurs to me that opening web page on www.postgresql.org, asking the\n> user to select the mirror, is rather unprofessional.\n\nI agree; not only that, it has advertisements on it. What's the\njustification for that, considering that none of the mirror sites\n(AFAIK) have ads on them?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "24 Sep 2002 12:54:21 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Web site" }, { "msg_contents": "Neil Conway wrote:\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > It occurs to me that opening web page on www.postgresql.org, asking the\n> > user to select the mirror, is rather unprofessional.\n> \n> I agree; not only that, it has advertisements on it. What's the\n> justification for that, considering that none of the mirror sites\n> (AFAIK) have ads on them?\n\nI wondered that myself.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 24 Sep 2002 13:40:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Web site" }, { "msg_contents": "Ross J. 
Reedstrom wrote:\n\n> On Tue, Sep 24, 2002 at 03:59:33AM -0400, Vince Vielhaber wrote:\n> > On Tue, 24 Sep 2002, Gavin Sherry wrote:\n> >\n> > > Hi all,\n> > >\n> > > It occurs to me that opening web page on www.postgresql.org, asking\nthe\n> > > user to select the mirror, is rather unprofessional. I am sure this\nhas\n> > > been discussed before but I thought I would bring it up again anyway.\n> >\n> > Your point?\n> >\n> > > So, why not just redirect people to one of the mirrors listed? This\ncould\n> > > be done based on IP (yes it is inaccurate but it is close enough and\nhas\n> > > the same net effect: pushing people off the main web server) or it\ncould\n> > > be done by simply redirecting to a random mirror.\n> >\n> > Been there, done that, didn't work. Too much of a job to keep track of\n> > that many IP blocks too.\n>\n>\n> I'd suggest setting a cookie, so I only see the 'pick a mirror' the\n> first time. And provide a link to 'pick a different mirror' that resets\n> or ignores the cookie.\n\nIf I had a vote, I would vote +1 for this option. I think it's easy to\nimplement and shouldn't have negative effects on performance.\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Tue, 24 Sep 2002 22:20:39 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: Web site" }, { "msg_contents": "On 24 Sep 2002, Neil Conway wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > It occurs to me that opening web page on www.postgresql.org, asking the\n> > user to select the mirror, is rather unprofessional.\n>\n> I agree; not only that, it has advertisements on it. What's the\n> justification for that, considering that none of the mirror sites\n> (AFAIK) have ads on them?\n\nActually, that is part of the redesign as well ...\n\n\n", "msg_date": "Wed, 25 Sep 2002 22:23:51 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Web site" }, { "msg_contents": "> > > could be done based on IP (yes it is inaccurate but it is close enough\n> > > and has the same net effect: pushing people off the main web server) or\n> > > it could be done by simply redirecting to a random mirror.\n> > \n> > Have tried both in the past with disastrous results ...\n> \n> What method will be employed instead?\n\nAnyone thought about using GeoIP and writing a script that'd dump the\ndatabase into something that could be postgresql usable? Just a\nthought. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 27 Sep 2002 02:22:55 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: Web site" } ]
[ { "msg_contents": "Just an FYI - this kind of thing would be a *great* feature addition to the generic PostgresSQL release. We at Lyris often hear that \"postgressql is very slow, and the files are getting larger\" and then \"wow! it's so much faster now that we're regularly vacuuming!\" after we let them know about this need (the RPM install of PostgresSQL is so easy that most people don't read any docs). Automatic maintenance of database tables is a Good Thing (tm) and would make more people we introduce to pgsql favorably disposed toward it.\n\n-john\n\n\n> I have written a small daemon that can automatically vacuum PostgreSQL \n> database, depending upon activity per table.\n\n> It sits on top of postgres statistics collector. The postgres installation \n> should have per row statistics collection enabled.\n\n> Features are,\n\n> * Vacuuming based on activity on the table\n> * Per table vacuum. So only heavily updated tables are vacuumed.\n> * multiple databases supported\n> * Performs 'vacuum analyze' only, so it will not block the database\n\n\n> The project location is \n> http://gborg.postgresql.org/project/pgavd/projdisplay.php\n\n> Let me know for bugs/improvements and comments.. \n\n> I am sure real world postgres installations has some sort of scripts doing \n> similar thing. This is an attempt to provide a generic interface to periodic \n> vacuum.\n\n\n> Bye\n> Shridhar\n\n> --\n> The Abrams' Principle: The shortest distance between two points is off the \n> wall.\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Mon, 23 Sep 2002 22:18:02 -0700", "msg_from": "John Buckman <john@lyris.com>", "msg_from_op": true, "msg_subject": "Re: Postgresql Automatic vacuum" } ]
[ { "msg_contents": "Rohit Seth recently added support for the use of large TLB pages on\nLinux if the processor architecture supports them (I believe the\nSPARC, IA32, and IA64 have hugetlb support, more archs will probably\nbe added). The patch was merged into Linux 2.5.36, so it will more\nthan likely be in Linux 2.6. For more information on large TLB pages\nand why they are generally viewed to improve database performance, see\nhere:\n\n http://lwn.net/Articles/6535/ (the patch this refers to is an\n earlier implementation, I believe, but the idea is the same)\n http://lwn.net/Articles/10293/ (item #4)\n\nI'd like to enable PostgreSQL to use large TLB pages, if the OS and\nprocessor support them. In talking to the author of the TLB patches\nfor Linux (Rohit Seth), he described the current API:\n\n======\n1) Only two system calls. These are:\n\nsys_alloc_hugepages(int key, unsigned long addr, unsigned long len,\n int prot, int flag)\n\nsys_free_hugepages(unsigned long addr)\n\nKey will be equal to zero if user wants these huge pages as private.\nA positive int value will be used for unrelated apps to share the same\nphysical huge pages.\n\naddr is the user prefered address. The kernel may decide to allocate\na different virtual address (depending on availability and alignment\nfactors).\n\nlen is the requested size of memory wanted by user app.\n\nprot could get the value of PROT_READ, PROT_WRITE, PROT_EXEC\n\nflag: The only allowed value right now is IPC_CREAT, which in case of\nshred hugepages (across processes) tells the kernel to create a new\nsegment if none is already created. If this flag is not provided and\nthere is no hugepage segment corresponding to the \"key\" then ENOENT is\nreturned. More like on the lines of IPC_CREAT flag for shmget\nroutine.\n\nOn success sys_alloc_hugepages returns the virtual address allocated\nby kernel.\n=====\n\nSo as I understand it, we would basically replace the calls to\nshmget(), shmdt(), etc. with these system calls. 
The behavior will be\nslightly different, however -- I'm not sure if this API supports\neverything we expect the SysV IPC API to support (e.g. telling the #\nof clients attached to a given segment). Can anyone comment on\nexactly what functionality we expect when dealing with the storage\nmechanism of the shared buffer?\n\nAny comments would be appreciated.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "24 Sep 2002 09:50:02 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "making use of large TLB pages" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> I'd like to enable PostgreSQL to use large TLB pages, if the OS and\n> processor support them.\n\nHmm ... it seems interesting, but I'm hesitant to do a lot of work\nto support something that's only available on one hardware-and-OS\ncombination. (If we were talking about a Windows-specific hack,\nyou'd already have lost the audience, no? But I digress.)\n\n> So as I understand it, we would basically replace the calls to\n> shmget(), shmdt(), etc. with these system calls. The behavior will be\n> slightly different, however -- I'm not sure if this API supports\n> everything we expect the SysV IPC API to support (e.g. telling the #\n> of clients attached to a given segment).\n\nI trust it at least supports inheriting the page mapping over a fork()?\n\n> Can anyone comment on\n> exactly what functionality we expect when dealing with the storage\n> mechanism of the shared buffer?\n\nThe only thing we use beyond the obvious \"here's some memory accessible\nby both parent and child processes\" is the #-of-clients functionality\nyou mentioned. The reason that that is interesting is it provides a\nsafety interlock against the case where a postmaster has crashed but\nleft child backends running. 
If a new postmaster is started and starts\nits own collection of children then we are in very bad hot water,\nbecause the old and new backend sets will be modifying the same database\nfiles without any mutual awareness or interlocks. This *will* lead to\nserious, possibly unrecoverable database corruption.\n\nThe SysV API provides a reliable interlock to prevent this scenario:\nwe read the old shared memory block ID from the old postmaster's\npostmaster.pid file, and look to see if that block (a) still exists\nand (b) still has attached processes (presumably backends). If it's\ngone or has no attached processes, it's safe for the new postmaster\nto continue startup.\n\nI have little love for the SysV shmem API, but I haven't thought of\nan equivalently reliable interlock for this scenario without it.\n(For example, something along the lines of requiring each backend\nto write its PID into a file isn't very reliable at all: it leaves\na window at each backend start where the backend hasn't yet written\nits PID, and it increases by a large factor the risk we've already\nseen wherein stale PID entries in lockfiles might by chance match the\nPIDs of other, unrelated processes.)\n\nAny ideas for better answers?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 00:49:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making use of large TLB pages " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> Neil Conway <neilc@samurai.com> writes:\n> > I'd like to enable PostgreSQL to use large TLB pages, if the OS\n> > and processor support them.\n> \n> Hmm ... it seems interesting, but I'm hesitant to do a lot of work\n> to support something that's only available on one hardware-and-OS\n> combination.\n\nTrue; further, I personally find the current API a little\ncumbersome. 
For example, we get 4MB pages on Solaris with a few lines\nof code:\n\n#if defined(solaris) && defined(__sparc__) /* use intimate shared\n\t\tmemory on SPARC Solaris */ memAddress = shmat(shmid, 0,\n\t\tSHM_SHARE_MMU);\n\nBut given that\n\n (a) Linux on x86 is probably our most popular platform\n\n (b) Every x86 since the Pentium has supported large pages\n\n (c) Other archs, like IA64 and SPARC, also support large pages\n\nI think it's worthwhile implementing this, if possible.\n\n> I trust it at least supports inheriting the page mapping over a\n> fork()?\n\nI'll check on this, but I'm pretty sure that it does.\n\n> The SysV API provides a reliable interlock to prevent this scenario:\n> we read the old shared memory block ID from the old postmaster's\n> postmaster.pid file, and look to see if that block (a) still exists\n> and (b) still has attached processes (presumably backends).\n\nIf the postmaster is starting up and the segment still exists, could\nwe assume that's an error condition, and force the admin to manually\nfix it? It does make the system less robust, but I'm suspicious of any\nattempts to automagically fix a situation in which we *know* something\nhas gone seriously wrong...\n\nAnother possibility might be to still allocate a small SysV shmem\narea, and use that to provide the interlock, while we allocate the\nbuffer area using sys_alloc_hugepages. 
That's somewhat of a hack, but\nI think it would resolve the interlock problem, at least.\n\n> Any ideas for better answers?\n\nStill scratching my head on this one, and I'll let you know if I think\nof anything better.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "25 Sep 2002 13:07:19 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: making use of large TLB pages" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> I think it's worthwhile implementing this, if possible.\n\nI wasn't objecting (I work for Red Hat, remember ;-)). I was just\nsaying there's a limit to the messiness I think we should accept.\n\n>> The SysV API provides a reliable interlock to prevent this scenario:\n>> we read the old shared memory block ID from the old postmaster's\n>> postmaster.pid file, and look to see if that block (a) still exists\n>> and (b) still has attached processes (presumably backends).\n\n> If the postmaster is starting up and the segment still exists, could\n> we assume that's an error condition, and force the admin to manually\n> fix it?\n\nIt wasn't clear from your description whether large-TLB shmem segments\neven have IDs that one could use to determine whether \"the segment still\nexists\". If the segments are anonymous then how do you do that?\n\n> It does make the system less robust, but I'm suspicious of any\n> attempts to automagically fix a situation in which we *know* something\n> has gone seriously wrong...\n\nWe've spent a lot of effort on trying to ensure that we (a) start up\nwhen it's safe and (b) refuse to start up when it's not safe. While (b)\nis clearly the more critical point, backsliding on (a) isn't real nice\neither. 
People don't like postmasters that randomly fail to start.\n\n> Another possibility might be to still allocate a small SysV shmem\n> area, and use that to provide the interlock, while we allocate the\n> buffer area using sys_alloc_hugepages. That's somewhat of a hack, but\n> I think it would resolve the interlock problem, at least.\n\nNot a bad idea ... I have not got a better one offhand ... but watch\nout for SHMMIN settings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 13:30:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making use of large TLB pages " }, { "msg_contents": "Okay, I did some more research into this area. It looks like it will\nbe feasible to use large TLB pages for PostgreSQL.\n\nTom Lane <tgl@sss.pgh.pa.us> writes:\n> It wasn't clear from your description whether large-TLB shmem segments\n> even have IDs that one could use to determine whether \"the segment still\n> exists\".\n\nThere are two types of hugepages:\n\n (a) private: Not shared on fork(), not accessible to processes\n other than the one that allocates the pages.\n\n (b) shared: Shared across a fork(), accessible to other\n processes: different processes can access the same segment\n if they call sys_alloc_hugepages() with the same key.\n\nSo for a standalone backend, we can just use private pages (probably\nworth using private hugepages rather than malloc, although I doubt it\nmatters much either way).\n\n> > Another possibility might be to still allocate a small SysV shmem\n> > area, and use that to provide the interlock, while we allocate the\n> > buffer area using sys_alloc_hugepages. That's somewhat of a hack, but\n> > I think it would resolve the interlock problem, at least.\n> \n> Not a bad idea ... I have not got a better one offhand ... but watch\n> out for SHMMIN settings.\n\nAs it turns out, this will be completely unnecessary. 
Since hugepages\nare an in-kernel data structure, the kernel takes care of ensuring\nthat dieing processes don't orphan any unused hugepage segments. The\nlogic works like this: (for shared hugepages)\n\n (a) sys_alloc_hugepages() without IPC_EXCL will return a\n pointer to an existing segment, if there is one that\n matches the key. If an existing segment is found, the\n usage counter for that segment is incremented. If no\n matching segment exists, an error is returned. (I'm pretty\n sure the usage counter is also incremented after a fork(),\n but I'll double-check that.)\n\n (b) sys_free_hugepages() decrements the usage counter\n\n (c) when a process that has allocated a shared hugepage dies\n for *any reason* (even kill -9), the usage counter is\n decremented\n\n (d) if the usage counter for a given segment ever reaches\n zero, the segment is deleted and the memory is free'd.\n\nIf we used a key that would remain the same between runs of the\npostmaster, this should ensure that there isn't a possibility of two\nindependant sets of backends operating on the same data dir. The most\nlogical way to do this IMHO would be to just hash the data dir, but I\nsuppose the current method of using the port number should work as\nwell.\n\nTo elaborate on (a) a bit, we'd want to use this logic when allocating\na new set of hugepages on postmaster startup:\n\n (1) call sys_alloc_hugepages() without IPC_EXCL. If it returns\n an error, we're in the clear: there's no page matching\n that key. If it returns a pointer to a previously existing\n segment, panic: it is very likely that there are some\n orphaned backends still active.\n\n (2) If the previous call didn't find anything, call\n sys_alloc_hugepages() again, specifying IPC_EXCL to create\n a new segment.\n\nNow, the question is: how should this be implemented? 
You recently\ndid some of the legwork toward supporting different APIs for shared\nmemory / semaphores, which makes this work easier -- unfortunately,\nsome additional stuff is still needed. Specifically, support for\nhugepages is a configuration option, that may or may not be enabled\n(if it's disabled, the syscall returns a specific error). So I believe\nthe logic is something like:\n\n - if compiling on a Linux system, enable support for hugepages\n (the regular SysV stuff is still needed as a backup)\n\n - if we're compiling on a Linux system but the kernel headers\n don't define the syscalls we need, use some reasonable\n defaults (e.g. the syscall numbers for the current hugepage\n syscalls in Linux 2.5)\n\n - at runtime, try to make one of these syscalls. If it fails,\n fall back to the SysV stuff.\n\nDoes that sound reasonable?\n\nAny other comments would be appreciated.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "28 Sep 2002 01:30:45 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: making use of large TLB pages" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> If we used a key that would remain the same between runs of the\n> postmaster, this should ensure that there isn't a possibility of two\n> independant sets of backends operating on the same data dir. The most\n> logical way to do this IMHO would be to just hash the data dir, but I\n> suppose the current method of using the port number should work as\n> well.\n\nYou should stick as closely as possible to the key logic currently used\nfor SysV shmem keys. That logic is intended to cope with the case where\nsomeone else is already using the key# that we initially generate, as\nwell as the case where we discover a collision with a pre-existing\nbackend set. 
(We tell the difference by looking for a magic number at\nthe start of the shmem segment.)\n\nNote that we do not assume the key is the same on each run; that's why\nwe store it in postmaster.pid.\n\n> (1) call sys_alloc_hugepages() without IPC_EXCL. If it returns\n> an error, we're in the clear: there's no page matching\n> that key. If it returns a pointer to a previously existing\n> segment, panic: it is very likely that there are some\n> orphaned backends still active.\n\ns/panic/and the PG magic number appears in the segment header, panic/\n\n> - if we're compiling on a Linux system but the kernel headers\n> don't define the syscalls we need, use some reasonable\n> defaults (e.g. the syscall numbers for the current hugepage\n> syscalls in Linux 2.5)\n\nI think this is overkill, and quite possibly dangerous. If we don't see\nthe symbols then don't try to compile the code.\n\nOn the whole it seems that this allows a very nearly one-to-one mapping\nto the existing SysV functionality. We don't have the \"number of\nconnected processes\" syscall, perhaps, but we don't need it: if a\nhugepages segment exists we can assume the number of connected processes\nis greater than 0, and that's all we really need to know.\n\nI think it's okay to stuff this support into the existing\nport/sysv_shmem.c file, rather than make a separate file (particularly\ngiven your point that we have to be able to fall back to SysV calls at\nruntime). I'd suggest reorganizing the code in that file slightly to\nseparate the actual syscalls from the controlling logic in\nPGSharedMemoryCreate(). 
Probably also will have to extend the API for\nPGSharedMemoryIsInUse() and RecordSharedMemoryInLockFile() to allow\nthree fields to be recorded in postmaster.pid, not two --- you'll want\na boolean indicating whether the stored key is for a SysV or hugepage\nsegment.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 11:05:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making use of large TLB pages " }, { "msg_contents": "\nI haven't been following this thread. Can someone answer:\n\n\tIs TLB Linux-only?\n\tWhy use it and non SysV memory?\n\tIs it a lot of code?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Neil Conway <neilc@samurai.com> writes:\n> > If we used a key that would remain the same between runs of the\n> > postmaster, this should ensure that there isn't a possibility of two\n> > independant sets of backends operating on the same data dir. The most\n> > logical way to do this IMHO would be to just hash the data dir, but I\n> > suppose the current method of using the port number should work as\n> > well.\n> \n> You should stick as closely as possible to the key logic currently used\n> for SysV shmem keys. That logic is intended to cope with the case where\n> someone else is already using the key# that we initially generate, as\n> well as the case where we discover a collision with a pre-existing\n> backend set. (We tell the difference by looking for a magic number at\n> the start of the shmem segment.)\n> \n> Note that we do not assume the key is the same on each run; that's why\n> we store it in postmaster.pid.\n> \n> > (1) call sys_alloc_hugepages() without IPC_EXCL. If it returns\n> > an error, we're in the clear: there's no page matching\n> > that key. 
If it returns a pointer to a previously existing\n> > segment, panic: it is very likely that there are some\n> > orphaned backends still active.\n> \n> s/panic/and the PG magic number appears in the segment header, panic/\n> \n> > - if we're compiling on a Linux system but the kernel headers\n> > don't define the syscalls we need, use some reasonable\n> > defaults (e.g. the syscall numbers for the current hugepage\n> > syscalls in Linux 2.5)\n> \n> I think this is overkill, and quite possibly dangerous. If we don't see\n> the symbols then don't try to compile the code.\n> \n> On the whole it seems that this allows a very nearly one-to-one mapping\n> to the existing SysV functionality. We don't have the \"number of\n> connected processes\" syscall, perhaps, but we don't need it: if a\n> hugepages segment exists we can assume the number of connected processes\n> is greater than 0, and that's all we really need to know.\n> \n> I think it's okay to stuff this support into the existing\n> port/sysv_shmem.c file, rather than make a separate file (particularly\n> given your point that we have to be able to fall back to SysV calls at\n> runtime). I'd suggest reorganizing the code in that file slightly to\n> separate the actual syscalls from the controlling logic in\n> PGSharedMemoryCreate(). Probably also will have to extend the API for\n> PGSharedMemoryIsInUse() and RecordSharedMemoryInLockFile() to allow\n> three fields to be recorded in postmaster.pid, not two --- you'll want\n> a boolean indicating whether the stored key is for a SysV or hugepage\n> segment.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 29 Sep 2002 01:39:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: making use of large TLB pages" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \tIs TLB Linux-only?\n\nWell, the \"TLB\" is a feature of the CPU, so no. Many modern processors\nsupport large TLB pages in some fashion.\n\nHowever, the specific API for using large TLB pages differs between\noperating systems. The API I'm planning to implement is the one\nprovided by recent versions of Linux (2.5.38+).\n\nI've only looked briefly at enabling the usage of large pages on other\noperating systems. On Solaris, we already use large pages (due to\nusing Intimate Shared Memory). On HPUX, you apparently need call\nchattr on the executable for it to use large pages. AFAIK the BSDs\ndon't support large pages for user-land apps -- if I'm incorrect, let\nme know.\n\n> \tWhy use it and non SysV memory?\n\nIt's faster, at least in theory. I posted these links at the start of\nthe thread:\n\n http://lwn.net/Articles/6535/\n http://lwn.net/Articles/10293/\n\n> \tIs it a lot of code?\n\nI haven't implemented it yet, so I'm not sure. However, I don't think\nit will be a lot of code.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "29 Sep 2002 02:04:40 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: making use of large TLB pages" }, { "msg_contents": "Neil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > \tIs TLB Linux-only?\n> \n> Well, the \"TLB\" is a feature of the CPU, so no. Many modern processors\n> support large TLB pages in some fashion.\n> \n> However, the specific API for using large TLB pages differs between\n> operating systems. 
The API I'm planning to implement is the one\n> provided by recent versions of Linux (2.5.38+).\n> \n> I've only looked briefly at enabling the usage of large pages on other\n> operating systems. On Solaris, we already use large pages (due to\n> using Intimate Shared Memory). On HPUX, you apparently need call\n> chattr on the executable for it to use large pages. AFAIK the BSDs\n> don't support large pages for user-land apps -- if I'm incorrect, let\n> me know.\n> \n> > \tWhy use it and non SysV memory?\n> \n> It's faster, at least in theory. I posted these links at the start of\n> the thread:\n> \n> http://lwn.net/Articles/6535/\n> http://lwn.net/Articles/10293/\n> \n> > \tIs it a lot of code?\n> \n> I haven't implemented it yet, so I'm not sure. However, I don't think\n> it will be a lot of code.\n\nOK, personally, I would like to see an actual speedup of PostgreSQL\nqueries before I would apply such a OS-specific, version-specific patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 29 Sep 2002 09:38:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: making use of large TLB pages" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, personally, I would like to see an actual speedup of PostgreSQL\n> queries before I would apply such a OS-specific, version-specific\n> patch.\n\nDon't be silly. A performance improvement is a performance\nimprovement. 
According to your logic, using assembly-optimized locking\nprimitives shouldn't be done unless we've exhausted every possible\noptimization in every other part of the system (a process which will\nlikely never be finished).\n\nIf the optimization was for some obscure UNIX variant and/or an\nobscure processor, I would agree that it wouldn't be worth the\nbother. But given that\n\n (a) Linux on IA32 is likely our most popular platform [1]\n\n (b) In theory, this will help performance where we need it\n most, IMHO (high-end systems using large shared buffers)\n\nI think it's at least worth implementing -- if it doesn't provide a\nnoticeable performance improvement, then we don't need to merge it.\n\nCheers,\n\nNeil\n\n[1] It's worth noting that the huge tlb patch currently works in IA64,\nSPARC, and may well be ported to additional architectures in the\nfuture.\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "29 Sep 2002 11:21:22 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: making use of large TLB pages" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> OK, personally, I would like to see an actual speedup of PostgreSQL\n>> queries before I would apply such a OS-specific, version-specific\n>> patch.\n\n> Don't be silly. A performance improvement is a performance\n> improvement.\n\nNo, Bruce was saying that he wanted to see demonstrable improvement\n*due to this specific change* before committing to support a\nplatform-specific API. I agree with him, actually. If you do the\nTLB code and can't measure any meaningful performance improvement\nwhen using it vs. 
when not, I'd not be excited about cluttering the\ndistribution with it.\n\n> I think it's at least worth implementing -- if it doesn't provide a\n> noticeable performance improvement, then we don't need to merge it.\n\nYou're on the same page, you just don't realize it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Sep 2002 11:36:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: making use of large TLB pages " }, { "msg_contents": "Tom Lane wrote:\n> Neil Conway <neilc@samurai.com> writes:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> OK, personally, I would like to see an actual speedup of PostgreSQL\n> >> queries before I would apply such a OS-specific, version-specific\n> >> patch.\n> \n> > Don't be silly. A performance improvement is a performance\n> > improvement.\n> \n> No, Bruce was saying that he wanted to see demonstrable improvement\n> *due to this specific change* before committing to support a\n> platform-specific API. I agree with him, actually. If you do the\n> TLB code and can't measure any meaningful performance improvement\n> when using it vs. when not, I'd not be excited about cluttering the\n> distribution with it.\n> \n> > I think it's at least worth implementing -- if it doesn't provide a\n> > noticeable performance improvement, then we don't need to merge it.\n> \n> You're on the same page, you just don't realize it...\n\nI see what he thought I said, I just can't figure out how he read it\nthat way.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 29 Sep 2002 17:30:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: making use of large TLB pages" }, { "msg_contents": "Neil,\n\nI agree with Bruce and Tom. 
AFAIK and in my experience I don't think it\nwill be a significantly measurable increase. Not only that, but the\nportability issue itself tends to make it less desireable. I recently\nported SAP DB and the coinciding DevTools over to OpenBSD and learned again\nfirst-hand what a pain in the ass having platform-specific code is. I guess\nit's up to you, Neil. If you want to spend the time trying to implement it,\nand it does prove to have a significant performance increase I'd say maybe.\nIMHO, I just think that time could be better spent improving the current\nsystem rather than trying to add to it in a singular way. Sorry if my\ncomments are out-of-line on this one but it has been a thread for some time\nI'm just kinda tired of reading theory vs proof.\n\nSince you are so set on trying to implement this, I'm just wondering what\ndocumentation has tested evidence of measurable increases in similar\nsituations? I just like arguments to be backed by proof... and I'm sure\nthere is documentation on this somewhere.\n\n-Jonah\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\nSent: Sunday, September 29, 2002 3:30 PM\nTo: Tom Lane\nCc: Neil Conway; PostgreSQL Hackers\nSubject: Re: [HACKERS] making use of large TLB pages\n\n\nTom Lane wrote:\n> Neil Conway <neilc@samurai.com> writes:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> OK, personally, I would like to see an actual speedup of PostgreSQL\n> >> queries before I would apply such a OS-specific, version-specific\n> >> patch.\n>\n> > Don't be silly. A performance improvement is a performance\n> > improvement.\n>\n> No, Bruce was saying that he wanted to see demonstrable improvement\n> *due to this specific change* before committing to support a\n> platform-specific API. I agree with him, actually. If you do the\n> TLB code and can't measure any meaningful performance improvement\n> when using it vs. 
when not, I'd not be excited about cluttering the\n> distribution with it.\n>\n> > I think it's at least worth implementing -- if it doesn't provide a\n> > noticeable performance improvement, then we don't need to merge it.\n>\n> You're on the same page, you just don't realize it...\n\nI see what he thought I said, I just can't figure out how he read it\nthat way.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to majordomo@postgresql.org so that your\nmessage can get through to the mailing list cleanly\n\n", "msg_date": "Sun, 29 Sep 2002 20:49:12 -0600", "msg_from": "\"Jonah H. Harris\" <jharris@nightstarcorporation.com>", "msg_from_op": false, "msg_subject": "Re: making use of large TLB pages" }, { "msg_contents": "\"Jonah H. Harris\" <jharris@nightstarcorporation.com> writes:\n> I agree with Bruce and Tom.\n\nAFAIK Bruce and Tom (and myself) agree that this is a good idea,\nprovided it makes a noticeable performance difference (and if it\ndoesn't, it's not worth applying).\n\n> AFAIK and in my experience I don't think it will be a significantly\n> measurable increase.\n\nCan you elaborate on this experience?\n \n> Not only that, but the portability issue itself tends to make it\n> less desireable.\n\nWell, that's obvious: code that improves PostgreSQL on *all* platforms\nis clearly superior to code that only improves it on a couple. 
That's\nnot to say that the latter code is absolutely without merit, however.\n\n> Sorry if my comments are out-of-line on this one but it has been a\n> thread for some time I'm just kinda tired of reading theory vs\n> proof.\n\nWell, ISTM the easiest way to get some \"proof\" is to implement it and\nbenchmark the results. IMHO any claims about performance prior to that\nare mostly hand waving.\n\n> Since you are so set on trying to implement this, I'm just wondering\n> what documentation has tested evidence of measurable increases in\n> similar situations?\n\n(/me wonders if people bother reading the threads they reply to)\n\nhttp://lwn.net/Articles/10293/\n\nAccording to the HP guys, Oracle saw an 8% performance improvement in\nTPC-C when they started using large pages.\n\nTo be perfectly honest, I really have no idea if that will translate\ninto an 8% performance gain for PostgreSQL, or whether the performance\ngain only applies if you're using a machine with 16GB of RAM, or\nwhether the speedup from large pages is really just a correction of\nsome Oracle deficiency that we don't suffer from, etc. However, I do\nthink it's worth finding out.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "29 Sep 2002 23:03:52 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "Re: making use of large TLB pages" } ]
[ { "msg_contents": "The Makefile for contrib/earthdistance indicates that there should be a\nregression test, but the files seem to be missing from CVS. The change to the\nMakefile was made here:\n\nhttp://developer.postgresql.org/cvsweb.cgi/contrib/earthdistance/Makefile.diff?r1=1.11&r2=1.12\n\nWas the Makefile change a mistake, or are there files missing?\n\nJoe\n\n\n", "msg_date": "Tue, 24 Sep 2002 10:43:51 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "contrib/earthdistance missing regression test files" }, { "msg_contents": "On Tue, Sep 24, 2002 at 10:43:51 -0700,\n Joe Conway <mail@joeconway.com> wrote:\n> The Makefile for contrib/earthdistance indicates that there should be a\n> regression test, but the files seem to be missing from CVS. The change to \n> the\n> Makefile was made here:\n> \n> http://developer.postgresql.org/cvsweb.cgi/contrib/earthdistance/Makefile.diff?r1=1.11&r2=1.12\n> \n> Was the Makefile change a mistake, or are there files missing?\n\nThere is supposed to be a regression test. I may have forgotten to use\n-N or -r on the diff. If it is confirmed that the files needed for the\nregression test didn't make it into the submitted diff file, I can send\nin a diff versus current cvs.\n", "msg_date": "Tue, 24 Sep 2002 15:02:20 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files" }, { "msg_contents": "On Tue, Sep 24, 2002 at 15:02:20 -0500,\n Bruno Wolff III <bruno@wolff.to> wrote:\n> \n> There is supposed to be a regression test. I may have forgotten to use\n> -N or -r on the diff. 
If it is confirmed that the files needed for the\n> regression test didn't make it into the submitted diff file, I can send\n> in a diff versus current cvs.\n\nI still have a copy of the diff file (at least I think it is the one I\nsent in) and it has the regression sql and output files defined.\n", "msg_date": "Tue, 24 Sep 2002 15:07:38 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Tue, Sep 24, 2002 at 15:02:20 -0500,\n> Bruno Wolff III <bruno@wolff.to> wrote:\n> > \n> > There is supposed to be a regression test. I may have forgotten to use\n> > -N or -r on the diff. If it is confirmed that the files needed for the\n> > regression test didn't make it into the submitted diff file, I can send\n> > in a diff versus current cvs.\n> \n> I still have a copy of the diff file (at least I think it is the one I\n> sent in) and it has the regression sql and output files defined.\n\nYep, I missed adding earthdistance.out. Is that the only file.\n\nI usually do a 'gmake distclean' and 'cvs update' after a batch of\npatches to see that there aren't any new files in my CVS tree. I missed\nit this time.\n\nDoes that fix the problem?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 24 Sep 2002 16:10:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files" }, { "msg_contents": "On Tue, Sep 24, 2002 at 16:10:19 -0400,\n Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> Bruno Wolff III wrote:\n> > On Tue, Sep 24, 2002 at 15:02:20 -0500,\n> > Bruno Wolff III <bruno@wolff.to> wrote:\n> > > \n> > > There is supposed to be a regression test. 
I may have forgotten to use\n> > > -N or -r on the diff. If it is confirmed that the files needed for the\n> > > regression test didn't make it into the submitted diff file, I can send\n> > > in a diff versus current cvs.\n> > \n> > I still have a copy of the diff file (at least I think it is the one I\n> > sent in) and it has the regression sql and output files defined.\n> \n> Yep, I missed adding earthdistance.out. Is that the only file.\n\nNo. The new files are:\nexpected/earthdistance.out (which you metion above)\nsql/earthdistance.sql\n", "msg_date": "Tue, 24 Sep 2002 15:41:32 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files" }, { "msg_contents": "Bruno Wolff III wrote:\n> On Tue, Sep 24, 2002 at 16:10:19 -0400,\n> Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> > Bruno Wolff III wrote:\n> > > On Tue, Sep 24, 2002 at 15:02:20 -0500,\n> > > Bruno Wolff III <bruno@wolff.to> wrote:\n> > > > \n> > > > There is supposed to be a regression test. I may have forgotten to use\n> > > > -N or -r on the diff. If it is confirmed that the files needed for the\n> > > > regression test didn't make it into the submitted diff file, I can send\n> > > > in a diff versus current cvs.\n> > > \n> > > I still have a copy of the diff file (at least I think it is the one I\n> > > sent in) and it has the regression sql and output files defined.\n> > \n> > Yep, I missed adding earthdistance.out. Is that the only file.\n> \n> No. 
The new files are:\n> expected/earthdistance.out (which you metion above)\n> sql/earthdistance.sql\n\nOK, here's what I see now in CVS:\n\n\t#$ pwd\n\t/pgtop/contrib/earthdistance\n\t#$ lf\n\tCVS/ README.earthdistance earthdistance.out\n\tMakefile earthdistance.c earthdistance.sql.in\n\nWhat should be changed?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 24 Sep 2002 17:03:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> No. The new files are:\n>> expected/earthdistance.out (which you metion above)\n>> sql/earthdistance.sql\n\n> OK, here's what I see now in CVS:\n\n> \t#$ pwd\n> \t/pgtop/contrib/earthdistance\n> \t#$ lf\n> \tCVS/ README.earthdistance earthdistance.out\n> \tMakefile earthdistance.c earthdistance.sql.in\n\n> What should be changed?\n\nThe earthdistance.out file should be in an expected/ subdirectory, not\ndirectly in the contrib/earthdistance directory. Also, there is a\nmissing regression input script file earthdistance.sql (this is not\nrelated to earthdistance.sql.in), which should be in a sql/\nsubdirectory.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Sep 2002 17:33:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> No. 
The new files are:\n> >> expected/earthdistance.out (which you metion above)\n> >> sql/earthdistance.sql\n> \n> > OK, here's what I see now in CVS:\n> \n> > \t#$ pwd\n> > \t/pgtop/contrib/earthdistance\n> > \t#$ lf\n> > \tCVS/ README.earthdistance earthdistance.out\n> > \tMakefile earthdistance.c earthdistance.sql.in\n> \n> > What should be changed?\n> \n> The earthdistance.out file should be in an expected/ subdirectory, not\n> directly in the contrib/earthdistance directory. Also, there is a\n> missing regression input script file earthdistance.sql (this is not\n> related to earthdistance.sql.in), which should be in a sql/\n> subdirectory.\n\nOK, done. I thought the earthdistance.sql file was derived from\nearthdistance.sql.in, and I thought the out was just a test file. I got\nthem fixed now, in their proper directory. How do I run the regression\ntests for /contrib stuff?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 24 Sep 2002 23:46:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> How do I run the regression tests for /contrib stuff?\n\nmake\nmake install\nmake installcheck\n\nAFAICT, earthdistance is nowhere near passing yet :-(. It looks to\nme like the regression test is depending on the cube-based features\nthat we decided to hold off for 7.4. 
Bruno, is that right?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 24 Sep 2002 23:57:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files " }, { "msg_contents": "On Tue, Sep 24, 2002 at 23:57:29 -0400,\n Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How do I run the regression tests for /contrib stuff?\n> \n> make\n> make install\n> make installcheck\n> \n> AFAICT, earthdistance is nowhere near passing yet :-(. It looks to\n> me like the regression test is depending on the cube-based features\n> that we decided to hold off for 7.4. Bruno, is that right?\n\nIt shouldn't be. When I resubmitted the patch I intended to take out\nall of the cube related tests. If there is a reference to cube in there\nit is by mistake.\n\n", "msg_date": "Wed, 25 Sep 2002 07:26:23 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files" }, { "msg_contents": "> > AFAICT, earthdistance is nowhere near passing yet :-(. It looks to\n> > me like the regression test is depending on the cube-based features\n> > that we decided to hold off for 7.4. Bruno, is that right?\n> \n> It shouldn't be. When I resubmitted the patch I intended to take out\n> all of the cube related tests. If there is a reference to cube in there\n> it is by mistake.\n\nI took a look at the diff file I submitted and the only reference to\ncube in the regression test was in a comment I didn't change after\nremoving the tests for the cube based distance stuff.\n", "msg_date": "Wed, 25 Sep 2002 07:41:20 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files" }, { "msg_contents": "\nOK, I reinstalled the proper earthdistance.out/sql files and it passes\nregession now. 
Sorry for the mistake.\n\n---------------------------------------------------------------------------\n\nBruno Wolff III wrote:\n> > > AFAICT, earthdistance is nowhere near passing yet :-(. It looks to\n> > > me like the regression test is depending on the cube-based features\n> > > that we decided to hold off for 7.4. Bruno, is that right?\n> > \n> > It shouldn't be. When I resubmitted the patch I intended to take out\n> > all of the cube related tests. If there is a reference to cube in there\n> > it is by mistake.\n> \n> I took a look at the diff file I submitted and the only reference to\n> cube in the regression test was in a comment I didn't change after\n> removing the tests for the cube based distance stuff.\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 25 Sep 2002 09:05:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I reinstalled the proper earthdistance.out/sql files and it passes\n> regession now. Sorry for the mistake.\n\nLooks good here too. Thanks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 12:24:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/earthdistance missing regression test files " } ]
[ { "msg_contents": "Hi,\n\nI'm looking at pg_dump/common.c:flagInhAttrs() and suspect that it can\nbe more or less rewritten completely, and probably should to get rigth\nall the cases mentioned in the past attisinherited discussion. Is this\ndesirable for 7.3? It can probably be hacked around and the rewrite\nkept for 7.4, but I think it will be much simpler after the rewrite.\n\nWhat do people think about this?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Siempre hay que alimentar a los dioses, aunque la tierra este seca\" (Orual)\n\n", "msg_date": "Tue, 24 Sep 2002 18:01:57 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "pg_dump and inherited attributes" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> I'm looking at pg_dump/common.c:flagInhAttrs() and suspect that it can\n> be more or less rewritten completely, and probably should to get rigth\n> all the cases mentioned in the past attisinherited discussion. Is this\n> desirable for 7.3? It can probably be hacked around and the rewrite\n> kept for 7.4, but I think it will be much simpler after the rewrite.\n\nIf it's a bug then it's fair game to fix in 7.3. But keep in mind that\npg_dump has to behave at least somewhat sanely when called against older\nservers ... will your rewrite behave reasonably if the server does not\noffer attinhcount values?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 00:01:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump and inherited attributes " }, { "msg_contents": "En Wed, 25 Sep 2002 00:01:24 -0400\nTom Lane <tgl@sss.pgh.pa.us> escribi�:\n\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > I'm looking at pg_dump/common.c:flagInhAttrs() and suspect that it can\n> > be more or less rewritten completely, and probably should to get rigth\n> > all the cases mentioned in the past attisinherited discussion. 
Is this\n> > desirable for 7.3? It can probably be hacked around and the rewrite\n> > kept for 7.4, but I think it will be much simpler after the rewrite.\n> \n> If it's a bug then it's fair game to fix in 7.3. But keep in mind that\n> pg_dump has to behave at least somewhat sanely when called against older\n> servers ... will your rewrite behave reasonably if the server does not\n> offer attinhcount values?\n\nNah. I don't think it's worth it: I had forgotten that older versions\nshould be supported. I just left the code as is and added a\nversion-specific test.\n\nThis patch allows pg_dump to dump correctly local definition of columns.\nIn particular,\n\nCREATE TABLE p1 (f1 int, f2 int);\nCREATE TABLE p2 (f1 int);\nCREATE TABLE c () INHERITS (p1, p2);\nALTER TABLE ONLY p1 DROP COLUMN f1;\nCREATE TABLE p3 (f1 int);\nCREATE TABLE c2 (f1 int) INHERITS (p3);\n\nWill be dumped as\nCREATE TABLE p1 (f2 int);\nCREATE TABLE p2 (f1 int);\nCREATE TABLE c (f1 int) INHERITS (p1, p2);\nCREATE TABLE c2 (f1 int) INHERITS (p3);\n\n(Previous version will dump\nCREATE TABLE c () INHERITS (p1, p2)\nCREATE TABLE c2 () INHERITS (p3) )\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nA male gynecologist is like an auto mechanic who never owned a car.\n- Carrie Snow", "msg_date": "Sat, 28 Sep 2002 15:44:02 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_dump and inherited attributes" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nAlvaro Herrera wrote:\n> En Wed, 25 Sep 2002 00:01:24 -0400\n> Tom Lane <tgl@sss.pgh.pa.us> escribi?:\n> \n> > Alvaro Herrera <alvherre@atentus.com> writes:\n> > > I'm looking at pg_dump/common.c:flagInhAttrs() and suspect that it can\n> > > be more or less rewritten 
completely, and probably should to get rigth\n> > > all the cases mentioned in the past attisinherited discussion. Is this\n> > > desirable for 7.3? It can probably be hacked around and the rewrite\n> > > kept for 7.4, but I think it will be much simpler after the rewrite.\n> > \n> > If it's a bug then it's fair game to fix in 7.3. But keep in mind that\n> > pg_dump has to behave at least somewhat sanely when called against older\n> > servers ... will your rewrite behave reasonably if the server does not\n> > offer attinhcount values?\n> \n> Nah. I don't think it's worth it: I had forgotten that older versions\n> should be supported. I just left the code as is and added a\n> version-specific test.\n> \n> This patch allows pg_dump to dump correctly local definition of columns.\n> In particular,\n> \n> CREATE TABLE p1 (f1 int, f2 int);\n> CREATE TABLE p2 (f1 int);\n> CREATE TABLE c () INHERITS (p1, p2);\n> ALTER TABLE ONLY p1 DROP COLUMN f1;\n> CREATE TABLE p3 (f1 int);\n> CREATE TABLE c2 (f1 int) INHERITS (p3);\n> \n> Will be dumped as\n> CREATE TABLE p1 (f2 int);\n> CREATE TABLE p2 (f1 int);\n> CREATE TABLE c (f1 int) INHERITS (p1, p2);\n> CREATE TABLE c2 (f1 int) INHERITS (p3);\n> \n> (Previous version will dump\n> CREATE TABLE c () INHERITS (p1, p2)\n> CREATE TABLE c2 () INHERITS (p3) )\n> \n> -- \n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> A male gynecologist is like an auto mechanic who never owned a car.\n> - Carrie Snow\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 21:17:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump and inherited attributes" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nAlvaro Herrera wrote:\n> En Wed, 25 Sep 2002 00:01:24 -0400\n> Tom Lane <tgl@sss.pgh.pa.us> escribi?:\n> \n> > Alvaro Herrera <alvherre@atentus.com> writes:\n> > > I'm looking at pg_dump/common.c:flagInhAttrs() and suspect that it can\n> > > be more or less rewritten completely, and probably should to get rigth\n> > > all the cases mentioned in the past attisinherited discussion. Is this\n> > > desirable for 7.3? It can probably be hacked around and the rewrite\n> > > kept for 7.4, but I think it will be much simpler after the rewrite.\n> > \n> > If it's a bug then it's fair game to fix in 7.3. But keep in mind that\n> > pg_dump has to behave at least somewhat sanely when called against older\n> > servers ... will your rewrite behave reasonably if the server does not\n> > offer attinhcount values?\n> \n> Nah. I don't think it's worth it: I had forgotten that older versions\n> should be supported. 
I just left the code as is and added a\n> version-specific test.\n> \n> This patch allows pg_dump to dump correctly local definition of columns.\n> In particular,\n> \n> CREATE TABLE p1 (f1 int, f2 int);\n> CREATE TABLE p2 (f1 int);\n> CREATE TABLE c () INHERITS (p1, p2);\n> ALTER TABLE ONLY p1 DROP COLUMN f1;\n> CREATE TABLE p3 (f1 int);\n> CREATE TABLE c2 (f1 int) INHERITS (p3);\n> \n> Will be dumped as\n> CREATE TABLE p1 (f2 int);\n> CREATE TABLE p2 (f1 int);\n> CREATE TABLE c (f1 int) INHERITS (p1, p2);\n> CREATE TABLE c2 (f1 int) INHERITS (p3);\n> \n> (Previous version will dump\n> CREATE TABLE c () INHERITS (p1, p2)\n> CREATE TABLE c2 () INHERITS (p3) )\n> \n> -- \n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> A male gynecologist is like an auto mechanic who never owned a car.\n> - Carrie Snow\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 12:20:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_dump and inherited attributes" } ]
[ { "msg_contents": "\n\nIn answer to the question posed at the end of the message below:\n\nYes, I do get the similar results.\n\nA quick investigation shows that the SPI_freetuptable at the end of\npltcl_SPI_exec is trying to free a tuptable of value 0x82ebe64 (which looks\nsensible to me) but which has a memory context of 0x7f7f7f7f (the unallocated\nmarker).\n\nBriefly following through to check this value shows that as long as I have\nCLOBBER_FREED_MEMORY defined, which I presume I do having configured with\n--debug, this value is also consistent with the tuptable having been freed\nbefore this faulting invocation.\n\nI haven't looked too closely yet but at a glance I can't see what could be\ngoing wrong with the exception that the tuptable is freed even if zero rows are\nreturned by SPI_exec. That and I'm not sure what that $T(id) thing is doing in\nthe SQL submited to pltcl_SPI_exec. Oh 'eck, I've been reading that test\nfunction wrong, it's got a level of nesting.\n\nUnfortunately, I am currently trying to throw together a quick demo of\nsomething at the moment so can't investigate too fully for the next day or so.\nIf someone wants to pick this up feel free otherwise I'll look into it later.\n\n\n--\nNigel J. Andrews\n\n\nOn Tue, 24 Sep 2002, Ian Harding wrote to me:\n\n> First, thank you very much for working on this issue. Pltcl is extremely important to me right now, and this memory leak is cramping my style a bit.\n> \n> I applied the patch you sent to my pltcl.c (I am at version 7.2.1, but it seems to apply fine...) 
It builds fine, psql starts fine, but my test function still blows up dramatically.\n> \n> Here is the script I am using:\n> \n> drop function memleak();\n> create function memleak() returns int as '\n> \n> for {set i 1} {$i < 100} {incr i} {\n> set sql \"select ''foo''\"\n> spi_exec \"$sql\"\n> }\n> \n> \n> ' language 'pltcl';\n> \n> drop table testable;\n> create table testable (\n> id int,\n> data text);\n> \n> insert into testable values (1, 'foobar');\n> insert into testable values (2, 'foobar');\n> insert into testable values (3, 'foobar');\n> insert into testable values (4, 'foobar');\n> insert into testable values (5, 'foobar');\n> insert into testable values (6, 'foobar');\n> \n> drop function memleak(int);\n> create function memleak(int) returns int as '\n> \n> set sql \"select * From testable\"\n> spi_exec -array T \"$sql\" {\n> \n> for {set i 1} {$i < 100} {incr i} {\n> set sql \"select * from testable where id = $T(id)\"\n> spi_exec \"$sql\"\n> }\n> }\n> ' language 'pltcl';\n> \n> Here is what happens:\n> \n> bash-2.05# psql -U iharding test < testfunction\n> DROP\n> CREATE\n> ERROR: table \"testable\" does not exist\n> CREATE\n> INSERT 118942676 1\n> INSERT 118942677 1\n> INSERT 118942678 1\n> INSERT 118942679 1\n> INSERT 118942680 1\n> INSERT 118942681 1\n> DROP\n> CREATE\n> bash-2.05# psql -U iharding test\n> Welcome to psql, the PostgreSQL interactive terminal.\n> \n> Type: \\copyright for distribution terms\n> \\h for help with SQL commands\n> \\? for help on internal slash commands\n> \\g or terminate with semicolon to execute query\n> \\q to quit\n> \n> test=# select memleak();\n> memleak\n> ---------\n> 0\n> (1 row)\n> \n> test=# select memleak(1);\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> The connection to the server was lost. 
Attempting reset: Failed.\n> !#\n> \n> \n> Here is the end of the log:\n> \n> DEBUG: server process (pid 1992) was terminated by signal 11\n> DEBUG: terminating any other active server processes\n> DEBUG: all server processes terminated; reinitializing shared memory and semaphores\n> IpcMemoryCreate: shmget(key=5432001, size=29769728, 03600) failed: Cannot allocate memory\n> \n> This error usually means that PostgreSQL's request for a shared\n> ...\n>\n>\n> Do you have similar results?\n\n\n", "msg_date": "Tue, 24 Sep 2002 23:41:39 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": true, "msg_subject": "Re: pltcl.so patch" }, { "msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> Yes, I do get the similar results.\n> \n> A quick investigation shows that the SPI_freetuptable at the end of\n> pltcl_SPI_exec is trying to free a tuptable of value 0x82ebe64\n> (which looks sensible to me) but which has a memory context of\n> 0x7f7f7f7f (the unallocated marker).\n\nAttached is a patch against CVS HEAD which fixes this, I believe. The\nproblem appears to be the newly added free of the tuptable at the end\nof pltcl_SPI_exec(). I've added a comment to that effect:\n\n\t/*\n\t * Do *NOT* free the tuptable here. That's because if the loop\n\t * body executed any SQL statements, it will have already free'd\n\t * the tuptable itself, so freeing it twice is not wise. We could\n\t * get around this by making a copy of SPI_tuptable->vals and\n\t * feeding that to pltcl_set_tuple_values above, but that would\n\t * still leak memory (the palloc'ed copy would only be free'd on\n\t * context reset).\n\t */\n\nAt least, I *think* that's the problem -- I've only been looking at\nthe code for about 20 minutes, so I may be wrong. In any case, this\nmakes both memleak() and memleak(1) work on my machine. 
Let me know if\nit works for you, and/or if someone knows of a better solution.\n\nI also added some SPI_freetuptable() calls in some places where Nigel\ndidn't, and added some paranoia when dealing with statically sized\nbuffers (snprintf() rather than sprintf(), and so on). I also didn't\ninclude Nigel's changes to some apparently unrelated PL/Python stuff\n-- this patch includes only the PL/Tcl changes.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC", "msg_date": "25 Sep 2002 00:56:32 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: pltcl.so patch" }, { "msg_contents": "On 25 Sep 2002, Neil Conway wrote:\n\n> \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > Yes, I do get the similar results.\n> > \n> > A quick investigation shows that the SPI_freetuptable at the end of\n> > pltcl_SPI_exec is trying to free a tuptable of value 0x82ebe64\n> > (which looks sensible to me) but which has a memory context of\n> > 0x7f7f7f7f (the unallocated marker).\n> \n> Attached is a patch against CVS HEAD which fixes this, I believe. The\n> problem appears to be the newly added free of the tuptable at the end\n> of pltcl_SPI_exec(). I've added a comment to that effect:\n> \n> \t/*\n> \t * Do *NOT* free the tuptable here. That's because if the loop\n> \t * body executed any SQL statements, it will have already free'd\n> \t * the tuptable itself, so freeing it twice is not wise. We could\n> \t * get around this by making a copy of SPI_tuptable->vals and\n> \t * feeding that to pltcl_set_tuple_values above, but that would\n> \t * still leak memory (the palloc'ed copy would only be free'd on\n> \t * context reset).\n> \t */\n\nThat's certainly where the fault was happening. However, that's where the\noriginal memory leak problem was coming from (without the SPI_freetuptable\ncall). It could be I got that fix wrong and the extra calls you've added are\nthe right fix for that. 
I'll take a look to see what I can learn later.\n\n> At least, I *think* that's the problem -- I've only been looking at\n> the code for about 20 minutes, so I may be wrong. In any case, this\n> makes both memleak() and memleak(1) work on my machine. Let me know if\n> it works for you, and/or if someone knows of a better solution.\n\nI'll have to check later.\n\n> \n> I also added some SPI_freetuptable() calls in some places where Nigel\n> didn't, and added some paranoia when dealing with statically sized\n> buffers (snprintf() rather than sprintf(), and so on). I also didn't\n> include Nigel's changes to some apparently unrelated PL/Python stuff\n> -- this patch includes only the PL/Tcl changes.\n\nI dare say the plpython needs to be checked by someone who knows how to since I\ncan well imagine the same nested call fault will exist there.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Wed, 25 Sep 2002 07:35:49 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": true, "msg_subject": "Re: pltcl.so patch" }, { "msg_contents": "\n\nOkay, I've looked again at spi_exec and I believe I can fix the bug I\nintroduced and the memory leak. However, I have only looked quickly and not\nmade these most recent changes to the execp version nor to the plpython\ncode. Therefore I am not attaching a patch at the moment, just mentioning that\nI've straightened this out in my brain a bit more.\n\n\nOn Wed, 25 Sep 2002, Nigel J. Andrews wrote:\n\n> On 25 Sep 2002, Neil Conway wrote:\n> \n> > \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > > Yes, I do get the similar results.\n> > > \n> > > A quick investigation shows that the SPI_freetuptable at the end of\n> > > pltcl_SPI_exec is trying to free a tuptable of value 0x82ebe64\n> > > (which looks sensible to me) but which has a memory context of\n> > > 0x7f7f7f7f (the unallocated marker).\n> > \n> > Attached is a patch against CVS HEAD which fixes this, I believe. 
The\n> > problem appears to be the newly added free of the tuptable at the end\n> > of pltcl_SPI_exec(). I've added a comment to that effect:\n> > \n> > \t/*\n> > \t * Do *NOT* free the tuptable here. That's because if the loop\n> > \t * body executed any SQL statements, it will have already free'd\n> > \t * the tuptable itself, so freeing it twice is not wise. We could\n> > \t * get around this by making a copy of SPI_tuptable->vals and\n> > \t * feeding that to pltcl_set_tuple_values above, but that would\n> > \t * still leak memory (the palloc'ed copy would only be free'd on\n> > \t * context reset).\n> > \t */\n> \n> That's certainly where the fault was happening. However, that's where the\n> original memory leak problem was coming from (without the SPI_freetuptable\n> call). It could be I got that fix wrong and the extra calls you've added are\n> the right fix for that. I'll take a look to see what I can learn later.\n> \n> > At least, I *think* that's the problem -- I've only been looking at\n> > the code for about 20 minutes, so I may be wrong. In any case, this\n> > makes both memleak() and memleak(1) work on my machine. Let me know if\n> > it works for you, and/or if someone knows of a better solution.\n> \n> I'll have to check later.\n> \n> > \n> > I also added some SPI_freetuptable() calls in some places where Nigel\n> > didn't, and added some paranoia when dealing with statically sized\n> > buffers (snprintf() rather than sprintf(), and so on). I also didn't\n> > include Nigel's changes to some apparently unrelated PL/Python stuff\n> > -- this patch includes only the PL/Tcl changes.\n> \n> I dare say the plpython needs to be checked by someone who knows how to since I\n> can well imagine the same nested call fault will exist there.\n> \n> \n> \n\n-- \nNigel J. Andrews\n\n\n", "msg_date": "Thu, 26 Sep 2002 02:35:04 +0100 (BST)", "msg_from": "\"Nigel J. 
Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": true, "msg_subject": "Re: pltcl.so patch" }, { "msg_contents": "\nOh, so this is the later version. Fine. Let me know when it is ready.\n\n---------------------------------------------------------------------------\n\nNigel J. Andrews wrote:\n> \n> \n> Okay, I've looked again at spi_exec and I believe I can fix the bug I\n> introduced and the memory leak. However, I have only looked quickly and not\n> made these most recent changes to the execp version nor to the plpython\n> code. Therefore I am not attaching a patch at the moment, just mentioning that\n> I've straightened this out in my brain a bit more.\n> \n> \n> On Wed, 25 Sep 2002, Nigel J. Andrews wrote:\n> \n> > On 25 Sep 2002, Neil Conway wrote:\n> > \n> > > \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > > > Yes, I do get the similar results.\n> > > > \n> > > > A quick investigation shows that the SPI_freetuptable at the end of\n> > > > pltcl_SPI_exec is trying to free a tuptable of value 0x82ebe64\n> > > > (which looks sensible to me) but which has a memory context of\n> > > > 0x7f7f7f7f (the unallocated marker).\n> > > \n> > > Attached is a patch against CVS HEAD which fixes this, I believe. The\n> > > problem appears to be the newly added free of the tuptable at the end\n> > > of pltcl_SPI_exec(). I've added a comment to that effect:\n> > > \n> > > \t/*\n> > > \t * Do *NOT* free the tuptable here. That's because if the loop\n> > > \t * body executed any SQL statements, it will have already free'd\n> > > \t * the tuptable itself, so freeing it twice is not wise. We could\n> > > \t * get around this by making a copy of SPI_tuptable->vals and\n> > > \t * feeding that to pltcl_set_tuple_values above, but that would\n> > > \t * still leak memory (the palloc'ed copy would only be free'd on\n> > > \t * context reset).\n> > > \t */\n> > \n> > That's certainly where the fault was happening. 
However, that's where the\n> > original memory leak problem was coming from (without the SPI_freetuptable\n> > call). It could be I got that fix wrong and the extra calls you've added are\n> > the right fix for that. I'll take a look to see what I can learn later.\n> > \n> > > At least, I *think* that's the problem -- I've only been looking at\n> > > the code for about 20 minutes, so I may be wrong. In any case, this\n> > > makes both memleak() and memleak(1) work on my machine. Let me know if\n> > > it works for you, and/or if someone knows of a better solution.\n> > \n> > I'll have to check later.\n> > \n> > > \n> > > I also added some SPI_freetuptable() calls in some places where Nigel\n> > > didn't, and added some paranoia when dealing with statically sized\n> > > buffers (snprintf() rather than sprintf(), and so on). I also didn't\n> > > include Nigel's changes to some apparently unrelated PL/Python stuff\n> > > -- this patch includes only the PL/Tcl changes.\n> > \n> > I dare say the plpython needs to be checked by someone who knows how to since I\n> > can well imagine the same nested call fault will exist there.\n> > \n> > \n> > \n> \n> -- \n> Nigel J. Andrews\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 01:39:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pltcl.so patch" } ]
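The bug dissected in the thread above is a double release of one resource by two layers of code — the nested SPI call and the outer pltcl_SPI_exec() loop — that each believe they own it. As a stand-alone illustration of one defensive pattern (release through a pointer-to-pointer and null the caller's reference, so a redundant release becomes a harmless no-op), here is a minimal C sketch; the `tuptable_t` type and function names are invented for this example and are not the actual SPI code:

```c
#include <assert.h>
#include <stdlib.h>
#include <stddef.h>

/* Hypothetical stand-in for a result table owned by one caller. */
typedef struct
{
	int			nrows;
} tuptable_t;

tuptable_t *
make_table(int nrows)
{
	tuptable_t *t = malloc(sizeof *t);

	if (t != NULL)
		t->nrows = nrows;
	return t;
}

/*
 * Release a table through a pointer-to-pointer and clear the caller's
 * reference, so a second call (say, from an outer loop that does not
 * know a nested call already cleaned up) does nothing instead of
 * freeing twice.
 */
void
free_table(tuptable_t **t)
{
	if (t == NULL || *t == NULL)
		return;
	free(*t);
	*t = NULL;				/* a repeated free_table(&p) is now a no-op */
}
```

This does not leak (the first call really frees) and does not crash on the repeated call, which is exactly the pair of failure modes the thread is juggling.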
[ { "msg_contents": "\nHi,\n\nI'm looking at pg_dump/common.c:flagInhAttrs() and suspect that it can\nbe more or less rewritten completely, and probably should to get rigth\nall the cases mentioned in the past attisinherited discussion. Is this\ndesirable for 7.3? It can probably be hacked around and the rewrite\nkept for 7.4, but I think it will be much simpler after the rewrite.\n\nWhat do people think about this?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Siempre hay que alimentar a los dioses, aunque la tierra este seca\" (Orual)\n\n\n", "msg_date": "Tue, 24 Sep 2002 19:05:18 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "pg_dump and inherited attributes" } ]
[ { "msg_contents": "news.postgresql.org seems to be down (and has been for a while -- I think I \ntried a day or so ago and found it down then also)\n\nJoe\n\n", "msg_date": "Tue, 24 Sep 2002 16:49:39 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "news.postgresql.org down" } ]
[ { "msg_contents": "> As far as getting into base postgresql distro. I don't mind it rewriting but I\n> have some reservations.\n> 1) As it is postgresql source code is huge. Adding functions to it which \n> directly taps into it's nervous system e.g. cache, would take far more time to\n> perfect in all conditions.\n\nIt doesn't have to make its way into the postgresql daemon itself -- in fact since some people like tuning the vacuuming, it makes more sense to make this a daemon. No, my suggestion is simple that some sort of auto-vacuumer be compiled as a stand-alone app and included in the standard postgresql tar.gz file, and the install instructions recommend the site adding it as a cron job. \n\nOn linux, it'd be good if the RPM install it automatically (or else it ran as a mostly-asleep daemon) because so many of the Linux/Postgresql users we see have just no clue about Postgresql, and no intention of reading anything.\n\nJust an FYI, a message I received today from the postmaster at a major telco about their postgresql experience:\n> We have experienced some problems but they have generally\n> cleared up after a database vacuum. However, sometimes I \n> have found that the vacuum itself (especially a vacuum analyze)\n> will go into the CPU consumption loop.\n\n-john\n", "msg_date": "Tue, 24 Sep 2002 18:12:18 -0700", "msg_from": "John Buckman <john@lyris.com>", "msg_from_op": true, "msg_subject": "Re: Postgresql Automatic vacuum" } ]
[ { "msg_contents": "> It doesn't have to make its way into the postgresql daemon itself -- in\nfact since some people like tuning the vacuuming, it makes more sense to\nmake this a daemon. No, my suggestion is simple that some sort of\nauto-vacuumer be compiled as a stand-alone app and included in the standard\npostgresql tar.gz file, and the install instructions recommend the site\nadding it as a cron job.\n\nunless I missed something.... the point of a daemon is so that we don't need\nto use cron.\n\nI also think that some type of daemon should be included in the pg sources,\nand installed with the rest of the system, and if configured to do so, the\npostmaster launches the auto vac daemon. I think this still makes sense\neven with the proposed setup (autovac client is just special client app).\n\n", "msg_date": "Wed, 25 Sep 2002 01:10:01 -0400", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": true, "msg_subject": "Re: Postgresql Automatic vacuum" }, { "msg_contents": "On 25 Sep 2002 at 1:10, Matthew T. O'Connor wrote:\n\n> > It doesn't have to make its way into the postgresql daemon itself -- in\n> fact since some people like tuning the vacuuming, it makes more sense to\n> make this a daemon. No, my suggestion is simple that some sort of\n> auto-vacuumer be compiled as a stand-alone app and included in the standard\n> postgresql tar.gz file, and the install instructions recommend the site\n> adding it as a cron job.\n> \n> unless I missed something.... the point of a daemon is so that we don't need\n> to use cron.\n> \n> I also think that some type of daemon should be included in the pg sources,\n> and installed with the rest of the system, and if configured to do so, the\n> postmaster launches the auto vac daemon. I think this still makes sense\n> even with the proposed setup (autovac client is just special client app).\n\nI would suggest adding it to pg_ctl. 
Best place to put it IMO..\n\nI can make available rpm of pgavd but with checkinstall, I guess people will \ncan have more distro. spcecific rpms. One rpm is obviously not going to install \non all distros..\n\n\n\nBye\n Shridhar\n\n--\nMoore's Constant:\tEverybody sets out to do something, and everybody\tdoes \nsomething, but no one does what he sets out to do.\n\n", "msg_date": "Wed, 25 Sep 2002 11:54:39 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Postgresql Automatic vacuum" } ]
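The thread above is about where an automatic-vacuum tool should live (cron job, stand-alone daemon, or postmaster-launched), not about its internals, but the core decision such a daemon would make each cycle is easy to sketch. A hypothetical helper with invented parameter names and thresholds — not taken from pgavd or any proposal in the thread:

```c
#include <assert.h>

/*
 * Hypothetical scheduling check for a stand-alone vacuum daemon:
 * a table is due for VACUUM either when enough tuples have changed
 * since the last run, or when too much wall-clock time has passed.
 * All names here are illustrative, not from any real implementation.
 */
int
vacuum_due(long changed_tuples, long change_threshold,
		   long seconds_since_vacuum, long max_interval_seconds)
{
	if (changed_tuples >= change_threshold)
		return 1;				/* enough churn to be worth it */
	if (seconds_since_vacuum >= max_interval_seconds)
		return 1;				/* fallback: don't wait forever */
	return 0;
}
```

A daemon built this way behaves like the cron-job approach when activity is low (the time fallback fires) but reacts to bursts of updates much sooner, which is the advantage the daemon camp in the thread is arguing for.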
[ { "msg_contents": "Just got this. :-)\n\nMichael\n\n----- Forwarded message from Akim Demaille <akim@epita.fr> -----\n\nTo: Michael Meskes <meskes@debian.org>\nSubject: Re: bison 1.49 release\nFrom: Akim Demaille <akim@epita.fr>\nDate: 25 Sep 2002 11:32:42 +0200\n\n>>>>> \"Michael\" == Michael Meskes <meskes@debian.org> writes:\n\nMichael> We are already in feature freeze. I'd say release will be in\nMichael> about a month.\n\nBison in Two weeks is doable. Is this enough?\n\n\n----- End forwarded message -----\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 25 Sep 2002 11:38:08 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "[akim@epita.fr: Re: bison 1.49 release]" } ]
[ { "msg_contents": "Because the new 7.3 SSL code doesn't work (per Peter), and the author is\nnot responding, I am about to yank out that code. Peter suggests\nripping out all the new code rather than try to pick around and remove\njust the broken parts.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 25 Sep 2002 09:41:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "New SSL code to be removed" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Because the new 7.3 SSL code doesn't work (per Peter), and the author is\n> not responding, I am about to yank out that code. Peter suggests\n> ripping out all the new code rather than try to pick around and remove\n> just the broken parts.\n\nAgreed. I allways wondered what SSL DB-connections are good for.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being\nright. #\n# Let's break this rule - forgive\nme. #\n#==================================================\nJanWieck@Yahoo.com #\n", "msg_date": "Wed, 25 Sep 2002 11:33:33 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: New SSL code to be removed" }, { "msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > \n> > Because the new 7.3 SSL code doesn't work (per Peter), and the author is\n> > not responding, I am about to yank out that code. Peter suggests\n> > ripping out all the new code rather than try to pick around and remove\n> > just the broken parts.\n> \n> Agreed. I allways wondered what SSL DB-connections are good for.\n\nI am not going to rip out SSL, just the changes. We do have people who\nuse SSL quite a bit. 
Looking at the code, however, I may see an easy\nway to allow SSL connections without requiring server certificates. If\nthat is doable, I may just make that change and let the rest of the code\nstay.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 25 Sep 2002 12:29:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: New SSL code to be removed" }, { "msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > \n> > Because the new 7.3 SSL code doesn't work (per Peter), and the author is\n> > not responding, I am about to yank out that code. Peter suggests\n> > ripping out all the new code rather than try to pick around and remove\n> > just the broken parts.\n> \n> Agreed. I allways wondered what SSL DB-connections are good for.\n\nI am now in email contact with Bear and he is assisting me in disabling\nall certificates for 7.3. The code will be marked as NOT_USED and can\ntherefore be enables in later relases. He wants to get back this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 25 Sep 2002 15:35:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: New SSL code to be removed" }, { "msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > \n> > Because the new 7.3 SSL code doesn't work (per Peter), and the author is\n> > not responding, I am about to yank out that code. Peter suggests\n> > ripping out all the new code rather than try to pick around and remove\n> > just the broken parts.\n> \n> Agreed. 
I allways wondered what SSL DB-connections are good for.\n\nOK, I have aplied the following patch to allow SSL to work without\nclient certificates. There was some confusion in the code because while\nthe comments said client certificates were not required, the\ninfrastructure on the client side was required. This patch removes the\nrequirement, and adds a comment so Bear can make adjustments for 7.4. I\ndon't think we ever want to _require_ client-side certificates.\n\nI did not remove the code because after quick review I saw that his code\nactually filled in areas our pre-7.3 code was missing. I will have him\nreview this patch and make any adjustments.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.139\ndiff -c -c -r1.139 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t25 Sep 2002 21:16:10 -0000\t1.139\n--- doc/src/sgml/runtime.sgml\t26 Sep 2002 04:36:08 -0000\n***************\n*** 2876,2881 ****\n--- 2876,2882 ----\n Enter the old passphrase to unlock the existing key. 
Now do\n <programlisting>\n openssl req -x509 -in cert.req -text -key cert.pem -out cert.cert\n+ chmod og-rwx cert.pem\n cp cert.pem <replaceable>$PGDATA</replaceable>/server.key\n cp cert.cert <replaceable>$PGDATA</replaceable>/server.crt\n </programlisting>\nIndex: src/backend/libpq/be-secure.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/libpq/be-secure.c,v\nretrieving revision 1.14\ndiff -c -c -r1.14 be-secure.c\n*** src/backend/libpq/be-secure.c\t4 Sep 2002 23:31:34 -0000\t1.14\n--- src/backend/libpq/be-secure.c\t26 Sep 2002 04:36:12 -0000\n***************\n*** 642,650 ****\n--- 642,654 ----\n \tsnprintf(fnbuf, sizeof fnbuf, \"%s/root.crt\", DataDir);\n \tif (!SSL_CTX_load_verify_locations(SSL_context, fnbuf, CA_PATH))\n \t{\n+ \t\treturn 0;\n+ #ifdef NOT_USED\n+ \t\t/* CLIENT CERTIFICATES NOT REQUIRED bjm 2002-09-26 */\n \t\tpostmaster_error(\"could not read root cert file (%s): %s\",\n \t\t\t\t\t\t fnbuf, SSLerrmessage());\n \t\tExitPostmaster(1);\n+ #endif\n \t}\n \tSSL_CTX_set_verify(SSL_context,\n \t\t\t\t\tSSL_VERIFY_PEER | SSL_VERIFY_CLIENT_ONCE, verify_cb);\nIndex: src/interfaces/libpq/fe-secure.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/interfaces/libpq/fe-secure.c,v\nretrieving revision 1.13\ndiff -c -c -r1.13 fe-secure.c\n*** src/interfaces/libpq/fe-secure.c\t22 Sep 2002 20:57:21 -0000\t1.13\n--- src/interfaces/libpq/fe-secure.c\t26 Sep 2002 04:36:23 -0000\n***************\n*** 726,735 ****\n--- 726,739 ----\n \t\t\t\t pwd->pw_dir);\n \t\tif (stat(fnbuf, &buf) == -1)\n \t\t{\n+ \t\t\treturn 0;\n+ #ifdef NOT_USED\n+ \t\t\t/* CLIENT CERTIFICATES NOT REQUIRED bjm 2002-09-26 */\n \t\t\tprintfPQExpBuffer(&conn->errorMessage,\n \t\t\t\t libpq_gettext(\"could not read root certificate list (%s): %s\\n\"),\n \t\t\t\t\t\t\t fnbuf, strerror(errno));\n \t\t\treturn -1;\n+ #endif\n \t\t}\n \t\tif 
(!SSL_CTX_load_verify_locations(SSL_context, fnbuf, 0))\n \t\t{\n***************\n*** 789,794 ****\n--- 793,800 ----\n \n \t/* check the certificate chain of the server */\n \n+ #ifdef NOT_USED\n+ \t/* CLIENT CERTIFICATES NOT REQUIRED bjm 2002-09-26 */\n \t/*\n \t * this eliminates simple man-in-the-middle attacks and simple\n \t * impersonations\n***************\n*** 802,807 ****\n--- 808,814 ----\n \t\tclose_SSL(conn);\n \t\treturn -1;\n \t}\n+ #endif\n \n \t/* pull out server distinguished and common names */\n \tconn->peer = SSL_get_peer_certificate(conn->ssl);\n***************\n*** 824,829 ****\n--- 831,838 ----\n \n \t/* verify that the common name resolves to peer */\n \n+ #ifdef NOT_USED\n+ \t/* CLIENT CERTIFICATES NOT REQUIRED bjm 2002-09-26 */\n \t/*\n \t * this is necessary to eliminate man-in-the-middle attacks and\n \t * impersonations where the attacker somehow learned the server's\n***************\n*** 834,839 ****\n--- 843,849 ----\n \t\tclose_SSL(conn);\n \t\treturn -1;\n \t}\n+ #endif\n \n \treturn 0;\n }", "msg_date": "Thu, 26 Sep 2002 00:40:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "SSL code fixed" } ]
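The patch above downgrades a missing root certificate list from a fatal error to "skip certificate verification", and on the libpq side the test is simply whether the file exists, checked with stat(). Detached from OpenSSL, that decision reduces to the following minimal POSIX sketch; the function name is invented, and real code would go on to call SSL_CTX_load_verify_locations() only when this returns 1:

```c
#include <assert.h>
#include <sys/stat.h>

/*
 * Return 1 if a root-certificate file exists at `path` (so peer
 * verification can be attempted), 0 if it is absent.  Mirroring the
 * 7.3 patch, absence is treated as "verification optional", not as
 * a fatal configuration error.
 */
int
have_root_cert(const char *path)
{
	struct stat buf;

	return stat(path, &buf) == 0;
}
```

Keeping the existence check separate from the verification call is what lets a site opt in to certificate checking simply by dropping a root.crt into place, without a client-side code or configuration change.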
[ { "msg_contents": "\nbuild cleanly, just wanna make sure tha i haven't overlooked anything...\n\n", "msg_date": "Wed, 25 Sep 2002 11:16:35 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "beta2 ... someone wanna verify?" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> build cleanly, just wanna make sure tha i haven't overlooked anything...\n\nThe tarball seems to match my local tree ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 11:23:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta2 ... someone wanna verify? " } ]
[ { "msg_contents": "I've identified the reason for the occasional \"can't wait without a PROC\nstructure\" failures we've seen reported. I had been thinking that this\nmust occur during backend startup, before MyProc is initialized ...\nbut I was mistaken. Actually, it happens during backend shutdown,\nand the reason is that ProcKill (which releases the PGPROC structure\nand resets MyProc to NULL) is called before ShutdownBufferPoolAccess.\nBut the latter tries to acquire the bufmgr LWLock. If it has to wait,\nkaboom.\n\nThe ordering of these shutdown hooks is the reverse of the ordering\nof the startup initialization of the modules. It looks like we'll\nneed to rejigger the startup ordering ... and it also looks like that's\ngoing to be a rather ticklish issue. (See comments in BaseInit and\nInitPostgres.) Any thoughts on how to do it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 11:52:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Cause of \"can't wait without a PROC structure\"" }, { "msg_contents": "On Wed, 2002-09-25 at 09:52, Tom Lane wrote:\n> I've identified the reason for the occasional \"can't wait without a PROC\n> structure\" failures we've seen reported. I had been thinking that this\n> must occur during backend startup, before MyProc is initialized ...\n> but I was mistaken. Actually, it happens during backend shutdown,\n> and the reason is that ProcKill (which releases the PGPROC structure\n> and resets MyProc to NULL) is called before ShutdownBufferPoolAccess.\n> But the latter tries to acquire the bufmgr LWLock. If it has to wait,\n> kaboom.\n> \n\nGreat news that you've identified the problem. We continue to see this\nevery few days and it's the only thing that takes our servers down over\nweeks of pounding.\n\n> The ordering of these shutdown hooks is the reverse of the ordering\n> of the startup initialization of the modules. 
It looks like we'll\n> need to rejigger the startup ordering ... and it also looks like that's\n> going to be a rather ticklish issue. (See comments in BaseInit and\n> InitPostgres.) Any thoughts on how to do it?\n> \n\nSorry I can't add any insight at this level...but I can say that it\nwould be significant to my customer(s) and my ability to recommend PG to\nfuture \"ex-Oracle users\" ;) to see a fix make it into the 7.3 final.\n\nss\n\n\nScott Shattuck\nTechnical Pursuit Inc.\n\n\n", "msg_date": "25 Sep 2002 09:57:25 -0600", "msg_from": "Scott Shattuck <ss@technicalpursuit.com>", "msg_from_op": false, "msg_subject": "Re: Cause of \"can't wait without a PROC structure\"" }, { "msg_contents": "Scott Shattuck <ss@technicalpursuit.com> writes:\n> Sorry I can't add any insight at this level...but I can say that it\n> would be significant to my customer(s) and my ability to recommend PG to\n> future \"ex-Oracle users\" ;) to see a fix make it into the 7.3 final.\n\nRest assured that it *will* be fixed in 7.3 final; this is a \"must fix\"\nitem in my book ... and now that we know the cause, it's just a matter\nof choosing the cleanest solution.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 12:20:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cause of \"can't wait without a PROC structure\" " }, { "msg_contents": "I said:\n> The ordering of these shutdown hooks is the reverse of the ordering\n> of the startup initialization of the modules. It looks like we'll\n> need to rejigger the startup ordering ... and it also looks like that's\n> going to be a rather ticklish issue. (See comments in BaseInit and\n> InitPostgres.) Any thoughts on how to do it?\n\nI eventually decided that the most reasonable solution was to leave the\nstartup sequence alone, and fold the ProcKill and\nShutdownBufferPoolAccess shutdown hooks together. This is a little ugly\nbut it seems to beat the alternatives. 
ShutdownBufferPoolAccess was\neffectively assuming that LWLockReleaseAll was called just before it,\nso the two modules aren't really independent anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 16:56:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Cause of \"can't wait without a PROC structure\" " }, { "msg_contents": "Tom Lane wrote:\n> I said:\n> > The ordering of these shutdown hooks is the reverse of the ordering\n> > of the startup initialization of the modules. It looks like we'll\n> > need to rejigger the startup ordering ... and it also looks like that's\n> > going to be a rather ticklish issue. (See comments in BaseInit and\n> > InitPostgres.) Any thoughts on how to do it?\n> \n> I eventually decided that the most reasonable solution was to leave the\n> startup sequence alone, and fold the ProcKill and\n> ShutdownBufferPoolAccess shutdown hooks together. This is a little ugly\n> but it seems to beat the alternatives. ShutdownBufferPoolAccess was\n> effectively assuming that LWLockReleaseAll was called just before it,\n> so the two modules aren't really independent anyway.\n\nI understand. Sometimes the dependencies are too intricate to break\napart, and you just reorder them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 25 Sep 2002 17:02:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cause of \"can't wait without a PROC structure\"" } ]
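The root cause in this thread is ordering discipline: shutdown hooks run in reverse (LIFO) order of registration, mirroring startup, so ProcKill released MyProc before ShutdownBufferPoolAccess, which still needed it to wait on the bufmgr LWLock. A minimal stand-alone sketch of such a LIFO hook registry, modeled loosely on PostgreSQL's on_shmem_exit machinery (the names and the fixed-size table are invented for illustration):

```c
#include <assert.h>

#define MAX_HOOKS 8

typedef void (*hook_fn) (void);

static hook_fn hooks[MAX_HOOKS];
static int	nhooks = 0;

/* Modules register their cleanup in startup (initialization) order. */
void
register_hook(hook_fn fn)
{
	assert(nhooks < MAX_HOOKS);
	hooks[nhooks++] = fn;
}

/*
 * Run hooks last-registered-first.  A module initialized late may
 * depend on one initialized early, so it must be torn down first --
 * which is exactly why reordering either side is so ticklish.
 */
void
run_hooks_lifo(void)
{
	while (nhooks > 0)
		hooks[--nhooks] ();
}

/* Two toy "modules" that record the order in which they shut down. */
static char order[3];
static int	pos = 0;

static void shutdown_a(void) { order[pos++] = 'A'; }
static void shutdown_b(void) { order[pos++] = 'B'; }
```

With this discipline, the only safe fixes for a cross-module dependency are to change the registration order or to merge the two hooks — the latter being the route chosen above.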
[ { "msg_contents": "I recently downloaded and built Postgresql 7.2.2. In trying to run\nthe \"gmake check\" after the build I get an error indicating that\ninitdb failed. I turned on the debug option so the text below\ncontains some of the debug statements output from \"gmake check\". Note\nthe error at the bottom \"ERROR: syntax error at line 2658: unexpected\ntoken parse error\". It seems like this is complaining about the\nsyntax of line 2658 in the postgres.bki file, but I don't see any\nproblem with the syntax here.\n\nDoes anyone have any ideas about the problem here?\n\nThanks,\nJeff Stevens\n\n\n\n\n\nDEBUG: start transaction\nDEBUG: relation created with oid 16406\nDEBUG: commit transaction\nDEBUG: start transaction\nDEBUG: open relation pg_aggregate, attrsize 66\nDEBUG: create attribute 0 name aggname len 32 num 1 type 19\nDEBUG: create attribute 1 name aggowner len 4 num 2 type 23\nDEBUG: create attribute 2 name aggtransfn len 4 num 3 type 24\nDEBUG: create attribute 3 name aggfinalfn len 4 num 4 type 24\nDEBUG: create attribute 4 name aggbasetype len 4 num 5 type 26\nDEBUG: create attribute 5 name aggtranstype len 4 num 6 type 26\nDEBUG: create attribute 6 name aggfinaltype len 4 num 7 type 26\nDEBUG: create attribute 7 name agginitval len -1 num 8 type 25\nDEBUG: commit transaction\nERROR: syntax error at line 2658: unexpected token parse error\n\ninitdb failed.\nData directory /array/web/src/postgresql-7.2.2/src/test/regress/./tmp_check/data\n will not be removed at user's request.\n", "msg_date": "25 Sep 2002 09:36:36 -0700", "msg_from": "jsco47@yahoo.com (jeff)", "msg_from_op": true, "msg_subject": "initdb failed due to syntax error" }, { "msg_contents": "jsco47@yahoo.com (jeff) writes:\n> I recently downloaded and built Postgresql 7.2.2. In trying to run\n> the \"gmake check\" after the build I get an error indicating that\n> initdb failed. 
I turned on the debug option so the text below\n> contains some of the debug statements output from \"gmake check\". Note\n> the error at the bottom \"ERROR: syntax error at line 2658: unexpected\n> token parse error\". It seems like this is complaining about the\n> syntax of line 2658 in the postgres.bki file, but I don't see any\n> problem with the syntax here.\n\nWhy not show us what's in postgres.bki around that line, rather than\nassuming you know what's correct?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 14:59:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: initdb failed due to syntax error " } ]
[ { "msg_contents": "Dear Momjian,\r\nhello,\r\nI want to make the show variable SQL to return message like pgresult. how can I revise the source code? Any suggestion?\r\nI really need you help.\r\nJinqiang Han\r\n\n\n\n\n\nDear Momjian,\nhello,\nI want to make the show variable SQL to return  message like \r\npgresult. how can I revise the source code? Any suggestion?\nI really need you help.\nJinqiang Han", "msg_date": "Thu, 26 Sep 2002 0:55:18 +0800", "msg_from": "=?GB2312?Q?=BA=AB=BD=FC=C7=BF?= <jqhan@db.pku.edu.cn>", "msg_from_op": true, "msg_subject": "inquiry" } ]
[ { "msg_contents": "thank you for your information.\r\nI want to use the imformation of show or other utilities sql in jdbc interface. generally resultset is empty when such sql is execute. I want to get the information like psql. How can I do?\r\nThanks again.\r\n\r\n\r\n\r\nYou should ask this on general, and be more specific about how pgresult\r\nis different from that SHOW currently does.\r\n\r\nDue to time constraints, I do not directly answer general PostgreSQL\r\nquestions. For assistance, please join the appropriate mailing list and\r\npost your question:\r\n\r\nhttp://www.postgresql.org/users-lounge\r\n\r\nYou can also try the #postgresql IRC channel. See the PostgreSQL FAQ\r\nfor more information.\r\n\r\n---------------------------------------------------------------------------\r\n\r\n\r\n[ ?GB2312?] wrote:\r\n>> Dear Momjian,\r\n>> hello,\r\n>> I want to make the show variable SQL to return message like pgresult. how can I revise the source code? Any suggestion?\r\n>> I really need you help.\r\n>> Jinqiang Han\r\n>> \r\n-- \r\n Bruce Momjian | http://candle.pha.pa.us\r\n pgman@candle.pha.pa.us | (610) 359-1001\r\n + If your life is a hard drive, | 13 Roberts Road\r\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\r\n\n\n\n\n\nthank you for your information.\nI want to use the imformation of show or other utilities sql in jdbc \r\ninterface. generally  resultset is empty when such sql is execute. I want \r\nto get the information like psql. How can I do?\nThanks again.\n\n\n\nYou should ask this on general, and be more specific about how pgresult\nis different from that SHOW currently does.\n \nDue to time constraints, I do not directly answer general PostgreSQL\nquestions.  For assistance, please join the appropriate mailing list and\npost your question:\n \nhttp://www.postgresql.org/users-lounge\n \nYou can also try the #postgresql IRC channel.  
See the PostgreSQL FAQ\nfor more information.\n \n---------------------------------------------------------------------------\n \n \n[ ?GB2312?] wrote:\n>>  Dear Momjian,\n>>  hello,\n>> \r\n I want to make the show variable SQL to return  message like pgresult. how can I revise the source code? Any suggestion?\n>>  I really need you help.\n>>  Jinqiang Han\n>>  \n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 359-1001\n  +  If your life is a hard drive,     |  13 Roberts Road\n  +  Christ can be your backup.        |  Newtown Square, Pennsylvania 19073", "msg_date": "Thu, 26 Sep 2002 1:19:59 +0800", "msg_from": "=?GB2312?Q?=BA=AB=BD=FC=C7=BF?= <jqhan@db.pku.edu.cn>", "msg_from_op": true, "msg_subject": "Re: inquiry" }, { "msg_contents": "\nIn 7.3, SHOW returns a query results that can be resturned to jdbc. We\nare using beta1/2 now, so you can test that from ftp.postgresql.org.\n\n---------------------------------------------------------------------------\n\n[ ?GB2312?] wrote:\n> thank you for your information.\n> I want to use the imformation of show or other utilities sql in jdbc interface. generally resultset is empty when such sql is execute. I want to get the information like psql. How can I do?\n> Thanks again.\n> \n> \n> \n> You should ask this on general, and be more specific about how pgresult\n> is different from that SHOW currently does.\n> \n> Due to time constraints, I do not directly answer general PostgreSQL\n> questions. For assistance, please join the appropriate mailing list and\n> post your question:\n> \n> http://www.postgresql.org/users-lounge\n> \n> You can also try the #postgresql IRC channel. See the PostgreSQL FAQ\n> for more information.\n> \n> ---------------------------------------------------------------------------\n> \n> \n> [ ?GB2312?] wrote:\n> >> Dear Momjian��\n> >> hello,\n> >> I want to make the show variable SQL to return message like pgresult. 
how can I revise the source code? Any suggestion?\n> >> I really need you help.\n> >> Jinqiang Han\n> >> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 25 Sep 2002 13:22:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: inquiry" } ]
[ { "msg_contents": "I am getting errors when doing a checkout, related to Marc's splitting\nup the CVS tree into modules:\n\t\n\tC pgsql/contrib/earthdistance/Makefile\n\tcvs checkout: move away\n\tpgsql/contrib/earthdistance/README.earthdistance; it is in the way\n\tC pgsql/contrib/earthdistance/README.earthdistance\n\tcvs checkout: move away pgsql/contrib/earthdistance/earthdistance.c; it\n\tis in the way\n\nI get this from a CVS checkout every time. Can someone fix it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 25 Sep 2002 14:34:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "CVS checkout errors" }, { "msg_contents": "Bruce Momjian writes:\n\n> I am getting errors when doing a checkout, related to Marc's splitting\n> up the CVS tree into modules:\n\nThis split should be reverted. It was poorly thought out to begin with,\nand since all the candidates have been moved to their own homes now it's\nobsolete as well. It just confuses because of issues like this.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 26 Sep 2002 00:06:38 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CVS checkout errors" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bruce Momjian writes:\n>> I am getting errors when doing a checkout, related to Marc's splitting\n>> up the CVS tree into modules:\n\n> This split should be reverted.\n\nI'm for that ... 
even if we have to do *another* set of fresh CVS\ncheckouts :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 18:11:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CVS checkout errors " }, { "msg_contents": "\nMarc, I am still seeing these errors. Would you please fix it?\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> I am getting errors when doing a checkout, related to Marc's splitting\n> up the CVS tree into modules:\n> \t\n> \tC pgsql/contrib/earthdistance/Makefile\n> \tcvs checkout: move away\n> \tpgsql/contrib/earthdistance/README.earthdistance; it is in the way\n> \tC pgsql/contrib/earthdistance/README.earthdistance\n> \tcvs checkout: move away pgsql/contrib/earthdistance/earthdistance.c; it\n> \tis in the way\n> \n> I get this from a CVS checkout every time. Can someone fix it?\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 29 Sep 2002 00:17:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "CVS split problems" }, { "msg_contents": "\ncan you create a project on gborg under 'server modules' for this?\n\nOn Sun, 29 Sep 2002, Bruce Momjian wrote:\n\n>\n> Marc, I am still seeing these errors. 
Would you please fix it?\n>\n> ---------------------------------------------------------------------------\n>\n> Bruce Momjian wrote:\n> > I am getting errors when doing a checkout, related to Marc's splitting\n> > up the CVS tree into modules:\n> >\n> > \tC pgsql/contrib/earthdistance/Makefile\n> > \tcvs checkout: move away\n> > \tpgsql/contrib/earthdistance/README.earthdistance; it is in the way\n> > \tC pgsql/contrib/earthdistance/README.earthdistance\n> > \tcvs checkout: move away pgsql/contrib/earthdistance/earthdistance.c; it\n> > \tis in the way\n> >\n> > I get this from a CVS checkout every time. Can someone fix it?\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n\n", "msg_date": "Sun, 29 Sep 2002 23:39:39 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: CVS split problems" }, { "msg_contents": "Marc G. Fournier wrote:\n> \n> can you create a project on gborg under 'server modules' for this?\n\nUh, I don't see the logic in moving earthdistance out of /contrib. It\nuses /cube, which is in contrib. 
I didn't think we were moving loadable\nmodules out to gborg yet, and I didn't think we were doing that during\nbeta.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 00:12:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS split problems" }, { "msg_contents": "On Mon, 30 Sep 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> >\n> > can you create a project on gborg under 'server modules' for this?\n>\n> Uh, I don't see the logic in moving earthdistance out of /contrib. It\n> uses /cube, which is in contrib. I didn't think we were moving loadable\n> modules out to gborg yet, and I didn't think we were doing that during\n> beta.\n\nNot on the branch, just on the head ... but, you have a point ... what\nwere those errors again?
and doing what?\n\nI get on postgresql.org on a previously checked out CVS:\n\n$ cvs -q -d :pserver:momjian@postgresql.org:/cvsroot checkout -P pgsql\ncvs checkout: move away pgsql/contrib/earthdistance/Makefile; it is in the way\nC pgsql/contrib/earthdistance/Makefile\ncvs checkout: move away pgsql/contrib/earthdistance/README.earthdistance; it is in the way\nC pgsql/contrib/earthdistance/README.earthdistance\ncvs checkout: move away pgsql/contrib/earthdistance/earthdistance.c; it is in the way\nC pgsql/contrib/earthdistance/earthdistance.c\ncvs checkout: move away pgsql/contrib/earthdistance/earthdistance.sql.in; it is in the way\nC pgsql/contrib/earthdistance/earthdistance.sql.in\ncvs checkout: move away pgsql/contrib/earthdistance/expected/earthdistance.out; it is in the way\nC pgsql/contrib/earthdistance/expected/earthdistance.out\ncvs checkout: move away pgsql/contrib/earthdistance/sql/earthdistance.sql; it is in the way\nC pgsql/contrib/earthdistance/sql/earthdistance.sql\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 13:28:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS split problems" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I get on postgresql.org on a previously checked out CVS:\n\n> $ cvs -q -d :pserver:momjian@postgresql.org:/cvsroot checkout -P pgsql\n> cvs checkout: move away pgsql/contrib/earthdistance/Makefile; it is in the way\n> C pgsql/contrib/earthdistance/Makefile\n> [etc]\n\nNote that \"cvs update\" doesn't give any complaints.\n\nI surmise that a checkout visits the earthdistance directory twice for\nsome reason; perhaps it is listed in both of the modules comprising\n\"pgsql\"?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Sep 2002 14:18:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: CVS split problems " }, { "msg_contents": "Bruce Momjian wrote:\n> I am getting errors when doing a checkout, related to Marc's splitting\n> up the CVS tree into modules:\n> \t\n> \tC pgsql/contrib/earthdistance/Makefile\n> \tcvs checkout: move away\n> \tpgsql/contrib/earthdistance/README.earthdistance; it is in the way\n> \tC pgsql/contrib/earthdistance/README.earthdistance\n> \tcvs checkout: move away pgsql/contrib/earthdistance/earthdistance.c; it\n> \tis in the way\n> \n> I get this from a CVS checkout every time. Can someone fix it?\n\nFixed. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 17:02:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: CVS checkout errors" }, { "msg_contents": "On Mon, 30 Sep 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I get on postgresql.org on a previously checked out CVS:\n>\n> > $ cvs -q -d :pserver:momjian@postgresql.org:/cvsroot checkout -P pgsql\n> > cvs checkout: move away pgsql/contrib/earthdistance/Makefile; it is in the way\n> > C pgsql/contrib/earthdistance/Makefile\n> > [etc]\n>\n> Note that \"cvs update\" doesn't give any complaints.\n>\n> I surmise that a checkout visits the earthdistance directory twice for\n> some reason; perhaps it is listed in both of the modules comprising\n> \"pgsql\"?\n\nOkay, this is the only message(s) I have on this ...
since \"merging\"\n> earthdistance back into the tree will most likely cause more headaches,\n> breakage and outcries, and since I see no reason why anyone would want to\n> 'checkout' a module that has already been checked out (instead of doing an\n> update like the rest of us), there is no way I'm going to put\n> earthdistance back in ...\n>\n> ... unless there is, in fact, a completely different problem?\n\nIt causes a useless and confusing divergence between the module names used\nto check out things and the names that appear in various messages, files,\nand the online views. Certainly it'd be a bad idea to do this now, but\nplease do it after 7.3 is released. Just because removing a silliness\ncauses a brief inconvenience is no reason to hang on to a silliness.\n\n> Once v7.3 is released, I'd like to see a continuation of moving the\n> non-core stuff over to GBorg, as well, so this will likely disappear at\n> that time ...\n\nThe issues I point out would continue to exist.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 22 Oct 2002 22:10:43 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CVS split problems " } ]
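Tom's surmise in the thread above — that `cvs checkout` complains "it is in the way" because the earthdistance directory is reachable through more than one of the modules making up `pgsql` — can be sketched programmatically. The module map below is hypothetical, for illustration only (the real definitions live in `CVSROOT/modules` and are not quoted in the thread):

```python
# Sketch: find directories that a top-level CVS module would visit twice
# because they are listed under more than one of its submodules.
# This MODULES map is invented for illustration; it is not the actual
# contents of the postgresql.org CVSROOT/modules file.
MODULES = {
    "pgsql": ["pgsql-server", "pgsql-contrib"],
    "pgsql-server": ["src", "doc", "contrib/earthdistance"],
    "pgsql-contrib": ["contrib/cube", "contrib/earthdistance"],
}

def paths_for(module, modules):
    """Expand a module alias into the concrete paths it checks out."""
    paths = []
    for entry in modules.get(module, []):
        if entry in modules:      # nested module alias: expand recursively
            paths.extend(paths_for(entry, modules))
        else:                     # plain directory entry
            paths.append(entry)
    return paths

def duplicated_paths(module, modules):
    """Paths a checkout would visit more than once (the 'in the way' cases)."""
    seen, dups = set(), set()
    for path in paths_for(module, modules):
        if path in seen:
            dups.add(path)
        seen.add(path)
    return sorted(dups)
```

With this hypothetical map, `duplicated_paths("pgsql", MODULES)` reports only `contrib/earthdistance` — mirroring how Bruce's checkout produced "move away ...; it is in the way" complaints for exactly the files under that one directory, while `cvs update` (which walks the already-checked-out tree rather than the module definitions) stayed silent.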
[ { "msg_contents": "The subject says it all:\n\nDEBUG: StartTransactionCommand\nDEBUG: ProcessQuery\nERROR: Relation 0 does not exist\nLOG: statement: INSERT INTO trans (meter_id,stats_id,flowindex,firsttime,firstt...\n\nThe code generating the insert statement didn't change, just the server - but\nwhat does it mean? There is no 0. The most complicated are bits like:\n\n('08:52:11 Mon 22 Jul 2002'::timestamp without time zone+'5536695.89 second')::time\n\nCheers,\n\nPatrick\n", "msg_date": "Wed, 25 Sep 2002 21:30:15 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Relation 0 does not exist" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> The subject says it all:\n> ERROR: Relation 0 does not exist\n> LOG: statement: INSERT INTO trans (meter_id,stats_id,flowindex,firsttime,firstt...\n\nCould we see the *whole* query? And the schemas of the table(s) it\nuses? Seems like a bug to me, but without enough context to reproduce\nit I can't do much.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 16:52:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Relation 0 does not exist " }, { "msg_contents": "On Wed, Sep 25, 2002 at 04:52:30PM -0400, Tom Lane wrote:\n> Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> > The subject says it all:\n> > ERROR: Relation 0 does not exist\n> > LOG: statement: INSERT INTO trans (meter_id,stats_id,flowindex,firsttime,firstt...\n> \n> Could we see the *whole* query? And the schemas of the table(s) it\n> uses? 
Seems like a bug to me, but without enough context to reproduce\n> it I can't do much.\n\nDEBUG: StartTransactionCommand\nDEBUG: ProcessQuery\nERROR: Relation 0 does not exist\nLOG: statement: INSERT INTO trans (meter_id,stats_id,flowindex,firsttime,firsttimed,firsttimet,firsttimei,lasttime,lasttimed,lasttimet,lasttimei,sourcepeeraddress,sourcepeername,sourcetransaddress,destpeeraddress,desttransaddress,sourcetranstype,frompdus,fromoctets,topdus,tooctets,deltafromoctets,deltatooctets) VALUES (411, currval('stats_id_seq'),5,'08:52:11 Mon 22 Jul 2002'::timestamp without time zone+'5536695.89 second',('08:52:11 Mon 22 Jul 2002'::timestamp without time zone+'5536695.89 second')::date,('08:52:11 Mon 22 Jul 2002'::timestamp without time zone+'5536695.89 second')::time,553669589,'08:52:11 Mon 22 Jul 2002'::timestamp without time zone+'5660731.53 second',('08:52:11 Mon 22 Jul 2002'::timestamp without time zone+'5660731.53 second')::date,('08:52:11 Mon 22 Jul 2002'::timestamp without time zone+'5660731.53 second')::time,566073153,'192.168.3.4','hostname.here.ac.uk',2,'192.168.3.2',53,17,14403,5271978,14419,1226291,507098::bigint,109306::bigint)\nDEBUG: AbortCurrentTransaction\nLOG: pq_recvbuf: unexpected EOF on client connection\nDEBUG: proc_exit(0)\n\n\n\nOne thing which bugs me: I have a currval in there, and that is the very\nfirst query which reaches the database, so it won't be \"set\", will it, but\nthen, how could it have worked for months with the other version of server?\n(Source files are old - recompiled just in case postgres header files changed)\nHmm then it would have complained about Relation \"stats_id_seq\" no?\n\n Table \"public.trans\"\n Column | Type | Modifiers \n--------------------+--------------------------------+-----------\n meter_id | integer | \n stats_id | integer | \n flowindex | integer | \n firsttime | timestamp(6) without time zone | \n firsttimed | date | \n firsttimet | time(0) without time zone | \n firsttimei | integer | \n lasttime | 
timestamp(6) without time zone | \n lasttimet | time(0) without time zone | \n lasttimed | date | \n lasttimei | integer | \n sourcepeeraddress | inet | \n sourcepeername | text | \n sourcetransaddress | integer | \n destpeeraddress | inet | \n destpeername | text | \n desttransaddress | integer | \n sourcetranstype | integer | \n frompdus | integer | \n fromoctets | integer | \n topdus | integer | \n tooctets | integer | \n deltafromoctets | bigint | \n deltatooctets | bigint | \n dpndate | date | \n nettype | integer | \nIndexes: firsttimei_idx btree (firsttimei),\n srcpeername_idx btree (sourcepeername)\nTriggers: RI_ConstraintTrigger_14413070,\n RI_ConstraintTrigger_14413073\n\nCheers,\n\nPatrick\n", "msg_date": "Wed, 25 Sep 2002 22:29:49 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Relation 0 does not exist" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> One thing which bugs me: I have a currval in there, and that is the very\n> first query which reaches the database, so it won't be \"set\", will it, but\n> then, how could it have worked for months with the other version of server?\n\nGood question. Do you have any ON INSERT rules on that table?\n\nCould you try setting a breakpoint at elog() to capture the stack trace\nleading up to the error? 
(Note elog() will be called for each log entry\nthat's made, so if you just set the breakpoint at the start of the\nroutine, you'll have to 'continue' several times until the actual ERROR\ncall occurs.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 17:49:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Relation 0 does not exist " }, { "msg_contents": "On Wed, Sep 25, 2002 at 05:49:17PM -0400, Tom Lane wrote:\n> Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> > One thing which bugs me: I have a currval in there, and that is the very\n> > first query which reaches the database, so it won't be \"set\", will it, but\n> > then, how could it have worked for months with the other version of server?\n> \n> Good question. Do you have any ON INSERT rules on that table?\n\nNo\n\n> Could you try setting a breakpoint at elog() to capture the stack trace\n> leading up to the error?\n\n#0 elog (lev=15, fmt=0x821133b \"statement: %s\") at elog.c:114\n#1 0x81812db in elog (lev=20, fmt=0x8196b02 \"Relation %u does not exist\")\n at elog.c:438\n#2 0x80791a2 in relation_open (relationId=0, lockmode=2) at heapam.c:474\n#3 0x8079329 in heap_open (relationId=0, lockmode=2) at heapam.c:602\n#4 0x816d94b in RI_FKey_check (fcinfo=0xbfbfc884) at ri_triggers.c:212\n#5 0x816dee1 in RI_FKey_check_ins (fcinfo=0xbfbfc884) at ri_triggers.c:506\n#6 0x80d4d2b in ExecCallTriggerFunc (trigdata=0xbfbfc9ac, finfo=0x82dd01c, \n per_tuple_context=0x83342d8) at trigger.c:974\n#7 0x80d5852 in DeferredTriggerExecute (event=0x833801c, itemno=0, \n rel=0x82f5494, finfo=0x82dd01c, per_tuple_context=0x83342d8)\n at trigger.c:1497\n#8 0x80d5a5f in deferredTriggerInvokeEvents (immediate_only=1 '\\001')\n at trigger.c:1620\n#9 0x80d5c29 in DeferredTriggerEndQuery () at trigger.c:1775\n#10 0x8136d57 in finish_xact_command () at postgres.c:894\n#11 0x8136c25 in pg_exec_query_string (query_string=0x82d701c, dest=Remote, \n parse_context=0x82818ac) at 
postgres.c:827\n#12 0x8137e19 in PostgresMain (argc=6, argv=0xbfbfccb4, \n username=0x825f925 \"root\") at postgres.c:1924\n#13 0x811cad2 in DoBackend (port=0x825f800) at postmaster.c:2276\n#14 0x811c3f9 in BackendStartup (port=0x825f800) at postmaster.c:1908\n#15 0x811b5af in ServerLoop () at postmaster.c:993\n#16 0x811b132 in PostmasterMain (argc=4, argv=0x825a040) at postmaster.c:774\n#17 0x80f4ee5 in main (argc=4, argv=0xbfbfd4c0) at main.c:209\n#18 0x8069880 in ___start ()\n\n\nThe definition of trans had:\nTriggers: RI_ConstraintTrigger_14413070,\n RI_ConstraintTrigger_14413073\n\nI was inserting meter_id=411, stats_id=currval('stats_id_seq')\nmeter.id=411 exists. Hard to tell about the other one.. Still don't see\nwhy this ever worked..\n\nCheers,\n\nPatrick\n", "msg_date": "Wed, 25 Sep 2002 23:13:16 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Relation 0 does not exist" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n>> Could you try setting a breakpoint at elog() to capture the stack trace\n>> leading up to the error?\n\n> #0 elog (lev=15, fmt=0x821133b \"statement: %s\") at elog.c:114\n> #1 0x81812db in elog (lev=20, fmt=0x8196b02 \"Relation %u does not exist\")\n> at elog.c:438\n> #2 0x80791a2 in relation_open (relationId=0, lockmode=2) at heapam.c:474\n> #3 0x8079329 in heap_open (relationId=0, lockmode=2) at heapam.c:602\n> #4 0x816d94b in RI_FKey_check (fcinfo=0xbfbfc884) at ri_triggers.c:212\n> #5 0x816dee1 in RI_FKey_check_ins (fcinfo=0xbfbfc884) at ri_triggers.c:506\n\nHm. Apparently tgconstrrelid is 0 in the pg_trigger row for your ON\nINSERT trigger --- can you confirm that by looking in pg_trigger?\n\nNext question is how it got that way. Did you create this table from a\ndump, and if so do you still have the dump file? 
I'm wondering exactly\nwhat SQL command was used to create the trigger ...\n\n> I was inserting meter_id=411, stats_id=currval('stats_id_seq')\n> meter.id=411 exists. Hard to tell about the other one.. Still don't see\n> why this ever worked..\n\nI'm confused about that too. The trigger failure is definitely\nhappening after we insert the row, so currval() must have gotten done\nbefore reaching this point. So *something* is executing a nextval()\nbefore we get to the point of evaluating the currval(). You got any\ndefaults on the table that might do it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 18:26:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Relation 0 does not exist " }, { "msg_contents": "On Wed, Sep 25, 2002 at 05:49:17PM -0400, Tom Lane wrote:\n> Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> > One thing which bugs me: I have a currval in there, and that is the very\n> > first query which reaches the database, so it won't be \"set\", will it, but\n> > then, how could it have worked for months with the other version of server?\n> \n> Good question. Do you have any ON INSERT rules on that table?\n\nCurious: if I get the program to print what it thinks it is sending the\ndatabase, it does:\n\nSELECT id FROM meter WHERE meterstart = '08:52:11 Mon 22 Jul 2002'\nINSERT INTO stats (timeslice,timesliced,timeslicet,aps,...\nSELECT MAX(fromoctets),MAX(tooctets) FROM stats,trans WHERE...\nINSERT INTO trans (meter_id,stats_id,flowindex,firsttime,firsttimed,firsttimet,f\n\nso, there currval should be OK. I was running postmaster -d4, yet the only\nquery I saw was the last LOG one. 
I pretty sure that I would see all queries\nwith -d3 before..\n(Now postmaster won't shutdown pg_ctl: postmaster does not shut down)\n\nCheers,\n\nPatrick\n", "msg_date": "Wed, 25 Sep 2002 23:28:21 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Relation 0 does not exist" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> ... I was running postmaster -d4, yet the only\n> query I saw was the last LOG one. I pretty sure that I would see all queries\n> with -d3 before..\n\nIt looked to me like you were just running with the recently-added\nfrill to log only queries that cause errors; which is on by default.\n\n(Looks at code...) Ah. It looks like -d to the postmaster no longer\nmeans anywhere near what it used to. Bruce --- compare the handling\nof -d in the backend (postgres.c lines 1251ff) with its handling in\nthe postmaster (postmaster.c lines 444ff). Big difference. Are we\ngoing to make these more alike? If so, which one do we like?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 18:35:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "postmaster -d option (was Re: [GENERAL] Relation 0 does not exist)" }, { "msg_contents": "On Wed, Sep 25, 2002 at 11:28:21PM +0100, Patrick Welche wrote:\n> (Now postmaster won't shutdown pg_ctl: postmaster does not shut down)\n\nIt's stuck in ServerLoop () at postmaster.c:949\n(gdb) print rmask\n$1 = {fds_bits = {8, 0, 0, 0, 0, 0, 0, 0}}\n(gdb) print wmask\n$2 = {fds_bits = {0, 0, 0, 0, 0, 0, 0, 0}}\n(gdb) print timeout\n$3 = {tv_sec = 60, tv_usec = 0}\n\nRemembers Tip 1: Don't kill -9 the postmaster... 
Hmm 2 and 15 don't do anything..\n\nPatrick\n", "msg_date": "Wed, 25 Sep 2002 23:39:00 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Relation 0 does not exist" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n>> (Now postmaster won't shutdown pg_ctl: postmaster does not shut down)\n\n> It's stuck in ServerLoop () at postmaster.c:949\n> (gdb) print rmask\n> $1 = {fds_bits = {8, 0, 0, 0, 0, 0, 0, 0}}\n> (gdb) print wmask\n> $2 = {fds_bits = {0, 0, 0, 0, 0, 0, 0, 0}}\n> (gdb) print timeout\n> $3 = {tv_sec = 60, tv_usec = 0}\n\nThat's about what I'd expect it to be doing. The final decision to exit\nwould normally be made when we see the shutdown process exit (about\nline 1587 in postmaster.c). What are the contents of ShutdownPID,\nCheckPointPID, Shutdown, and FatalError? Are there any remaining child\nprocesses of the postmaster?\n\n> Remembers Tip 1: Don't kill -9 the postmaster... Hmm 2 and 15 don't do anything..\n\nThat tip is pretty obsolete, but before you pull the trigger it would be\nnice to try to learn more. I wonder if you have hit some obscure race\ncondition that prevents the postmaster from realizing it's done?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 18:46:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Relation 0 does not exist " }, { "msg_contents": "On Wed, Sep 25, 2002 at 06:46:27PM -0400, Tom Lane wrote:\n... \n> That's about what I'd expect it to be doing. The final decision to exit\n> would normally be made when we see the shutdown process exit (about\n> line 1587 in postmaster.c). 
What are the contents of ShutdownPID,\n> CheckPointPID, Shutdown, and FatalError?\n\n(gdb) print ShutdownPID\n$1 = 0\n(gdb) print CheckPointPID\n$2 = 0\n(gdb) print Shutdown\n$3 = 2\n(gdb) print FatalError\n$4 = 0 '\\000'\n\n> Are there any remaining child\n> processes of the postmaster?\n\n# ps ax | grep post\n10828 p3 S+ 0:00.04 postmaster -o -W 15 -d4 (postgres)\n10829 p3 S+ 0:00.00 postmaster: stats buffer process (postgres)\n10831 p3 S+ 0:00.01 postmaster: stats collector process (postgres)\n11387 p5 D+ 0:00.00 grep post \n11360 p9 SW+ 0:00.00 vi postmaster.c \n\n> That tip is pretty obsolete, but before you pull the trigger it would be\n> nice to try to learn more. I wonder if you have hit some obscure race\n> condition that prevents the postmaster from realizing it's done?\n\n? Nothing else was going on... Now I'm hammering the box looking for the\nconstraint in the 3.3Gb dump file, so swap is a bit low.\n\nCheers,\n\nPatrick\n", "msg_date": "Wed, 25 Sep 2002 23:54:56 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Relation 0 does not exist" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> Remembers Tip 1: Don't kill -9 the postmaster... Hmm 2 and 15 don't do anything..\n\nHow about SIGQUIT (if you didn't use -9 already)?\n\nI've been staring at the logic in postmaster.c and I don't see how it\ncould fail to quit when Shutdown is nonzero and there aren't any\nchildren left... maybe the signal is being held off for some weird\nreason?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 19:32:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Relation 0 does not exist " }, { "msg_contents": "Tom Lane wrote:\n> Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> > ... I was running postmaster -d4, yet the only\n> > query I saw was the last LOG one. 
I pretty sure that I would see all queries\n> > with -d3 before..\n> \n> It looked to me like you were just running with the recently-added\n> frill to log only queries that cause errors; which is on by default.\n> \n> (Looks at code...) Ah. It looks like -d to the postmaster no longer\n> means anywhere near what it used to. Bruce --- compare the handling\n> of -d in the backend (postgres.c lines 1251ff) with its handling in\n> the postmaster (postmaster.c lines 444ff). Big difference. Are we\n> going to make these more alike? If so, which one do we like?\n\nI am sorry but I don't understand. They look like they both set\nserver_min_messages. There was a comment in one that said\nclient_min_messages but I just fixed that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/tcop/postgres.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/tcop/postgres.c,v\nretrieving revision 1.294\ndiff -c -c -r1.294 postgres.c\n*** src/backend/tcop/postgres.c\t25 Sep 2002 20:31:40 -0000\t1.294\n--- src/backend/tcop/postgres.c\t26 Sep 2002 01:57:48 -0000\n***************\n*** 1258,1267 ****\n \t\t\t\t\t\tsprintf(debugstr, \"debug%s\", optarg);\n \t\t\t\t\t\tSetConfigOption(\"server_min_messages\", debugstr, ctx, gucsource);\n \t\t\t\t\t\tpfree(debugstr);\n- \n \t\t\t\t\t\t/*\n \t\t\t\t\t\t * -d is not the same as setting\n! \t\t\t\t\t\t * client_min_messages because it enables other\n \t\t\t\t\t\t * output options.\n \t\t\t\t\t\t */\n \t\t\t\t\t\tif (atoi(optarg) >= 1)\n--- 1258,1266 ----\n \t\t\t\t\t\tsprintf(debugstr, \"debug%s\", optarg);\n \t\t\t\t\t\tSetConfigOption(\"server_min_messages\", debugstr, ctx, gucsource);\n \t\t\t\t\t\tpfree(debugstr);\n \t\t\t\t\t\t/*\n \t\t\t\t\t\t * -d is not the same as setting\n! 
\t\t\t\t\t\t * server_min_messages because it enables other\n \t\t\t\t\t\t * output options.\n \t\t\t\t\t\t */\n \t\t\t\t\t\tif (atoi(optarg) >= 1)\n***************\n*** 1275,1288 ****\n \t\t\t\t\t\tif (atoi(optarg) >= 5)\n \t\t\t\t\t\t\tSetConfigOption(\"debug_print_rewritten\", \"true\", ctx, gucsource);\n \t\t\t\t\t}\n- \t\t\t\t\telse\n- \n- \t\t\t\t\t\t/*\n- \t\t\t\t\t\t * -d 0 allows user to prevent postmaster debug\n- \t\t\t\t\t\t * from propagating to backend.\n- \t\t\t\t\t\t */\n- \t\t\t\t\t\tSetConfigOption(\"server_min_messages\", \"notice\",\n- \t\t\t\t\t\t\t\t\t\tctx, gucsource);\n \t\t\t\t}\n \t\t\t\tbreak;\n \n--- 1274,1279 ----", "msg_date": "Wed, 25 Sep 2002 22:07:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: postmaster -d option (was Re: [GENERAL] Relation 0 does not\n exist)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> (Looks at code...) Ah. It looks like -d to the postmaster no longer\n>> means anywhere near what it used to. Bruce --- compare the handling\n>> of -d in the backend (postgres.c lines 1251ff) with its handling in\n>> the postmaster (postmaster.c lines 444ff). Big difference. Are we\n>> going to make these more alike? If so, which one do we like?\n\n> I am sorry but I don't understand. They look like they both set\n> server_min_messages.\n\nYeah, but postgres.c *also* sets log_connections, log_statement,\ndebug_print_parse, debug_print_plan, debug_print_rewritten depending\non the -d level. This behavior is not random; it's an attempt to\nreproduce the effects of the historical -d switch. 
The postmaster.c\ncode is blowing off all those considerations.\n\n> *** 1275,1288 ****\n> \t\t\t\t\t\tif (atoi(optarg) >= 5)\n> \t\t\t\t\t\t\tSetConfigOption(\"debug_print_rewritten\", \"true\", ctx, gucsource);\n> \t\t\t\t\t}\n> - \t\t\t\t\telse\n> - \n> - \t\t\t\t\t\t/*\n> - \t\t\t\t\t\t * -d 0 allows user to prevent postmaster debug\n> - \t\t\t\t\t\t * from propagating to backend.\n> - \t\t\t\t\t\t */\n> - \t\t\t\t\t\tSetConfigOption(\"server_min_messages\", \"notice\",\n> - \t\t\t\t\t\t\t\t\t\tctx, gucsource);\n> \t\t\t\t}\n> \t\t\t\tbreak;\n\nI think you are deleting your own code there ... why?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 23:28:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postmaster -d option (was Re: [GENERAL] Relation 0 does not\n exist)" }, { "msg_contents": "\nUh, yes, it is a little confusing and I am not sure that patch is right\nanymore. I haven't applied it.\n\nAnother issue is that we used to have a global debug_level variable that was\npropogated to the client. Now, we just have the GUC value which does\npropogate like the global one did. Does the postmaster still pass -dX\ndown to the child like it used to? I don't see why you say, \"The\npostmaster.c code is blowing off all those considerations.\"\n\nI -d0 think functions properly except that it sets the value to 'notice'\nrather than resetting it to the postgresql.conf value. Is there a way\nto do that?\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> (Looks at code...) Ah. It looks like -d to the postmaster no longer\n> >> means anywhere near what it used to. Bruce --- compare the handling\n> >> of -d in the backend (postgres.c lines 1251ff) with its handling in\n> >> the postmaster (postmaster.c lines 444ff). Big difference. Are we\n> >> going to make these more alike? 
If so, which one do we like?\n> \n> > I am sorry but I don't understand. They look like they both set\n> > server_min_messages.\n> \n> Yeah, but postgres.c *also* sets log_connections, log_statement,\n> debug_print_parse, debug_print_plan, debug_print_rewritten depending\n> on the -d level. This behavior is not random; it's an attempt to\n> reproduce the effects of the historical -d switch. The postmaster.c\n> code is blowing off all those considerations.\n> \n> > *** 1275,1288 ****\n> > \t\t\t\t\t\tif (atoi(optarg) >= 5)\n> > \t\t\t\t\t\t\tSetConfigOption(\"debug_print_rewritten\", \"true\", ctx, gucsource);\n> > \t\t\t\t\t}\n> > - \t\t\t\t\telse\n> > - \n> > - \t\t\t\t\t\t/*\n> > - \t\t\t\t\t\t * -d 0 allows user to prevent postmaster debug\n> > - \t\t\t\t\t\t * from propagating to backend.\n> > - \t\t\t\t\t\t */\n> > - \t\t\t\t\t\tSetConfigOption(\"server_min_messages\", \"notice\",\n> > - \t\t\t\t\t\t\t\t\t\tctx, gucsource);\n> > \t\t\t\t}\n> > \t\t\t\tbreak;\n> \n> I think you are deleting your own code there ... why?\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 25 Sep 2002 23:55:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: postmaster -d option (was Re: [GENERAL] Relation 0 does not\n exist)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> ... Now, we just have the GUC value which does\n> propogate like the global one did. 
Does the postmaster still pass -dX\n> down to the child like it used to?\n\nEvidently not; else Patrick wouldn't be complaining that it doesn't\nwork like it used to.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 00:00:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postmaster -d option (was Re: [GENERAL] Relation 0 does not\n\texist)" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > ... Now, we just have the GUC value which does\n> > propogate like the global one did. Does the postmaster still pass -dX\n> > down to the child like it used to?\n> \n> Evidently not; else Patrick wouldn't be complaining that it doesn't\n> work like it used to.\n\nOK, got it. I knew server_min_messages would propogate to the client,\nbut that doesn't trigger the -d special cases in postgres.c. I re-added\nthe -d flag propogation to the postmaster. I also changed the postgres\n-d0 behavior to just reset server_min_messages rather than setting it to\n'notice.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup.
| Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/postmaster/postmaster.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/postmaster/postmaster.c,v\nretrieving revision 1.288\ndiff -c -c -r1.288 postmaster.c\n*** src/backend/postmaster/postmaster.c\t4 Sep 2002 20:31:24 -0000\t1.288\n--- src/backend/postmaster/postmaster.c\t26 Sep 2002 05:15:33 -0000\n***************\n*** 230,235 ****\n--- 230,237 ----\n \n static unsigned int random_seed = 0;\n \n+ static int\tdebug_flag = 0;\n+ \n extern char *optarg;\n extern int\toptind,\n \t\t\topterr;\n***************\n*** 452,457 ****\n--- 454,460 ----\n \t\t\t\t\tSetConfigOption(\"server_min_messages\", debugstr,\n \t\t\t\t\t\t\t\t\tPGC_POSTMASTER, PGC_S_ARGV);\n \t\t\t\t\tpfree(debugstr);\n+ \t\t\t\t\tdebug_flag = atoi(optarg);\n \t\t\t\t\tbreak;\n \t\t\t\t}\n \t\t\tcase 'F':\n***************\n*** 2028,2033 ****\n--- 2031,2037 ----\n \tchar\t *remote_host;\n \tchar\t *av[ARGV_SIZE * 2];\n \tint\t\t\tac = 0;\n+ \tchar\t\tdebugbuf[ARGV_SIZE];\n \tchar\t\tprotobuf[ARGV_SIZE];\n \tchar\t\tdbbuf[ARGV_SIZE];\n \tchar\t\toptbuf[ARGV_SIZE];\n***************\n*** 2207,2212 ****\n--- 2211,2225 ----\n \t */\n \n \tav[ac++] = \"postgres\";\n+ \n+ \t/*\n+ \t * Pass the requested debugging level along to the backend.\n+ \t */\n+ \tif (debug_flag > 0)\n+ \t{\n+ \t\tsprintf(debugbuf, \"-d%d\", debug_flag);\n+ \t\tav[ac++] = debugbuf;\n+ \t}\n \n \t/*\n \t * Pass any backend switches specified with -o in the postmaster's own\nIndex: src/backend/tcop/postgres.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/tcop/postgres.c,v\nretrieving revision 1.294\ndiff -c -c -r1.294 postgres.c\n*** src/backend/tcop/postgres.c\t25 Sep 2002 20:31:40 -0000\t1.294\n--- src/backend/tcop/postgres.c\t26 Sep 2002 05:15:41 -0000\n***************\n*** 1281,1288 ****\n \t\t\t\t\t\t * -d 0 allows user to prevent postmaster 
debug\n \t\t\t\t\t\t * from propagating to backend.\n \t\t\t\t\t\t */\n! \t\t\t\t\t\tSetConfigOption(\"server_min_messages\", \"notice\",\n! \t\t\t\t\t\t\t\t\t\tctx, gucsource);\n \t\t\t\t}\n \t\t\t\tbreak;\n \n--- 1281,1287 ----\n \t\t\t\t\t\t * -d 0 allows user to prevent postmaster debug\n \t\t\t\t\t\t * from propagating to backend.\n \t\t\t\t\t\t */\n! \t\t\t\t\t\tResetPGVariable(\"server_min_messages\");\n \t\t\t\t}\n \t\t\t\tbreak;", "msg_date": "Thu, 26 Sep 2002 01:16:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: postmaster -d option (was Re: [GENERAL] Relation 0 does not\n exist)" }, { "msg_contents": "On Wed, Sep 25, 2002 at 06:26:22PM -0400, Tom Lane wrote:\n> Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> >> Could you try setting a breakpoint at elog() to capture the stack trace\n> >> leading up to the error?\n> \n> > #0 elog (lev=15, fmt=0x821133b \"statement: %s\") at elog.c:114\n> > #1 0x81812db in elog (lev=20, fmt=0x8196b02 \"Relation %u does not exist\")\n> > at elog.c:438\n> > #2 0x80791a2 in relation_open (relationId=0, lockmode=2) at heapam.c:474\n> > #3 0x8079329 in heap_open (relationId=0, lockmode=2) at heapam.c:602\n> > #4 0x816d94b in RI_FKey_check (fcinfo=0xbfbfc884) at ri_triggers.c:212\n> > #5 0x816dee1 in RI_FKey_check_ins (fcinfo=0xbfbfc884) at ri_triggers.c:506\n> \n> Hm. Apparently tgconstrrelid is 0 in the pg_trigger row for your ON\n> INSERT trigger --- can you confirm that by looking in pg_trigger?\n\nEven more entertaining: tgconstrrelid=0 for all rows in pg_trigger.\n\n> Next question is how it got that way. Did you create this table from a\n> dump, and if so do you still have the dump file? 
I'm wondering exactly\n> what SQL command was used to create the trigger ...\n\nOriginally it was created with\n\ncreate table meter (\n id serial primary key,\n...\ncreate table stats (\n id serial primary key,\n...\ncreate table trans (\n meter_id integer references meter(id),\n stats_id integer references stats(id),\n...\n\nthen dumped with the v7.3 pg_dumpall which generated:\n\nALTER TABLE ONLY meter\n ADD CONSTRAINT meter_pkey PRIMARY KEY (id);\n\nALTER TABLE ONLY stats\n ADD CONSTRAINT stats_pkey PRIMARY KEY (id);\n\n--\n-- TOC entry 117 (OID 8658004)\n-- Name: RI_ConstraintTrigger_8658003; Type: TRIGGER; Schema: ; Owner: prlw1\n--\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\"\n AFTER INSERT OR UPDATE ON trans\nNOT DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW\n EXECUTE PROCEDURE \"RI_FKey_check_ins\" ('<unnamed>', 'trans', 'meter', 'UNSPE\nCIFIED', 'meter_id', 'id');\n\n\n--\n-- TOC entry 113 (OID 8658006)\n-- Name: RI_ConstraintTrigger_8658005; Type: TRIGGER; Schema: ; Owner: prlw1\n--\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\"\n AFTER DELETE ON meter\nNOT DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW\n EXECUTE PROCEDURE \"RI_FKey_noaction_del\" ('<unnamed>', 'trans', 'meter', 'UN\nSPECIFIED', 'meter_id', 'id');\n\n\n--\n-- TOC entry 114 (OID 8658008)\n-- Name: RI_ConstraintTrigger_8658007; Type: TRIGGER; Schema: ; Owner: prlw1\n--\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\"\n AFTER UPDATE ON meter\nNOT DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW\n EXECUTE PROCEDURE \"RI_FKey_noaction_upd\" ('<unnamed>', 'trans', 'meter', 'UN\nSPECIFIED', 'meter_id', 'id');\n\n\n--\n-- TOC entry 118 (OID 8658010)\n-- Name: RI_ConstraintTrigger_8658009; Type: TRIGGER; Schema: ; Owner: prlw1\n--\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\"\n AFTER INSERT OR UPDATE ON trans\nNOT DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW\n EXECUTE PROCEDURE \"RI_FKey_check_ins\" ('<unnamed>', 'trans', 'stats', 'UNSPE\nCIFIED', 'stats_id', 'id');\n\n\n--\n-- TOC entry 115 (OID 8658012)\n-- 
Name: RI_ConstraintTrigger_8658011; Type: TRIGGER; Schema: ; Owner: prlw1\n--\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\"\n AFTER DELETE ON stats\nNOT DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW\n EXECUTE PROCEDURE \"RI_FKey_noaction_del\" ('<unnamed>', 'trans', 'stats', 'UN\nSPECIFIED', 'stats_id', 'id');\n\n\n--\n-- TOC entry 116 (OID 8658014)\n-- Name: RI_ConstraintTrigger_8658013; Type: TRIGGER; Schema: ; Owner: prlw1\n--\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\"\n AFTER UPDATE ON stats\nNOT DEFERRABLE INITIALLY IMMEDIATE\n FOR EACH ROW\n EXECUTE PROCEDURE \"RI_FKey_noaction_upd\" ('<unnamed>', 'trans', 'stats', 'UN\nSPECIFIED', 'stats_id', 'id');\n\n\n\n\nCheers,\n\nPatrick\n", "msg_date": "Thu, 26 Sep 2002 10:55:09 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Relation 0 does not exist" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> then dumped with the v7.3 pg_dumpall which generated:\n\n> CREATE CONSTRAINT TRIGGER \"<unnamed>\"\n> AFTER INSERT OR UPDATE ON trans\n> NOT DEFERRABLE INITIALLY IMMEDIATE\n> FOR EACH ROW\n> EXECUTE PROCEDURE \"RI_FKey_check_ins\" ('<unnamed>', 'trans', 'meter', 'UNSPE\n> CIFIED', 'meter_id', 'id');\n\nYeek. The lack of a FROM <table> clause in that trigger definition is\nwhy it's not working. IIRC, the FROM was optional in pre-7.3 releases,\nbut it is *required* now. (We probably should adjust the syntax\naccordingly.)\n\n7.3 pg_dump is not working hard enough to regenerate the appropriate\ninfo, which we can fix, but I'm wondering how it got that way in the\nfirst place. The bug that could originally cause tgconstrrelid to be\nforgotten was a pg_dump bug that existed up to about 7.0. Is it\npossible that these tables have a continuous history of being dumped\nand reloaded back to 7.0 or before?\n\nAnyway the quickest fix seems to be to manually drop the triggers\nand reconstruct the FK relationships with ALTER TABLE ADD FOREIGN KEY\ncommands. 
If that seems too messy to do by hand, you can wait till\nI've got a pg_dump patch to do it for you.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 09:59:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Relation 0 does not exist " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> diff -c -c -r1.294 postgres.c\n> *** src/backend/tcop/postgres.c 25 Sep 2002 20:31:40 -0000 1.294\n> --- src/backend/tcop/postgres.c 26 Sep 2002 05:15:41 -0000\n> ***************\n> *** 1281,1288 ****\n> * -d 0 allows user to prevent postmaster debug\n> * from propagating to backend.\n> */\n> ! SetConfigOption(\"server_min_messages\", \"notice\",\n> ! ctx, gucsource);\n> }\n> break;\n \n> --- 1281,1287 ----\n> * -d 0 allows user to prevent postmaster debug\n> * from propagating to backend.\n> */\n> ! ResetPGVariable(\"server_min_messages\");\n> }\n> break;\n \n\nIf you want \"export PGOPTIONS=-d0\" to do what the comment says, you'd\nalso need to Reset all of the other GUC variables that -dN might have\nset. However, I'm not sure that I agree with the goal in the first\nplace. If the admin has set debugging on the postmaster command line,\nshould it really be possible for users to turn it off so easily?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 10:11:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postmaster -d option (was Re: [GENERAL] Relation 0 does not\n\texist)" }, { "msg_contents": "On Thu, Sep 26, 2002 at 09:59:32AM -0400, Tom Lane wrote:\n> Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> > then dumped with the v7.3 pg_dumpall which generated:\n> \n> > CREATE CONSTRAINT TRIGGER \"<unnamed>\"\n> > AFTER INSERT OR UPDATE ON trans\n> > NOT DEFERRABLE INITIALLY IMMEDIATE\n> > FOR EACH ROW\n> > EXECUTE PROCEDURE \"RI_FKey_check_ins\" ('<unnamed>', 'trans', 'meter', 'UNSPE\n> > CIFIED', 'meter_id', 'id');\n> \n> Yeek. 
The lack of a FROM <table> clause in that trigger definition is\n> why it's not working. IIRC, the FROM was optional in pre-7.3 releases,\n> but it is *required* now. (We probably should adjust the syntax\n> accordingly.)\n> \n> 7.3 pg_dump is not working hard enough to regenerate the appropriate\n> info, which we can fix, but I'm wondering how it got that way in the\n> first place. The bug that could originally cause tgconstrrelid to be\n> forgotten was a pg_dump bug that existed up to about 7.0. Is it\n> possible that these tables have a continuous history of being dumped\n> and reloaded back to 7.0 or before?\n\nI wrote the system last year and it started running for real\n Thu 29 Mar 22:41:25 2001\nI think that is after 7.0? It has gone through a number of dump/restore\ncycles.\n\n> Anyway the quickest fix seems to be to manually drop the triggers\n> and reconstruct the FK relationships with ALTER TABLE ADD FOREIGN KEY\n> commands. If that seems too messy to do by hand, you can wait till\n> I've got a pg_dump patch to do it for you.\n\nJust a note on output.
Before I had\n\n\\d trans...\nIndexes: firsttimei_idx btree (firsttimei),\n srcpeername_idx btree (sourcepeername)\nTriggers: RI_ConstraintTrigger_14413070,\n RI_ConstraintTrigger_14413073\n\\d meter...\nIndexes: meter_pkey primary key btree (id)\nTriggers: RI_ConstraintTrigger_14413071,\n RI_ConstraintTrigger_14413072\n\\d stats...\nIndexes: stats_pkey primary key btree (id)\nTriggers: RI_ConstraintTrigger_14413074,\n RI_ConstraintTrigger_14413075\n\nafter drop trigger/alter table add foreign key:\n\n\\d trans\nIndexes: firsttimei_idx btree (firsttimei),\n srcpeername_idx btree (sourcepeername)\nForeign Key constraints: $1 FOREIGN KEY (meter_id) REFERENCES meter(id) MATCH FULL ON UPDATE NO ACTION ON DELETE NO ACTION,\n $2 FOREIGN KEY (stats_id) REFERENCES stats(id) MATCH FULL ON UPDATE NO ACTION ON DELETE NO ACTION\n\\d meter...\nIndexes: meter_pkey primary key btree (id)\n\\d stats...\nIndexes: stats_pkey primary key btree (id)\n\n\nI take it the difference is because before tgconstrrelid was zero, and now\nit isn't? (Apart from pg_sync_pg_pwd and pg_sync_pg_group)\n\nThank you for the help! (Working now :-) .. now to see what's up with libpq++)\n\nPatrick\n", "msg_date": "Thu, 26 Sep 2002 16:31:01 +0100", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "Re: Relation 0 does not exist" }, { "msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n>> Anyway the quickest fix seems to be to manually drop the triggers\n>> and reconstruct the FK relationships with ALTER TABLE ADD FOREIGN KEY\n>> commands. If that seems too messy to do by hand, you can wait till\n>> I've got a pg_dump patch to do it for you.\n\n> Just a note on output. 
Before I had\n\n> \\d trans...\n> Indexes: firsttimei_idx btree (firsttimei),\n> srcpeername_idx btree (sourcepeername)\n> Triggers: RI_ConstraintTrigger_14413070,\n> RI_ConstraintTrigger_14413073\n\n> after drop trigger/alter table add foreign key:\n\n> \\d trans\n> Indexes: firsttimei_idx btree (firsttimei),\n> srcpeername_idx btree (sourcepeername)\n> Foreign Key constraints: $1 FOREIGN KEY (meter_id) REFERENCES meter(id) MATCH FULL ON UPDATE NO ACTION ON DELETE NO ACTION,\n> $2 FOREIGN KEY (stats_id) REFERENCES stats(id) MATCH FULL ON UPDATE NO ACTION ON DELETE NO ACTION\n\n> I take it the difference is because before tgconstrrelid was zero, and now\n> it isn't?\n\nNo, the difference is that now there is a pg_constraint entry for the\nforeign-key relationship, and the system understands that the triggers\nexist to implement that FK constraint so it doesn't show them separately.\nBefore they were just random triggers and \\d didn't have any special\nknowledge about them.\n\nThe main advantage of having the pg_constraint entry is that you can\nALTER TABLE DROP CONSTRAINT to get rid of the FK constraint (and the\ntriggers of course). No more manual mucking with triggers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 11:39:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Relation 0 does not exist " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > diff -c -c -r1.294 postgres.c\n> > *** src/backend/tcop/postgres.c 25 Sep 2002 20:31:40 -0000 1.294\n> > --- src/backend/tcop/postgres.c 26 Sep 2002 05:15:41 -0000\n> > ***************\n> > *** 1281,1288 ****\n> > * -d 0 allows user to prevent postmaster debug\n> > * from propagating to backend.\n> > */\n> > ! SetConfigOption(\"server_min_messages\", \"notice\",\n> > ! 
ctx, gucsource);\n> > }\n> > break;\n> \n> > --- 1281,1287 ----\n> > * -d 0 allows user to prevent postmaster debug\n> > * from propagating to backend.\n> > */\n> > ! ResetPGVariable(\"server_min_messages\");\n> > }\n> > break;\n> \n\nTurns out I had to revert this change. There isn't a username at this\npoint in the code so the ResetPGVariable username test fails, and even\nthen, I don't think there is any way to set a variable to the value\nbefore -d5 set it.\n\n> If you want \"export PGOPTIONS=-d0\" to do what the comment says, you'd\n> also need to Reset all of the other GUC variables that -dN might have\n> set. However, I'm not sure that I agree with the goal in the first\n> place. If the admin has set debugging on the postmaster command line,\n> should it really be possible for users to turn it off so easily?\n\nI see what you are saying, that you can pass -d0 from the client and\nundo the -d5. Yes, I was wondering about that because even on the\ncommand line, if you do -d5 -d0, you still get those extra options.\n\nOK, attached patch applied. The restriction that you can't lower the\ndebug level with PGOPTIONS is a separate issue. What I clearly didn't\nwant to do was to reset those other options for -d0, and I didn't have\nto do that for -d0 to work as advertised.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n\nIndex: src/backend/tcop/postgres.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/tcop/postgres.c,v\nretrieving revision 1.296\ndiff -c -c -r1.296 postgres.c\n*** src/backend/tcop/postgres.c\t27 Sep 2002 03:34:15 -0000\t1.296\n--- src/backend/tcop/postgres.c\t27 Sep 2002 03:49:34 -0000\n***************\n*** 1136,1141 ****\n--- 1136,1142 ----\n \tconst char *DBName = NULL;\n \tbool\t\tsecure;\n \tint\t\t\terrs = 0;\n+ \tint\t\t\tdebug_flag = 0;\n \tGucContext\tctx;\n \tGucSource\tgucsource;\n \tchar\t *tmp;\n***************\n*** 1250,1255 ****\n--- 1251,1257 ----\n \n \t\t\tcase 'd':\t\t\t/* debug level */\n \t\t\t\t{\n+ \t\t\t\t\tdebug_flag = atoi(optarg);\n \t\t\t\t\t/* Set server debugging level. */\n \t\t\t\t\tif (atoi(optarg) != 0)\n \t\t\t\t\t{\n***************\n*** 1259,1283 ****\n \t\t\t\t\t\tSetConfigOption(\"server_min_messages\", debugstr, ctx, gucsource);\n \t\t\t\t\t\tpfree(debugstr);\n \n- \t\t\t\t\t\t/*\n- \t\t\t\t\t\t * -d is not the same as setting\n- \t\t\t\t\t\t * client_min_messages because it enables other\n- \t\t\t\t\t\t * output options.\n- \t\t\t\t\t\t */\n- \t\t\t\t\t\tif (atoi(optarg) >= 1)\n- \t\t\t\t\t\t\tSetConfigOption(\"log_connections\", \"true\", ctx, gucsource);\n- \t\t\t\t\t\tif (atoi(optarg) >= 2)\n- \t\t\t\t\t\t\tSetConfigOption(\"log_statement\", \"true\", ctx, gucsource);\n- \t\t\t\t\t\tif (atoi(optarg) >= 3)\n- \t\t\t\t\t\t\tSetConfigOption(\"debug_print_parse\", \"true\", ctx, gucsource);\n- \t\t\t\t\t\tif (atoi(optarg) >= 4)\n- \t\t\t\t\t\t\tSetConfigOption(\"debug_print_plan\", \"true\", ctx, gucsource);\n- \t\t\t\t\t\tif (atoi(optarg) >= 5)\n- \t\t\t\t\t\t\tSetConfigOption(\"debug_print_rewritten\", \"true\", ctx, gucsource);\n \t\t\t\t\t}\n \t\t\t\t\telse\n \t\t\t\t\t\t/*\n! \t\t\t\t\t\t * -d 0 allows user to prevent postmaster debug\n \t\t\t\t\t\t * from propagating to backend. 
It would be nice\n \t\t\t\t\t\t * to set it to the postgresql.conf value here.\n \t\t\t\t\t\t */\n--- 1261,1270 ----\n \t\t\t\t\t\tSetConfigOption(\"server_min_messages\", debugstr, ctx, gucsource);\n \t\t\t\t\t\tpfree(debugstr);\n \n \t\t\t\t\t}\n \t\t\t\t\telse\n \t\t\t\t\t\t/*\n! \t\t\t\t\t\t * -d0 allows user to prevent postmaster debug\n \t\t\t\t\t\t * from propagating to backend. It would be nice\n \t\t\t\t\t\t * to set it to the postgresql.conf value here.\n \t\t\t\t\t\t */\n***************\n*** 1519,1524 ****\n--- 1506,1527 ----\n \t\t\t\terrs++;\n \t\t\t\tbreak;\n \t\t}\n+ \n+ \t/*\n+ \t * -d is not the same as setting\n+ \t * server_min_messages because it enables other\n+ \t * output options.\n+ \t */\n+ \tif (debug_flag >= 1)\n+ \t\tSetConfigOption(\"log_connections\", \"true\", ctx, gucsource);\n+ \tif (debug_flag >= 2)\n+ \t\tSetConfigOption(\"log_statement\", \"true\", ctx, gucsource);\n+ \tif (debug_flag >= 3)\n+ \t\tSetConfigOption(\"debug_print_parse\", \"true\", ctx, gucsource);\n+ \tif (debug_flag >= 4)\n+ \t\tSetConfigOption(\"debug_print_plan\", \"true\", ctx, gucsource);\n+ \tif (debug_flag >= 5)\n+ \t\tSetConfigOption(\"debug_print_rewritten\", \"true\", ctx, gucsource);\n \n \t/*\n \t * Post-processing for command line options.", "msg_date": "Thu, 26 Sep 2002 23:58:20 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: postmaster -d option (was Re: [GENERAL] Relation 0 does" } ]
[ { "msg_contents": "I'm trying to get the client utilities to compile under win32/VS.net per \nhttp://developer.postgresql.org/docs/postgres/install-win32.html.\n\nI was able to do this successfully using the 7.2.2 tarball, but using current \n7.3devel there are a number of minor issues (missing defines, adjustments to \nincludes), and one more difficult item (at least so far). The latter is the \nuse of gettimeofday in fe-connect.c:connectDBComplete for which there does not \nseem to be a good alternate under win32.\n\nIn connectDBComplete I see:\n\n/*\n * Prepare to time calculations, if connect_timeout isn't zero.\n */\nif (conn->connect_timeout != NULL)\n{\n remains.tv_sec = atoi(conn->connect_timeout);\n\nso it seems that the connection timeout can only be specified to the nearest \nsecond. Given that, is there any reason not to use time() instead of \ngettimeofday()?\n\nIt looks like there is a great deal of complexity added to the function just \nto accommodate the fact that gettimeofday returns seconds and microseconds as \ndistinct members of the result struct. I think switching this code to use \ntime() would both simplify it, and make it win32 compatible.\n\nComments?\n\nJoe\n\n\n", "msg_date": "Wed, 25 Sep 2002 15:04:39 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "compiling client utils under win32 - current 7.3devel is broken" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> ...it seems that the connection timeout can only be specified to the nearest \n> second. Given that, is there any reason not to use time() instead of \n> gettimeofday()?\n\nAs the code stands it's pretty necessary.
Since we'll go around the\nloop multiple times, in much less than a second per loop in most cases,\nthe timeout resolution will be really poor if we only measure each\niteration to the nearest second.\n\n> It looks like there is a great deal of complexity added to the function just \n> to accommodate the fact that gettimeofday returns seconds and microseconds as\n> distinct members of the result struct.\n\nIt is ugly coding; if you can think of a better way, go for it.\n\nIt might work to measure time since the start of the whole process, or\nuntil the timeout target, rather than accumulating adjustments to the\n"remains" count each time through. In other words something like\n\n\tat start: targettime = time() + specified-timeout\n\n\teach time we are about to wait: set select timeout to\n\ttargettime - time().\n\nThis bounds the error at 1 second which is probably good enough (you\nmight want to add 1 to targettime to ensure the error is in the\nconservative direction of not timing out too soon).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 25 Sep 2002 18:18:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compiling client utils under win32 - current 7.3devel is broken " }, { "msg_contents": "Tom Lane wrote:\n> It might work to measure time since the start of the whole process, or\n> until the timeout target, rather than accumulating adjustments to the\n> "remains" count each time through.
In other words something like\n> \n> \tat start: targettime = time() + specified-timeout\n> \n> \teach time we are about to wait: set select timeout to\n> \ttargettime - time().\n> \n> This bounds the error at 1 second which is probably good enough (you\n> might want to add 1 to targettime to ensure the error is in the\n> conservative direction of not timing out too soon).\n> \n\nI was working with this approach, when I noticed on *unmodified* cvs tip \n(about a day old):\n\ntest=# set statement_timeout=1;\nSET\ntest=# \\dt\nERROR: Query was cancelled.\ntest=#\n\nAt:\n http://developer.postgresql.org/docs/postgres/runtime-config.html#LOGGING\nthe setting is described like this:\n\n\"STATEMENT_TIMEOUT (integer)\n\nAborts any statement that takes over the specified number of milliseconds. A \nvalue of zero turns off the timer.\"\n\nThe proposed change will take this to a 1 second granularity anyway, so I was \nthinking we should change the setting to have a UOM of seconds, and fix the \ndocumentation.
Any comments or concerns with regard to this plan?\n\nUh, I thought you were changing connection_timeout, which is libpq and\nnot a GUC parameter, not statement_timeout. Do we want sub-second\ntimeout values? Not sure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 25 Sep 2002 22:11:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: compiling client utils under win32 - current 7.3devel" }, { "msg_contents": "Bruce Momjian wrote:\n> Uh, I thought you were changing connection_timeout, which is libpq and\n> not a GUC parameter\n\nYup, you're right -- I got myself confused. Sorry.\n\n> not statement_timeout. Do we want sub-second\n> timeout values? Not sure.\n> \n\nI found it surprising that the statement_timeout was not in units of seconds, \nbut that's only because I read the docs after I tried it instead of before. I \ncan't think of a reason to have sub-second values, but it's probably not worth \nchanging it at this point.\n\nJoe\n\n", "msg_date": "Wed, 25 Sep 2002 20:21:43 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: compiling client utils under win32 - current 7.3devel" }, { "msg_contents": "Joe Conway wrote:\n> Bruce Momjian wrote:\n> > Uh, I thought you were changing connection_timeout, which is libpq and\n> > not a GUC parameter\n> \n> Yup, you're right -- I got myself confused. Sorry.\n> \n> > not statement_timeout. Do we want sub-second\n> > timeout values? Not sure.\n> > \n> \n> I found it surprising that the statement_timeout was not in units of seconds, \n> but that's only because I read the docs after I tried it instead of before. 
I \n> can't think of a reason to have sub-second values, but it's probably not worth \n> changing it at this point.\n\nMost queries are sub-second in duration so it seemed logical to keep it\nthe same as deadlock_timeout. I can see someone setting a 1/2 second\ndelay for queries.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 25 Sep 2002 23:56:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: compiling client utils under win32 - current 7.3devel" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Joe Conway wrote:\n>> I can't think of a reason to have sub-second values, but it's\n>> probably not worth changing it at this point.\n\n> Most queries are sub-second in duration so it seemed logical to keep it\n> the same as deadlock_timeout.\n\nAnd machines get faster all the time.\n\nI'm not too concerned about resolution of a connection timeout, but\nI think we want to be able to express small query timeouts.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 00:02:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: compiling client utils under win32 - current 7.3devel " }, { "msg_contents": "Tom Lane wrote:\n> It might work to measure time since the start of the whole process, or\n> until the timeout target, rather than accumulating adjustments to the\n> \"remains\" count each time through.
In other words something like\n> \n> \tat start: targettime = time() + specified-timeout\n> \n> \teach time we are about to wait: set select timeout to\n> \ttargettime - time().\n> \n> This bounds the error at 1 second which is probably good enough (you\n> might want to add 1 to targettime to ensure the error is in the\n> conservative direction of not timing out too soon).\n> \n\nThe attached patch fixes a number of issues related to compiling the client \nutilities (libpq.dll and psql.exe) for win32 (missing defines, adjustments to \nincludes, pedantic casting, non-existent functions) per:\n http://developer.postgresql.org/docs/postgres/install-win32.html.\n\nIt compiles cleanly under Windows 2000 using Visual Studio .net. Also compiles \nclean and passes all regression tests (regular and contrib) under Linux.\n\nIn addition to a review by the usual suspects, it would be very desirable for \nsomeone well versed in the peculiarities of win32 to take a look.\n\nIf there are no objections, please commit.\n\nThanks,\n\nJoe", "msg_date": "Thu, 26 Sep 2002 14:37:11 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "fix for client utils compilation under win32 " }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > It might work to measure time since the start of the whole process, or\n> > until the timeout target, rather than accumulating adjustments to the\n> > \"remains\" count each time through.
In other words something like\n> > \n> > \tat start: targettime = time() + specified-timeout\n> > \n> > \teach time we are about to wait: set select timeout to\n> > \ttargettime - time().\n> > \n> > This bounds the error at 1 second which is probably good enough (you\n> > might want to add 1 to targettime to ensure the error is in the\n> > conservative direction of not timing out too soon).\n> > \n> \n> The attached patch fixes a number of issues related to compiling the client \n> utilities (libpq.dll and psql.exe) for win32 (missing defines, adjustments to \n> includes, pedantic casting, non-existent functions) per:\n> http://developer.postgresql.org/docs/postgres/install-win32.html.\n> \n> It compiles cleanly under Windows 2000 using Visual Studio .net. Also compiles \n> clean and passes all regression tests (regular and contrib) under Linux.\n> \n> In addition to a review by the usual suspects, it would be very desirable for \n> someone well versed in the peculiarities of win32 to take a look.\n> \n> If there are no objections, please commit.\n> \n> Thanks,\n> \n> Joe\n\n> Index: src/backend/libpq/md5.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/libpq/md5.c,v\n> retrieving revision 1.18\n> diff -c -r1.18 md5.c\n> *** src/backend/libpq/md5.c\t4 Sep 2002 20:31:19 -0000\t1.18\n> --- src/backend/libpq/md5.c\t26 Sep 2002 17:56:11 -0000\n> ***************\n> *** 26,35 ****\n> *\tcan be compiled stand-alone.\n> */\n> \n> ! #ifndef MD5_ODBC\n> #include \"postgres.h\"\n> #include \"libpq/crypt.h\"\n> ! #else\n> #include \"md5.h\"\n> #endif\n> \n> --- 26,44 ----\n> *\tcan be compiled stand-alone.\n> */\n> \n> ! #if ! defined(MD5_ODBC) && ! defined(FRONTEND)\n> #include \"postgres.h\"\n> #include \"libpq/crypt.h\"\n> ! #endif\n> ! \n> ! #ifdef FRONTEND\n> ! #include \"postgres_fe.h\"\n> ! #ifndef WIN32\n> ! #include \"libpq/crypt.h\"\n> ! #endif /* WIN32 */\n> ! #endif /* FRONTEND */\n> ! \n> ! 
#ifdef MD5_ODBC\n> #include \"md5.h\"\n> #endif\n> \n> Index: src/bin/psql/command.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/command.c,v\n> retrieving revision 1.81\n> diff -c -r1.81 command.c\n> *** src/bin/psql/command.c\t22 Sep 2002 20:57:21 -0000\t1.81\n> --- src/bin/psql/command.c\t26 Sep 2002 18:18:17 -0000\n> ***************\n> *** 23,28 ****\n> --- 23,29 ----\n> #include <win32.h>\n> #include <io.h>\n> #include <fcntl.h>\n> + #include <direct.h>\n> #endif\n> \n> #include \"libpq-fe.h\"\n> ***************\n> *** 1163,1169 ****\n> \t\t\t\t\t\treturn NULL;\n> \t\t\t\t\t}\n> \n> ! \t\t\t\t\tif (i < token_len - 1)\n> \t\t\t\t\t\treturn_val[i + 1] = '\\0';\n> \t\t\t\t}\n> \n> --- 1164,1170 ----\n> \t\t\t\t\t\treturn NULL;\n> \t\t\t\t\t}\n> \n> ! \t\t\t\t\tif (i < (int) token_len - 1)\n> \t\t\t\t\t\treturn_val[i + 1] = '\\0';\n> \t\t\t\t}\n> \n> ***************\n> *** 1240,1246 ****\n> \t\texit(EXIT_FAILURE);\n> \t}\n> \n> ! \tfor (p = source; p - source < len && *p; p += PQmblen(p, pset.encoding))\n> \t{\n> \t\tif (esc)\n> \t\t{\n> --- 1241,1247 ----\n> \t\texit(EXIT_FAILURE);\n> \t}\n> \n> ! \tfor (p = source; p - source < (int) len && *p; p += PQmblen(p, pset.encoding))\n> \t{\n> \t\tif (esc)\n> \t\t{\n> ***************\n> *** 1278,1284 ****\n> \t\t\t\t\t\tchar\t *end;\n> \n> \t\t\t\t\t\tl = strtol(p, &end, 0);\n> ! \t\t\t\t\t\tc = l;\n> \t\t\t\t\t\tp = end - 1;\n> \t\t\t\t\t\tbreak;\n> \t\t\t\t\t}\n> --- 1279,1285 ----\n> \t\t\t\t\t\tchar\t *end;\n> \n> \t\t\t\t\t\tl = strtol(p, &end, 0);\n> ! 
\t\t\t\t\t\tc = (char) l;\n> \t\t\t\t\t\tp = end - 1;\n> \t\t\t\t\t\tbreak;\n> \t\t\t\t\t}\n> Index: src/bin/psql/common.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/common.c,v\n> retrieving revision 1.45\n> diff -c -r1.45 common.c\n> *** src/bin/psql/common.c\t14 Sep 2002 19:46:01 -0000\t1.45\n> --- src/bin/psql/common.c\t26 Sep 2002 18:43:31 -0000\n> ***************\n> *** 11,27 ****\n> \n> #include <errno.h>\n> #include <stdarg.h>\n> - #include <sys/time.h>\n> #ifndef HAVE_STRDUP\n> #include <strdup.h>\n> #endif\n> #include <signal.h>\n> #ifndef WIN32\n> #include <unistd.h>\t\t\t\t/* for write() */\n> #include <setjmp.h>\n> #else\n> #include <io.h>\t\t\t\t\t/* for _write() */\n> #include <win32.h>\n> #endif\n> \n> #include \"libpq-fe.h\"\n> --- 11,28 ----\n> \n> #include <errno.h>\n> #include <stdarg.h>\n> #ifndef HAVE_STRDUP\n> #include <strdup.h>\n> #endif\n> #include <signal.h>\n> #ifndef WIN32\n> + #include <sys/time.h>\n> #include <unistd.h>\t\t\t\t/* for write() */\n> #include <setjmp.h>\n> #else\n> #include <io.h>\t\t\t\t\t/* for _write() */\n> #include <win32.h>\n> + #include <sys/timeb.h>\t\t\t/* for _ftime() */\n> #endif\n> \n> #include \"libpq-fe.h\"\n> ***************\n> *** 295,303 ****\n> \tbool\t\tsuccess = false;\n> \tPGresult *results;\n> \tPGnotify *notify;\n> \tstruct timeval before,\n> \t\t\t\tafter;\n> ! \tstruct timezone tz;\n> \n> \tif (!pset.db)\n> \t{\n> --- 296,308 ----\n> \tbool\t\tsuccess = false;\n> \tPGresult *results;\n> \tPGnotify *notify;\n> + #ifndef WIN32\n> \tstruct timeval before,\n> \t\t\t\tafter;\n> ! #else\n> ! \tstruct _timeb before,\n> ! \t\t\t\tafter;\n> ! #endif\n> \n> \tif (!pset.db)\n> \t{\n> ***************\n> *** 327,337 ****\n> \t}\n> \n> \tcancelConn = pset.db;\n> \tif (pset.timing)\n> ! \t\tgettimeofday(&before, &tz);\n> \tresults = PQexec(pset.db, query);\n> \tif (pset.timing)\n> ! 
\t\tgettimeofday(&after, &tz);\n> \tif (PQresultStatus(results) == PGRES_COPY_IN)\n> \t\tcopy_in_state = true;\n> \t/* keep cancel connection for copy out state */\n> --- 332,352 ----\n> \t}\n> \n> \tcancelConn = pset.db;\n> + \n> + #ifndef WIN32\n> + \tif (pset.timing)\n> + \t\tgettimeofday(&before, NULL);\n> + \tresults = PQexec(pset.db, query);\n> + \tif (pset.timing)\n> + \t\tgettimeofday(&after, NULL);\n> + #else\n> \tif (pset.timing)\n> ! \t\t_ftime(&before);\n> \tresults = PQexec(pset.db, query);\n> \tif (pset.timing)\n> ! \t\t_ftime(&after);\n> ! #endif\n> ! \n> \tif (PQresultStatus(results) == PGRES_COPY_IN)\n> \t\tcopy_in_state = true;\n> \t/* keep cancel connection for copy out state */\n> ***************\n> *** 463,470 ****\n> --- 478,490 ----\n> \n> \t/* Possible microtiming output */\n> \tif (pset.timing && success)\n> + #ifndef WIN32\n> \t\tprintf(gettext(\"Time: %.2f ms\\n\"),\n> \t\t\t ((after.tv_sec - before.tv_sec) * 1000000.0 + after.tv_usec - before.tv_usec) / 1000.0);\n> + #else\n> + \t\tprintf(gettext(\"Time: %.2f ms\\n\"),\n> + \t\t\t ((after.time - before.time) * 1000.0 + after.millitm - before.millitm));\n> + #endif\n> \n> \treturn success;\n> }\n> Index: src/bin/psql/copy.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/copy.c,v\n> retrieving revision 1.25\n> diff -c -r1.25 copy.c\n> *** src/bin/psql/copy.c\t22 Sep 2002 20:57:21 -0000\t1.25\n> --- src/bin/psql/copy.c\t26 Sep 2002 18:59:11 -0000\n> ***************\n> *** 28,33 ****\n> --- 28,35 ----\n> \n> #ifdef WIN32\n> #define strcasecmp(x,y) stricmp(x,y)\n> + #define\t__S_ISTYPE(mode, mask)\t(((mode) & S_IFMT) == (mask))\n> + #define\tS_ISDIR(mode)\t __S_ISTYPE((mode), S_IFDIR)\n> #endif\n> \n> bool\t\tcopy_in_state;\n> Index: src/bin/psql/large_obj.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/large_obj.c,v\n> retrieving revision 
1.21\n> diff -c -r1.21 large_obj.c\n> *** src/bin/psql/large_obj.c\t4 Sep 2002 20:31:36 -0000\t1.21\n> --- src/bin/psql/large_obj.c\t26 Sep 2002 19:04:07 -0000\n> ***************\n> *** 196,202 ****\n> \t{\n> \t\tchar\t *cmdbuf;\n> \t\tchar\t *bufptr;\n> ! \t\tint\t\t\tslen = strlen(comment_arg);\n> \n> \t\tcmdbuf = malloc(slen * 2 + 256);\n> \t\tif (!cmdbuf)\n> --- 196,202 ----\n> \t{\n> \t\tchar\t *cmdbuf;\n> \t\tchar\t *bufptr;\n> ! \t\tsize_t\t\tslen = strlen(comment_arg);\n> \n> \t\tcmdbuf = malloc(slen * 2 + 256);\n> \t\tif (!cmdbuf)\n> Index: src/bin/psql/mbprint.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/mbprint.c,v\n> retrieving revision 1.4\n> diff -c -r1.4 mbprint.c\n> *** src/bin/psql/mbprint.c\t27 Aug 2002 20:16:48 -0000\t1.4\n> --- src/bin/psql/mbprint.c\t26 Sep 2002 20:11:44 -0000\n> ***************\n> *** 202,208 ****\n> \tfor (; *pwcs && len > 0; pwcs += l)\n> \t{\n> \t\tl = pg_utf_mblen(pwcs);\n> ! \t\tif ((len < l) || ((w = ucs_wcwidth(utf2ucs(pwcs))) < 0))\n> \t\t\treturn width;\n> \t\tlen -= l;\n> \t\twidth += w;\n> --- 202,208 ----\n> \tfor (; *pwcs && len > 0; pwcs += l)\n> \t{\n> \t\tl = pg_utf_mblen(pwcs);\n> ! \t\tif ((len < (size_t) l) || ((w = ucs_wcwidth(utf2ucs(pwcs))) < 0))\n> \t\t\treturn width;\n> \t\tlen -= l;\n> \t\twidth += w;\n> Index: src/bin/psql/print.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/print.c,v\n> retrieving revision 1.31\n> diff -c -r1.31 print.c\n> *** src/bin/psql/print.c\t1 Sep 2002 23:30:46 -0000\t1.31\n> --- src/bin/psql/print.c\t26 Sep 2002 21:10:59 -0000\n> ***************\n> *** 282,288 ****\n> \t{\n> \t\tint\t\t\ttlen;\n> \n> ! 
\t\tif ((tlen = pg_wcswidth((unsigned char *) title, strlen(title))) >= total_w)\n> \t\t\tfprintf(fout, \"%s\\n\", title);\n> \t\telse\n> \t\t\tfprintf(fout, \"%-*s%s\\n\", (int) (total_w - tlen) / 2, \"\", title);\n> --- 282,288 ----\n> \t{\n> \t\tint\t\t\ttlen;\n> \n> ! \t\tif ((unsigned int) (tlen = pg_wcswidth((unsigned char *) title, strlen(title))) >= total_w)\n> \t\t\tfprintf(fout, \"%s\\n\", title);\n> \t\telse\n> \t\t\tfprintf(fout, \"%-*s%s\\n\", (int) (total_w - tlen) / 2, \"\", title);\n> ***************\n> *** 1184,1191 ****\n> \t\t\t footers ? (const char *const *) footers : (const char *const *) (opt->footers),\n> \t\t\t align, &opt->topt, fout);\n> \n> ! \tfree(headers);\n> ! \tfree(cells);\n> \tif (footers)\n> \t{\n> \t\tfree(footers[0]);\n> --- 1184,1191 ----\n> \t\t\t footers ? (const char *const *) footers : (const char *const *) (opt->footers),\n> \t\t\t align, &opt->topt, fout);\n> \n> ! \tfree((void *) headers);\n> ! \tfree((void *) cells);\n> \tif (footers)\n> \t{\n> \t\tfree(footers[0]);\n> Index: src/include/pg_config.h.win32\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/pg_config.h.win32,v\n> retrieving revision 1.7\n> diff -c -r1.7 pg_config.h.win32\n> *** src/include/pg_config.h.win32\t4 Sep 2002 22:54:18 -0000\t1.7\n> --- src/include/pg_config.h.win32\t26 Sep 2002 17:32:07 -0000\n> ***************\n> *** 16,21 ****\n> --- 16,23 ----\n> \n> #define MAXPGPATH 1024\n> \n> + #define INDEX_MAX_KEYS 32\n> + \n> #define HAVE_ATEXIT\n> #define HAVE_MEMMOVE\n> \n> ***************\n> *** 48,53 ****\n> --- 50,59 ----\n> \n> #define DLLIMPORT\n> \n> + #endif\n> + \n> + #ifndef __CYGWIN__\n> + #include <windows.h>\n> #endif\n> \n> #endif /* pg_config_h_win32__ */\n> Index: src/interfaces/libpq/fe-connect.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 
1.205\n> diff -c -r1.205 fe-connect.c\n> *** src/interfaces/libpq/fe-connect.c\t22 Sep 2002 20:57:21 -0000\t1.205\n> --- src/interfaces/libpq/fe-connect.c\t26 Sep 2002 00:32:48 -0000\n> ***************\n> *** 21,27 ****\n> #include <errno.h>\n> #include <ctype.h>\n> #include <time.h>\n> - #include <unistd.h>\n> \n> #include \"libpq-fe.h\"\n> #include \"libpq-int.h\"\n> --- 21,26 ----\n> ***************\n> *** 1053,1062 ****\n> {\n> \tPostgresPollingStatusType flag = PGRES_POLLING_WRITING;\n> \n> ! \tstruct timeval remains,\n> ! \t\t\t *rp = NULL,\n> ! \t\t\t\tfinish_time,\n> ! \t\t\t\tstart_time;\n> \n> \tif (conn == NULL || conn->status == CONNECTION_BAD)\n> \t\treturn 0;\n> --- 1052,1061 ----\n> {\n> \tPostgresPollingStatusType flag = PGRES_POLLING_WRITING;\n> \n> ! \ttime_t\t\t\tfinish_time = 0,\n> ! \t\t\t\t\tcurrent_time;\n> ! \tstruct timeval\tremains,\n> ! \t\t\t\t *rp = NULL;\n> \n> \tif (conn == NULL || conn->status == CONNECTION_BAD)\n> \t\treturn 0;\n> ***************\n> *** 1074,1093 ****\n> \t\t}\n> \t\tremains.tv_usec = 0;\n> \t\trp = &remains;\n> \t}\n> \n> \twhile (rp == NULL || remains.tv_sec > 0 || remains.tv_usec > 0)\n> \t{\n> \t\t/*\n> - \t\t * If connecting timeout is set, get current time.\n> - \t\t */\n> - \t\tif (rp != NULL && gettimeofday(&start_time, NULL) == -1)\n> - \t\t{\n> - \t\t\tconn->status = CONNECTION_BAD;\n> - \t\t\treturn 0;\n> - \t\t}\n> - \n> - \t\t/*\n> \t\t * Wait, if necessary.\tNote that the initial state (just after\n> \t\t * PQconnectStart) is to wait for the socket to select for\n> \t\t * writing.\n> --- 1073,1086 ----\n> \t\t}\n> \t\tremains.tv_usec = 0;\n> \t\trp = &remains;\n> + \n> + \t\t/* calculate the finish time based on start + timeout */\n> + \t\tfinish_time = time((time_t *) NULL) + remains.tv_sec;\n> \t}\n> \n> \twhile (rp == NULL || remains.tv_sec > 0 || remains.tv_usec > 0)\n> \t{\n> \t\t/*\n> \t\t * Wait, if necessary.\tNote that the initial state (just after\n> \t\t * PQconnectStart) is to wait for the 
socket to select for\n> \t\t * writing.\n> ***************\n> *** 1128,1153 ****\n> \t\tflag = PQconnectPoll(conn);\n> \n> \t\t/*\n> ! \t\t * If connecting timeout is set, calculate remain time.\n> \t\t */\n> \t\tif (rp != NULL)\n> \t\t{\n> ! \t\t\tif (gettimeofday(&finish_time, NULL) == -1)\n> \t\t\t{\n> \t\t\t\tconn->status = CONNECTION_BAD;\n> \t\t\t\treturn 0;\n> \t\t\t}\n> ! \t\t\tif ((finish_time.tv_usec -= start_time.tv_usec) < 0)\n> ! \t\t\t{\n> ! \t\t\t\tremains.tv_sec++;\n> ! \t\t\t\tfinish_time.tv_usec += 1000000;\n> ! \t\t\t}\n> ! \t\t\tif ((remains.tv_usec -= finish_time.tv_usec) < 0)\n> ! \t\t\t{\n> ! \t\t\t\tremains.tv_sec--;\n> ! \t\t\t\tremains.tv_usec += 1000000;\n> ! \t\t\t}\n> ! \t\t\tremains.tv_sec -= finish_time.tv_sec - start_time.tv_sec;\n> \t\t}\n> \t}\n> \tconn->status = CONNECTION_BAD;\n> --- 1121,1138 ----\n> \t\tflag = PQconnectPoll(conn);\n> \n> \t\t/*\n> ! \t\t * If connecting timeout is set, calculate remaining time.\n> \t\t */\n> \t\tif (rp != NULL)\n> \t\t{\n> ! \t\t\tif (time(&current_time) == -1)\n> \t\t\t{\n> \t\t\t\tconn->status = CONNECTION_BAD;\n> \t\t\t\treturn 0;\n> \t\t\t}\n> ! \n> ! \t\t\tremains.tv_sec = finish_time - current_time;\n> ! \t\t\tremains.tv_usec = 0;\n> \t\t}\n> \t}\n> \tconn->status = CONNECTION_BAD;\n> ***************\n> *** 2946,2951 ****\n> --- 2931,2937 ----\n> \t\treturn NULL;\n> \t}\n> \n> + #ifndef WIN32\n> \t/* If password file is insecure, alert the user and ignore it. 
*/\n> \tif (stat_buf.st_mode & (S_IRWXG | S_IRWXO))\n> \t{\n> ***************\n> *** 2955,2960 ****\n> --- 2941,2947 ----\n> \t\tfree(pgpassfile);\n> \t\treturn NULL;\n> \t}\n> + #endif\n> \n> \tfp = fopen(pgpassfile, \"r\");\n> \tfree(pgpassfile);\n> Index: src/interfaces/libpq/fe-misc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/fe-misc.c,v\n> retrieving revision 1.79\n> diff -c -r1.79 fe-misc.c\n> *** src/interfaces/libpq/fe-misc.c\t4 Sep 2002 20:31:47 -0000\t1.79\n> --- src/interfaces/libpq/fe-misc.c\t26 Sep 2002 05:16:52 -0000\n> ***************\n> *** 150,158 ****\n> \t\t\t\t * try to grow the buffer. FIXME: The new size could be\n> \t\t\t\t * chosen more intelligently.\n> \t\t\t\t */\n> ! \t\t\t\tsize_t\t\tbuflen = conn->outCount + nbytes;\n> \n> ! \t\t\t\tif (buflen > conn->outBufSize)\n> \t\t\t\t{\n> \t\t\t\t\tchar\t *newbuf = realloc(conn->outBuffer, buflen);\n> \n> --- 150,158 ----\n> \t\t\t\t * try to grow the buffer. FIXME: The new size could be\n> \t\t\t\t * chosen more intelligently.\n> \t\t\t\t */\n> ! \t\t\t\tsize_t\t\tbuflen = (size_t) conn->outCount + nbytes;\n> \n> ! \t\t\t\tif (buflen > (size_t) conn->outBufSize)\n> \t\t\t\t{\n> \t\t\t\t\tchar\t *newbuf = realloc(conn->outBuffer, buflen);\n> \n> ***************\n> *** 240,246 ****\n> int\n> pqGetnchar(char *s, size_t len, PGconn *conn)\n> {\n> ! \tif (len < 0 || len > conn->inEnd - conn->inCursor)\n> \t\treturn EOF;\n> \n> \tmemcpy(s, conn->inBuffer + conn->inCursor, len);\n> --- 240,246 ----\n> int\n> pqGetnchar(char *s, size_t len, PGconn *conn)\n> {\n> ! 
\tif (len < 0 || len > (size_t) (conn->inEnd - conn->inCursor))\n> \t\treturn EOF;\n> \n> \tmemcpy(s, conn->inBuffer + conn->inCursor, len);\n> Index: src/interfaces/libpq/fe-print.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/fe-print.c,v\n> retrieving revision 1.46\n> diff -c -r1.46 fe-print.c\n> *** src/interfaces/libpq/fe-print.c\t29 Aug 2002 07:22:30 -0000\t1.46\n> --- src/interfaces/libpq/fe-print.c\t26 Sep 2002 05:08:47 -0000\n> ***************\n> *** 299,305 ****\n> \t\t\t\t\t(PQntuples(res) == 1) ? \"\" : \"s\");\n> \t\tfree(fieldMax);\n> \t\tfree(fieldNotNum);\n> ! \t\tfree(fieldNames);\n> \t\tif (usePipe)\n> \t\t{\n> #ifdef WIN32\n> --- 299,305 ----\n> \t\t\t\t\t(PQntuples(res) == 1) ? \"\" : \"s\");\n> \t\tfree(fieldMax);\n> \t\tfree(fieldNotNum);\n> ! \t\tfree((void *) fieldNames);\n> \t\tif (usePipe)\n> \t\t{\n> #ifdef WIN32\n> Index: src/interfaces/libpq/libpq-int.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/libpq-int.h,v\n> retrieving revision 1.57\n> diff -c -r1.57 libpq-int.h\n> *** src/interfaces/libpq/libpq-int.h\t4 Sep 2002 20:31:47 -0000\t1.57\n> --- src/interfaces/libpq/libpq-int.h\t25 Sep 2002 21:08:03 -0000\n> ***************\n> *** 21,28 ****\n> #define LIBPQ_INT_H\n> \n> #include <time.h>\n> - #include <sys/time.h>\n> #include <sys/types.h>\n> \n> #if defined(WIN32) && (!defined(ssize_t))\n> typedef int ssize_t;\t\t\t/* ssize_t doesn't exist in VC (atleast\n> --- 21,30 ----\n> #define LIBPQ_INT_H\n> \n> #include <time.h>\n> #include <sys/types.h>\n> + #ifndef WIN32\n> + #include <sys/time.h>\n> + #endif\n> \n> #if defined(WIN32) && (!defined(ssize_t))\n> typedef int ssize_t;\t\t\t/* ssize_t doesn't exist in VC (atleast\n> Index: src/interfaces/libpq/libpqdll.def\n> ===================================================================\n> RCS file: 
/opt/src/cvs/pgsql-server/src/interfaces/libpq/libpqdll.def,v\n> retrieving revision 1.15\n> diff -c -r1.15 libpqdll.def\n> *** src/interfaces/libpq/libpqdll.def\t2 Jun 2002 22:36:30 -0000\t1.15\n> --- src/interfaces/libpq/libpqdll.def\t26 Sep 2002 20:18:48 -0000\n> ***************\n> *** 1,5 ****\n> LIBRARY LIBPQ\n> - DESCRIPTION \"Postgres Client Access Library\"\n> EXPORTS\n> \tPQconnectdb \t\t@ 1\n> \tPQsetdbLogin \t\t@ 2\n> --- 1,4 ----\n> ***************\n> *** 90,92 ****\n> --- 89,95 ----\n> \tPQfreeNotify\t\t@ 87\n> \tPQescapeString\t\t@ 88\n> \tPQescapeBytea\t\t@ 89\n> + \tprintfPQExpBuffer\t@ 90\n> + \tappendPQExpBuffer\t@ 91\n> + \tpg_encoding_to_char\t@ 92\n> + \tpg_utf_mblen\t\t@ 93\n> Index: src/interfaces/libpq/pqexpbuffer.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/pqexpbuffer.c,v\n> retrieving revision 1.13\n> diff -c -r1.13 pqexpbuffer.c\n> *** src/interfaces/libpq/pqexpbuffer.c\t20 Jun 2002 20:29:54 -0000\t1.13\n> --- src/interfaces/libpq/pqexpbuffer.c\t26 Sep 2002 05:12:17 -0000\n> ***************\n> *** 192,198 ****\n> \t\t\t * actually stored, but at least one returns -1 on failure. Be\n> \t\t\t * conservative about believing whether the print worked.\n> \t\t\t */\n> ! \t\t\tif (nprinted >= 0 && nprinted < avail - 1)\n> \t\t\t{\n> \t\t\t\t/* Success. Note nprinted does not include trailing null. */\n> \t\t\t\tstr->len += nprinted;\n> --- 192,198 ----\n> \t\t\t * actually stored, but at least one returns -1 on failure. Be\n> \t\t\t * conservative about believing whether the print worked.\n> \t\t\t */\n> ! \t\t\tif (nprinted >= 0 && nprinted < (int) avail - 1)\n> \t\t\t{\n> \t\t\t\t/* Success. Note nprinted does not include trailing null. */\n> \t\t\t\tstr->len += nprinted;\n> ***************\n> *** 240,246 ****\n> \t\t\t * actually stored, but at least one returns -1 on failure. 
Be\n> \t\t\t * conservative about believing whether the print worked.\n> \t\t\t */\n> ! \t\t\tif (nprinted >= 0 && nprinted < avail - 1)\n> \t\t\t{\n> \t\t\t\t/* Success. Note nprinted does not include trailing null. */\n> \t\t\t\tstr->len += nprinted;\n> --- 240,246 ----\n> \t\t\t * actually stored, but at least one returns -1 on failure. Be\n> \t\t\t * conservative about believing whether the print worked.\n> \t\t\t */\n> ! \t\t\tif (nprinted >= 0 && nprinted < (int) avail - 1)\n> \t\t\t{\n> \t\t\t\t/* Success. Note nprinted does not include trailing null. */\n> \t\t\t\tstr->len += nprinted;\n> Index: src/interfaces/libpq/win32.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/win32.h,v\n> retrieving revision 1.19\n> diff -c -r1.19 win32.h\n> *** src/interfaces/libpq/win32.h\t20 Jul 2002 05:43:31 -0000\t1.19\n> --- src/interfaces/libpq/win32.h\t26 Sep 2002 17:32:19 -0000\n> ***************\n> *** 22,28 ****\n> /*\n> * crypt not available (yet)\n> */\n> ! #define crypt(a,b) (a)\n> \n> #undef EAGAIN\t\t\t\t\t/* doesn't apply on sockets */\n> #undef EINTR\n> --- 22,28 ----\n> /*\n> * crypt not available (yet)\n> */\n> ! #define crypt(a,b) ((char *) a)\n> \n> #undef EAGAIN\t\t\t\t\t/* doesn't apply on sockets */\n> #undef EINTR\n> Index: src/utils/Makefile\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/utils/Makefile,v\n> retrieving revision 1.14\n> diff -c -r1.14 Makefile\n> *** src/utils/Makefile\t27 Jul 2002 20:10:05 -0000\t1.14\n> --- src/utils/Makefile\t26 Sep 2002 05:34:49 -0000\n> ***************\n> *** 15,18 ****\n> all:\n> \n> clean distclean maintainer-clean:\n> ! \trm -f dllinit.o\n> --- 15,18 ----\n> all:\n> \n> clean distclean maintainer-clean:\n> ! 
\trm -f dllinit.o getopt.o\n> Index: src/utils/getopt.c\n> ===================================================================\n> RCS file: src/utils/getopt.c\n> diff -N src/utils/getopt.c\n> *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> --- src/utils/getopt.c\t26 Nov 2001 19:30:58 -0000\n> ***************\n> *** 0 ****\n> --- 1,125 ----\n> + /*\n> + * Copyright (c) 1987, 1993, 1994\n> + *\tThe Regents of the University of California. All rights reserved.\n> + *\n> + * Redistribution and use in source and binary forms, with or without\n> + * modification, are permitted provided that the following conditions\n> + * are met:\n> + * 1. Redistributions of source code must retain the above copyright\n> + *\t notice, this list of conditions and the following disclaimer.\n> + * 2. Redistributions in binary form must reproduce the above copyright\n> + *\t notice, this list of conditions and the following disclaimer in the\n> + *\t documentation and/or other materials provided with the distribution.\n> + * 3. All advertising materials mentioning features or use of this software\n> + *\t must display the following acknowledgement:\n> + *\tThis product includes software developed by the University of\n> + *\tCalifornia, Berkeley and its contributors.\n> + * 4. 
Neither the name of the University nor the names of its contributors\n> + *\t may be used to endorse or promote products derived from this software\n> + *\t without specific prior written permission.\n> + *\n> + * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n> + * ARE DISCLAIMED.\tIN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n> + * SUCH DAMAGE.\n> + */\n> + \n> + #if defined(LIBC_SCCS) && !defined(lint)\n> + static char sccsid[] = \"@(#)getopt.c\t8.3 (Berkeley) 4/27/95\";\n> + #endif /* LIBC_SCCS and not lint */\n> + \n> + #include <stdio.h>\n> + #include <stdlib.h>\n> + #include <string.h>\n> + \n> + int\t\t\topterr = 1,\t\t\t/* if error message should be printed */\n> + \t\t\toptind = 1,\t\t\t/* index into parent argv vector */\n> + \t\t\toptopt,\t\t\t\t/* character checked for validity */\n> + \t\t\toptreset;\t\t\t/* reset getopt */\n> + char\t *optarg;\t\t\t\t/* argument associated with option */\n> + \n> + #define BADCH\t(int)'?'\n> + #define BADARG\t(int)':'\n> + #define EMSG\t\"\"\n> + \n> + /*\n> + * getopt\n> + *\tParse argc/argv argument vector.\n> + */\n> + int\n> + getopt(nargc, nargv, ostr)\n> + int\t\t\tnargc;\n> + char\t *const * nargv;\n> + const char *ostr;\n> + {\n> + \textern char *__progname;\n> + \tstatic char *place = EMSG;\t/* option letter processing */\n> + \tchar\t 
*oli;\t\t\t/* option letter list index */\n> + \n> + \tif (optreset || !*place)\n> + \t{\t\t\t\t\t\t\t/* update scanning pointer */\n> + \t\toptreset = 0;\n> + \t\tif (optind >= nargc || *(place = nargv[optind]) != '-')\n> + \t\t{\n> + \t\t\tplace = EMSG;\n> + \t\t\treturn -1;\n> + \t\t}\n> + \t\tif (place[1] && *++place == '-' && place[1] == '\\0')\n> + \t\t{\t\t\t\t\t\t/* found \"--\" */\n> + \t\t\t++optind;\n> + \t\t\tplace = EMSG;\n> + \t\t\treturn -1;\n> + \t\t}\n> + \t}\t\t\t\t\t\t\t/* option letter okay? */\n> + \tif ((optopt = (int) *place++) == (int) ':' ||\n> + \t\t!(oli = strchr(ostr, optopt)))\n> + \t{\n> + \t\t/*\n> + \t\t * if the user didn't specify '-' as an option, assume it means\n> + \t\t * -1.\n> + \t\t */\n> + \t\tif (optopt == (int) '-')\n> + \t\t\treturn -1;\n> + \t\tif (!*place)\n> + \t\t\t++optind;\n> + \t\tif (opterr && *ostr != ':')\n> + \t\t\t(void) fprintf(stderr,\n> + \t\t\t\t\t \"%s: illegal option -- %c\\n\", __progname, optopt);\n> + \t\treturn BADCH;\n> + \t}\n> + \tif (*++oli != ':')\n> + \t{\t\t\t\t\t\t\t/* don't need argument */\n> + \t\toptarg = NULL;\n> + \t\tif (!*place)\n> + \t\t\t++optind;\n> + \t}\n> + \telse\n> + \t{\t\t\t\t\t\t\t/* need an argument */\n> + \t\tif (*place)\t\t\t\t/* no white space */\n> + \t\t\toptarg = place;\n> + \t\telse if (nargc <= ++optind)\n> + \t\t{\t\t\t\t\t\t/* no arg */\n> + \t\t\tplace = EMSG;\n> + \t\t\tif (*ostr == ':')\n> + \t\t\t\treturn BADARG;\n> + \t\t\tif (opterr)\n> + \t\t\t\t(void) fprintf(stderr,\n> + \t\t\t\t\t\t\t \"%s: option requires an argument -- %c\\n\",\n> + \t\t\t\t\t\t\t __progname, optopt);\n> + \t\t\treturn BADCH;\n> + \t\t}\n> + \t\telse\n> + /* white space */\n> + \t\t\toptarg = nargv[optind];\n> + \t\tplace = EMSG;\n> + \t\t++optind;\n> + \t}\n> + \treturn optopt;\t\t\t\t/* dump back option letter */\n> + }\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> 
subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 29 Sep 2002 01:28:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for client utils compilation under win32" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > It might work to measure time since the start of the whole process, or\n> > until the timeout target, rather than accumulating adjustments to the\n> > \"remains\" count each time through. In other words something like\n> > \n> > \tat start: targettime = time() + specified-timeout\n> > \n> > \teach time we are about to wait: set select timeout to\n> > \ttargettime - time().\n> > \n> > This bounds the error at 1 second which is probably good enough (you\n> > might want to add 1 to targettime to ensure the error is in the\n> > conservative direction of not timing out too soon).\n> > \n> \n> The attached patch fixes a number of issues related to compiling the client \n> utilities (libpq.dll and psql.exe) for win32 (missing defines, adjustments to \n> includes, pedantic casting, non-existent functions) per:\n> http://developer.postgresql.org/docs/postgres/install-win32.html.\n> \n> It compiles cleanly under Windows 2000 using Visual Studio .net. 
Also compiles \n> clean and passes all regression tests (regular and contrib) under Linux.\n> \n> In addition to a review by the usual suspects, it would be very desirable for \n> someone well versed in the peculiarities of win32 to take a look.\n> \n> If there are no objections, please commit.\n> \n> Thanks,\n> \n> Joe\n\n> Index: src/backend/libpq/md5.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/backend/libpq/md5.c,v\n> retrieving revision 1.18\n> diff -c -r1.18 md5.c\n> *** src/backend/libpq/md5.c\t4 Sep 2002 20:31:19 -0000\t1.18\n> --- src/backend/libpq/md5.c\t26 Sep 2002 17:56:11 -0000\n> ***************\n> *** 26,35 ****\n> *\tcan be compiled stand-alone.\n> */\n> \n> ! #ifndef MD5_ODBC\n> #include \"postgres.h\"\n> #include \"libpq/crypt.h\"\n> ! #else\n> #include \"md5.h\"\n> #endif\n> \n> --- 26,44 ----\n> *\tcan be compiled stand-alone.\n> */\n> \n> ! #if ! defined(MD5_ODBC) && ! defined(FRONTEND)\n> #include \"postgres.h\"\n> #include \"libpq/crypt.h\"\n> ! #endif\n> ! \n> ! #ifdef FRONTEND\n> ! #include \"postgres_fe.h\"\n> ! #ifndef WIN32\n> ! #include \"libpq/crypt.h\"\n> ! #endif /* WIN32 */\n> ! #endif /* FRONTEND */\n> ! \n> ! #ifdef MD5_ODBC\n> #include \"md5.h\"\n> #endif\n> \n> Index: src/bin/psql/command.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/command.c,v\n> retrieving revision 1.81\n> diff -c -r1.81 command.c\n> *** src/bin/psql/command.c\t22 Sep 2002 20:57:21 -0000\t1.81\n> --- src/bin/psql/command.c\t26 Sep 2002 18:18:17 -0000\n> ***************\n> *** 23,28 ****\n> --- 23,29 ----\n> #include <win32.h>\n> #include <io.h>\n> #include <fcntl.h>\n> + #include <direct.h>\n> #endif\n> \n> #include \"libpq-fe.h\"\n> ***************\n> *** 1163,1169 ****\n> \t\t\t\t\t\treturn NULL;\n> \t\t\t\t\t}\n> \n> ! 
\t\t\t\t\tif (i < token_len - 1)\n> \t\t\t\t\t\treturn_val[i + 1] = '\\0';\n> \t\t\t\t}\n> \n> --- 1164,1170 ----\n> \t\t\t\t\t\treturn NULL;\n> \t\t\t\t\t}\n> \n> ! \t\t\t\t\tif (i < (int) token_len - 1)\n> \t\t\t\t\t\treturn_val[i + 1] = '\\0';\n> \t\t\t\t}\n> \n> ***************\n> *** 1240,1246 ****\n> \t\texit(EXIT_FAILURE);\n> \t}\n> \n> ! \tfor (p = source; p - source < len && *p; p += PQmblen(p, pset.encoding))\n> \t{\n> \t\tif (esc)\n> \t\t{\n> --- 1241,1247 ----\n> \t\texit(EXIT_FAILURE);\n> \t}\n> \n> ! \tfor (p = source; p - source < (int) len && *p; p += PQmblen(p, pset.encoding))\n> \t{\n> \t\tif (esc)\n> \t\t{\n> ***************\n> *** 1278,1284 ****\n> \t\t\t\t\t\tchar\t *end;\n> \n> \t\t\t\t\t\tl = strtol(p, &end, 0);\n> ! \t\t\t\t\t\tc = l;\n> \t\t\t\t\t\tp = end - 1;\n> \t\t\t\t\t\tbreak;\n> \t\t\t\t\t}\n> --- 1279,1285 ----\n> \t\t\t\t\t\tchar\t *end;\n> \n> \t\t\t\t\t\tl = strtol(p, &end, 0);\n> ! \t\t\t\t\t\tc = (char) l;\n> \t\t\t\t\t\tp = end - 1;\n> \t\t\t\t\t\tbreak;\n> \t\t\t\t\t}\n> Index: src/bin/psql/common.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/common.c,v\n> retrieving revision 1.45\n> diff -c -r1.45 common.c\n> *** src/bin/psql/common.c\t14 Sep 2002 19:46:01 -0000\t1.45\n> --- src/bin/psql/common.c\t26 Sep 2002 18:43:31 -0000\n> ***************\n> *** 11,27 ****\n> \n> #include <errno.h>\n> #include <stdarg.h>\n> - #include <sys/time.h>\n> #ifndef HAVE_STRDUP\n> #include <strdup.h>\n> #endif\n> #include <signal.h>\n> #ifndef WIN32\n> #include <unistd.h>\t\t\t\t/* for write() */\n> #include <setjmp.h>\n> #else\n> #include <io.h>\t\t\t\t\t/* for _write() */\n> #include <win32.h>\n> #endif\n> \n> #include \"libpq-fe.h\"\n> --- 11,28 ----\n> \n> #include <errno.h>\n> #include <stdarg.h>\n> #ifndef HAVE_STRDUP\n> #include <strdup.h>\n> #endif\n> #include <signal.h>\n> #ifndef WIN32\n> + #include <sys/time.h>\n> #include <unistd.h>\t\t\t\t/* for write() 
*/\n> #include <setjmp.h>\n> #else\n> #include <io.h>\t\t\t\t\t/* for _write() */\n> #include <win32.h>\n> + #include <sys/timeb.h>\t\t\t/* for _ftime() */\n> #endif\n> \n> #include \"libpq-fe.h\"\n> ***************\n> *** 295,303 ****\n> \tbool\t\tsuccess = false;\n> \tPGresult *results;\n> \tPGnotify *notify;\n> \tstruct timeval before,\n> \t\t\t\tafter;\n> ! \tstruct timezone tz;\n> \n> \tif (!pset.db)\n> \t{\n> --- 296,308 ----\n> \tbool\t\tsuccess = false;\n> \tPGresult *results;\n> \tPGnotify *notify;\n> + #ifndef WIN32\n> \tstruct timeval before,\n> \t\t\t\tafter;\n> ! #else\n> ! \tstruct _timeb before,\n> ! \t\t\t\tafter;\n> ! #endif\n> \n> \tif (!pset.db)\n> \t{\n> ***************\n> *** 327,337 ****\n> \t}\n> \n> \tcancelConn = pset.db;\n> \tif (pset.timing)\n> ! \t\tgettimeofday(&before, &tz);\n> \tresults = PQexec(pset.db, query);\n> \tif (pset.timing)\n> ! \t\tgettimeofday(&after, &tz);\n> \tif (PQresultStatus(results) == PGRES_COPY_IN)\n> \t\tcopy_in_state = true;\n> \t/* keep cancel connection for copy out state */\n> --- 332,352 ----\n> \t}\n> \n> \tcancelConn = pset.db;\n> + \n> + #ifndef WIN32\n> + \tif (pset.timing)\n> + \t\tgettimeofday(&before, NULL);\n> + \tresults = PQexec(pset.db, query);\n> + \tif (pset.timing)\n> + \t\tgettimeofday(&after, NULL);\n> + #else\n> \tif (pset.timing)\n> ! \t\t_ftime(&before);\n> \tresults = PQexec(pset.db, query);\n> \tif (pset.timing)\n> ! \t\t_ftime(&after);\n> ! #endif\n> ! 
\n> \tif (PQresultStatus(results) == PGRES_COPY_IN)\n> \t\tcopy_in_state = true;\n> \t/* keep cancel connection for copy out state */\n> ***************\n> *** 463,470 ****\n> --- 478,490 ----\n> \n> \t/* Possible microtiming output */\n> \tif (pset.timing && success)\n> + #ifndef WIN32\n> \t\tprintf(gettext(\"Time: %.2f ms\\n\"),\n> \t\t\t ((after.tv_sec - before.tv_sec) * 1000000.0 + after.tv_usec - before.tv_usec) / 1000.0);\n> + #else\n> + \t\tprintf(gettext(\"Time: %.2f ms\\n\"),\n> + \t\t\t ((after.time - before.time) * 1000.0 + after.millitm - before.millitm));\n> + #endif\n> \n> \treturn success;\n> }\n> Index: src/bin/psql/copy.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/copy.c,v\n> retrieving revision 1.25\n> diff -c -r1.25 copy.c\n> *** src/bin/psql/copy.c\t22 Sep 2002 20:57:21 -0000\t1.25\n> --- src/bin/psql/copy.c\t26 Sep 2002 18:59:11 -0000\n> ***************\n> *** 28,33 ****\n> --- 28,35 ----\n> \n> #ifdef WIN32\n> #define strcasecmp(x,y) stricmp(x,y)\n> + #define\t__S_ISTYPE(mode, mask)\t(((mode) & S_IFMT) == (mask))\n> + #define\tS_ISDIR(mode)\t __S_ISTYPE((mode), S_IFDIR)\n> #endif\n> \n> bool\t\tcopy_in_state;\n> Index: src/bin/psql/large_obj.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/large_obj.c,v\n> retrieving revision 1.21\n> diff -c -r1.21 large_obj.c\n> *** src/bin/psql/large_obj.c\t4 Sep 2002 20:31:36 -0000\t1.21\n> --- src/bin/psql/large_obj.c\t26 Sep 2002 19:04:07 -0000\n> ***************\n> *** 196,202 ****\n> \t{\n> \t\tchar\t *cmdbuf;\n> \t\tchar\t *bufptr;\n> ! \t\tint\t\t\tslen = strlen(comment_arg);\n> \n> \t\tcmdbuf = malloc(slen * 2 + 256);\n> \t\tif (!cmdbuf)\n> --- 196,202 ----\n> \t{\n> \t\tchar\t *cmdbuf;\n> \t\tchar\t *bufptr;\n> ! 
\t\tsize_t\t\tslen = strlen(comment_arg);\n> \n> \t\tcmdbuf = malloc(slen * 2 + 256);\n> \t\tif (!cmdbuf)\n> Index: src/bin/psql/mbprint.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/mbprint.c,v\n> retrieving revision 1.4\n> diff -c -r1.4 mbprint.c\n> *** src/bin/psql/mbprint.c\t27 Aug 2002 20:16:48 -0000\t1.4\n> --- src/bin/psql/mbprint.c\t26 Sep 2002 20:11:44 -0000\n> ***************\n> *** 202,208 ****\n> \tfor (; *pwcs && len > 0; pwcs += l)\n> \t{\n> \t\tl = pg_utf_mblen(pwcs);\n> ! \t\tif ((len < l) || ((w = ucs_wcwidth(utf2ucs(pwcs))) < 0))\n> \t\t\treturn width;\n> \t\tlen -= l;\n> \t\twidth += w;\n> --- 202,208 ----\n> \tfor (; *pwcs && len > 0; pwcs += l)\n> \t{\n> \t\tl = pg_utf_mblen(pwcs);\n> ! \t\tif ((len < (size_t) l) || ((w = ucs_wcwidth(utf2ucs(pwcs))) < 0))\n> \t\t\treturn width;\n> \t\tlen -= l;\n> \t\twidth += w;\n> Index: src/bin/psql/print.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/print.c,v\n> retrieving revision 1.31\n> diff -c -r1.31 print.c\n> *** src/bin/psql/print.c\t1 Sep 2002 23:30:46 -0000\t1.31\n> --- src/bin/psql/print.c\t26 Sep 2002 21:10:59 -0000\n> ***************\n> *** 282,288 ****\n> \t{\n> \t\tint\t\t\ttlen;\n> \n> ! \t\tif ((tlen = pg_wcswidth((unsigned char *) title, strlen(title))) >= total_w)\n> \t\t\tfprintf(fout, \"%s\\n\", title);\n> \t\telse\n> \t\t\tfprintf(fout, \"%-*s%s\\n\", (int) (total_w - tlen) / 2, \"\", title);\n> --- 282,288 ----\n> \t{\n> \t\tint\t\t\ttlen;\n> \n> ! \t\tif ((unsigned int) (tlen = pg_wcswidth((unsigned char *) title, strlen(title))) >= total_w)\n> \t\t\tfprintf(fout, \"%s\\n\", title);\n> \t\telse\n> \t\t\tfprintf(fout, \"%-*s%s\\n\", (int) (total_w - tlen) / 2, \"\", title);\n> ***************\n> *** 1184,1191 ****\n> \t\t\t footers ? 
(const char *const *) footers : (const char *const *) (opt->footers),\n> \t\t\t align, &opt->topt, fout);\n> \n> ! \tfree(headers);\n> ! \tfree(cells);\n> \tif (footers)\n> \t{\n> \t\tfree(footers[0]);\n> --- 1184,1191 ----\n> \t\t\t footers ? (const char *const *) footers : (const char *const *) (opt->footers),\n> \t\t\t align, &opt->topt, fout);\n> \n> ! \tfree((void *) headers);\n> ! \tfree((void *) cells);\n> \tif (footers)\n> \t{\n> \t\tfree(footers[0]);\n> Index: src/include/pg_config.h.win32\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/include/pg_config.h.win32,v\n> retrieving revision 1.7\n> diff -c -r1.7 pg_config.h.win32\n> *** src/include/pg_config.h.win32\t4 Sep 2002 22:54:18 -0000\t1.7\n> --- src/include/pg_config.h.win32\t26 Sep 2002 17:32:07 -0000\n> ***************\n> *** 16,21 ****\n> --- 16,23 ----\n> \n> #define MAXPGPATH 1024\n> \n> + #define INDEX_MAX_KEYS 32\n> + \n> #define HAVE_ATEXIT\n> #define HAVE_MEMMOVE\n> \n> ***************\n> *** 48,53 ****\n> --- 50,59 ----\n> \n> #define DLLIMPORT\n> \n> + #endif\n> + \n> + #ifndef __CYGWIN__\n> + #include <windows.h>\n> #endif\n> \n> #endif /* pg_config_h_win32__ */\n> Index: src/interfaces/libpq/fe-connect.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.205\n> diff -c -r1.205 fe-connect.c\n> *** src/interfaces/libpq/fe-connect.c\t22 Sep 2002 20:57:21 -0000\t1.205\n> --- src/interfaces/libpq/fe-connect.c\t26 Sep 2002 00:32:48 -0000\n> ***************\n> *** 21,27 ****\n> #include <errno.h>\n> #include <ctype.h>\n> #include <time.h>\n> - #include <unistd.h>\n> \n> #include \"libpq-fe.h\"\n> #include \"libpq-int.h\"\n> --- 21,26 ----\n> ***************\n> *** 1053,1062 ****\n> {\n> \tPostgresPollingStatusType flag = PGRES_POLLING_WRITING;\n> \n> ! \tstruct timeval remains,\n> ! \t\t\t *rp = NULL,\n> ! 
\t\t\t\tfinish_time,\n> ! \t\t\t\tstart_time;\n> \n> \tif (conn == NULL || conn->status == CONNECTION_BAD)\n> \t\treturn 0;\n> --- 1052,1061 ----\n> {\n> \tPostgresPollingStatusType flag = PGRES_POLLING_WRITING;\n> \n> ! \ttime_t\t\t\tfinish_time = 0,\n> ! \t\t\t\t\tcurrent_time;\n> ! \tstruct timeval\tremains,\n> ! \t\t\t\t *rp = NULL;\n> \n> \tif (conn == NULL || conn->status == CONNECTION_BAD)\n> \t\treturn 0;\n> ***************\n> *** 1074,1093 ****\n> \t\t}\n> \t\tremains.tv_usec = 0;\n> \t\trp = &remains;\n> \t}\n> \n> \twhile (rp == NULL || remains.tv_sec > 0 || remains.tv_usec > 0)\n> \t{\n> \t\t/*\n> - \t\t * If connecting timeout is set, get current time.\n> - \t\t */\n> - \t\tif (rp != NULL && gettimeofday(&start_time, NULL) == -1)\n> - \t\t{\n> - \t\t\tconn->status = CONNECTION_BAD;\n> - \t\t\treturn 0;\n> - \t\t}\n> - \n> - \t\t/*\n> \t\t * Wait, if necessary.\tNote that the initial state (just after\n> \t\t * PQconnectStart) is to wait for the socket to select for\n> \t\t * writing.\n> --- 1073,1086 ----\n> \t\t}\n> \t\tremains.tv_usec = 0;\n> \t\trp = &remains;\n> + \n> + \t\t/* calculate the finish time based on start + timeout */\n> + \t\tfinish_time = time((time_t *) NULL) + remains.tv_sec;\n> \t}\n> \n> \twhile (rp == NULL || remains.tv_sec > 0 || remains.tv_usec > 0)\n> \t{\n> \t\t/*\n> \t\t * Wait, if necessary.\tNote that the initial state (just after\n> \t\t * PQconnectStart) is to wait for the socket to select for\n> \t\t * writing.\n> ***************\n> *** 1128,1153 ****\n> \t\tflag = PQconnectPoll(conn);\n> \n> \t\t/*\n> ! \t\t * If connecting timeout is set, calculate remain time.\n> \t\t */\n> \t\tif (rp != NULL)\n> \t\t{\n> ! \t\t\tif (gettimeofday(&finish_time, NULL) == -1)\n> \t\t\t{\n> \t\t\t\tconn->status = CONNECTION_BAD;\n> \t\t\t\treturn 0;\n> \t\t\t}\n> ! \t\t\tif ((finish_time.tv_usec -= start_time.tv_usec) < 0)\n> ! \t\t\t{\n> ! \t\t\t\tremains.tv_sec++;\n> ! \t\t\t\tfinish_time.tv_usec += 1000000;\n> ! \t\t\t}\n> ! 
\t\t\tif ((remains.tv_usec -= finish_time.tv_usec) < 0)\n> ! \t\t\t{\n> ! \t\t\t\tremains.tv_sec--;\n> ! \t\t\t\tremains.tv_usec += 1000000;\n> ! \t\t\t}\n> ! \t\t\tremains.tv_sec -= finish_time.tv_sec - start_time.tv_sec;\n> \t\t}\n> \t}\n> \tconn->status = CONNECTION_BAD;\n> --- 1121,1138 ----\n> \t\tflag = PQconnectPoll(conn);\n> \n> \t\t/*\n> ! \t\t * If connecting timeout is set, calculate remaining time.\n> \t\t */\n> \t\tif (rp != NULL)\n> \t\t{\n> ! \t\t\tif (time(&current_time) == -1)\n> \t\t\t{\n> \t\t\t\tconn->status = CONNECTION_BAD;\n> \t\t\t\treturn 0;\n> \t\t\t}\n> ! \n> ! \t\t\tremains.tv_sec = finish_time - current_time;\n> ! \t\t\tremains.tv_usec = 0;\n> \t\t}\n> \t}\n> \tconn->status = CONNECTION_BAD;\n> ***************\n> *** 2946,2951 ****\n> --- 2931,2937 ----\n> \t\treturn NULL;\n> \t}\n> \n> + #ifndef WIN32\n> \t/* If password file is insecure, alert the user and ignore it. */\n> \tif (stat_buf.st_mode & (S_IRWXG | S_IRWXO))\n> \t{\n> ***************\n> *** 2955,2960 ****\n> --- 2941,2947 ----\n> \t\tfree(pgpassfile);\n> \t\treturn NULL;\n> \t}\n> + #endif\n> \n> \tfp = fopen(pgpassfile, \"r\");\n> \tfree(pgpassfile);\n> Index: src/interfaces/libpq/fe-misc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/fe-misc.c,v\n> retrieving revision 1.79\n> diff -c -r1.79 fe-misc.c\n> *** src/interfaces/libpq/fe-misc.c\t4 Sep 2002 20:31:47 -0000\t1.79\n> --- src/interfaces/libpq/fe-misc.c\t26 Sep 2002 05:16:52 -0000\n> ***************\n> *** 150,158 ****\n> \t\t\t\t * try to grow the buffer. FIXME: The new size could be\n> \t\t\t\t * chosen more intelligently.\n> \t\t\t\t */\n> ! \t\t\t\tsize_t\t\tbuflen = conn->outCount + nbytes;\n> \n> ! \t\t\t\tif (buflen > conn->outBufSize)\n> \t\t\t\t{\n> \t\t\t\t\tchar\t *newbuf = realloc(conn->outBuffer, buflen);\n> \n> --- 150,158 ----\n> \t\t\t\t * try to grow the buffer. 
FIXME: The new size could be\n> \t\t\t\t * chosen more intelligently.\n> \t\t\t\t */\n> ! \t\t\t\tsize_t\t\tbuflen = (size_t) conn->outCount + nbytes;\n> \n> ! \t\t\t\tif (buflen > (size_t) conn->outBufSize)\n> \t\t\t\t{\n> \t\t\t\t\tchar\t *newbuf = realloc(conn->outBuffer, buflen);\n> \n> ***************\n> *** 240,246 ****\n> int\n> pqGetnchar(char *s, size_t len, PGconn *conn)\n> {\n> ! \tif (len < 0 || len > conn->inEnd - conn->inCursor)\n> \t\treturn EOF;\n> \n> \tmemcpy(s, conn->inBuffer + conn->inCursor, len);\n> --- 240,246 ----\n> int\n> pqGetnchar(char *s, size_t len, PGconn *conn)\n> {\n> ! \tif (len < 0 || len > (size_t) (conn->inEnd - conn->inCursor))\n> \t\treturn EOF;\n> \n> \tmemcpy(s, conn->inBuffer + conn->inCursor, len);\n> Index: src/interfaces/libpq/fe-print.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/fe-print.c,v\n> retrieving revision 1.46\n> diff -c -r1.46 fe-print.c\n> *** src/interfaces/libpq/fe-print.c\t29 Aug 2002 07:22:30 -0000\t1.46\n> --- src/interfaces/libpq/fe-print.c\t26 Sep 2002 05:08:47 -0000\n> ***************\n> *** 299,305 ****\n> \t\t\t\t\t(PQntuples(res) == 1) ? \"\" : \"s\");\n> \t\tfree(fieldMax);\n> \t\tfree(fieldNotNum);\n> ! \t\tfree(fieldNames);\n> \t\tif (usePipe)\n> \t\t{\n> #ifdef WIN32\n> --- 299,305 ----\n> \t\t\t\t\t(PQntuples(res) == 1) ? \"\" : \"s\");\n> \t\tfree(fieldMax);\n> \t\tfree(fieldNotNum);\n> ! 
\t\tfree((void *) fieldNames);\n> \t\tif (usePipe)\n> \t\t{\n> #ifdef WIN32\n> Index: src/interfaces/libpq/libpq-int.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/libpq-int.h,v\n> retrieving revision 1.57\n> diff -c -r1.57 libpq-int.h\n> *** src/interfaces/libpq/libpq-int.h\t4 Sep 2002 20:31:47 -0000\t1.57\n> --- src/interfaces/libpq/libpq-int.h\t25 Sep 2002 21:08:03 -0000\n> ***************\n> *** 21,28 ****\n> #define LIBPQ_INT_H\n> \n> #include <time.h>\n> - #include <sys/time.h>\n> #include <sys/types.h>\n> \n> #if defined(WIN32) && (!defined(ssize_t))\n> typedef int ssize_t;\t\t\t/* ssize_t doesn't exist in VC (atleast\n> --- 21,30 ----\n> #define LIBPQ_INT_H\n> \n> #include <time.h>\n> #include <sys/types.h>\n> + #ifndef WIN32\n> + #include <sys/time.h>\n> + #endif\n> \n> #if defined(WIN32) && (!defined(ssize_t))\n> typedef int ssize_t;\t\t\t/* ssize_t doesn't exist in VC (atleast\n> Index: src/interfaces/libpq/libpqdll.def\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/libpqdll.def,v\n> retrieving revision 1.15\n> diff -c -r1.15 libpqdll.def\n> *** src/interfaces/libpq/libpqdll.def\t2 Jun 2002 22:36:30 -0000\t1.15\n> --- src/interfaces/libpq/libpqdll.def\t26 Sep 2002 20:18:48 -0000\n> ***************\n> *** 1,5 ****\n> LIBRARY LIBPQ\n> - DESCRIPTION \"Postgres Client Access Library\"\n> EXPORTS\n> \tPQconnectdb \t\t@ 1\n> \tPQsetdbLogin \t\t@ 2\n> --- 1,4 ----\n> ***************\n> *** 90,92 ****\n> --- 89,95 ----\n> \tPQfreeNotify\t\t@ 87\n> \tPQescapeString\t\t@ 88\n> \tPQescapeBytea\t\t@ 89\n> + \tprintfPQExpBuffer\t@ 90\n> + \tappendPQExpBuffer\t@ 91\n> + \tpg_encoding_to_char\t@ 92\n> + \tpg_utf_mblen\t\t@ 93\n> Index: src/interfaces/libpq/pqexpbuffer.c\n> ===================================================================\n> RCS file: 
/opt/src/cvs/pgsql-server/src/interfaces/libpq/pqexpbuffer.c,v\n> retrieving revision 1.13\n> diff -c -r1.13 pqexpbuffer.c\n> *** src/interfaces/libpq/pqexpbuffer.c\t20 Jun 2002 20:29:54 -0000\t1.13\n> --- src/interfaces/libpq/pqexpbuffer.c\t26 Sep 2002 05:12:17 -0000\n> ***************\n> *** 192,198 ****\n> \t\t\t * actually stored, but at least one returns -1 on failure. Be\n> \t\t\t * conservative about believing whether the print worked.\n> \t\t\t */\n> ! \t\t\tif (nprinted >= 0 && nprinted < avail - 1)\n> \t\t\t{\n> \t\t\t\t/* Success. Note nprinted does not include trailing null. */\n> \t\t\t\tstr->len += nprinted;\n> --- 192,198 ----\n> \t\t\t * actually stored, but at least one returns -1 on failure. Be\n> \t\t\t * conservative about believing whether the print worked.\n> \t\t\t */\n> ! \t\t\tif (nprinted >= 0 && nprinted < (int) avail - 1)\n> \t\t\t{\n> \t\t\t\t/* Success. Note nprinted does not include trailing null. */\n> \t\t\t\tstr->len += nprinted;\n> ***************\n> *** 240,246 ****\n> \t\t\t * actually stored, but at least one returns -1 on failure. Be\n> \t\t\t * conservative about believing whether the print worked.\n> \t\t\t */\n> ! \t\t\tif (nprinted >= 0 && nprinted < avail - 1)\n> \t\t\t{\n> \t\t\t\t/* Success. Note nprinted does not include trailing null. */\n> \t\t\t\tstr->len += nprinted;\n> --- 240,246 ----\n> \t\t\t * actually stored, but at least one returns -1 on failure. Be\n> \t\t\t * conservative about believing whether the print worked.\n> \t\t\t */\n> ! \t\t\tif (nprinted >= 0 && nprinted < (int) avail - 1)\n> \t\t\t{\n> \t\t\t\t/* Success. Note nprinted does not include trailing null. 
*/\n> \t\t\t\tstr->len += nprinted;\n> Index: src/interfaces/libpq/win32.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/interfaces/libpq/win32.h,v\n> retrieving revision 1.19\n> diff -c -r1.19 win32.h\n> *** src/interfaces/libpq/win32.h\t20 Jul 2002 05:43:31 -0000\t1.19\n> --- src/interfaces/libpq/win32.h\t26 Sep 2002 17:32:19 -0000\n> ***************\n> *** 22,28 ****\n> /*\n> * crypt not available (yet)\n> */\n> ! #define crypt(a,b) (a)\n> \n> #undef EAGAIN\t\t\t\t\t/* doesn't apply on sockets */\n> #undef EINTR\n> --- 22,28 ----\n> /*\n> * crypt not available (yet)\n> */\n> ! #define crypt(a,b) ((char *) a)\n> \n> #undef EAGAIN\t\t\t\t\t/* doesn't apply on sockets */\n> #undef EINTR\n> Index: src/utils/Makefile\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/utils/Makefile,v\n> retrieving revision 1.14\n> diff -c -r1.14 Makefile\n> *** src/utils/Makefile\t27 Jul 2002 20:10:05 -0000\t1.14\n> --- src/utils/Makefile\t26 Sep 2002 05:34:49 -0000\n> ***************\n> *** 15,18 ****\n> all:\n> \n> clean distclean maintainer-clean:\n> ! \trm -f dllinit.o\n> --- 15,18 ----\n> all:\n> \n> clean distclean maintainer-clean:\n> ! \trm -f dllinit.o getopt.o\n> Index: src/utils/getopt.c\n> ===================================================================\n> RCS file: src/utils/getopt.c\n> diff -N src/utils/getopt.c\n> *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> --- src/utils/getopt.c\t26 Nov 2001 19:30:58 -0000\n> ***************\n> *** 0 ****\n> --- 1,125 ----\n> + /*\n> + * Copyright (c) 1987, 1993, 1994\n> + *\tThe Regents of the University of California. All rights reserved.\n> + *\n> + * Redistribution and use in source and binary forms, with or without\n> + * modification, are permitted provided that the following conditions\n> + * are met:\n> + * 1. 
Redistributions of source code must retain the above copyright\n> + *\t notice, this list of conditions and the following disclaimer.\n> + * 2. Redistributions in binary form must reproduce the above copyright\n> + *\t notice, this list of conditions and the following disclaimer in the\n> + *\t documentation and/or other materials provided with the distribution.\n> + * 3. All advertising materials mentioning features or use of this software\n> + *\t must display the following acknowledgement:\n> + *\tThis product includes software developed by the University of\n> + *\tCalifornia, Berkeley and its contributors.\n> + * 4. Neither the name of the University nor the names of its contributors\n> + *\t may be used to endorse or promote products derived from this software\n> + *\t without specific prior written permission.\n> + *\n> + * THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND\n> + * ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n> + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n> + * ARE DISCLAIMED.\tIN NO EVENT SHALL THE REGENTS OR CONTRIBUTORS BE LIABLE\n> + * FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL\n> + * DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS\n> + * OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)\n> + * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT\n> + * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY\n> + * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF\n> + * SUCH DAMAGE.\n> + */\n> + \n> + #if defined(LIBC_SCCS) && !defined(lint)\n> + static char sccsid[] = \"@(#)getopt.c\t8.3 (Berkeley) 4/27/95\";\n> + #endif /* LIBC_SCCS and not lint */\n> + \n> + #include <stdio.h>\n> + #include <stdlib.h>\n> + #include <string.h>\n> + \n> + int\t\t\topterr = 1,\t\t\t/* if error message should be printed */\n> + \t\t\toptind 
= 1,\t\t\t/* index into parent argv vector */\n> + \t\t\toptopt,\t\t\t\t/* character checked for validity */\n> + \t\t\toptreset;\t\t\t/* reset getopt */\n> + char\t *optarg;\t\t\t\t/* argument associated with option */\n> + \n> + #define BADCH\t(int)'?'\n> + #define BADARG\t(int)':'\n> + #define EMSG\t\"\"\n> + \n> + /*\n> + * getopt\n> + *\tParse argc/argv argument vector.\n> + */\n> + int\n> + getopt(nargc, nargv, ostr)\n> + int\t\t\tnargc;\n> + char\t *const * nargv;\n> + const char *ostr;\n> + {\n> + \textern char *__progname;\n> + \tstatic char *place = EMSG;\t/* option letter processing */\n> + \tchar\t *oli;\t\t\t/* option letter list index */\n> + \n> + \tif (optreset || !*place)\n> + \t{\t\t\t\t\t\t\t/* update scanning pointer */\n> + \t\toptreset = 0;\n> + \t\tif (optind >= nargc || *(place = nargv[optind]) != '-')\n> + \t\t{\n> + \t\t\tplace = EMSG;\n> + \t\t\treturn -1;\n> + \t\t}\n> + \t\tif (place[1] && *++place == '-' && place[1] == '\\0')\n> + \t\t{\t\t\t\t\t\t/* found \"--\" */\n> + \t\t\t++optind;\n> + \t\t\tplace = EMSG;\n> + \t\t\treturn -1;\n> + \t\t}\n> + \t}\t\t\t\t\t\t\t/* option letter okay? 
*/\n> + \tif ((optopt = (int) *place++) == (int) ':' ||\n> + \t\t!(oli = strchr(ostr, optopt)))\n> + \t{\n> + \t\t/*\n> + \t\t * if the user didn't specify '-' as an option, assume it means\n> + \t\t * -1.\n> + \t\t */\n> + \t\tif (optopt == (int) '-')\n> + \t\t\treturn -1;\n> + \t\tif (!*place)\n> + \t\t\t++optind;\n> + \t\tif (opterr && *ostr != ':')\n> + \t\t\t(void) fprintf(stderr,\n> + \t\t\t\t\t \"%s: illegal option -- %c\\n\", __progname, optopt);\n> + \t\treturn BADCH;\n> + \t}\n> + \tif (*++oli != ':')\n> + \t{\t\t\t\t\t\t\t/* don't need argument */\n> + \t\toptarg = NULL;\n> + \t\tif (!*place)\n> + \t\t\t++optind;\n> + \t}\n> + \telse\n> + \t{\t\t\t\t\t\t\t/* need an argument */\n> + \t\tif (*place)\t\t\t\t/* no white space */\n> + \t\t\toptarg = place;\n> + \t\telse if (nargc <= ++optind)\n> + \t\t{\t\t\t\t\t\t/* no arg */\n> + \t\t\tplace = EMSG;\n> + \t\t\tif (*ostr == ':')\n> + \t\t\t\treturn BADARG;\n> + \t\t\tif (opterr)\n> + \t\t\t\t(void) fprintf(stderr,\n> + \t\t\t\t\t\t\t \"%s: option requires an argument -- %c\\n\",\n> + \t\t\t\t\t\t\t __progname, optopt);\n> + \t\t\treturn BADCH;\n> + \t\t}\n> + \t\telse\n> + /* white space */\n> + \t\t\toptarg = nargv[optind];\n> + \t\tplace = EMSG;\n> + \t\t++optind;\n> + \t}\n> + \treturn optopt;\t\t\t\t/* dump back option letter */\n> + }\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 13:09:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: fix for client utils compilation under win32" } ]
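The psql `\timing` hunk in the patch above swaps gettimeofday() for _ftime() on win32 and computes elapsed milliseconds from two timestamp samples. A minimal standalone sketch of the POSIX-side arithmetic used there (the helper name elapsed_ms is mine, not part of the patch):

```c
#include <sys/time.h>

/*
 * Elapsed wall-clock milliseconds between two gettimeofday() samples,
 * using the same arithmetic as the patched psql/common.c timing code:
 * (seconds delta) * 1e6 plus (microseconds delta), scaled to ms.
 */
double
elapsed_ms(const struct timeval *before, const struct timeval *after)
{
	return ((after->tv_sec - before->tv_sec) * 1000000.0
			+ (after->tv_usec - before->tv_usec)) / 1000.0;
}
```

On win32 the patch does the equivalent with struct _timeb: (time delta) * 1000.0 plus the millitm delta.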
[ { "msg_contents": "The actual checking is done in INSERT/UPDATE/COPY. However, the\r\nchecking is currently very limited: every byte of a mutibyte character\r\nmust be greater than 0x7f.\r\n\r\n> Tatsuo,\r\n> \r\n> do I understand correctly that there is no checking for\r\n> convertion between local charset and unicode in insert and\r\n> checking is done only in select ?\r\n> \r\n> test=# create table qq (a text);\r\n> CREATE TABLE\r\n> test=# \\encoding koi8\r\n> test=# insert into qq values('бартунов');\r\n> INSERT 24617 1\r\n> test=# \\encoding unicode\r\n> test=# select * from qq;\r\n> a\r\n> ----------\r\n> п�п�я�������п�п�\r\n> (1 row)\r\n> \r\n> test=# \\encoding unicode\r\n> test=# insert into qq values('бартунов');\r\n> INSERT 24618 1\r\n> test=# select * from qq;\r\n> a\r\n> ----------\r\n> п�п�я�������п�п�\r\n> \r\n> (2 rows)\r\n> \r\n> test=# \\encoding koi8\r\n> test=# select * from qq;\r\n> WARNING: UtfToLocal: could not convert UTF-8 (0xc2c1). Ignored\r\n> WARNING: UtfToLocal: could not convert UTF-8 (0xd2d4). Ignored\r\n> WARNING: UtfToLocal: could not convert UTF-8 (0xd5ce). Ignored\r\n> WARNING: UtfToLocal: could not convert UTF-8 (0xcfd7). Ignored\r\n> a\r\n> ----------\r\n> бартунов\r\n> \r\n> (2 rows)\r\n> \r\n> \r\n> \r\n> \tRegards,\r\n> \t\tOleg\r\n> _____________________________________________________________\r\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\r\n> Sternberg Astronomical Institute, Moscow University (Russia)\r\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\r\n> phone: +007(095)939-16-83, +007(095)939-23-83\r\n> \r\n", "msg_date": "Thu, 26 Sep 2002 10:37:48 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: unicode" }, { "msg_contents": "Tatsuo Ishii kirjutas N, 26.09.2002 kell 03:37:\n> The actual checking is done in INSERT/UPDATE/COPY. 
[ { "msg_contents": "The actual checking is done in INSERT/UPDATE/COPY. However, the\r\nchecking is currently very limited: every byte of a multibyte character\r\nmust be greater than 0x7f.\r\n\r\n> Tatsuo,\r\n> \r\n> do I understand correctly that there is no checking for\r\n> conversion between local charset and unicode in insert and\r\n> checking is done only in select ?\r\n> \r\n> test=# create table qq (a text);\r\n> CREATE TABLE\r\n> test=# \\encoding koi8\r\n> test=# insert into qq values('бартунов');\r\n> INSERT 24617 1\r\n> test=# \\encoding unicode\r\n> test=# select * from qq;\r\n> a\r\n> ----------\r\n> п�п�я�������п�п�\r\n> (1 row)\r\n> \r\n> test=# \\encoding unicode\r\n> test=# insert into qq values('бартунов');\r\n> INSERT 24618 1\r\n> test=# select * from qq;\r\n> a\r\n> ----------\r\n> п�п�я�������п�п�\r\n> \r\n> (2 rows)\r\n> \r\n> test=# \\encoding koi8\r\n> test=# select * from qq;\r\n> WARNING: UtfToLocal: could not convert UTF-8 (0xc2c1). Ignored\r\n> WARNING: UtfToLocal: could not convert UTF-8 (0xd2d4). Ignored\r\n> WARNING: UtfToLocal: could not convert UTF-8 (0xd5ce). Ignored\r\n> WARNING: UtfToLocal: could not convert UTF-8 (0xcfd7). Ignored\r\n> a\r\n> ----------\r\n> бартунов\r\n> \r\n> (2 rows)\r\n> \r\n> \r\n> \tRegards,\r\n> \t\tOleg\r\n> _____________________________________________________________\r\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\r\n> Sternberg Astronomical Institute, Moscow University (Russia)\r\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\r\n> phone: +007(095)939-16-83, +007(095)939-23-83\r\n> \r\n", "msg_date": "Thu, 26 Sep 2002 10:37:48 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: unicode" }, { "msg_contents": "Tatsuo Ishii wrote on Thu, 26.09.2002 at 03:37:\n> The actual checking is done in INSERT/UPDATE/COPY. 
However, the\n> checking is currently very limited: every byte of a multibyte character\n> must be greater than 0x7f.\n\nWhere can I read about basic tech details of Unicode / Charset\nConversion / ...\n\nI'd like to find answers to the following (for database created using\nUNICODE)\n\n1. Where exactly are conversions between national charsets done\n\n2. What is converted (whole SQL statements or just data)\n\n3. What format is used for processing in memory (UCS-2, UCS-4, UTF-8,\nUTF-16, UTF-32, ...)\n\n4. What format is used when saving to disk (UCS-*, UTF-*, SCSU, ...) ?\n\n5. Are LIKE/SIMILAR aware of locale stuff ?\n\n-------------\nHannu\n \n", "msg_date": "26 Sep 2002 10:52:48 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: unicode" }, { "msg_contents": "> Where can I read about basic tech details of Unicode / Charset\n> Conversion / ...\n> \n> I'd like to find answers to the following (for database created using\n> UNICODE)\n> \n> 1. Where exactly are conversions between national charsets done\n\nNo \"national charset\" is in PostgreSQL. I assume you want to know\nwhere frontend/backend encoding conversion happens. They are handled\nby pg_server_to_client (does conversion BE to FE) and\npg_client_to_server (FE to BE). These functions are called by the\ncommunication subsystem (backend/libpq) and COPY. In summary, in most\ncases the encoding conversion is done before the parser and after the\nexecutor produces the final result.\n\n> 2. What is converted (whole SQL statements or just data)\n\nThe whole statement.\n\n> 3. What format is used for processing in memory (UCS-2, UCS-4, UTF-8,\n> UTF-16, UTF-32, ...)\n\n\"format\"? I assume you are talking about the encoding.\n\nIt is exactly the same as the database encoding. For a UNICODE database, we\nuse UTF-8. Not UCS-2 nor UCS-4.\n\n> 4. What format is used when saving to disk (UCS-*, UTF-*, SCSU, ...) ?\n\nDitto.\n\n> 5. 
Are LIKE/SIMILAR aware of locale stuff ?\n\nI don't know about SIMILAR, but I believe LIKE is not locale aware and\nis correct from the standard's point of view...\n--\nTatsuo Ishii\n", "msg_date": "Fri, 27 Sep 2002 13:29:19 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: unicode" } ]
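Tatsuo's description above — the only validity check done at INSERT/UPDATE/COPY time is that every byte of a multibyte character is above 0x7f — also explains Oleg's session: the KOI8 bytes for 'ба' (0xc2 0xc1, the pair shown in the UtfToLocal warning) all have the high bit set, so they pass the check even though they are not valid UTF-8, and only fail later on SELECT. A minimal sketch of such a check (function name mine; the actual backend code differs):

```c
#include <stdbool.h>
#include <stddef.h>

/*
 * The naive multibyte check described in the thread: accept an n-byte
 * "multibyte character" as long as every byte has the high bit set.
 * Note this accepts byte sequences that are not valid UTF-8.
 */
bool
mb_high_bits_only(const unsigned char *s, size_t n)
{
	size_t		i;

	for (i = 0; i < n; i++)
		if (s[i] < 0x80)		/* ASCII-range byte inside a multibyte char */
			return false;
	return true;
}
```

Under this rule the KOI8 pair 0xc2 0xc1 is accepted as a two-byte "UTF-8" character, which is exactly the mojibake shown in the quoted psql output.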
[ { "msg_contents": "Hello all,\n\nSome time back I posted a query to build a site with 150GB of database. In last \ncouple of weeks, lots of things were tested at my place and there are some \nresults and again some concerns. \n\nThis is a long post. Please be patient and read through. If we win this, I guess we \nhave a good marketing/advocacy case here..;-)\n\nFirst the problems (For those who do not read beyond first page)\n\n1) Database load time from flat file using copy is very high\n2) Creating index takes huge amount of time.\n3) Any suggestions for runtime as data load and query will be going in \nparallel.\n\nNow the details. Note that this is a test run only..\n\nPlatform:- 4x Xeon2.4GHz/4GB RAM/4x48 SCSI RAID5/72 GB SCSI\nRedHat7.2/PostgreSQL7.1.3\n\nDatabase in flat file: \n125,000,000 records of around 100 bytes each. \nFlat file size 12GB\n\nLoad time: 14581 sec/~8600 rows per sec/~ an MB of data per sec.\nCreate unique composite index on 2 char and a timestamp field: 25226 sec.\nDatabase size on disk: 26GB\nSelect query: 1.5 sec. for approx. 150 rows.\n\nImportant postgresql.conf settings\n\nsort_mem = 12000\nshared_buffers = 24000\nfsync=true (Sad but true. Left untouched.. Will that make a difference on \nSCSI?)\nwal_buffers = 65536 \nwal_files = 64 \n\nNow the requirements\n\nInitial flat data load: 250GB of data. This has gone up since last query. It \nwas 150GB earlier..\nOngoing inserts: 5000/sec. \nNumber of queries: 4800 queries/hour\nQuery response time: 10 sec.\n\n\nNow questions.\n\n1) Instead of copying from a single 12GB data file, will a parallel copy from \nsay 5 files speed up the things? \n\nCouple MB of data per sec. to disk is just not saturating it. It's a RAID 5 \nsetup..\n\n2) Sort mem.=12K i.e. 94MB, sounds good enough to me. Does this need further \naddition to improve create index performance?\n\n3) 5K concurrent inserts with an index on, will this need additional CPU \npower? Like deploying it on dual RISC CPUs etc? 
\n\n4) Query performance is not a problem. Though 4.8K queries per sec. expected \nresponse time from each query is 10 sec. But my guess is some serius CPU power \nwill be chewed there too..\n\n5)Will upgrading to 7.2.2/7.3 beta help?\n\nAll in all, in the test, we didn't see the performance where hardware is \nsaturated to it's limits. So effectively we are not able to get postgresql \nmaking use of it. Just pushing WAL and shared buffers does not seem to be the \nsolution.\n\nIf you guys have any suggestions. let me know. I need them all..\n\nMysql is almost out because it's creating index for last 17 hours. I don't \nthink it will keep up with 5K inserts per sec. with index. SAP DB is under \nevaluation too. But postgresql is most favourite as of now because it works. So \nI need to come up with solutions to problems that will occur in near future..\n;-)\n\nTIA..\n\nBye\n Shridhar\n\n--\nLaw of Procrastination:\tProcrastination avoids boredom; one never has\tthe \nfeeling that there is nothing important to do.\n\n", "msg_date": "Thu, 26 Sep 2002 14:05:44 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002 at 14:05, Shridhar Daithankar wrote:\n> Some time back I posted a query to build a site with 150GB of database. In last \n> couple of weeks, lots of things were tested at my place and there are some \n> results and again some concerns. \n\n> 2) Creating index takes huge amount of time.\n> Load time: 14581 sec/~8600 rows persec/~ an MB of data per sec.\n> Create unique composite index on 2 char and a timestamp field: 25226 sec.\n> Database size on disk: 26GB\n> Select query: 1.5 sec. for approx. 150 rows.\n\n> 2) Sort mem.=12K i.e. 94MB, sounds good enough to me. Does this need further \n> addition to improve create index performance?\n\nJust a thought. 
If I sort the table before making an index, would it be faster \nthan creating index on raw table? And/or if at all, how do I sort the table \nwithout duplicating it?\n\nJust a wild thought..\n\nBye\n Shridhar\n\n--\nlinux: the choice of a GNU generation(ksh@cis.ufl.edu put this on Tshirts in \n'93)\n\n", "msg_date": "Thu, 26 Sep 2002 14:24:02 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "I'll preface this by saying that while I have a large database, it doesn't\nrequire quite the performance you're talking about here.\n\nOn Thu, Sep 26, 2002 at 02:05:44PM +0530, Shridhar Daithankar wrote:\n> 1) Database load time from flat file using copy is very high\n> 2) Creating index takes huge amount of time.\n> 3) Any suggestions for runtime as data load and query will be going in \n> parallel.\n\nYou're loading all the data in one copy. I find that INSERTs are mostly\nlimited by indexes. While index lookups are cheap, they are not free and\neach index needs to be updated for each row.\n\nI found using partial indexes to only index the rows you actually use can\nhelp with the loading. It's a bit obscure though.\n\nAs for parallel loading, you'll be limited mostly by your I/O bandwidth.\nHave you measured it to make sure it's up to speed?\n\n> Now the details. Note that this is a test run only..\n> \n> Platform:- 4x Xeon2.4GHz/4GB RAM/4x48 SCSI RAID5/72 GB SCSI\n> RedHat7.2/PostgreSQL7.1.3\n> \n> Database in flat file: \n> 125,000,000 records of around 100 bytes each. \n> Flat file size 12GB\n> \n> Load time: 14581 sec/~8600 rows per sec/~ an MB of data per sec.\n> Create unique composite index on 2 char and a timestamp field: 25226 sec.\n> Database size on disk: 26GB\n> Select query: 1.5 sec. for approx. 150 rows.\n\nSo you're loading at a rate of 860KB per sec. That's not too fast. 
How many\nindexes are active at that time? Triggers and foreign keys also take their\ntoll.\n\n> Important postgresql.conf settings\n> \n> sort_mem = 12000\n> shared_buffers = 24000\n> fsync=true (Sad but true. Left untouched.. Will that make a difference on \n> SCSI?)\n> wal_buffers = 65536 \n> wal_files = 64 \n\nfsync IIRC only affects the WAL buffers now but it may be quite expensive,\nespecially considering it's running on every transaction commit. Oh, your\nWAL files are on a separate disk from the data?\n\n> Initial flat data load: 250GB of data. This has gone up since last query. It \n> was 150GB earlier..\n> Ongoing inserts: 5000/sec. \n> Number of queries: 4800 queries/hour\n> Query response time: 10 sec.\n\nThat looks quite achievable.\n\n> 1) Instead of copying from a single 12GB data file, will a parallel copy from \n> say 5 files speed things up? \n\nLimited by I/O bandwidth. On linux vmstat can tell you how many blocks are\nbeing loaded and stored per second. Try it. As long as sync() doesn't get\ndone too often, it should help.\n\n> Couple MB of data per sec. to disk is just not saturating it. It's a RAID 5 \n> setup..\n\nNo, it's not. You should be able to do better.\n\n> 2) Sort mem.=12K i.e. 94MB, sounds good enough to me. Does this need further \n> addition to improve create index performance?\n\nShould be fine. Admittedly your indexes are taking rather long to build.\n\n> 3) 5K concurrent inserts with an index on, will this need additional CPU \n> power? Like deploying it on dual RISC CPUs etc? \n\nIt shouldn't. Do you have an idea of what your CPU usage is? ps aux should\ngive you a decent idea.\n\n> 4) Query performance is not a problem. Though 4.8K queries per hour, expected \n> response time from each query is 10 sec. 
But my guess is some serious CPU power \n> will be chewed there too..\n\nShould be fine.\n\n> 5) Will upgrading to 7.2.2/7.3 beta help?\n\nPossibly, though it may be worth it just for the features/bugfixes.\n\n> All in all, in the test, we didn't see the performance where hardware is \n> saturated to its limits. So effectively we are not able to get postgresql \n> making use of it. Just pushing WAL and shared buffers does not seem to be the \n> solution.\n> \n> If you guys have any suggestions, let me know. I need them all..\n\nFind the bottleneck: CPU, I/O or memory?\n\n> Mysql is almost out because it has been creating an index for the last 17 hours. I don't \n> think it will keep up with 5K inserts per sec. with an index. SAP DB is under \n> evaluation too. But postgresql is the favourite as of now because it works. So \n> I need to come up with solutions to problems that will occur in the near future..\n> ;-)\n\n17 hours! Ouch. Either way, you should be able to do much better. Hope this\nhelps,\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Thu, 26 Sep 2002 19:05:19 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002 at 10:51, paolo.cassago@talentmanager.c wrote:\n\n> Hi,\n> it seems you have to cluster it, I don't think you have another choice.\n\nHmm.. That didn't occur to me... I guess some real-time clustering like usogres \nwould do. Unless it turns out to be a performance hog..\n\nBut this is just insert and select. No updates, no deletes (unless the customer \nmakes a 180-degree turn), so I doubt clustering will help. At the most I can \nreplicate data across machines and spread queries on them. 
Replication overhead \nas a downside and low query load on each machine as an upside..\n\n> I'm retrieving the configuration of our postgres servers (I'm out of the office\n> now), so I can send it to you. I was quite desperate about performance, and\n> I was thinking of migrating the data to an oracle database. Then I found this\n> configuration on the net, and I had a successful increase in performance.\n\nIn this case, we went with postgresql because we/our customer wants to keep the \ncosts down..:-) Now they are even asking if it's possible to keep hardware \ncosts down as well. That's getting some funny responses here but I digress..\n\n> Maybe this can help you.\n> \n> Why do you use copy to insert records? I usually use perl scripts, and they\n> work well.\n\nPerformance reasons. As I said in one of my posts earlier, putting up to 100K \nrecords in one transaction in steps of 10K did not reach the performance of copy. \nAs Tom rightly said, it was a 4-1 ratio despite using transactions..\n\nThanks once again..\nBye\n Shridhar\n\n--\nSecretary's Revenge:\tFiling almost everything under \"the\".\n\n", "msg_date": "Thu, 26 Sep 2002 14:43:20 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "Hi Shridhar,\n\nShridhar Daithankar wrote:\n<snip>\n> 3) Any suggestions for runtime as data load and query will be going in\n> parallel.\n\nThat sounds unusual. From reading this, it *sounds* like you'll be\nrunning queries against an incomplete dataset, or maybe just running the\nqueries that affect the tables loaded thus far (during the initial\nload).\n\n<snip>\n> fsync=true (Sad but true. Left untouched.. Will that make a difference on\n> SCSI?)\n\nDefinitely. Have directly measured a ~ 2x tps throughput increase on\nFreeBSD when leaving fsync off whilst performance measuring stuff\nrecently (PG 7.2.2). 
Like anything it'll depend on workload, phase of\nmoon, etc, but it's a decent indicator.\n\n<snip>\n> Now questions.\n> \n> 1) Instead of copying from a single 12GB data file, will a parallel copy from\n> say 5 files speed things up?\n\nNot sure yet. Haven't yet done enough performance testing (on the cards\nvery soon though).\n\n> Couple MB of data per sec. to disk is just not saturating it. It's a RAID 5\n> setup..\n\nfsync = off would help during the data load, but not a good idea if\nyou're going to be running queries against it at the same time.\n\nAm still getting the hang of performance tuning stuff. Have a bunch of\nUltra160 hardware for the Intel platform, and am testing against it as\ntime permits.\n\nNot as high end as I'd like, but it's a start.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n<snip>\n> Bye\n> Shridhar\n\n-- \n"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there."\n - Indira Gandhi\n", "msg_date": "Thu, 26 Sep 2002 19:17:32 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002 at 19:17, Justin Clift wrote:\n> Shridhar Daithankar wrote:\n> <snip>\n> > 3) Any suggestions for runtime as data load and query will be going in\n> > parallel.\n> \n> That sounds unusual. From reading this, it *sounds* like you'll be\n> running queries against an incomplete dataset, or maybe just running the\n> queries that affect the tables loaded thus far (during the initial\n> load).\n\nThat's correct. Load the data so far and keep inserting data as and when it \nis generated.\n\nThey don't mind running against the data so far. It's not very accurate stuff \nIMO...\n\n> > fsync=true (Sad but true. Left untouched.. Will that make a difference on\n> > SCSI?)\n> \n> Definitely. 
Have directly measured a ~ 2x tps throughput increase on\n> FreeBSD when leaving fsync off whilst performance measuring stuff\n> recently (PG 7.2.2). Like anything it'll depend on workload, phase of\n> moon, etc, but it's a decent indicator.\n\nI didn't know that even mattered with SCSI.. Will check it out..\n\n> fsync = off would help during the data load, but not a good idea if\n> you're going to be running queries against it at the same time.\n\nThat's OK for the reasons mentioned above. It wouldn't be out of place to \nexpect a UPS with such an installation...\n\nBye\n Shridhar\n\n--\nHoare's Law of Large Problems:\tInside every large problem is a small problem \nstruggling to get out.\n\n", "msg_date": "Thu, 26 Sep 2002 15:05:40 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002 at 19:05, Martijn van Oosterhout wrote:\n\n> On Thu, Sep 26, 2002 at 02:05:44PM +0530, Shridhar Daithankar wrote:\n> > 1) Database load time from flat file using copy is very high\n> > 2) Creating index takes huge amount of time.\n> > 3) Any suggestions for runtime as data load and query will be going in \n> > parallel.\n> \n> You're loading all the data in one copy. I find that INSERTs are mostly\n> limited by indexes. While index lookups are cheap, they are not free and\n> each index needs to be updated for each row.\n> \n> I found that using partial indexes to only index the rows you actually use can\n> help with the loading. It's a bit obscure though.\n> \n> As for parallel loading, you'll be limited mostly by your I/O bandwidth.\n> Have you measured it to make sure it's up to speed?\n\nWell. It's like this, as of now.. CreateDB->create table->create index->Select.\n\nSo loading is not slowed by the index. As for your hint about vmstat, will check it \nout.\n> So you're loading at a rate of 860KB per sec. That's not too fast. 
How many\n> indexes are active at that time? Triggers and foreign keys also take their\n> toll.\n\nNothing except the table where the data is loaded..\n\n> fsync IIRC only affects the WAL buffers now but it may be quite expensive,\n> especially considering it's running on every transaction commit. Oh, your\n> WAL files are on a separate disk from the data?\n\nNo. Same RAID 5 disks..\n\n> It shouldn't. Do you have an idea of what your CPU usage is? ps aux should\n> give you a decent idea.\n\nI guess we forgot to monitor system parameters. Next on my list is running \nvmstat, top and tuning bdflush.\n \n> Find the bottleneck: CPU, I/O or memory?\n\nUnderstood..\n> \n> > Mysql is almost out because it has been creating an index for the last 17 hours. I don't \n> > think it will keep up with 5K inserts per sec. with an index. SAP DB is under \n> > evaluation too. But postgresql is the favourite as of now because it works. So \n> > I need to come up with solutions to problems that will occur in the near future..\n> > ;-)\n> \n> 17 hours! Ouch. Either way, you should be able to do much better. Hope this\n> helps,\n\nHeh.. no wonder this evaluation is taking more than 2 weeks.. Mysql was running \nout of disk space while creating the index and crashing. An upgrade to mysql helped \nthere but no numbers as yet..\n\nThanks once again...\nBye\n Shridhar\n\n--\nBoren's Laws:\t(1) When in charge, ponder.\t(2) When in trouble, delegate.\t(3) \nWhen in doubt, mumble.\n\n", "msg_date": "Thu, 26 Sep 2002 15:16:50 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On Thursday 26 Sep 2002 9:35 am, Shridhar Daithankar wrote:\n\n[questions re: large database]\n\nBefore reading my advice please bear in mind you are operating way beyond the \nscale of anything I have ever built.\n\n> Now the details. 
Note that this is a test run only..\n>\n> Platform:- 4x Xeon2.4GHz/4GB RAM/4x48 SCSI RAID5/72 GB SCSI\n> RedHat7.2/PostgreSQL7.1.3\n>\n> Database in flat file:\n> 125,000,000 records of around 100 bytes each.\n> Flat file size 12GB\n>\n> Load time: 14581 sec/~8600 rows per sec/~ an MB of data per sec.\n> Create unique composite index on 2 char and a timestamp field: 25226 sec.\n> Database size on disk: 26GB\n> Select query: 1.5 sec. for approx. 150 rows.\n>\n> Important postgresql.conf settings\n[snipped setting details for moment]\n\nHave you tried putting the wal files, syslog etc on separate disks/volumes? If \nyou've settled on Intel, about the only thing you can optimise further is the \ndisks.\n\nOh - and the OS - make sure you're running a (good) recent kernel for that \nsort of hardware, I seem to remember some substantial changes in the 2.4 \nseries regarding multi-processor.\n\n> Now the requirements\n>\n> Initial flat data load: 250GB of data. This has gone up since last query.\n> It was 150GB earlier..\n> Ongoing inserts: 5000/sec.\n> Number of queries: 4800 queries/hour\n> Query response time: 10 sec.\n\nIs this 5000 rows in say 500 transactions, or 5000 insert transactions per \nsecond? How many concurrent clients is this? Similarly for the 4800 queries, \nhow many concurrent clients is this? Are they expected to return approx 150 \nrows as in your test?\n\n> Now questions.\n>\n> 1) Instead of copying from a single 12GB data file, will a parallel copy\n> from say 5 files speed things up?\n\nIf the CPU is the bottleneck then it should, but it's difficult to say \nwithout figures.\n\n> Couple MB of data per sec. to disk is just not saturating it. It's a RAID 5\n> setup..\n\nWhat is saturating during the flat-file load? 
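A quick way to watch it while the copy runs (a sketch; iostat assumes the sysstat package is installed):

```shell
# CPU and block I/O, sampled once a second (bi/bo = blocks in/out per
# second; lots of idle CPU with low bo suggests seek-bound disks):
vmstat 1 5
# Per-device utilisation, if sysstat is installed:
# iostat -x 2 5
```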
Something must be maxed in top / \niostat / vmstat.\n\n[snip]\n>\n> 5) Will upgrading to 7.2.2/7.3 beta help?\n\nIt's unlikely to hurt.\n\n> All in all, in the test, we didn't see the performance where hardware is\n> saturated to its limits.\n\nSomething *must* be.\n\nWhat are your disaster recovery plans? I can see problems with taking backups \nif this beast is live 24/7.\n\n- Richard Huxton\n", "msg_date": "Thu, 26 Sep 2002 10:48:06 +0100", "msg_from": "Richard Huxton <dev@archonet.com>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "Shridhar Daithankar wrote:\n<snip>\n> > > fsync=true (Sad but true. Left untouched.. Will that make a difference on\n> > > SCSI?)\n> >\n> > Definitely. Have directly measured a ~ 2x tps throughput increase on\n> > FreeBSD when leaving fsync off whilst performance measuring stuff\n> > recently (PG 7.2.2). Like anything it'll depend on workload, phase of\n> > moon, etc, but it's a decent indicator.\n> \n> I didn't know that even mattered with SCSI.. Will check it out..\n\nCool. When testing, it had FreeBSD 4.6.2 installed on one drive along\nwith the PostgreSQL 7.2.2 binaries, it had the data on a second drive\n(mounted as /pgdata), and it had the pg_xlog directory mounted on a\nthird drive. Swap had its own drive as well.\n\nEverything is UltraSCSI, etc. Haven't yet tested for a performance\ndifference through moving the indexes to another drive after creation\nthough. That apparently has the potential to help as well.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 26 Sep 2002 19:49:53 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Shridhar Daithankar wrote:\n> \n> On 26 Sep 2002 at 19:05, Martijn van Oosterhout wrote:\n<snip>\n> > fsync IIRC only affects the WAL buffers now but it may be quite expensive,\n> > especially considering it's running on every transaction commit. Oh, your\n> > WAL files are on a seperate disk from the data?\n> \n> No. Same RAID 5 disks..\n\nNot sure if this is a good idea. Would have to think deeply about the\ncontroller and drive optimisation/load characteristics.\n\nIf it's any help, when I was testing recently with WAL on a separate\ndrive, the WAL logs were doing more read&writes per second than the main\ndata drive. This would of course be affected by the queries you are\nrunning against the database. I was just running Tatsuo's TPC-B stuff,\nand the OSDB AS3AP tests.\n\n> I guess we forgot to monitor system parameters. Next on my list is running\n> vmstat, top and tuning bdflush.\n\nThat'll just be the start of it for serious performance tuning and\nlearning how PostgreSQL works. :)\n\n<snip>\n> Thanks once again...\n> Bye\n> Shridhar\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 26 Sep 2002 19:56:34 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> RedHat7.2/PostgreSQL7.1.3\n\nI'd suggest a newer release of Postgres ... 
7.1.3 is pretty old ...\n\n> Create unique composite index on 2 char and a timestamp field: 25226 sec.\n\nWhat do you mean by \"char\" exactly? If it's really char(N), how much\nare you paying in padding space? There are very very few cases where\nI'd not say to use varchar(N), or text, instead. Also, does it have to\nbe character data? If you could use an integer or float datatype\ninstead the index operations should be faster (though I can't say by\nhow much). Have you thought carefully about the order in which the\ncomposite index columns are listed?\n\n> sort_mem = 12000\n\nTo create an index of this size, you want to push sort_mem as high as it\ncan go without swapping. 12000 sounds fine for the global setting, but\nin the process that will create the index, try setting sort_mem to some\nhundreds of megs or even 1Gb. (But be careful: the calculation of space\nactually used by CREATE INDEX is off quite a bit in pre-7.3 releases\n:-(. You should probably expect the actual process size to grow to two\nor three times what you set sort_mem to. Don't let it get so big as to\nswap.)\n\n> wal_buffers = 65536 \n\nThe above is a complete waste of memory space, which would be better\nspent on letting the kernel expand its disk cache. There's no reason\nfor wal_buffers to be more than a few dozen.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 10:33:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing " }, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n>> On 26 Sep 2002 at 19:05, Martijn van Oosterhout wrote:\n>>> fsync IIRC only affects the WAL buffers now but it may be quite expensive,\n>>> especially considering it's running on every transaction commit. Oh, your\n>>> WAL files are on a seperate disk from the data?\n\n> Not sure if this is a good idea. 
Would have to think deeply about the\n> controller and drive optimisation/load characteristics.\n\n> If it's any help, when I was testing recently with WAL on a separate\n> drive, the WAL logs were doing more read&writes per second than the main\n> data drive.\n\n... but way fewer seeks. For anything involving lots of updating\ntransactions (and certainly 5000 separate insertions per second would\nqualify; can those be batched??), it should be a win to put WAL on its\nown spindle, just to get locality of access to the WAL.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 10:42:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing " }, { "msg_contents": "On 26 Sep 2002 at 10:33, Tom Lane wrote:\n\n> \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> > RedHat7.2/PostgreSQL7.1.3\n> \n> I'd suggest a newer release of Postgres ... 7.1.3 is pretty old ...\n\nI agree.. downloading 7.2.2 right away..\n\n> > Create unique composite index on 2 char and a timestamp field: 25226 sec.\n> \n> What do you mean by \"char\" exactly? If it's really char(N), how much\n> are you paying in padding space? There are very very few cases where\n> I'd not say to use varchar(N), or text, instead. Also, does it have to\n> be character data? If you could use an integer or float datatype\n> instead the index operations should be faster (though I can't say by\n> how much). Have you thought carefully about the order in which the\n> composite index columns are listed?\n\nI have forwarded the idea of putting things into numbers. If it causes a speedup \nin index lookup/creation, it would do. Looks like bigint is the order of the \nday..\n\n> \n> > sort_mem = 12000\n> \n> To create an index of this size, you want to push sort_mem as high as it\n> can go without swapping. 
12000 sounds fine for the global setting, but\n> in the process that will create the index, try setting sort_mem to some\n> hundreds of megs or even 1Gb. (But be careful: the calculation of space\n> actually used by CREATE INDEX is off quite a bit in pre-7.3 releases\n> :-(. You should probably expect the actual process size to grow to two\n> or three times what you set sort_mem to. Don't let it get so big as to\n> swap.)\n\nGreat. I was skeptical about pushing it beyond 100MB. Now I can push it to the corners..\n\n> > wal_buffers = 65536 \n> \n> The above is a complete waste of memory space, which would be better\n> spent on letting the kernel expand its disk cache. There's no reason\n> for wal_buffers to be more than a few dozen.\n\nThat was a rather desperate move. Nothing was improving performance and then we \nstarted pushing numbers.. Will get it back.. Same goes for 64 WAL files.. A GB \nlooks like waste to me..\n\nI might have found the bottleneck, although by accident. Mysql was running out \nof space while creating an index. So my friend shut down mysql and tried to move \nthings by hand to create links. He noticed that even things like cp were \nterribly slow and it hit us.. Maybe the culprit is the file system. Ext3 in \nthis case. \n\nMy friend argues for ext2 to eliminate journalling overhead but I favour \nreiserfs personally, having used it in pgbench with 10M rows on a paltry 20GB IDE \ndisk for 25 tps..\n\nWe will be attempting reiserfs and/or XFS if required. I know how much speed \ndifference exists between reiserfs and ext2. 
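A crude way to compare candidate filesystems (just a sketch -- /tmp stands in here for a directory on the filesystem under test; many small fsync'd writes exercise the journal far harder than one big write):

```shell
TARGET=/tmp/fs_probe
mkdir -p "$TARGET"
start=$(date +%s)
# 100 tiny writes, each forced to disk -- journalling overhead shows up here:
for i in $(seq 1 100); do
  dd if=/dev/zero of="$TARGET/f.$i" bs=4k count=1 conv=fsync 2>/dev/null
done
echo "100 fsync'd 4k writes took $(( $(date +%s) - start )) seconds"
rm -rf "$TARGET"
```

Running the same loop on ext2, ext3 and reiserfs mounts should make any pathological difference obvious.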
Would not be surprised if \neverything just starts screaming in one go..\n\nBye\n Shridhar\n\n--\nCropp's Law:\tThe amount of work done varies inversely with the time spent in the\t\noffice.\n\n", "msg_date": "Thu, 26 Sep 2002 20:22:05 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing " }, { "msg_contents": "On 26 Sep 2002 at 10:42, Tom Lane wrote:\n\n> Justin Clift <justin@postgresql.org> writes:\n> > If it's any help, when I was testing recently with WAL on a separate\n> > drive, the WAL logs were doing more read&writes per second than the main\n> > data drive.\n> \n> ... but way fewer seeks. For anything involving lots of updating\n> transactions (and certainly 5000 separate insertions per second would\n> qualify; can those be batched??), it should be a win to put WAL on its\n> own spindle, just to get locality of access to the WAL.\n\nProbably they will be a single transaction. If possible we will bunch more of \nthem together.. like 5 seconds of data pushed down in a single transaction, but \nnot sure it's possible..\n\nThis is a bit like replication, but from a live oracle machine to postgres, from \nthe information I have. So there should be some chance of tuning there..\n\nBye\n Shridhar\n\n--\nLangsam's Laws:\t(1) Everything depends.\t(2) Nothing is always.\t(3) Everything \nis sometimes.\n\n", "msg_date": "Thu, 26 Sep 2002 20:28:11 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing " }, { "msg_contents": "On Thursday 26 September 2002 21:52, Shridhar Daithankar wrote:\n\n> I might have found the bottleneck, although by accident. Mysql was running\n> out of space while creating an index. So my friend shut down mysql and tried\n> to move things by hand to create links. 
He noticed that even things like cp\n> were terribly slow and it hit us.. Maybe the culprit is the file system.\n> Ext3 in this case.\n>\n> My friend argues for ext2 to eliminate journalling overhead but I favour\n> reiserfs personally, having used it in pgbench with 10M rows on a paltry 20GB\n> IDE disk for 25 tps..\n>\n> We will be attempting reiserfs and/or XFS if required. I know how much\n> speed difference exists between reiserfs and ext2. Would not be surprised\n> if everything just starts screaming in one go..\n\nAs someone found before, any non-journaling FS is faster than a\njournaling one. This is due to the double work done by the FS and the database.\n\nTry it on ext2 and compare.\n\n--\nDenis\n\n", "msg_date": "Thu, 26 Sep 2002 22:04:41 +0700", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Shridhar Daithankar wrote:\n<snip>\n> My friend argues for ext2 to eliminate journalling overhead but I favour\n> reiserfs personally, having used it in pgbench with 10M rows on a paltry 20GB IDE\n> disk for 25 tps..\n\nIf it's any help, the setup I mentioned before with different disks for\nthe data and the WAL files was getting an average of about 72 tps with\n200 concurrent users on pgbench. Haven't tuned it in a hard core way at\nall, and it only has 256MB DDR RAM in it at the moment (single CPU\nAthlonXP 1600). These are figures made during the 2.5k+ test runs of\npgbench done when developing pg_autotune recently.\n\nAs a curiosity point, how predictable are the queries you're going to be\nrunning on your database? They sound very simple and very predictable.\n\nThe pg_autotune tool might be your friend here. It can deal with\narbitrary SQL instead of using Tatsuo's pg_bench stuff, and it can\nalso deal with an already loaded database. 
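For reference, the pgbench figures above came from invocations roughly along these lines (a sketch only -- the database name and scale factor here are made up, and pgbench lives in contrib/):

```shell
# Initialise a scratch database at scale factor 10 (~1M accounts rows):
# pgbench -i -s 10 bench
# 200 concurrent clients, 100 transactions each:
# pgbench -c 200 -t 100 bench
```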
You'd just have to tweak the\nnames of the tables that it vacuums and the names of the indexes that it\nreindexes between each run, to get some idea of your overall server\nperformance at different load points.\n\nProbably worth taking a good look at if you're not afraid of editing\nvariables in C code. :)\n \n> We will be attempting raiserfs and/or XFS if required. I know how much speed\n> difference exists between resiserfs and ext2. Would not be surprised if\n> everythng just starts screaming in one go..\n\nWe'd all probably be interested to hear this. Added the PostgreSQL\n\"Performance\" mailing list to this thread too, Just In Case. (wow that's\na lot of cross posting now).\n\nRegards and best wishes,\n\nJustin Clift\n \n> Bye\n> Shridhar\n> \n> --\n> Cropp's Law: The amount of work done varies inversly with the time spent in the\n> office.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 27 Sep 2002 01:12:49 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 27 Sep 2002 at 1:12, Justin Clift wrote:\n\n> Shridhar Daithankar wrote:\n> As a curiosity point, how predictable are the queries you're going to be\n> running on your database? They sound very simple and very predicatable.\n\nMostly predictable selects. Not a domain expert on telecom so not very sure. \nBut in my guess prepare statement in 7.3 should come pretty handy. i.e. 
by the \ntime we finish evaluation and test deployment, 7.3 will be out in the next couple \nof months, so to speak. So I would recommend doing it the 7.3 way only..\n> \n> The pg_autotune tool might be your friend here. It can deal with\n> arbitrary SQL instead of using Tatsuo's pg_bench stuff, and it can\n> also deal with an already loaded database. You'd just have to tweak the\n> names of the tables that it vacuums and the names of the indexes that it\n> reindexes between each run, to get some idea of your overall server\n> performance at different load points.\n> \n> Probably worth taking a good look at if you're not afraid of editing\n> variables in C code. :)\n\nGladly. We started with altering pgbench here for testing and rapidly settled \non perl-generated random queries. Once postgresql wins the evaluation match and \nthings come to implementation, pg_autotune would be a handy tool. It's just that \nI can't do it right now. Have to fight mysql and SAP DB before that..\n\nBTW any performance figures on SAP DB? People here are already frustrated with it, \ngiven the difficulties in setting it up. But still..\n> \n\n> > We will be attempting reiserfs and/or XFS if required. I know how much speed\n> > difference exists between reiserfs and ext2. Would not be surprised if\n> > everything just starts screaming in one go..\n> \n> We'd all probably be interested to hear this. Added the PostgreSQL\n> \"Performance\" mailing list to this thread too, Just In Case. (wow that's\n> a lot of cross posting now).\n\nI know..;-) Glad that the PG list does not have strict policies like no non-\nsubscriber posting or no attachments.. etc.. \n\nIMO reiserfs, though a journalling one, is faster than ext2 etc. because of the way \nit handles metadata. Personally I haven't come across ext2 being faster than \nreiserfs on the few machines here for day-to-day use.\n\nI guess I should have a FreeBSD CD handy too.. Just to give it a try. If it \ncomes down to a better VM.. though using 2.4.19 here.. 
so it shouldn't matter \nmuch..\n\nI will keep you guys posted on the file system stuff... Glad that we have so much \nflexibility with postgresql..\n\nBye\n Shridhar\n\n--\nBilbo's First Law:\tYou cannot count friends that are all packed up in barrels.\n\n", "msg_date": "Thu, 26 Sep 2002 20:59:01 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 09:52, Shridhar Daithankar wrote:\n> My friend argues for ext2 to eliminate journalling overhead but I favour \n> reiserfs personally, having used it in pgbench with 10M rows on a paltry 20GB IDE \n> disk for 25 tps..\n> \n> We will be attempting reiserfs and/or XFS if required. I know how much speed \n> difference exists between reiserfs and ext2. Would not be surprised if \n> everything just starts screaming in one go..\n> \n\nI'm not sure about reiserfs or ext3 but with XFS, you can create your\nlog on another disk. Also worth noting is that you can also configure\nthe size and number of log buffers. There are also some other\nperformance-type enhancements you can fiddle with if you don't mind\nrisking time stamp consistency in the event of a crash. If your setup\nallows for it, you might want to consider using XFS in this\nconfiguration.\n\nWhile I have not personally tried moving XFS' log to another device,\nI've heard that performance gains can be truly stellar. 
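The setup would look something like this (a sketch only -- the device names are hypothetical, and the mkfs line destroys data, so both are shown commented out):

```shell
# Build an XFS filesystem on /dev/sda1 with its log on a second disk:
# mkfs.xfs -l logdev=/dev/sdb1,size=65536b /dev/sda1
# Mount it using the external log and more in-memory log buffers:
# mount -t xfs -o logdev=/dev/sdb1,logbufs=8 /dev/sda1 /pgdata
```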
Assuming memory\nallows, twiddling with the log buffering is said to allow for large\nstrides in performance as well.\n\nIf you do try this, I'd love to hear back about your results and\nimpressions.\n\nGreg", "msg_date": "26 Sep 2002 10:41:37 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Shridhar Daithankar wrote:\n> I might have found the bottleneck, although by accident. Mysql was running out \n> of space while creating index. So my friend shut down mysql and tried to move \n> things by hand to create links. He noticed that even things like cp were \n> terribly slow and it hit us.. May be the culprit is the file system. Ext3 in \n> this case. \n\nI just added a file system and multi-cpu section to my performance\ntuning paper:\n\n\thttp://www.ca.postgresql.org/docs/momjian/hw_performance/\n\nThe paper does recommend ext3, but the differences between file systems\nare very small. If you are seeing 'cp' as slow, I wonder if it may be\nsomething more general, like poorly tuned hardware or something. You can\nuse 'dd' to throw some data around the file system and see if that is\nshowing slowness; compare those numbers to another machine that has\ndifferent hardware/OS.\n\nAlso, though ext3 is slower, turning fsync off should make ext3 function\nsimilar to ext2. That would be an interesting test if you suspect ext3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 12:41:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n\n> I'm not sure about reiserfs or ext3 but with XFS, you can create your\n> log on another disk. Also worth noting is that you can also configure\n> the size and number of log buffers. There are also some other\n> performance type enhancements you can fiddle with if you don't mind\n> risking time stamp consistency in the event of a crash. If your setup\n> allows for it, you might want to consider using XFS in this\n> configuration.\n\nYou can definitely put the ext3 log on a different disk with 2.4\nkernels. \n\nAlso, if you put the WAL logs on a different disk from the main\ndatabase, and mount that partition with 'data=writeback' (ie\nmetadata-only journaling) ext3 should be pretty fast, since WAL files\nare preallocated and there will therefore be almost no metadata\nupdates.\n\nYou should be able to mount the main database with \"data=ordered\" (the\ndefault) for good performance and reasonable safety.\n\nI think putting WAL on its own disk(s) is one of the keys here.\n\n-Doug\n", "msg_date": "26 Sep 2002 13:16:36 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 11:41, Bruce Momjian wrote:\n> Shridhar Daithankar wrote:\n> > I might have found the bottleneck, although by accident. Mysql was running out \n> > of space while creating index. So my friend shut down mysql and tried to move \n> > things by hand to create links. He noticed that even things like cp were \n> > terribly slow and it hit us.. May be the culprit is the file system. Ext3 in \n> > this case. 
\n> \n> I just added a file system and multi-cpu section to my performance\n> tuning paper:\n> \n> \thttp://www.ca.postgresql.org/docs/momjian/hw_performance/\n> \n> The paper does recommend ext3, but the differences between file systems\n> are very small. If you are seeing 'cp' as slow, I wonder if it may be\n> something more general, like poorly tuned hardware or something. You can\n> use 'dd' to throw some data around the file system and see if that is\n> showing slowness; compare those numbers to another machine that has\n> different hardware/OS.\n\n\nThat's a good point. Also, if you're using IDE, you do need to verify\nthat you're using DMA and proper PIO mode if at all possible. Also, big\nperformance improvements can be seen by making sure your IDE bus speed\nhas been properly configured. The drivetweak-gtk and hdparm utilities\ncan make a huge difference in performance. Just be sure you know what the\nheck you're doing when you mess with those.\n\nGreg", "msg_date": "26 Sep 2002 12:36:57 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 11:41, Bruce Momjian wrote:\n> Shridhar Daithankar wrote:\n> > I might have found the bottleneck, although by accident. Mysql was running out \n> > of space while creating index. So my friend shut down mysql and tried to move \n> > things by hand to create links. He noticed that even things like cp were \n> > terribly slow and it hit us.. May be the culprit is the file system. Ext3 in \n> > this case. \n> \n> I just added a file system and multi-cpu section to my performance\n> tuning paper:\n> \n> \thttp://www.ca.postgresql.org/docs/momjian/hw_performance/\n> \n> The paper does recommend ext3, but the differences between file systems\n> are very small. If you are seeing 'cp' as slow, I wonder if it may be\n> something more general, like poorly tuned hardware or something. 
You can\n> use 'dd' to throw some data around the file system and see if that is\n> showing slowness; compare those numbers to another machine that has\n> different hardware/OS.\n> \n> Also, though ext3 is slower, turning fsync off should make ext3 function\n> similar to ext2. That would be an interesting test if you suspect ext3.\n\nI'm curious as to why you recommended ext3 versus some other (JFS,\nXFS). Do you have tests which validate that recommendation or was it a\nsimple matter of getting the warm fuzzies from familiarity?\n\nGreg", "msg_date": "26 Sep 2002 12:44:22 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "If you are seeing very slow performance on a drive set, check dmesg to see \nif you're getting SCSI bus errors or something similar. If your drives \naren't properly terminated then the performance will suffer a great deal.\n\n", "msg_date": "Thu, 26 Sep 2002 12:41:55 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Greg Copeland wrote:\n> > The paper does recommend ext3, but the differences between file systems\n> > are very small. If you are seeing 'cp' as slow, I wonder if it may be\n> > something more general, like poorly tuned hardware or something. You can\n> > use 'dd' to throw some data around the file system and see if that is\n> > showing slowness; compare those numbers to another machine that has\n> > different hardware/OS.\n> > \n> > Also, though ext3 is slower, turning fsync off should make ext3 function\n> > similar to ext2. That would be an interesting test if you suspect ext3.\n> \n> I'm curious as to why you recommended ext3 versus some other (JFS,\n> XFS). 
Do you have tests which validate that recommendation or was it a\n> simple matter of getting the warm fuzzies from familiarity?\n\nI used the attached email as a reference. I just changed the wording to\nbe:\n\t\n\tFile system choice is particularly difficult on Linux because there are\n\tso many file system choices, and none of them are optimal: ext2 is not\n\tentirely crash-safe, ext3 and xfs are journal-based, and Reiser is\n\toptimized for small files. Fortunately, the journaling file systems\n\taren't significantly slower than ext2 so they are probably the best\n\tchoice.\n\nso I don't specifically recommend ext3 anymore. As I remember, ext3 is\ngood only in that it can read ext2 file systems. I think XFS may be the\nbest bet.\n\nCan anyone clarify if \"data=writeback\" is safe for PostgreSQL. \nSpecifically, are the data files recovered properly or is this option\nonly for a filesystem containing WAL?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073", "msg_date": "Thu, 26 Sep 2002 16:00:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The paper does recommend ext3, but the differences between file systems\n> are very small.\n\nWell, I only did a very rough benchmark (a few runs of pgbench), but\nthe results I found were drastically different: ext2 was significantly\nfaster (~50%) than ext3-writeback, which was in turn significantly\nfaster (~25%) than ext3-ordered.\n\n> Also, though ext3 is slower, turning fsync off should make ext3 function\n> similar to ext2.\n\nWhy would that be?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "26 Sep 2002 16:41:49 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Neil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The paper does recommend ext3, but the differences between file systems\n> > are very small.\n> \n> Well, I only did a very rough benchmark (a few runs of pgbench), but\n> the results I found were drastically different: ext2 was significantly\n> faster (~50%) than ext3-writeback, which was in turn significantly\n> faster (~25%) than ext3-ordered.\n\nWow. That leaves no good Linux file system alternatives. 
PostgreSQL\njust wants an ordinary file system that has reliable recovery from a\ncrash.\n\n> > Also, though ext3 is slower, turning fsync off should make ext3 function\n> > similar to ext2.\n> \n> Why would that be?\n\nI assumed it was the double fsync for the normal and journal that made\nthe journalling file systems slog.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 16:45:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "I have seen various benchmarks where XFS seems to perform best when it \ncomes to huge amounts of data and many files (due to balanced internal \nb+ trees).\nalso, XFS seems to be VERY mature and very stable.\next2/3 don't seem to be that fast in most of the benchmarks.\n\ni did some testing with reiser some time ago. the problem is that it \nseems to restore a very historic consistent snapshot of the data. 
XFS \nseems to be much better in this respect.\n\ni have not tested JFS yet (but on this damn AIX beside me)\nfrom my point of view i strongly recommend XFS (maybe somebody from \nRedHat should think about it).\n\n Hans\n\n\nNeil Conway wrote:\n\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n>\n>>The paper does recommend ext3, but the differences between file systems\n>>are very small.\n>> \n>>\n>\n>Well, I only did a very rough benchmark (a few runs of pgbench), but\n>the results I found were drastically different: ext2 was significantly\n>faster (~50%) than ext3-writeback, which was in turn significantly\n>faster (~25%) than ext3-ordered.\n>\n> \n>\n>>Also, though ext3 is slower, turning fsync off should make ext3 function\n>>similar to ext2.\n>> \n>>\n>\n>Why would that be?\n>\n>Cheers,\n>\n>Neil\n>\n> \n>\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Thu, 26 Sep 2002 22:55:30 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Neil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The paper does recommend ext3, but the differences between file systems\n> > are very small.\n> \n> Well, I only did a very rough benchmark (a few runs of pgbench), but\n> the results I found were drastically different: ext2 was significantly\n> faster (~50%) than ext3-writeback, which was in turn significantly\n> faster (~25%) than ext3-ordered.\n> \n> > Also, though ext3 is slower, turning fsync off should make ext3 function\n> > similar to ext2.\n> \n> Why would that be?\n\nOK, I changed the text 
to:\n\t\n\tFile system choice is particularly difficult on Linux because there are\n\tso many file system choices, and none of them are optimal: ext2 is not\n\tentirely crash-safe, ext3, xfs, and jfs are journal-based, and Reiser is\n\toptimized for small files and does journalling. The journalling file\n\tsystems can be significantly slower than ext2 but when crash recovery is\n\trequired, ext2 isn't an option.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 16:57:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Wow. That leaves no good Linux file system alternatives.\n> PostgreSQL just wants an ordinary file system that has reliable\n> recovery from a crash.\n\nI'm not really familiar with the reasoning behind ext2's reputation as\nrecovering poorly from crashes; if we fsync a WAL record to disk\nbefore we lose power, can't we recover reliably, even with ext2?\n\n> > > Also, though ext3 is slower, turning fsync off should make ext3\n> > > function similar to ext2.\n> > \n> > Why would that be?\n> \n> I assumed it was the double fsync for the normal and journal that\n> made the journalling file systems slog.\n\nWell, a journalling file system would need to write a journal entry\nand flush that to disk, even if fsync is disabled -- whereas without\nfsync enabled, ext2 doesn't have to flush anything to disk. 
ISTM that\nthe performance advantage of ext2 over ext3 should be even larger\nwhen fsync is not enabled.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "26 Sep 2002 17:03:26 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "I tend to agree with this though I have nothing to back it up with. My\nimpression is that XFS does very well for large files. Accepting that\nas fact, my impression is that XFS historically does well for\ndatabases. Again, I have nothing to back that up other than hearsay\nand conjecture.\n\nGreg\n\n\nOn Thu, 2002-09-26 at 15:55, Hans-Jürgen Schönig wrote:\n> I have seen various benchmarks where XFS seems to perform best when it \n> comes to huge amounts of data and many files (due to balanced internal \n> b+ trees).\n> also, XFS seems to be VERY mature and very stable.\n> ext2/3 don't seem to be that fast in most of the benchmarks.\n> \n> i did some testing with reiser some time ago. the problem is that it \n> seems to restore a very historic consistent snapshot of the data. 
XFS \n> seems to be much better in this respect.\n> \n> i have not tested JFS yet (but on this damn AIX beside me)\n> from my point of view i strongly recommend XFS (maybe somebody from \n> RedHat should think about it).\n> \n> Hans\n> \n> \n> Neil Conway wrote:\n> \n> >Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > \n> >\n> >>The paper does recommend ext3, but the differences between file systems\n> >>are very small.\n> >> \n> >>\n> >\n> >Well, I only did a very rough benchmark (a few runs of pgbench), but\n> >the results I found were drastically different: ext2 was significantly\n> >faster (~50%) than ext3-writeback, which was in turn significantly\n> >faster (~25%) than ext3-ordered.\n> >\n> > \n> >\n> >>Also, though ext3 is slower, turning fsync off should make ext3 function\n> >>similar to ext2.\n> >> \n> >>\n> >\n> >Why would that be?\n> >\n> >Cheers,\n> >\n> >Neil\n> >\n> > \n> >\n> \n> \n> -- \n> *Cybertec Geschwinde u Schoenig*\n> Ludo-Hartmannplatz 1/14, A-1160 Vienna, Austria\n> Tel: +43/1/913 68 09; +43/664/233 90 75\n> www.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n> <http://cluster.postgresql.at>, www.cybertec.at \n> <http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org", "msg_date": "26 Sep 2002 16:03:51 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Has there been any thought of providing RAW disk support to bypass the fs?\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\nSent: Thursday, September 26, 2002 3:57 PM\nTo: Neil Conway\nCc: shridhar_daithankar@persistent.co.in; 
pgsql-hackers@postgresql.org;\npgsql-general@postgresql.org\nSubject: Re: [HACKERS] [GENERAL] Performance while loading data and\nindexing\n\n\nNeil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The paper does recommend ext3, but the differences between file systems\n> > are very small.\n>\n> Well, I only did a very rough benchmark (a few runs of pgbench), but\n> the results I found were drastically different: ext2 was significantly\n> faster (~50%) than ext3-writeback, which was in turn significantly\n> faster (~25%) than ext3-ordered.\n>\n> > Also, though ext3 is slower, turning fsync off should make ext3 function\n> > similar to ext2.\n>\n> Why would that be?\n\nOK, I changed the text to:\n\n\tFile system choice is particularly difficult on Linux because there are\n\tso many file system choices, and none of them are optimal: ext2 is not\n\tentirely crash-safe, ext3, xfs, and jfs are journal-based, and Reiser is\n\toptimized for small files and does journalling. The journalling file\n\tsystems can be significantly slower than ext2 but when crash recovery is\n\trequired, ext2 isn't an option.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Thu, 26 Sep 2002 16:06:07 -0500", "msg_from": "\"James Maes\" <jmaes@materialogic.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Neil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Wow. 
That leaves no good Linux file system alternatives.\n> > PostgreSQL just wants an ordinary file system that has reliable\n> > recovery from a crash.\n> \n> I'm not really familiar with the reasoning behind ext2's reputation as\n> recovering poorly from crashes; if we fsync a WAL record to disk\n> before we lose power, can't we recover reliably, even with ext2?\n> \n> > > > Also, though ext3 is slower, turning fsync off should make ext3\n> > > > function similar to ext2.\n> > > \n> > > Why would that be?\n> > \n> > I assumed it was the double fsync for the normal and journal that\n> > made the journalling file systems slog.\n> \n> Well, a journalling file system would need to write a journal entry\n> and flush that to disk, even if fsync is disabled -- whereas without\n> fsync enabled, ext2 doesn't have to flush anything to disk. ISTM that\n> the performance advantage of ext2 over ext3 is should be even larger\n> when fsync is not enabled.\n\nYes, it is still double-writing. I just thought that if that wasn't\nhappening while the db was waiting for a commit that it wouldn't be too\nbad.\n\nIs it just me or do all the Linux file systems seem like they are\nlacking something when PostgreSQL is concerned? We just want a UFS-like\nfile system on Linux and no one has it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 17:07:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 16:03, Neil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Wow. 
That leaves no good Linux file system alternatives.\n> > PostgreSQL just wants an ordinary file system that has reliable\n> > recovery from a crash.\n> \n> I'm not really familiar with the reasoning behind ext2's reputation as\n> recovering poorly from crashes; if we fsync a WAL record to disk\n> before we lose power, can't we recover reliably, even with ext2?\n\nWell, I have experienced data loss from ext2 before. Also, recovery\nfrom crashes on large file systems take a very, very long time. I can't\nimagine anyone running a production database on an ext2 file system\nhaving 10's or even 100's of GB. Ouch. Recovery would take forever! \nEven recovery on small file systems (2-8G) can take extended periods of\ntime. Especially so on IDE systems. Even then manual intervention is\nnot uncommon.\n\nWhile I can't say that x, y or z is the best FS to use on Linux, I can\nsay that ext2 is probably an exceptionally poor choice from a\nreliability and/or uptime perspective.\n\nGreg", "msg_date": "26 Sep 2002 16:09:15 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> On Thu, 2002-09-26 at 16:03, Neil Conway wrote:\n> > I'm not really familiar with the reasoning behind ext2's\n> > reputation as recovering poorly from crashes; if we fsync a WAL\n> > record to disk before we lose power, can't we recover reliably,\n> > even with ext2?\n> \n> Well, I have experienced data loss from ext2 before. Also, recovery\n> from crashes on large file systems take a very, very long time.\n\nYes, but wouldn't you face exactly the same issues if you ran a\nUFS-like filesystem in asynchronous mode? 
Albeit it's not the default,\nbut performance in synchronous mode is usually pretty poor.\n\nThe fact that ext2 defaults to asynchronous mode and UFS (at least on\nthe BSDs) defaults to synchronous mode seems like a total non-issue to\nme. Is there any more to the alleged difference in reliability?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "26 Sep 2002 17:17:30 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Can anyone clarify if \"data=writeback\" is safe for PostgreSQL. \n> Specifically, are the data files recovered properly or is this option\n> only for a filesystem containing WAL?\n\n\"data=writeback\" means that no data is journaled, just metadata (which\nis like XFS or Reiser). An fsync() call should still do what it\nnormally does, commit the writes to disk before returning.\n\n\"data=journal\" journals all data and is the slowest and safest.\n\"data=ordered\" writes out data blocks before committing a journal\ntransaction, which is faster than full data journaling (since data\ndoesn't get written twice) and almost as safe. 
\"data=writeback\" is\nnoted to keep obsolete data in the case of some crashes (since the\ndata may not have been written yet) but a completed fsync() should\nensure that the data is valid.\n\nSo I guess I'd probably use data=ordered for an all-on-one-fs\ninstallation, and data=writeback for a WAL-only drive.\n\nHope this helps...\n\n-Doug\n", "msg_date": "26 Sep 2002 17:31:55 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> I'm not really familiar with the reasoning behind ext2's reputation as\n> recovering poorly from crashes; if we fsync a WAL record to disk\n> before we lose power, can't we recover reliably, even with ext2?\n\nUp to a point. We do assume that the filesystem won't lose checkpointed\n(sync'd) writes to data files. To the extent that the filesystem is\nvulnerable to corruption of its own metadata for a file (indirect blocks\nor whatever ext2 uses), that's not a completely safe assumption.\n\nWe'd be happiest with a filesystem that journals its own metadata and\nnot the user data in the file(s). I dunno if there are any.\n\nHmm, maybe this is why Oracle likes doing their own filesystem on a raw\ndevice...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 17:32:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> We'd be happiest with a filesystem that journals its own metadata and\n> not the user data in the file(s). I dunno if there are any.\n\next3 with data=writeback? 
(See my previous message to Bruce).\n\n-Doug\n", "msg_date": "26 Sep 2002 17:37:10 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Neil Conway wrote:\n> Greg Copeland <greg@CopelandConsulting.Net> writes:\n> > On Thu, 2002-09-26 at 16:03, Neil Conway wrote:\n> > > I'm not really familiar with the reasoning behind ext2's\n> > > reputation as recovering poorly from crashes; if we fsync a WAL\n> > > record to disk before we lose power, can't we recover reliably,\n> > > even with ext2?\n> > \n> > Well, I have experienced data loss from ext2 before. Also, recovery\n> > from crashes on large file systems take a very, very long time.\n> \n> Yes, but wouldn't you face exactly the same issues if you ran a\n> UFS-like filesystem in asynchronous mode? Albeit it's not the default,\n> but performance in synchronous mode is usually pretty poor.\n\nYes, before UFS had soft updates, the synchronous nature of UFS made it\nslower than ext2, but now with soft updates, that performance difference\nis gone so you have two file systems, ext2 and ufs, similar performance,\nbut one is crash-safe and the other is not.\n\nAnd, when comparing the journalling file systems, you have UFS vs.\nXFS/ext3/JFS/Reiser, and UFS is faster. The only thing the journalling\nfile systems give you is more rapid reboot, but frankly, if your OS goes\ndown often enough so that is an issue, you have bigger problems than\nfsync time.\n\nThe big problem is that Linux went from non-crash safe right to\ncrash-safe and reboot quick. We need a middle ground, which is where\nUFS/soft updates is.\n\n> The fact that ext2 defaults to asynchronous mode and UFS (at least on\n> the BSDs) defaults to synchronous mode seems like a total non-issue to\n> me. Is there any more to the alleged difference in reliability?\n\nThe reliability problem isn't alleged. 
ext2 developers admit ext2\nisn't 100% crash-safe. They will say it is usually crash-safe, but that\nisn't good enough for PostgreSQL.\n\nI wish I was wrong.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 17:39:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Doug McNaught wrote:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \n> > We'd be happiest with a filesystem that journals its own metadata and\n> > not the user data in the file(s). I dunno if there are any.\n> \n> ext3 with data=writeback? (See my previous message to Bruce).\n\nOK, so that makes ext3 crash safe without lots of overhead?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 17:41:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 17:39, Bruce Momjian wrote:\n> Neil Conway wrote:\n> > Greg Copeland <greg@CopelandConsulting.Net> writes:\n> > > On Thu, 2002-09-26 at 16:03, Neil Conway wrote:\n> > > > I'm not really familiar with the reasoning behind ext2's\n> > > > reputation as recovering poorly from crashes; if we fsync a WAL\n> > > > record to disk before we lose power, can't we recover reliably,\n> > > > even with ext2?\n> > > \n> > > Well, I have experienced data loss from ext2 before. 
Also, recovery\n> > > from crashes on large file systems take a very, very long time.\n> > \n> > Yes, but wouldn't you face exactly the same issues if you ran a\n> > UFS-like filesystem in asynchronous mode? Albeit it's not the default,\n> > but performance in synchronous mode is usually pretty poor.\n> \n> Yes, before UFS had soft updates, the synchronous nature of UFS made it\n> slower than ext2, but now with soft updates, that performance difference\n> is gone so you have two file systems, ext2 and ufs, similar performance,\n> but one is crash-safe and the other is not.\n\nNot entirely true. ufs is both crash-safe and quick-rebootable. You\ndo need to fsck at some point, but not prior to mounting it. Any\ncorrupt blocks are empty, and are easy to avoid.\n\nSomeone just needs to implement a background fsck that will run on a\nmounted filesystem.\n\n-- \n Rod Taylor\n\n", "msg_date": "26 Sep 2002 17:45:23 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Rod Taylor wrote:\n> > Yes, before UFS had soft updates, the synchronous nature of UFS made it\n> > slower than ext2, but now with soft updates, that performance difference\n> > is gone so you have two file systems, ext2 and ufs, similar performance,\n> > but one is crash-safe and the other is not.\n> \n> Not entirely true. ufs is both crash-safe and quick-rebootable. You\n> do need to fsck at some point, but not prior to mounting it. Any\n> corrupt blocks are empty, and are easy to avoid.\n\nI am assuming you need to mount the drive as part of the reboot. Of\ncourse you can boot fast with any file system if you don't have to mount\nit. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 17:47:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, 2002-09-26 at 17:47, Bruce Momjian wrote:\n> Rod Taylor wrote:\n> > > Yes, before UFS had soft updates, the synchronous nature of UFS made it\n> > > slower than ext2, but now with soft updates, that performance difference\n> > > is gone so you have two files systems, ext2 and ufs, similar peformance,\n> > > but one is crash-safe and the other is not.\n> > \n> > Note entirely true. ufs is both crash-safe and quick-rebootable. You\n> > do need to fsck at some point, but not prior to mounting it. Any\n> > corrupt blocks are empty, and are easy to avoid.\n> \n> I am assuming you need to mount the drive as part of the reboot. Of\n> course you can boot fast with any file system if you don't have to mount\n> it. :-)\n\nSorry, poor explanation.\n\nBackground fsck (when implemented) would operate on a currently mounted\n(and active) file system. The only reason fsck is required prior to\nreboot now is because no-one had done the work.\n\nhttp://www.freebsd.org/cgi/man.cgi?query=fsck&sektion=8&manpath=FreeBSD+5.0-current\n\nSee the first paragraph of the above.\n-- \n Rod Taylor\n\n", "msg_date": "26 Sep 2002 18:03:36 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Rod Taylor wrote:\n> On Thu, 2002-09-26 at 17:47, Bruce Momjian wrote:\n> > Rod Taylor wrote:\n> > > > Yes, before UFS had soft updates, the synchronous nature of UFS made it\n> > > > slower than ext2, but now with soft updates, that performance difference\n> > > > is gone so you have two files systems, ext2 and ufs, similar peformance,\n> > > > but one is crash-safe and the other is not.\n> > > \n> > > Note entirely true. 
ufs is both crash-safe and quick-rebootable. You\n> > > do need to fsck at some point, but not prior to mounting it. Any\n> > > corrupt blocks are empty, and are easy to avoid.\n> > \n> > I am assuming you need to mount the drive as part of the reboot. Of\n> > course you can boot fast with any file system if you don't have to mount\n> > it. :-)\n> \n> Sorry, poor explanation.\n> \n> Background fsck (when implemented) would operate on a currently mounted\n> (and active) file system. The only reason fsck is required prior to\n> reboot now is because no-one had done the work.\n> \n> http://www.freebsd.org/cgi/man.cgi?query=fsck&sektion=8&manpath=FreeBSD+5.0-current\n> \n> See the first paragraph of the above.\n\nOh, yes, I have heard of that missing feature.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 18:04:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Doug McNaught wrote:\n> > Tom Lane <tgl@sss.pgh.pa.us> writes:\n> > \n> > > We'd be happiest with a filesystem that journals its own metadata and\n> > > not the user data in the file(s). I dunno if there are any.\n> > \n> > ext3 with data=writeback? (See my previous message to Bruce).\n> \n> OK, so that makes ext3 crash safe without lots of overhead?\n\nMetadata is journaled so you shouldn't lose data blocks or directory\nentries. Some data blocks (that haven't been fsync()'ed) may have old\nor wrong data in them, but I think that's the same as ufs, right? And\nWAL replay should take care of that.\n\nIt'd be very interesting to do some tests of the various journaling\nmodes. 
I have an old K6 that I might be able to turn into a\nhit-the-reset-switch-at-ramdom-times machine. What kind of tests\nshould be run?\n\n-Doug\n", "msg_date": "26 Sep 2002 19:26:03 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n> \"data=writeback\" means that no data is journaled, just metadata (which\n> is like XFS or Reiser). An fsync() call should still do what it\n> normally does, commit the writes to disk before returning.\n> \"data=journal\" journals all data and is the slowest and safest.\n> \"data=ordered\" writes out data blocks before committing a journal\n> transaction, which is faster than full data journaling (since data\n> doesn't get written twice) and almost as safe. \"data=writeback\" is\n> noted to keep obsolete data in the case of some crashes (since the\n> data may not have been written yet) but a completed fsync() should\n> ensure that the data is valid.\n\nThanks for the explanation.\n\n> So I guess I'd probably use data=ordered for an all-on-one-fs\n> installation, and data=writeback for a WAL-only drive.\n\nActually I think the ideal thing for Postgres would be data=writeback\nfor both data and WAL drives. 
We can handle loss of un-fsync'd data\nfor ourselves in both cases.\n\nOf course, if you keep anything besides Postgres data files on a\npartition, you'd possibly want the more secure settings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 23:07:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing " }, { "msg_contents": "Hello!\n\nOn Thu, 26 Sep 2002, Bruce Momjian wrote:\n\n> > I'm not really familiar with the reasoning behind ext2's reputation as\n> > recovering poorly from crashes; if we fsync a WAL record to disk\n\nOn relatively big volumes ext2 recovery can end up in formatting the fs \nunder certain cirrumstances.;-)\n\n> > > I assumed it was the double fsync for the normal and journal that\n> > > made the journalling file systems slog.\n> > \n> > Well, a journalling file system would need to write a journal entry\n> > and flush that to disk, even if fsync is disabled -- whereas without\n> > fsync enabled, ext2 doesn't have to flush anything to disk. ISTM that\n> > the performance advantage of ext2 over ext3 is should be even larger\n> > when fsync is not enabled.\n> \n> Yes, it is still double-writing. I just thought that if that wasn't\n> happening while the db was waiting for a commit that it wouldn't be too\n> bad.\n> \n> Is it just me or do all the Linux file systems seem like they are\n> lacking something when PostgreSQL is concerned? 
We just want a UFS-like\n> file system on Linux and no one has it.\n\nmount -o sync an ext2 volume on Linux - and you can get a \"UFS-like\" fs.:)\nmount -o async an FFS volume on FreeBSD - and you can get boost in fs \nperformance.\nPersonally me always mount ext2 fs where Pg is living with sync option.\nFsync in pg is off (since 6.3), this way successfully pass thru a few \nserious crashes on various systems (mostly on power problems).\nIf fsync is on in Pg, performance gets so-oh-oh-oh-oh slowly!=)\nI just have done upgrade from 2.2 kernel on ext2 to ext3 capable 2.4 one\nso I'm planning to do some benchmarking. Roughly saying w/o benchmarks, \nthe performance have been degraded in 2/3 proportion.\n\"But better safe then sorry\".\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n", "msg_date": "Fri, 27 Sep 2002 12:14:40 +0700 (NOVST)", "msg_from": "Yury Bokhoncovich <byg@center-f1.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "neilc@samurai.com (Neil Conway) writes:\n\n[snip]\n> > Well, I have experienced data loss from ext2 before. Also, recovery\n> > from crashes on large file systems take a very, very long time.\n> \n> Yes, but wouldn't you face exactly the same issues if you ran a\n> UFS-like filesystem in asynchronous mode? Albeit it's not the default,\n> but performance in synchronous mode is usually pretty poor.\n> \n> The fact that ext2 defaults to asynchronous mode and UFS (at least on\n> the BSDs) defaults to synchronous mode seems like a total non-issue to\n> me. 
Is there any more to the alleged difference in reliability?\n\nUFS on most unix systems (BSD, solaris etc) defaults to sync\nmetadata, async data which is a mode that is completely missing\nfrom ext2 as far as I know.\n\nThis is why UFS is considered safer than ext2. (Running with\n'sync' is too slow to be a usable alternative in most cases.)\n\n _\nMats Lofkvist\nmal@algonet.se\n\n\nPS The BSD soft updates yields the safety of the default sync\n metadata / async data mode while being at least as fast as\n running fully async.\n", "msg_date": "27 Sep 2002 12:40:13 +0200", "msg_from": "Mats Lofkvist <mal@algonet.se>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "shridhar_daithankar@persistent.co.in (\"Shridhar Daithankar\") writes:\n\n[snip]\n> \n> Couple MB of data per sec. to disk is just not saturating it. It's a RAID 5 \n> setup..\n> \n\nRAID5 is not the best for performance, especially write performance.\nIf it is software RAID it is even worse :-).\n\n(Note also that you need to check that you are not saturating the\nnumber of seeks the disks can handle, not just the bandwith.)\n\nStriping should be better (combined with mirroring if you need the\nsafety, but with both striping and mirroring you may need multiple\nSCSI channels).\n\n _\nMats Lofkvist\nmal@algonet.se\n", "msg_date": "27 Sep 2002 12:49:17 +0200", "msg_from": "Mats Lofkvist <mal@algonet.se>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 27 Sep 2002, Mats Lofkvist wrote:\n\n> shridhar_daithankar@persistent.co.in (\"Shridhar Daithankar\") writes:\n> \n> [snip]\n> > \n> > Couple MB of data per sec. to disk is just not saturating it. It's a RAID 5 \n> > setup..\n> > \n> \n> RAID5 is not the best for performance, especially write performance.\n> If it is software RAID it is even worse :-).\n\nI take exception to this. 
RAID5 is a great choice for most folks.\n\n1: RAID5 only writes out the parity stripe and data stripe, not all \nstripes when writing. So, in an 8 disk RAID5 array, writing to a single \n64 k stripe involves one 64k read (parity stripe) and two 64k writes.\n\nOn a mirror set, writing to one 64k stripe involves two 64k writes. The \ndifference isn't that great, and in my testing, a large enough RAID5 \nprovides so much faster read speads by spreading the reads across so many \nheads as to more than make up for the slightly slower writes. My testing \nhas shown that a 4 disk RAID5 can generally run about 85% or more the \nspeed of a mirror set.\n\n2: Why does EVERYONE have to jump on the bandwagon that software RAID 5 \nis bad. My workstation running RH 7.2 uses about 1% of the CPU during \nvery heavy parallel access (i.e. 50 simo pgbenchs) at most. I've seen \nmany hardware RAID cards that are noticeable slower than my workstation \nrunning software RAID. You do know that hardware RAID is just software \nRAID where the processing is done on a seperate CPU on a card, but it's \nstill software doing the work.\n\n3: We just had a hardware RAID card mark both drives in a mirror set bad. \nIt wouldn't accept them back, and all the data was gone. poof. That \nwould never happen in Linux's kernel software RAID, I can always make \nLinux take back a \"bad\" drive.\n\n\nThe only difference between RAID5 with n+1 disks and RAID0 with n disks is \nthat we have to write a parity stripe in RAID5. It's ability to handle \nhigh parallel load is much better than a RAID1 set, and on average, you \nactually write about the same amount with either RAID1 or RAID5.\n\nDon't dog software RAID5, it works and it works well in Linux. Windows, \nhowever, is another issue. 
There, the software RAID5 is pretty pitiful, \nboth in terms of performance and maintenance.\n\n", "msg_date": "Fri, 27 Sep 2002 09:16:03 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> We'd be happiest with a filesystem that journals its own metadata and\n> not the user data in the file(s). I dunno if there are any.\n\nMost journalling file systems work this way. Data journalling is not\nvery widespread, AFAIK.\n\n-- \nFlorian Weimer \t Weimer@CERT.Uni-Stuttgart.DE\nUniversity of Stuttgart http://CERT.Uni-Stuttgart.DE/people/fw/\nRUS-CERT fax +49-711-685-5898\n", "msg_date": "Fri, 27 Sep 2002 21:01:38 +0200", "msg_from": "Florian Weimer <Weimer@CERT.Uni-Stuttgart.DE>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "scott.marlowe wrote:\n\n>(snippage)\n>I take exception to this. RAID5 is a great choice for most folks.\n>\n>\nI agree - certainly RAID5 *used* to be rather sad, but modern cards have \nimproved this no end on the hardware side - e.g.\n\nI recently benchmarked a 3Ware 8x card on a system with 4 x 15000 rpm \nMaxtor 70Gb drives and achieved 120 Mb/s for (8K) reads and 60 Mb/s for \n(8K) writes using RAID5. I used Redhat 7.3 + ext2. The benchmarking \nprogram was Bonnie.\n\nGiven that the performance of a single disk was ~30 Mb/s for reads and \nwrites, I felt this was quite a good result ! ( Other cards I had tried \npreviously struggled to maintain 1/2 the write rate of a single disk in \nsuch a configuration).\n\nAs for software RAID5, I have not tried it out.\n\nOf course I could not get 60Mb/s while COPYing data into Postgres... \ntypically cpu seemed to be the bottleneck in this case (what was the \nactual write rate? I hear you asking..err.. cant recall I'm afraid.. 
\nmust try it out again )\n\ncheers\n\nMark\n\n", "msg_date": "Sat, 28 Sep 2002 13:38:52 +1200", "msg_from": "Mark Kirkwood <markir@paradise.net.nz>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "Some of you may be interested in this seemingly exhaustive benchmark\nbetween ext2/3, ReiserFS, JFS, and XFS.\n\nhttp://www.osdl.org/presentations/lwe-jgfs.pdf\n\n\n", "msg_date": "03 Oct 2002 16:09:56 -0700", "msg_from": "Mike Benoit <mikeb@netnation.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Hey, excellent. Thanks!\n\nBased on that, it appears that XFS is a pretty good FS to use. For me,\nthe real surprise was how well reiserfs performed.\n\nGreg\n\nOn Thu, 2002-10-03 at 18:09, Mike Benoit wrote:\n> Some of you may be interested in this seemingly exhaustive benchmark\n> between ext2/3, ReiserFS, JFS, and XFS.\n> \n> http://www.osdl.org/presentations/lwe-jgfs.pdf\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org", "msg_date": "03 Oct 2002 18:35:34 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "Greg Copeland wrote:\n-- Start of PGP signed section.\n> Hey, excellent. Thanks!\n> \n> Based on that, it appears that XFS is a pretty good FS to use. 
For me,\n> the real surprise was how well reiserfs performed.\n> \n\nOK, hardware performance paper updated:\n\n---------------------------------------------------------------------------\n\nFile system choice is particularly difficult on Linux because there are\nso many file system choices, and none of them are optimal: ext2 is not\nentirely crash-safe, ext3, xfs, and jfs are journal-based, and Reiser is\noptimized for small files and does journalling. The journalling file\nsystems can be significantly slower than ext2 but when crash recovery is\nrequired, ext2 isn't an option. If ext2 must be used, mount it with sync\nenabled. Some people recommend xfs or an ext3 filesystem mounted with\ndata=writeback.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 19:59:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002, Neil Conway wrote:\n\n> The fact that ext2 defaults to asynchronous mode and UFS (at least on\n> the BSDs) defaults to synchronous mode seems like a total non-issue to\n> me. Is there any more to the alleged difference in reliability?\n\nIt was sort of pointed out here, but perhaps not made completely\nclear, that Berkley FFS defaults to synchronous meta-data updates,\nbut asynchronous data updates. You can also specify entirely\nsynchronous or entirely asynchronous updates. Linux ext2fs supports\nonly these last two modes, which is the problem.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. 
--XTC\n\n", "msg_date": "Mon, 7 Oct 2002 00:52:24 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" } ]
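The mount options discussed in this thread can be collected into a minimal /etc/fstab sketch. The device names and mount points below are hypothetical alternatives for a dedicated PostgreSQL data partition, not taken from any poster's system; data=writeback is only reasonable here because, as noted above, PostgreSQL's own WAL replay handles un-fsynced data blocks.

```
# ext2 mounted synchronously: no journal, so trade write speed
# for metadata safety (the setup Yury describes)
/dev/sda1  /var/lib/pgsql  ext2  sync            0  2

# ext3 with metadata-only journal: un-fsynced data blocks may be
# stale after a crash, but WAL replay tolerates that
/dev/sdb1  /var/lib/pgsql  ext3  data=writeback  0  2

# ext3 default mode: data blocks are written out before the
# corresponding journal transaction commits
/dev/sdc1  /var/lib/pgsql  ext3  data=ordered    0  2
```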
[ { "msg_contents": "On 26 Sep 2002 at 14:05, Shridhar Daithankar wrote:\n> Some time back I posted a query to build a site with 150GB of database. In\nlast \n> couple of weeks, lots of things were tested at my place and there are some \n> results and again some concerns. \n\n> 2) Creating index takes huge amount of time.\n> Load time: 14581 sec/~8600 rows persec/~ an MB of data per sec.\n> Create unique composite index on 2 char and a timestamp field: 25226 sec.\n> Database size on disk: 26GB\n> Select query: 1.5 sec. for approx. 150 rows.\n\nI never tried 150GB of data, but 10GB of data, and this worked fine for me. \nMaybe it will help if you post your table schema, including which indexes you\nuse, and the average size of one tuple.\n\n", "msg_date": "Thu, 26 Sep 2002 11:17:50 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002 at 11:17, Mario Weilguni wrote:\n\n> On 26 Sep 2002 at 14:05, Shridhar Daithankar wrote:\n> > Some time back I posted a query to build a site with 150GB of database. In\n> last \n> > couple of weeks, lots of things were tested at my place and there are some \n> > results and again some concerns. \n> \n> > 2) Creating index takes huge amount of time.\n> > Load time: 14581 sec/~8600 rows persec/~ an MB of data per sec.\n> > Create unique composite index on 2 char and a timestamp field: 25226 sec.\n> > Database size on disk: 26GB\n> > Select query: 1.5 sec. for approx. 150 rows.\n> \n> I never tried 150GB of data, but 10GB of data, and this worked fine for me. \n> Maybe it will help if you post your table schema, including which indexes you\n> use, and the average size of one tuple.\n\nWell the test runs were for 10GB of data. Schema is attached. Read in fixed \nfonts..Last nullable fields are dummies but may be used in fututre and varchars \nare not acceptable(Not my requirement). 
Tuple size is around 100 bytes..\n\nThe index creation query was\n\nCREATE INDEX index1 ON tablename (esn,min,datetime);\n\nWhat if I put datetime ahead? It's likely the the datetime field will have high \ndegree of locality being log data..\n\nBye\n Shridhar\n\n--\nbrain, v: [as in \"to brain\"]\tTo rebuke bluntly, but not pointedly; to dispel a \nsource\tof error in an opponent.\t\t-- Ambrose Bierce, \"The Devil's Dictionary\"\n\n\n\nField Name\tField Type\tNullable\tIndexed\ntype\t\tint\t\tno\t\tno\nesn\t\tchar (10)\tno\t\tyes\nmin\t\tchar (10)\tno\t\tyes\ndatetime\ttimestamp\tno\t\tyes\nopc0\t\tchar (3)\tno\t\tno\nopc1\t\tchar (3)\tno\t\tno\nopc2\t\tchar (3)\tno\t\tno\ndpc0\t\tchar (3)\tno\t\tno\ndpc1\t\tchar (3)\tno\t\tno\ndpc2\t\tchar (3)\tno\t\tno\nnpa\t\tchar (3)\tno\t\tno\nnxx\t\tchar (3)\tno\t\tno\nrest\t\tchar (4)\tno\t\tno\nfield0\t\tint\t\tyes\t\tno\nfield1\t\tchar (4)\tyes\t\tno\nfield2\t\tint\t\tyes\t\tno\nfield3\t\tchar (4)\tyes\t\tno\nfield4\t\tint\t\tyes\t\tno\nfield5\t\tchar (4)\tyes\t\tno\nfield6\t\tint\t\tyes\t\tno\nfield7\t\tchar (4)\tyes\t\tno\nfield8\t\tint\t\tyes\t\tno\nfield9\t\tchar (4)\tyes\t\tno", "msg_date": "Thu, 26 Sep 2002 15:01:35 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" }, { "msg_contents": "On Thu, Sep 26, 2002 at 03:01:35PM +0530, Shridhar Daithankar wrote:\nContent-Description: Mail message body\n> The index creation query was\n> \n> CREATE INDEX index1 ON tablename (esn,min,datetime);\n> \n> What if I put datetime ahead? It's likely the the datetime field will have high \n> degree of locality being log data..\n\nThe order of fields depends on what you're using it for. 
For example, you\ncan use the above index for a query using the conditions:\n\nesn = 'aaa' \nesn = 'bbb' and min = 'xxx'\n\nbut not for queries with only\n\ndatetime = '2002-09-26'\nmin = 'ddd' and datetime = '2002-10-02'\n\nThe fields can only be used left to right. This is where a single\nmulticolumn index differs from multiple indexes of different columns.\n\nHave you used EXPLAIN ANALYSE to determine whether your indexes are being\nused optimally?\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Thu, 26 Sep 2002 19:54:48 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Performance while loading data and indexing" } ]
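The left-to-right rule Martijn describes can be restated as a small SQL sketch. It uses the table and index names posted earlier in the thread; whether the planner actually picks the index still depends on statistics, so the comments only restate the prefix rule, not guaranteed plans.

```sql
CREATE INDEX index1 ON tablename (esn, min, datetime);

-- These predicates form a leading prefix of (esn, min, datetime),
-- so index1 can be used:
EXPLAIN SELECT * FROM tablename WHERE esn = 'aaa';
EXPLAIN SELECT * FROM tablename WHERE esn = 'bbb' AND min = 'xxx';

-- These skip the leading column esn, so index1 cannot be used:
EXPLAIN SELECT * FROM tablename WHERE datetime = '2002-09-26';
EXPLAIN SELECT * FROM tablename WHERE min = 'ddd'
                                   AND datetime = '2002-10-02';
```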
[ { "msg_contents": ">Well the test runs were for 10GB of data. Schema is attached. Read in fixed \n>fonts..Last nullable fields are dummies but may be used in fututre and\nvarchars \n>are not acceptable(Not my requirement). Tuple size is around 100 bytes..\n>The index creation query was\n>\n>CREATE INDEX index1 ON tablename (esn,min,datetime);\n>\n>What if I put datetime ahead? It's likely the the datetime field will have\nhigh \n>degree of locality being log data..\n\nJust an idea, I noticed you use char(10) for esn and min, and use this as\nindex. Are these really fixed len fields all having 10 bytes? Otherwise\nvarchar(10) would be better, because your tables, and especially the indices\nwill be probably much smaller.\n\nwhat average length do you have for min and esn?\n", "msg_date": "Thu, 26 Sep 2002 11:50:08 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "On 26 Sep 2002 at 11:50, Mario Weilguni wrote:\n\n> >Well the test runs were for 10GB of data. Schema is attached. Read in fixed \n> >fonts..Last nullable fields are dummies but may be used in fututre and\n> varchars \n> >are not acceptable(Not my requirement). Tuple size is around 100 bytes..\n> >The index creation query was\n> >\n> >CREATE INDEX index1 ON tablename (esn,min,datetime);\n> >\n> >What if I put datetime ahead? It's likely the the datetime field will have\n> high \n> >degree of locality being log data..\n> \n> Just an idea, I noticed you use char(10) for esn and min, and use this as\n> index. Are these really fixed len fields all having 10 bytes? Otherwise\n> varchar(10) would be better, because your tables, and especially the indices\n> will be probably much smaller.\n> \n> what average length do you have for min and esn?\n\n10 bytes. Those are id numbers.. 
like phone numbers always have all the digits \nfilled in..\n\n\n\nBye\n Shridhar\n\n--\nBradley's Bromide:\tIf computers get too powerful, we can organize\tthem into a \ncommittee -- that will do them in.\n\n", "msg_date": "Thu, 26 Sep 2002 15:28:01 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing" }, { "msg_contents": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> On 26 Sep 2002 at 11:50, Mario Weilguni wrote:\n>> Just an idea, I noticed you use char(10) for esn and min, and use this as\n>> index. Are these really fixed len fields all having 10 bytes?\n\n> 10 bytes. Those are id numbers.. like phone numbers always have all the digits \n> filled in..\n\nIf they are numbers, can you store them as bigints instead of char(N)?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 10:37:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance while loading data and indexing " } ]
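Tom's bigint suggestion might look like the following sketch. The table and column layout is hypothetical (based on the schema posted earlier), and it assumes esn and min really are all-digit strings, since a bigint cannot preserve leading zeros or non-numeric characters.

```sql
-- char(10) keys cost 10 bytes plus overhead and compare byte by
-- byte; bigint keys are a fixed 8 bytes and compare as integers,
-- so the composite index is smaller and faster to build.
CREATE TABLE calldata (
    esn      bigint    NOT NULL,  -- was char(10)
    min      bigint    NOT NULL,  -- was char(10)
    datetime timestamp NOT NULL
);
CREATE INDEX calldata_idx ON calldata (esn, min, datetime);

-- Caveat: '0012345678' and '12345678' map to the same bigint, so
-- this only works if leading zeros never occur or do not matter.
```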
[ { "msg_contents": "Hello,\n\nI've mailed this to the bugs list but that seems to have stopped working on\nthe 24th so I thought I'd better email it through on here.\n\nI have found it is possible for a user with create table permission to crash\nthe 7.3b1 backend. The crash occurs because it is possible to have a table\nwith no columns after a DROP DOMAIN CASCADE. Create a table with one\ncolumn (with that columns type specified as a domain) then issue the command\nto DROP DOMAIN ... CASCADE. The column will be dropped from the table,\nleaving the table with no columns. It is then possible (not surprisngly) to\ncrash the backend by querying that table using a wildcard.\n\nRunning the SQL listed at the bottom twice will cause a crash with the\nfollowing log enteries:\n\nWARNING: ShmemInitStruct: ShmemIndex entry size is wrong\nFATAL: LockMethodTableInit: couldn't initialize LockTable\n\nUpon restarting the server the following message appears in the log, each\ntime with a different offset:\n\nLOG: ReadRecord: unexpected pageaddr 0/BA36A000 in log file 0, segment 191,\noffset 3579904\n\nI am assuming this is a consequence of the abnormal termination but I\nthought it worth mentioning\nfor completeness. It also only appears if the SQL below is wrapped up in a\ntransaction.\n\nTo recreate the problem enter the following SQL in psql:-\n\nBEGIN;\n\nCREATE DOMAIN d1 int;\n\nCREATE TABLE t1 (col_a d1);\n\n-- IF YOU DROP DOMAIN d1 CASCADE then col_a WILL BE DROPPED AND THE TABLE t1\nWILL HAVE NO COLUMNS\n\nDROP DOMAIN d1 CASCADE;\n\n-- TABLE t1 NOW HAS NO COLUMNS\n-- THIS PROBLEM CAN ALSO BE CREATED BY DROP SCHEMA .. 
CASCADE AS WELL (AS\nLONG AS THE TABLE IS NOT IN THE SCHEMA BEING DROPPED AND THEREFORE NOT\nDROPPED AS PART OF THE CASCADE).\n\n-- THE FOLLOWING SELECT WILL CRASH THE BACKEND\n\nSELECT t1.* FROM t1\n\n", "msg_date": "Thu, 26 Sep 2002 11:20:40 +0100", "msg_from": "\"Tim Knowles\" <tim@ametco.co.uk>", "msg_from_op": true, "msg_subject": "7.3b1 : DROP DOMAIN CASCADE CAN LEAVE A TABLE WITH NO COLUMNS" } ]
[ { "msg_contents": "\n> > configure somehow thinks it needs to #define _LARGE_FILES though, which\n> > then clashes with pg_config.h's _LARGE_FILES. I think the test needs to\n> > #include unistd.h .\n> \n> _LARGE_FILES is defined because it's necessary to make off_t 64 bits. If\n> you disagree, please post compiler output.\n\nAh, if we want off_t to be 64 bits, then we need _LARGE_FILES.\nThe problem is, that scan.c includes unistd.h before postgres.h\nand thus unistd.h defines _LARGE_FILE_API which is not allowed \ntogether with _LARGE_FILES. Do you know an answer ?\nOffhand I can only think of using -D_LARGE_FILES as a compiler flag :-(\n\nDo we really want a general 64 bit off_t or would it be sufficient in the\ntwo places that use fseeko ?\n\nAndreas\n", "msg_date": "Thu, 26 Sep 2002 12:53:15 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)" }, { "msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> The problem is, that scan.c includes unistd.h before postgres.h\n> and thus unistd.h defines _LARGE_FILE_API which is not allowed\n> together with _LARGE_FILES. Do you know an answer ?\n> Offhand I can only think of using -D_LARGE_FILES as a compiler flag :-(\n\nThat would be pretty tricky to arrange, since the macro that detects all\nthis is bundled with Autoconf. Also, it would only fix one particular\nmanifestation of the overall problem, namely that pg_config.h needs to\ncome first, period.\n\nI can see two ways to fix this properly:\n\n1. Change the flex call to something like\n\n(echo '#include \"postgres.h\"'; $(FLEX) -t -o$@) >$@\n\nThis would throw off all #line references by one.\n\n2. 
Direct the flex output to another file, say scan2.c, and create a new\nscan.c like this:\n\n#include \"postgres.h\"\n#include \"scan2.c\"\n\nand create the object file from that.\n\nWe have half a dozen flex calls in our tree, so either fix would propagate\na great deal of ugliness around.\n\n> Do we really want a general 64 bit off_t or would it be sufficient in the\n> two places that use fseeko ?\n\nIt affects all the I/O functions. If we want to access large files we\nneed to use the large-file capable function interface.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 27 Sep 2002 00:28:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)" }, { "msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> The problem is, that scan.c includes unistd.h before postgres.h\n> and thus unistd.h defines _LARGE_FILE_API which is not allowed\n> together with _LARGE_FILES. Do you know an answer ?\n\nActually, a better idea I just had is to include scan.c at the end of\ngram.c and compile them both into one object file.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 28 Sep 2002 13:10:53 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)" } ]
[ { "msg_contents": "Hi hackers.\n\nThere is ineresting behavior of some select query mentioned in $subj.\n\nIn working db this query takes:\nreal 3m10.219s\nuser 0m0.074s\nsys 0m0.074s\n\nit's interesting that vacuum or analyze or reinex not helpfull, BUT\n\nif dump this db and create it again (whith another name mabe, no matter,\njust for testing) and query the same query on this db, it takes:\nreal 0m6.225s\nuser 0m0.072s\nsys 0m0.074s\n(other databases continue running)\n\nThere is no end of this story!\nWith some time (couple of days for example) this the same query overloads\nmachine on this new test db also! No one working with this db during this\ntime. Works continued only with real working databases. Vacuuming was\nas usual (every 2 hours without -f and with it at night one time :) :\nas i said this behavior does not depend on any vacuuming.\n\nHave anyone any ideas about this?\n\ndb=# SELECT version();\n version\n-----------------------------------------------------------------------\n PostgreSQL 7.2.2 on i386-portbld-freebsd4.6.1, compiled by GCC 2.95.3\n\n\nThanks,\n Andriy.\n\n--\n Because strait is the gate, and narrow is the way, which leadeth unto\n <b>life</b>, and few there be that find it. (MAT 7:7)\n <b>Ask</b>, and it shall be given you; <b>seek</b>, and ye shall find;\n <b>knock</b>, and it shall be opened unto you... (MAT 7:14)\n\nANT17-RIPE\n\n", "msg_date": "Thu, 26 Sep 2002 15:11:55 +0300 (EEST)", "msg_from": "Andriy Tkachuk <ant@imt.com.ua>", "msg_from_op": true, "msg_subject": "query speed depends on lifetime of frozen db?" 
}, { "msg_contents": "Hi hackers.\n\nThere is ineresting behavior of some select query mentioned in $subj.\n\nIn working db this query takes:\nreal 3m10.219s\nuser 0m0.074s\nsys 0m0.074s\n\nit's interesting that vacuum or analyze or reinex not helpfull, BUT\n\nif dump this db and create it again (whith another name maybe, no matter,\njust for testing) and query the same query on this db, it takes:\nreal 0m6.225s\nuser 0m0.072s\nsys 0m0.074s\n(other databases continue running)\n\nThere is no end of this story!\nWith some time (couple of days for example) this the same query overloads\nmachine on this new test db also! No one working with this db during this\ntime. Works continued only with real working databases. Vacuuming was\nas usual (every 2 hours without -f and with it at night one time) :\nas i said this behavior does not depend on any vacuuming.\n\nHave anyone any ideas about this?\n\ndb=# SELECT version();\n version\n-----------------------------------------------------------------------\n PostgreSQL 7.2.2 on i386-portbld-freebsd4.6.1, compiled by GCC 2.95.3\n\n\nThanks,\n Andriy.\n\n--\n Because strait is the gate, and narrow is the way, which leadeth unto\n <b>life</b>, and few there be that find it. (MAT 7:14)\n <b>Ask</b>, and it shall be given you; <b>seek</b>, and ye shall find;\n <b>knock</b>, and it shall be opened unto you... (MAT 7:7)\n\nANT17-RIPE\n\n", "msg_date": "Fri, 27 Sep 2002 10:58:34 +0300 (EEST)", "msg_from": "Andriy Tkachuk <ant@imt.com.ua>", "msg_from_op": true, "msg_subject": "query speed depends on lifetime of frozen db?" 
}, { "msg_contents": "On 27 Sep 2002 at 10:58, Andriy Tkachuk wrote:\n\n> Hi hackers.\n> \n> There is ineresting behavior of some select query mentioned in $subj.\n> \n> In working db this query takes:\n> real 3m10.219s\n> user 0m0.074s\n> sys 0m0.074s\n> \n> it's interesting that vacuum or analyze or reinex not helpfull, BUT\n> \n> if dump this db and create it again (whith another name maybe, no matter,\n> just for testing) and query the same query on this db, it takes:\n> real 0m6.225s\n> user 0m0.072s\n> sys 0m0.074s\n> (other databases continue running)\n\nLooks like a database defrag to me...\n\n> There is no end of this story!\n> With some time (couple of days for example) this the same query overloads\n> machine on this new test db also! No one working with this db during this\n> time. Works continued only with real working databases. Vacuuming was\n> as usual (every 2 hours without -f and with it at night one time) :\n> as i said this behavior does not depend on any vacuuming.\n\nwas that vacuum full or vacuum analyze? Vacuum full should help in this case..\n\nIs it that some tables with few rows gets updated heavily causing lot of dead \ntuples? May be 2 hour is bit too long before vacuum should be called. Try \nrunning table speific vacuum more periodically..\n\nHTH...\n\n\nBye\n Shridhar\n\n--\nlawsuit, n.:\tA machine which you go into as a pig and come out as a sausage.\t\t--\n Ambrose Bierce\n\n", "msg_date": "Fri, 27 Sep 2002 13:36:26 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: query speed depends on lifetime of frozen db?" 
}, { "msg_contents": "On Fri, 27 Sep 2002, Shridhar Daithankar wrote:\n\n> On 27 Sep 2002 at 10:58, Andriy Tkachuk wrote:\n>\n> > Hi hackers.\n> >\n> > There is ineresting behavior of some select query mentioned in $subj.\n> >\n> > In working db this query takes:\n> > real 3m10.219s\n> > user 0m0.074s\n> > sys 0m0.074s\n> >\n> > it's interesting that vacuum or analyze or reinex not helpfull, BUT\n> >\n> > if dump this db and create it again (whith another name maybe, no matter,\n> > just for testing) and query the same query on this db, it takes:\n> > real 0m6.225s\n> > user 0m0.072s\n> > sys 0m0.074s\n> > (other databases continue running)\n>\n> Looks like a database defrag to me...\n>\n> > There is no end of this story!\n> > With some time (couple of days for example) this the same query overloads\n> > machine on this new test db also! No one working with this db during this\n> > time. Works continued only with real working databases. Vacuuming was\n> > as usual (every 2 hours without -f and with it at night one time) :\n> > as i said this behavior does not depend on any vacuuming.\n>\n> was that vacuum full or vacuum analyze? Vacuum full should help in this case..\n\nIt was full with analyze.\nThat's what I want to say: it is very strange to me that vacuum\nis not helpful in this situation!\n\n>\n> Is it that some tables with few rows gets updated heavily causing lot of dead\n> tuples? May be 2 hour is bit too long before vacuum should be called. Try\n> running table speific vacuum more periodically..\n\nAs I said, there was no work with this test database during the time after which\nthe query becomes overloading. There was just work with other databases.\n\nThanks,\n Andriy.\n\n", "msg_date": "Fri, 27 Sep 2002 11:49:08 +0300 (EEST)", "msg_from": "Andriy Tkachuk <ant@imt.com.ua>", "msg_from_op": true, "msg_subject": "Re: query speed depends on lifetime of frozen db?" 
}, { "msg_contents": "On Fri, Sep 27, 2002 at 11:49:08AM +0300, Andriy Tkachuk wrote:\n> On Fri, 27 Sep 2002, Shridhar Daithankar wrote:\n> > was that vacuum full or vacuum analyze? Vacuum full should help in this case..\n> \n> it was full with analize\n> That's what i want to say, that this is very strange for me that vacuum\n> not helpfull in this situation!\n\nOk, can you post the result of VACUUM FULL VERBOSE ANALYSE ?\n\n> >\n> > Is it that some tables with few rows gets updated heavily causing lot of dead\n> > tuples? May be 2 hour is bit too long before vacuum should be called. Try\n> > running table speific vacuum more periodically..\n> \n> As I said there was no work with this test database during the time after which\n> query becomes overloading. There was just work with another databases.\n\nWell, something is happening. The verbose in the above command should help.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Fri, 27 Sep 2002 19:20:07 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: query speed depends on lifetime of frozen db?" }, { "msg_contents": "On 27 Sep 2002 at 11:49, Andriy Tkachuk wrote:\n> As I said there was no work with this test database during the time after which\n> query becomes overloading. There was just work with another databases.\n\n From testing point of view, that is not much good. Try halting all work and \njust do this testing. 
One variable at a time..\n\nI understand it may be difficult if it's a production system but with other DBs \nworking on site, too many variables come into picture..\n\nBye\n Shridhar\n\n--\nBut Captain -- the engines can't take this much longer!\n\n", "msg_date": "Fri, 27 Sep 2002 15:11:10 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: query speed depends on lifetime of frozen db?" }, { "msg_contents": "On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n\n> On Fri, Sep 27, 2002 at 11:49:08AM +0300, Andriy Tkachuk wrote:\n> > On Fri, 27 Sep 2002, Shridhar Daithankar wrote:\n> > > was that vacuum full or vacuum analyze? Vacuum full should help in this case..\n> >\n> > it was full with analize\n> > That's what i want to say, that this is very strange for me that vacuum\n> > not helpfull in this situation!\n>\n> Ok, can you post the result of VACUUM FULL VERBOSE ANALYSE ?\n>\n> > >\n> > > Is it that some tables with few rows gets updated heavily causing lot of dead\n> > > tuples? May be 2 hour is bit too long before vacuum should be called. Try\n> > > running table speific vacuum more periodically..\n> >\n> > As I said there was no work with this test database during the time after which\n> > query becomes overloading. There was just work with another databases.\n>\n> Well, something is happening. The verbose in the above command should help.\n\nhere it is:\n\nNOTICE: --Relation pg_type--\nNOTICE: Pages 6: Changed 0, reaped 0, Empty 0, New 0; Tup 411: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 106, MaxLen 106; Re-using: Free/Avail. Space 3000/3000; EndEmpty/Avail. 
Pages 0/6.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_type_oid_index: Pages 2; Tuples 411.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_type_typname_index: Pages 6; Tuples 411.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_type: Pages: 6 --> 6; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_type\nNOTICE: --Relation pg_attribute--\nNOTICE: Pages 55: Changed 0, reaped 1, Empty 0, New 0; Tup 4226: Vac 0, Keep/VTL 0/0, UnUsed 27, MinLen 98, MaxLen 98; Re-using: Free/Avail. Space 9848/6608; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_attribute_relid_attnam_index: Pages 41; Tuples 4226: Deleted 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_attribute_relid_attnum_index: Pages 18; Tuples 4226: Deleted 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_attribute: Pages: 55 --> 55; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_attribute\nNOTICE: --Relation pg_class--\nNOTICE: Pages 8: Changed 0, reaped 5, Empty 0, New 0; Tup 504: Vac 0, Keep/VTL 0/0, UnUsed 9, MinLen 116, MaxLen 176; Re-using: Free/Avail. Space 828/496; EndEmpty/Avail. Pages 0/2.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_class_oid_index: Pages 6; Tuples 504: Deleted 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_class_relname_index: Pages 12; Tuples 504: Deleted 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_class: Pages: 8 --> 8; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_class\nNOTICE: --Relation pg_group--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_group_name_index: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_group_sysid_index: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_group\nNOTICE: --Relation pg_database--\nNOTICE: Pages 1: Changed 0, reaped 1, Empty 0, New 0; Tup 12: Vac 0, Keep/VTL 0/0, UnUsed 13, MinLen 92, MaxLen 92; Re-using: Free/Avail. Space 6968/6968; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_database_datname_index: Pages 2; Tuples 12: Deleted 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_database_oid_index: Pages 2; Tuples 12: Deleted 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_database: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_database\nNOTICE: --Relation pg_inherits--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_inherits_relid_seqno_index: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_inherits\nNOTICE: --Relation pg_index--\nNOTICE: Pages 4: Changed 0, reaped 0, Empty 0, New 0; Tup 182: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 160, MaxLen 160; Re-using: Free/Avail. Space 2840/2432; EndEmpty/Avail. 
Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_index_indrelid_index: Pages 2; Tuples 182.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_index_indexrelid_index: Pages 2; Tuples 182.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_index: Pages: 4 --> 4; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_index\nNOTICE: --Relation pg_operator--\nNOTICE: Pages 10: Changed 0, reaped 0, Empty 0, New 0; Tup 623: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 116, MaxLen 116; Re-using: Free/Avail. Space 6960/6852; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_operator_oid_index: Pages 4; Tuples 623.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_operator_oprname_l_r_k_index: Pages 8; Tuples 623.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_operator: Pages: 10 --> 10; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_operator\nNOTICE: --Relation pg_opclass--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 51: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 80, MaxLen 80; Re-using: Free/Avail. Space 3888/3888; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_opclass_am_name_index: Pages 2; Tuples 51.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_opclass_oid_index: Pages 2; Tuples 51.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_opclass: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_opclass\nNOTICE: --Relation pg_am--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 4: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 120, MaxLen 120; Re-using: Free/Avail. Space 7676/7676; EndEmpty/Avail. 
Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_am_name_index: Pages 2; Tuples 4.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_am_oid_index: Pages 2; Tuples 4.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_am: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_am\nNOTICE: --Relation pg_amop--\nNOTICE: Pages 2: Changed 0, reaped 0, Empty 0, New 0; Tup 180: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 44, MaxLen 44; Re-using: Free/Avail. Space 7704/7692; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_amop_opc_opr_index: Pages 2; Tuples 180.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_amop_opc_strategy_index: Pages 2; Tuples 180.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_amop: Pages: 2 --> 2; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_amop\nNOTICE: --Relation pg_amproc--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 57: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 44, MaxLen 44; Re-using: Free/Avail. Space 5436/5436; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_amproc_opc_procnum_index: Pages 2; Tuples 57.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_amproc: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_amproc\nNOTICE: --Relation pg_language--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 5: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 76, MaxLen 84; Re-using: Free/Avail. Space 7744/7744; EndEmpty/Avail. 
Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_language_name_index: Pages 2; Tuples 5.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_language_oid_index: Pages 2; Tuples 5.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_language: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_language\nNOTICE: --Relation pg_largeobject--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_largeobject_loid_pn_index: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_largeobject\nNOTICE: --Relation pg_aggregate--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 60: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 92, MaxLen 111; Re-using: Free/Avail. Space 2244/2244; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_aggregate_name_type_index: Pages 2; Tuples 60.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_aggregate_oid_index: Pages 2; Tuples 60.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_aggregate: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_aggregate\nNOTICE: --Relation pg_trigger--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 11: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 152, MaxLen 228; Re-using: Free/Avail. Space 5868/5868; EndEmpty/Avail. 
Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_trigger_tgconstrname_index: Pages 2; Tuples 11.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_trigger_tgconstrrelid_index: Pages 2; Tuples 11.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_trigger_tgrelid_index: Pages 2; Tuples 11.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_trigger_oid_index: Pages 2; Tuples 11.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_trigger: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_trigger\nNOTICE: --Relation pg_listener--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_listener\nNOTICE: --Relation pg_shadow--\nNOTICE: Pages 1: Changed 0, reaped 1, Empty 0, New 0; Tup 8: Vac 0, Keep/VTL 0/0, UnUsed 2, MinLen 76, MaxLen 76; Re-using: Free/Avail. Space 7524/7524; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_shadow_usename_index: Pages 2; Tuples 8: Deleted 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_shadow_usesysid_index: Pages 2; Tuples 8: Deleted 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_shadow: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_shadow\nNOTICE: --Relation auoto_reach--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index auoto_reach_id_key: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing auoto_reach\nNOTICE: --Relation pg_attrdef--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 27: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 140, MaxLen 370; Re-using: Free/Avail. Space 2820/2820; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_attrdef_adrelid_adnum_index: Pages 2; Tuples 27.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_attrdef: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_16384--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_16384_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_attrdef\nNOTICE: --Relation pg_description--\nNOTICE: Pages 12: Changed 0, reaped 1, Empty 0, New 0; Tup 1303: Vac 0, Keep/VTL 0/0, UnUsed 1, MinLen 50, MaxLen 118; Re-using: Free/Avail. Space 3712/3468; EndEmpty/Avail. Pages 0/4.\n\tCPU 0.01s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_description_o_c_o_index: Pages 7; Tuples 1303: Deleted 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_description: Pages: 12 --> 12; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_16416--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_16416_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_description\nNOTICE: --Relation areas--\nNOTICE: Pages 21: Changed 0, reaped 0, Empty 0, New 0; Tup 1199: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 53, MaxLen 184; Re-using: Free/Avail. Space 5688/5208; EndEmpty/Avail. Pages 0/8.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index areas_prefix: Pages 6; Tuples 1199.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel areas: Pages: 21 --> 21; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202896--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202896_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing areas\nNOTICE: --Relation pg_proc--\nNOTICE: Pages 33: Changed 0, reaped 0, Empty 0, New 0; Tup 1314: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 177, MaxLen 1845; Re-using: Free/Avail. Space 4944/2524; EndEmpty/Avail. Pages 0/3.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_proc_oid_index: Pages 6; Tuples 1314.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_proc_proname_narg_type_index: Pages 30; Tuples 1314.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_proc: Pages: 33 --> 33; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_1255--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_1255_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_proc\nNOTICE: --Relation pg_relcheck--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_relcheck_rcrelid_index: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_16386--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_16386_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_relcheck\nNOTICE: --Relation pg_rewrite--\nNOTICE: Pages 4: Changed 0, reaped 0, Empty 0, New 0; Tup 23: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 104, MaxLen 1456; Re-using: Free/Avail. Space 8496/8496; EndEmpty/Avail. Pages 0/4.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_rewrite_oid_index: Pages 2; Tuples 23.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_rewrite_rulename_index: Pages 2; Tuples 23.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_rewrite: Pages: 4 --> 4; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_16410--\nNOTICE: Pages 2: Changed 0, reaped 0, Empty 0, New 0; Tup 5: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 163, MaxLen 2034; Re-using: Free/Avail. Space 8088/8088; EndEmpty/Avail. 
Pages 0/2.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_16410_idx: Pages 2; Tuples 5.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pg_toast_16410: Pages: 2 --> 2; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pg_rewrite\nNOTICE: --Relation plans--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 22: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 60, MaxLen 100; Re-using: Free/Avail. Space 6200/6200; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pl_id: Pages 2; Tuples 22.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel plans: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202903--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202903_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing plans\nNOTICE: --Relation gateway_number--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 3: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 44, MaxLen 44; Re-using: Free/Avail. Space 8028/8028; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index gateway_number_id_key: Pages 2; Tuples 3.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel gateway_number: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing gateway_number\nNOTICE: --Relation sessions--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202305--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202305_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing sessions\nNOTICE: --Relation typ_bills--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 15: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 50, MaxLen 75; Re-using: Free/Avail. Space 7136/7136; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index typ_bills_id_key: Pages 2; Tuples 15.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel typ_bills: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing typ_bills\nNOTICE: --Relation pg_statistic--\nNOTICE: Pages 27: Changed 27, reaped 13, Empty 0, New 0; Tup 395: Vac 395, Keep/VTL 0/0, UnUsed 106, MinLen 80, MaxLen 1448; Re-using: Free/Avail. Space 113664/113584; EndEmpty/Avail. Pages 0/25.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_statistic_relid_att_index: Pages 5; Tuples 395: Deleted 395.\n\tCPU 0.00s/0.01u sec elapsed 0.00 sec.\nNOTICE: Rel pg_statistic: Pages: 27 --> 13; Tuple(s) moved: 393.\n\tCPU 0.00s/0.01u sec elapsed 0.02 sec.\nNOTICE: Index pg_statistic_relid_att_index: Pages 5; Tuples 395: Deleted 393.\n\tCPU 0.00s/0.01u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_16408--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_16408_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation users_update_col--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 12: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 60, MaxLen 159; Re-using: Free/Avail. Space 7200/7200; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index users_update_col_id_key: Pages 2; Tuples 12.\n\tCPU 0.01s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel users_update_col: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202951--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202951_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing users_update_col\nNOTICE: --Relation clients--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202310--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202310_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing clients\nNOTICE: --Relation specphones--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 4: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 47, MaxLen 47; Re-using: Free/Avail. Space 7964/7964; EndEmpty/Avail. 
Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index specphones_id_key: Pages 2; Tuples 4.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel specphones: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing specphones\nNOTICE: --Relation oplaty--\nNOTICE: Pages 9: Changed 0, reaped 0, Empty 0, New 0; Tup 592: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 82, MaxLen 198; Re-using: Free/Avail. Space 3528/3140; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel oplaty: Pages: 9 --> 9; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202544--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202544_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing oplaty\nNOTICE: --Relation lines_names--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing lines_names\nNOTICE: --Relation lines_release_causes--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index lines_names_id_key: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing lines_release_causes\nNOTICE: --Relation lines_users--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index lines_users_id_key: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing lines_users\nNOTICE: --Relation lines_log--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing lines_log\nNOTICE: --Relation lines--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202315--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202315_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing lines\nNOTICE: --Relation auth_user--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 15: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 82, MaxLen 161; Re-using: Free/Avail. Space 6520/6520; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index auth_user_pkey: Pages 2; Tuples 15.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel auth_user: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202335--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202335_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing auth_user\nNOTICE: --Relation bills--\nNOTICE: Pages 1431: Changed 0, reaped 0, Empty 0, New 0; Tup 103185: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 60, MaxLen 236; Re-using: Free/Avail. Space 52408/22520; EndEmpty/Avail. Pages 0/269.\n\tCPU 0.10s/0.01u sec elapsed 0.10 sec.\nNOTICE: Index bills_dat: Pages 228; Tuples 103185.\n\tCPU 0.02s/0.00u sec elapsed 0.02 sec.\nNOTICE: Index bill_uid: Pages 284; Tuples 103185.\n\tCPU 0.02s/0.01u sec elapsed 0.02 sec.\nNOTICE: Rel bills: Pages: 1431 --> 1431; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202340--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202340_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing bills\nNOTICE: --Relation line_types--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202320--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202320_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing line_types\nNOTICE: --Relation areas_old--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index areas_id: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202390--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202390_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing areas_old\nNOTICE: --Relation pltcl_modfuncs--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 23: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 96, MaxLen 96; Re-using: Free/Avail. Space 5872/5872; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pltcl_modfuncs_i: Pages 4; Tuples 23.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pltcl_modfuncs: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pltcl_modfuncs\nNOTICE: --Relation pltcl_modules--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 5: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 797, MaxLen 1854; Re-using: Free/Avail. Space 1588/1588; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pltcl_modules_i: Pages 2; Tuples 5.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel pltcl_modules: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202402--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202402_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing pltcl_modules\nNOTICE: --Relation callbacks--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 34: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 72, MaxLen 232; Re-using: Free/Avail. Space 1836/1836; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel callbacks: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202325--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202325_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing callbacks\nNOTICE: --Relation zone4area--\nNOTICE: Pages 8: Changed 0, reaped 0, Empty 0, New 0; Tup 1216: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 48, MaxLen 48; Re-using: Free/Avail. Space 2144/2088; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index z4a_area: Pages 5; Tuples 1216.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel zone4area: Pages: 8 --> 8; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing zone4area\nNOTICE: --Relation cards--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index cards_uid: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202479--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202479_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing cards\nNOTICE: --Relation new_countries--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index nc_zone: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202594--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202594_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing new_countries\nNOTICE: --Relation context--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index ctx_name: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index ctx_idi: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing context\nNOTICE: --Relation context_chunk--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index cx_chunk: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202792--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202792_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing context_chunk\nNOTICE: --Relation calls--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index calls_dat_index: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202820--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202820_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing calls\nNOTICE: --Relation sm_temp--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index sm_temp_id_key: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing sm_temp\nNOTICE: --Relation abon_bill_freq_types--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 2: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 45, MaxLen 47; Re-using: Free/Avail. Space 8068/8068; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel abon_bill_freq_types: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202912--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202912_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing abon_bill_freq_types\nNOTICE: --Relation plans_ch_journal--\nNOTICE: Pages 3: Changed 0, reaped 0, Empty 0, New 0; Tup 115: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 126, MaxLen 144; Re-using: Free/Avail. Space 8020/8020; EndEmpty/Avail. Pages 0/2.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel plans_ch_journal: Pages: 3 --> 3; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202921--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202921_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing plans_ch_journal\nNOTICE: --Relation sms_account--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 4: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 43, MaxLen 52; Re-using: Free/Avail. Space 7956/7956; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel sms_account: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202931--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202931_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing sms_account\nNOTICE: --Relation golden_stat--\nNOTICE: Pages 4: Changed 0, reaped 0, Empty 0, New 0; Tup 109: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 207, MaxLen 513; Re-using: Free/Avail. Space 7992/7516; EndEmpty/Avail. 
Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel golden_stat: Pages: 4 --> 4; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing golden_stat\nNOTICE: --Relation zal_types--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 2: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 51, MaxLen 58; Re-using: Free/Avail. Space 8052/8052; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel zal_types: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202942--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202942_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing zal_types\nNOTICE: --Relation users_update--\nNOTICE: Pages 5: Changed 0, reaped 0, Empty 0, New 0; Tup 515: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 56, MaxLen 76; Re-using: Free/Avail. Space 1496/1396; EndEmpty/Avail. Pages 0/2.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel users_update: Pages: 5 --> 5; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing users_update\nNOTICE: --Relation groups--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202345--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202345_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing groups\nNOTICE: --Relation managers--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202960--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202960_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing managers\nNOTICE: --Relation users--\nNOTICE: Pages 13: Changed 0, reaped 0, Empty 0, New 0; Tup 290: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 118, MaxLen 602; Re-using: Free/Avail. Space 5036/4520; EndEmpty/Avail. Pages 0/8.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel users: Pages: 13 --> 13; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202970--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202970_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing users\nNOTICE: --Relation user_states--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 3: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 52, MaxLen 56; Re-using: Free/Avail. Space 8000/8000; EndEmpty/Avail. 
Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel user_states: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202350--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202350_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing user_states\nNOTICE: --Relation voice_mailbox--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing voice_mailbox\nNOTICE: --Relation providers--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 3: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 42, MaxLen 43; Re-using: Free/Avail. Space 8028/8028; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel providers: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing providers\nNOTICE: --Relation active_sessions--\nNOTICE: Pages 14: Changed 0, reaped 0, Empty 0, New 0; Tup 595: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 138, MaxLen 1890; Re-using: Free/Avail. Space 9128/8692; EndEmpty/Avail. Pages 0/5.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index sid_idx: Pages 4; Tuples 595.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel active_sessions: Pages: 14 --> 14; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202330--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202330_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing active_sessions\nNOTICE: --Relation params--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202355--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202355_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing params\nNOTICE: --Relation registry--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202837--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202837_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing registry\nNOTICE: --Relation fb_modes--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202360--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202360_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing fb_modes\nNOTICE: --Relation tmp_b--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202842--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202842_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing tmp_b\nNOTICE: --Relation currensy--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing currensy\nNOTICE: --Relation regions--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 20: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 48, MaxLen 60; Re-using: Free/Avail. Space 7000/7000; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel regions: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing regions\nNOTICE: --Relation callback_types--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 5: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 60, MaxLen 64; Re-using: Free/Avail. Space 7844/7844; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel callback_types: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202365--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. 
Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202365_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing callback_types\nNOTICE: --Relation abonka--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 1: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 48, MaxLen 48; Re-using: Free/Avail. Space 8120/8120; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel abonka: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing abonka\nNOTICE: --Relation courses--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 9: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 48, MaxLen 48; Re-using: Free/Avail. Space 7704/7704; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel courses: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing courses\nNOTICE: --Relation auth--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202802--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202802_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing auth\nNOTICE: --Relation cb_types--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 6: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 48, MaxLen 52; Re-using: Free/Avail. Space 7852/7852; EndEmpty/Avail. 
Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel cb_types: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202370--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202370_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing cb_types\nNOTICE: --Relation reply--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202807--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202807_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing reply\nNOTICE: --Relation log--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing log\nNOTICE: --Relation d_types--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202375--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202375_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing d_types\nNOTICE: --Relation sm_stat--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing sm_stat\nNOTICE: --Relation sm_reg--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing sm_reg\nNOTICE: --Relation sm_usr--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing sm_usr\nNOTICE: --Relation period_types--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202380--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202380_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing period_types\nNOTICE: --Relation _oldobjects--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202827--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202827_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing _oldobjects\nNOTICE: --Relation periods--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202385--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202385_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing periods\nNOTICE: --Relation mycalls--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202832--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202832_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing mycalls\nNOTICE: --Relation menus--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202775--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202775_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing menus\nNOTICE: --Relation vox--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202780--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202780_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing vox\nNOTICE: --Relation realms--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202785--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202785_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing realms\nNOTICE: --Relation messages--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202395--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202395_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing messages\nNOTICE: --Relation oprosy--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202797--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202797_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing oprosy\nNOTICE: --Relation tele_areas_backup--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202733--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202733_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing tele_areas_backup\nNOTICE: --Relation tpl_prices--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202738--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202738_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing tpl_prices\nNOTICE: --Relation gt_zone--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202743--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202743_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing gt_zone\nNOTICE: --Relation n_tpl_prices--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202748--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202748_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing n_tpl_prices\nNOTICE: --Relation old_prices--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.
	CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE: --Relation pg_toast_6202753--
NOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.
	CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE: Index pg_toast_6202753_idx: Pages 1; Tuples 0.
	CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE: Analyzing old_prices
NOTICE: --Relation prices--
NOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 69: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 52, MaxLen 56; Re-using: Free/Avail. Space 4096/4096; EndEmpty/Avail. Pages 0/1.
	CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE: Rel prices: Pages: 1 --> 1; Tuple(s) moved: 0.
	CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE: Analyzing prices
NOTICE: --Relation cb_calls--
NOTICE: Pages 170: Changed 0, reaped 0, Empty 0, New 0; Tup 11183: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 96, MaxLen 132; Re-using: Free/Avail. Space 15216/8216; EndEmpty/Avail. Pages 0/15.
	CPU 0.01s/0.00u sec elapsed 0.01 sec.
NOTICE: Rel cb_calls: Pages: 170 --> 170; Tuple(s) moved: 0.
	CPU 0.00s/0.00u sec elapsed 0.00 sec.
NOTICE: Analyzing cb_calls

[... similar VACUUM VERBOSE output snipped for the remaining relations (servers, zone_types, zones, prints, errors, gt_country, and their toast tables, among others) -- every relation reports Vac 0 dead tuples and "Tuple(s) moved: 0" ...]
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202514--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202514_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing gt_country\nNOTICE: --Relation area_realms--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202549--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202549_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing area_realms\nNOTICE: --Relation dnis--\nNOTICE: Pages 1: Changed 0, reaped 0, Empty 0, New 0; Tup 10: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 82, MaxLen 154; Re-using: Free/Avail. Space 6892/6892; EndEmpty/Avail. Pages 0/1.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Rel dnis: Pages: 1 --> 1; Tuple(s) moved: 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202554--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202554_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing dnis\nNOTICE: --Relation n_areas--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. 
Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202519--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202519_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing n_areas\nNOTICE: --Relation tad_messages--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: --Relation pg_toast_6202559--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL 0/0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Index pg_toast_6202559_idx: Pages 1; Tuples 0.\n\tCPU 0.00s/0.00u sec elapsed 0.00 sec.\nNOTICE: Analyzing tad_messages\nVACUUM\n\nit taked:\nreal 0m3.083s\nuser 0m0.047s\nsys 0m0.028s\n\ncommand was:\nvacuumdb -z -f -v db\n\n(after this query takes the same very long time)\n\nThanks,\n Andriy.\n\n\n", "msg_date": "Fri, 27 Sep 2002 12:50:14 +0300 (EEST)", "msg_from": "Andriy Tkachuk <ant@imt.com.ua>", "msg_from_op": true, "msg_subject": "Re: query speed depends on lifetime of frozen db?" }, { "msg_contents": "On Fri, Sep 27, 2002 at 12:50:14PM +0300, Andriy Tkachuk wrote:\n> On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n> \n> > On Fri, Sep 27, 2002 at 11:49:08AM +0300, Andriy Tkachuk wrote:\n> > > On Fri, 27 Sep 2002, Shridhar Daithankar wrote:\n> > > > was that vacuum full or vacuum analyze? 
Vacuum full should help in this case..\n> > >\n> > > it was full with analize\n> > > That's what i want to say, that this is very strange for me that vacuum\n> > > not helpfull in this situation!\n> >\n> > Ok, can you post the result of VACUUM FULL VERBOSE ANALYSE ?\n\n<snip>\n\nUm, from the looks of that output, it seems your entire DB is less than 2MB,\nright? So it should be totally cached. So it must be your query at fault.\nWhat is the output of EXPLAIN ANALYSE <query>;\n\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Fri, 27 Sep 2002 20:03:19 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: query speed depends on lifetime of frozen db?" }, { "msg_contents": "On Fri, 27 Sep 2002, Shridhar Daithankar wrote:\n\n> On 27 Sep 2002 at 11:49, Andriy Tkachuk wrote:\n> > As I said there was no work with this test database during the time after which\n> > query becomes overloading. There was just work with another databases.\n>\n> >From testing point of view, that is not much good. Try halting all work and\n> just do this testing. One variable at a time..\n\nAll work was halted. Testing was true, in appropriate enviroment.\n\n>\n> I understand it may be difficult if it's a production system but with other DBs\n> working on site, too many variables come into picture..\n\nIn testing time there was only our testing query, no others.\nThere was no any noticeable variable.\n\nThanks,\n Andriy.\n\n", "msg_date": "Fri, 27 Sep 2002 13:04:28 +0300 (EEST)", "msg_from": "Andriy Tkachuk <ant@imt.com.ua>", "msg_from_op": true, "msg_subject": "Re: query speed depends on lifetime of frozen db?" 
}, { "msg_contents": "On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n\n> On Fri, Sep 27, 2002 at 12:50:14PM +0300, Andriy Tkachuk wrote:\n> > On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n> >\n> > > On Fri, Sep 27, 2002 at 11:49:08AM +0300, Andriy Tkachuk wrote:\n> > > > On Fri, 27 Sep 2002, Shridhar Daithankar wrote:\n> > > > > was that vacuum full or vacuum analyze? Vacuum full should help in this case..\n> > > >\n> > > > it was full with analize\n> > > > That's what i want to say, that this is very strange for me that vacuum\n> > > > not helpfull in this situation!\n> > >\n> > > Ok, can you post the result of VACUUM FULL VERBOSE ANALYSE ?\n>\n> <snip>\n>\n> Um, from the looks of that output, it seems your entire DB is less than 2MB,\n> right? So it should be totally cached. So it must be your query at fault.\n> What is the output of EXPLAIN ANALYSE <query>;\n\ndb ~ 10M, but i like your guess.\n\nmy OS:\nLinux 2.4.9-13custom #1 Fri Feb 15 20:03:52 EST 2002 i686\n\nThere is EXPLAIN ANALYSE when query is heavy:\n\nNOTICE: QUERY PLAN:\n\nSort (cost=26.09..26.09 rows=123 width=89) (actual time=168091.22..168091.31 rows=119 loops=1)\n -> Hash Join (cost=1.27..21.81 rows=123 width=89) (actual time=1404.81..168090.21 rows=119 loops=1)\n -> Seq Scan on users u (cost=0.00..18.07 rows=123 width=81) (actual time=0.14..5.67 rows=119 loops=1)\n -> Hash (cost=1.22..1.22 rows=22 width=8) (actual time=0.24..0.24 rows=0 loops=1)\n -> Seq Scan on plans p (cost=0.00..1.22 rows=22 width=8) (actual time=0.12..0.19 rows=22 loops=1)\n SubPlan\n -> Seq Scan on plans (cost=0.00..1.27 rows=1 width=7) (actual time=0.09..0.11 rows=1 loops=119)\n -> Aggregate (cost=23.80..23.80 rows=1 width=4) (actual time=0.93..0.94 rows=1 loops=119)\n -> Seq Scan on oplaty (cost=0.00..23.80 rows=1 width=4) (actual time=0.87..0.91 rows=0 loops=119)\n -> Aggregate (cost=20.84..20.84 rows=1 width=4) (actual time=0.85..0.86 rows=1 loops=119)\n -> Seq Scan on oplaty (cost=0.00..20.84 rows=1 width=4) 
(actual time=0.83..0.84 rows=0 loops=119)\n -> Aggregate (cost=22.32..22.32 rows=1 width=4) (actual time=0.84..0.85 rows=1 loops=119)\n -> Seq Scan on oplaty (cost=0.00..22.32 rows=1 width=4) (actual time=0.83..0.84 rows=0 loops=119)\n -> Aggregate (cost=216.00..216.00 rows=1 width=4) (actual time=1.27..1.27 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..215.88 rows=47 width=4) (actual time=0.25..1.18 rows=39 loops=119)\n -> Aggregate (cost=215.68..215.68 rows=1 width=4) (actual time=0.69..0.69 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..215.61 rows=30 width=4) (actual time=0.07..0.62 rows=32 loops=119)\n -> Aggregate (cost=215.68..215.68 rows=1 width=4) (actual time=0.69..0.69 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..215.61 rows=30 width=4) (actual time=0.06..0.62 rows=32 loops=119)\n -> Aggregate (cost=215.47..215.47 rows=1 width=4) (actual time=0.43..0.43 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..215.47 rows=1 width=4) (actual time=0.23..0.41 rows=3 loops=119)\n -> Aggregate (cost=215.47..215.47 rows=1 width=4) (actual time=0.44..0.44 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..215.47 rows=1 width=4) (actual time=0.23..0.43 rows=3 loops=119)\n -> Aggregate (cost=215.47..215.47 rows=1 width=4) (actual time=0.43..0.43 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..215.47 rows=1 width=4) (actual time=0.14..0.41 rows=4 loops=119)\n -> Aggregate (cost=215.47..215.47 rows=1 width=4) (actual time=0.43..0.43 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..215.47 rows=1 width=4) (actual time=0.14..0.41 rows=4 loops=119)\n -> Aggregate (cost=215.47..215.47 rows=1 width=4) (actual time=0.41..0.42 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..215.47 rows=1 width=4) (actual time=0.40..0.40 rows=0 loops=119)\n -> Aggregate (cost=216.44..216.44 rows=1 width=4) (actual 
time=0.76..0.76 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..216.44 rows=2 width=4) (actual time=0.61..0.74 rows=4 loops=119)\n -> Aggregate (cost=215.61..215.61 rows=1 width=4) (actual time=0.43..0.43 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..215.61 rows=1 width=4) (actual time=0.28..0.42 rows=1 loops=119)\nTotal runtime: 168092.92 msec\n\nEXPLAIN\n\n\nand there is, when query is light:\n\nNOTICE: QUERY PLAN:\n\nSort (cost=28.90..28.90 rows=1 width=136) (actual time=3863.35..3863.43 rows=119 loops=1)\n -> Hash Join (cost=1.27..28.89 rows=1 width=136) (actual time=74.98..3861.69 rows=119 loops=1)\n -> Seq Scan on users u (cost=0.00..27.50 rows=10 width=128) (actual time=0.17..5.26 rows=119 loops=1)\n -> Hash (cost=1.22..1.22 rows=22 width=8) (actual time=0.16..0.16 rows=0 loops=1)\n -> Seq Scan on plans p (cost=0.00..1.22 rows=22 width=8) (actual time=0.03..0.11 rows=22 loops=1)\n SubPlan\n -> Seq Scan on plans (cost=0.00..1.27 rows=1 width=32) (actual time=0.03..0.05 rows=1 loops=119)\n -> Aggregate (cost=35.00..35.00 rows=1 width=4) (actual time=0.91..0.91 rows=1 loops=119)\n -> Seq Scan on oplaty (cost=0.00..35.00 rows=1 width=4) (actual time=0.85..0.89 rows=0 loops=119)\n -> Aggregate (cost=30.00..30.00 rows=1 width=4) (actual time=0.85..0.85 rows=1 loops=119)\n -> Seq Scan on oplaty (cost=0.00..30.00 rows=1 width=4) (actual time=0.83..0.84 rows=0 loops=119)\n -> Aggregate (cost=32.50..32.50 rows=1 width=4) (actual time=0.84..0.84 rows=1 loops=119)\n -> Seq Scan on oplaty (cost=0.00..32.50 rows=1 width=4) (actual time=0.83..0.83 rows=0 loops=119)\n -> Aggregate (cost=12.39..12.39 rows=1 width=4) (actual time=1.06..1.06 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..12.38 rows=1 width=4) (actual time=0.07..0.98 rows=39 loops=119)\n -> Aggregate (cost=12.37..12.37 rows=1 width=4) (actual time=0.69..0.69 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..12.37 rows=1 
width=4) (actual time=0.07..0.62 rows=32 loops=119)\n -> Aggregate (cost=12.37..12.37 rows=1 width=4) (actual time=0.69..0.69 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..12.37 rows=1 width=4) (actual time=0.06..0.62 rows=32 loops=119)\n -> Aggregate (cost=12.37..12.37 rows=1 width=4) (actual time=0.43..0.43 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..12.36 rows=1 width=4) (actual time=0.23..0.41 rows=3 loops=119)\n -> Aggregate (cost=12.37..12.37 rows=1 width=4) (actual time=0.43..0.43 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..12.36 rows=1 width=4) (actual time=0.23..0.41 rows=3 loops=119)\n -> Aggregate (cost=12.37..12.37 rows=1 width=4) (actual time=0.43..0.44 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..12.36 rows=1 width=4) (actual time=0.14..0.41 rows=4 loops=119)\n -> Aggregate (cost=12.37..12.37 rows=1 width=4) (actual time=0.43..0.43 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..12.36 rows=1 width=4) (actual time=0.14..0.41 rows=4 loops=119)\n -> Aggregate (cost=12.37..12.37 rows=1 width=4) (actual time=0.41..0.41 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..12.36 rows=1 width=4) (actual time=0.40..0.40 rows=0 loops=119)\n -> Aggregate (cost=12.41..12.41 rows=1 width=4) (actual time=0.73..0.73 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..12.41 rows=1 width=4) (actual time=0.58..0.71 rows=4 loops=119)\n -> Aggregate (cost=12.37..12.37 rows=1 width=4) (actual time=0.42..0.42 rows=1 loops=119)\n -> Index Scan using bill_uid on bills (cost=0.00..12.37 rows=1 width=4) (actual time=0.27..0.41 rows=1 loops=119)\nTotal runtime: 3865.89 msec\n\nEXPLAIN\n\n", "msg_date": "Fri, 27 Sep 2002 13:28:13 +0300 (EEST)", "msg_from": "Andriy Tkachuk <ant@imt.com.ua>", "msg_from_op": true, "msg_subject": "Re: query speed depends on lifetime of frozen db?" 
}, { "msg_contents": "On Fri, 27 Sep 2002, Andriy Tkachuk wrote:\n\n> On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n>\n> > On Fri, Sep 27, 2002 at 12:50:14PM +0300, Andriy Tkachuk wrote:\n> > > On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n> > >\n> > > > On Fri, Sep 27, 2002 at 11:49:08AM +0300, Andriy Tkachuk wrote:\n> > > > > On Fri, 27 Sep 2002, Shridhar Daithankar wrote:\n> > > > > > was that vacuum full or vacuum analyze? Vacuum full should help in this case..\n> > > > >\n> > > > > it was full with analize\n> > > > > That's what i want to say, that this is very strange for me that vacuum\n> > > > > not helpfull in this situation!\n> > > >\n> > > > Ok, can you post the result of VACUUM FULL VERBOSE ANALYSE ?\n> >\n> > <snip>\n> >\n> > Um, from the looks of that output, it seems your entire DB is less than 2MB,\n> > right? So it should be totally cached. So it must be your query at fault.\n> > What is the output of EXPLAIN ANALYSE <query>;\n>\n> db ~ 10M, but i like your guess.\n>\n> my OS:\n> Linux 2.4.9-13custom #1 Fri Feb 15 20:03:52 EST 2002 i686\n\nand 256M phys_mem and\nshared_buffers = 1024\n...\n\ni test it on linux and FreeBSD (with kern.ipc.shm_use_phys=1, ~400M phys_mem,\nand soft-updates on UFS) and there is the same behavior so i think that\nthis problem is not OS specific.\n\nAlso just after dumping and restoring the test db i restart pg and\nthe query was fast though, so i think that it's not caching.\n\nIt seems like there are some relations between databases that speed\nof this query depends on with time.\n\nThanks,\n Andriy.\n\n", "msg_date": "Fri, 27 Sep 2002 14:46:43 +0300 (EEST)", "msg_from": "Andriy Tkachuk <ant@imt.com.ua>", "msg_from_op": true, "msg_subject": "Re: query speed depends on lifetime of frozen db?" 
}, { "msg_contents": "On Fri, Sep 27, 2002 at 01:28:13PM +0300, Andriy Tkachuk wrote:\n> On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n> > What is the output of EXPLAIN ANALYSE <query>;\n> \n> There is EXPLAIN ANALYSE when query is heavy:\n\nOookaaay. Your query is *evil*. 14 subqueries executed for *each* row of\noutput!?! I reackon you could improve your query just by rewriting it into a\nbetter form. How can you have 10 subqueries to the same table?\n\nAnyway, the only thing that seems to change is the statistics, which leads\nme to beleive that all that is happening is that the planner is reordering some\nof your clauses causing it to execute expensive ones it may otherwise be\nable to avoid. In your case the default statistics do better than the real\nones.\n\nI think I need to understand your query to help any further.\n\nSnipped plans follow:\n\n> NOTICE: QUERY PLAN:\n> \n> Sort (cost=26.09..26.09 rows=123 width=89) (actual time=168091.22..168091.31 rows=119 loops=1)\n> -> Hash Join (cost=1.27..21.81 rows=123 width=89) (actual time=1404.81..168090.21 rows=119 loops=1)\n> -> Seq Scan on users u (cost=0.00..18.07 rows=123 width=81) (actual time=0.14..5.67 rows=119 loops=1)\n> SubPlan\n> -> Aggregate (cost=215.61..215.61 rows=1 width=4) (actual time=0.43..0.43 rows=1 loops=119)\n> -> Index Scan using bill_uid on bills (cost=0.00..215.61 rows=1 width=4) (actual time=0.28..0.42 rows=1 loops=119)\n> Total runtime: 168092.92 msec\n> \n> EXPLAIN\n> \n> \n> and there is, when query is light:\n> \n> NOTICE: QUERY PLAN:\n> \n> Sort (cost=28.90..28.90 rows=1 width=136) (actual time=3863.35..3863.43 rows=119 loops=1)\n> -> Hash Join (cost=1.27..28.89 rows=1 width=136) (actual time=74.98..3861.69 rows=119 loops=1)\n> -> Seq Scan on users u (cost=0.00..27.50 rows=10 width=128) (actual time=0.17..5.26 rows=119 loops=1)\n> -> Hash (cost=1.22..1.22 rows=22 width=8) (actual time=0.16..0.16 rows=0 loops=1)\n> -> Seq Scan on plans p (cost=0.00..1.22 rows=22 width=8) 
(actual time=0.03..0.11 rows=22 loops=1)\n> SubPlan\n> -> Aggregate (cost=12.37..12.37 rows=1 width=4) (actual time=0.69..0.69 rows=1 loops=119)\n> -> Index Scan using bill_uid on bills (cost=0.00..12.37 rows=1 width=4) (actual time=0.06..0.62 rows=32 loops=119)\n> Total runtime: 3865.89 msec\n> \n> EXPLAIN\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Fri, 27 Sep 2002 22:12:30 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: query speed depends on lifetime of frozen db?" }, { "msg_contents": "On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n\n> On Fri, Sep 27, 2002 at 01:28:13PM +0300, Andriy Tkachuk wrote:\n> > On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n> > > What is the output of EXPLAIN ANALYSE <query>;\n> >\n> > There is EXPLAIN ANALYSE when query is heavy:\n>\n> Oookaaay. Your query is *evil*. 14 subqueries executed for *each* row of\n> output!?! I reackon you could improve your query just by rewriting it into a\n> better form. How can you have 10 subqueries to the same table?\n>\n> Anyway, the only thing that seems to change is the statistics, which leads\n> me to beleive that all that is happening is that the planner is reordering some\n> of your clauses causing it to execute expensive ones it may otherwise be\n> able to avoid. In your case the default statistics do better than the real\n> ones.\n\nYES! You right!\nJust after restirong db i made vacuumdb -z -f\nand query become heavy!\n\nDoes one have any ideas how to ovecome this!?\n\nThanks a lot Martijn,\n Andriy.\n\n", "msg_date": "Fri, 27 Sep 2002 17:58:05 +0300 (EEST)", "msg_from": "Andriy Tkachuk <ant@imt.com.ua>", "msg_from_op": true, "msg_subject": "Re: query speed depends on lifetime of frozen db?" 
}, { "msg_contents": "Andriy Tkachuk <ant@imt.com.ua> writes:\n> There is EXPLAIN ANALYSE when query is heavy:\n> ...\n> and there is, when query is light:\n> ...\n\nThese sure appear to be the same query plan. I am thinking that the\ncost differential is not in the plan itself at all, but in some bit of\nprocessing that doesn't show in the plan. In particular, since all the\nextra runtime shows up in the top join node (where the SELECT result\nlist would be evaluated), I am thinking that there's something funny\ngoing on in some user-defined function that's called in the SELECT list.\nYou have not shown us the actual query yet, AFAIR. Any\npotentially-expensive functions in there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Sep 2002 12:51:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: query speed depends on lifetime of frozen db? " }, { "msg_contents": "On Fri, Sep 27, 2002 at 05:58:05PM +0300, Andriy Tkachuk wrote:\n> On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n> \n> > On Fri, Sep 27, 2002 at 01:28:13PM +0300, Andriy Tkachuk wrote:\n> > > On Fri, 27 Sep 2002, Martijn van Oosterhout wrote:\n> > > > What is the output of EXPLAIN ANALYSE <query>;\n> > >\n> > > There is EXPLAIN ANALYSE when query is heavy:\n> >\n> > Oookaaay. Your query is *evil*. 14 subqueries executed for *each* row of\n> > output!?! I reackon you could improve your query just by rewriting it into a\n> > better form. How can you have 10 subqueries to the same table?\n> \n> YES! You right!\n> Just after restirong db i made vacuumdb -z -f\n> and query become heavy!\n> \n> Does one have any ideas how to ovecome this!?\n\n\nFirstly, how is calc_account() defined? Is it doing subqueries? If it is\nthen the planner won't be seeing them. 
Is it optimised?\n\n calc_account (u.uid, 1030827600) as start_account,\n calc_account (u.uid, 1032178388) as end_account,\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Sat, 28 Sep 2002 12:05:58 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: query speed depends on lifetime of frozen db?" } ]
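[Editor's note: the thread above ends without a concrete rewrite, so here is a hedged sketch of the usual fix for plans like the ones posted. The fourteen SubPlan aggregates over `bills` and `oplaty` each run once per `users` row (loops=119 in the EXPLAIN ANALYZE output); they can be collapsed into a single grouped scan of the child table that is joined back once. Only `users`, `bills` and `uid` come from the thread — `amount` and `paid` are assumed column names for illustration:]

```sql
-- Sketch only: one aggregate pass over bills replaces many per-row
-- correlated subselects of the form
--   (SELECT sum(amount) FROM bills WHERE uid = u.uid AND ...)
-- "amount" and "paid" are hypothetical columns, not from the thread.
SELECT u.uid,
       COALESCE(b.billed_total, 0) AS billed_total,
       COALESCE(b.paid_total, 0)  AS paid_total
FROM users u
LEFT JOIN (
    SELECT uid,
           sum(amount) AS billed_total,
           sum(CASE WHEN paid THEN amount ELSE 0 END) AS paid_total
    FROM bills
    GROUP BY uid
) b ON b.uid = u.uid;
```

[If `calc_account()` itself issues per-uid queries, as Tom suspects, the same flattening applies inside it — and note that EXPLAIN ANALYZE does not break out time spent inside functions called from the SELECT list, which is why both plans look identical.]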
[ { "msg_contents": "\n> What could you recommend? Locking the table and selecting\n> max(invoice_id) wouldn't really be much faster, with max(invoice_id)\n> not using an index...\n\nselect invoice_id from table order by invoice_id desc limit 1;\n\nshould get you the maximum fast if you have a unique index on invoice_id.\n\nAndreas\n", "msg_date": "Thu, 26 Sep 2002 15:48:42 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Insert Performance " }, { "msg_contents": "Zeugswetter Andreas SB SD <ZeugswetterA@spardat.at> wrote:\n\n> > What could you recommend? Locking the table and selecting\n> > max(invoice_id) wouldn't really be much faster, with max(invoice_id)\n> > not using an index...\n>\n> select invoice_id from table order by invoice_id desc limit 1;\n>\n> should get you the maximum fast if you have a unique index on invoice_id.\n>\n> Andreas\n\nI've figured that out after reading the TODO about max()/min() using\nindexes.\nThank you anyway!\n\nThe second problem I had was that I have invoices here that have not been\nsent into accounting. An actual invoice_id is something like 210309 at the\nmoment. So I used invoice_ids > 30000000 for \"pre\" invoice_ids. Having much\nof those \"pre\" invoices makes select ... 
desc limit 1 too slow.\n\nI figured out that I can use a partial index as a solution:\n\nCREATE INDEX idx_real_invoice_id ON invoice (invoice_id) WHERE invoice_id <\n300000000;\n\nNow it works great.\nI have a function getNextInvoiceID():\n\nCREATE OR REPLACE FUNCTION getNextInvoiceId() RETURNS bigint AS'\nDECLARE\n ret bigint;\nBEGIN\n LOCK TABLE invoice IN SHARE ROW EXCLUSIVE MODE;\n SELECT INTO ret invoice_id FROM invoice WHERE invoice_id < \\'3000000000\\'\nORDER BY invoice_id DESC limit 1;\n RETURN ret + 1;\nEND;\n' LANGUAGE 'plpgsql';\n\nUsing that is nearly as fast as a regular sequence.\n\nThanks to all of you for your help.\n\nBest Regards,\nMichael Paesold\n\n", "msg_date": "Thu, 26 Sep 2002 16:20:31 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: Insert Performance " } ]
[ { "msg_contents": "\n> > shared libs on AIX need to be able to resolve all symbols at linkage time.\n> > Those two symbols are in backend/utils/SUBSYS.o but not in the postgres\n> > executable.\n> > My guess is, that they are eliminated by the linker ? Do they need an extern\n> > declaration ?\n\nFurther research proved, that the AIX linker eliminates functions on a per\nc file basis if none of them is referenced elsewhere (declared extern or not).\nThus it eliminates the whole conv.c file from the postgres executable since \nthose functions are only used by the conversion shared objects.\n\nAnybody have an idea what I can do ?\n\nAndreas\n", "msg_date": "Thu, 26 Sep 2002 16:19:15 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Further research proved, that the AIX linker eliminates functions on a per\n> c file basis if none of them is referenced elsewhere (declared extern or not).\n> Thus it eliminates the whole conv.c file from the postgres executable since \n> those functions are only used by the conversion shared objects.\n\nYipes. Surely there is a linker switch to suppress that behavior?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 10:55:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...) " } ]
[ { "msg_contents": "On Sat, 07 Sep 2002 10:21:21 -0700\nJoe Conway <mail@joeconway.com> wrote:\n\n> I just sent in a patch using the ancestor check method. It turned out \n> that the performance hit was pretty small on a moderate sized tree.\n> \n> My test case was a 220000 record bill-of-material table. The tree built \n> was 9 levels deep with about 3800 nodes. The performance hit was only \n> about 1%.\n\n\nThe previous patch fixed an infinite recursion bug in \ncontrib/tablefunc/tablefunc.c:connectby. But, other unmanageable error\nseems to occur even if a table has commonplace tree data(see below).\n\n\nI would think the patch, ancestor check, should be\n\n if (strstr(branch_delim || branchstr->data || branch_delim,\n branch_delim || current_key || branch_delim))\n\nThis is my image, not a real code. However, if branchstr->data includes\nbranch_delim, my image will not be perfect.\n\n\n\n\n-- test connectby with int based hierarchy\nDROP TABLE connectby_tree;\nCREATE TABLE connectby_tree(keyid int, parent_keyid int);\n\nINSERT INTO connectby_tree VALUES(11,NULL);\nINSERT INTO connectby_tree VALUES(10,11);\nINSERT INTO connectby_tree VALUES(111,11);\nINSERT INTO connectby_tree VALUES(1,111);\n\nSELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', '11', 0, '-')\n AS t(keyid int, parent_keyid int, level int, branch text)\n\nERROR: infinite recursion detected\n\n\n\nRegards,\nMasaru Sugawara\n\n\n", "msg_date": "Fri, 27 Sep 2002 02:02:49 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": true, "msg_subject": "About connectby() again" }, { "msg_contents": "On Fri, 27 Sep 2002 02:02:49 +0900\nI wrote <rk73@sea.plala.or.jp> wrote:\n\n\n> On Sat, 07 Sep 2002 10:21:21 -0700\n> Joe Conway <mail@joeconway.com> wrote:\n> \n> > I just sent in a patch using the ancestor check method. 
It turned out \n> > that the performance hit was pretty small on a moderate sized tree.\n> > \n> > My test case was a 220000 record bill-of-material table. The tree built \n> > was 9 levels deep with about 3800 nodes. The performance hit was only \n> > about 1%.\n> \n> \n> The previous patch fixed an infinite recursion bug in \n> contrib/tablefunc/tablefunc.c:connectby. But, other unmanageable error\n> seems to occur even if a table has commonplace tree data(see below).\n> \n> \n> I would think the patch, ancestor check, should be\n> \n> if (strstr(branch_delim || branchstr->data || branch_delim,\n> branch_delim || current_key || branch_delim))\n> \n> This is my image, not a real code. However, if branchstr->data includes\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n keyid or parent_keyid\n\n> branch_delim, my image will not be perfect.\n> \n> \n> \n> \n> -- test connectby with int based hierarchy\n> DROP TABLE connectby_tree;\n> CREATE TABLE connectby_tree(keyid int, parent_keyid int);\n> \n> INSERT INTO connectby_tree VALUES(11,NULL);\n> INSERT INTO connectby_tree VALUES(10,11);\n> INSERT INTO connectby_tree VALUES(111,11);\n> INSERT INTO connectby_tree VALUES(1,111);\n> \n> SELECT * FROM connectby('connectby_tree', 'keyid', 'parent_keyid', '11', 0, '-')\n> AS t(keyid int, parent_keyid int, level int, branch text)\n> \n> ERROR: infinite recursion detected\n> \n> \n> \n> Regards,\n> Masaru Sugawara\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n\nRegards,\nMasaru Sugawara\n\n\n", "msg_date": "Fri, 27 Sep 2002 02:27:05 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": true, "msg_subject": "Re: About connectby() again" }, { "msg_contents": "Masaru Sugawara wrote:\n> The previous patch fixed an infinite recursion bug 
in \n> contrib/tablefunc/tablefunc.c:connectby. But, other unmanageable error\n> seems to occur even if a table has commonplace tree data(see below).\n> \n> I would think the patch, ancestor check, should be\n> \n> if (strstr(branch_delim || branchstr->data || branch_delim,\n> branch_delim || current_key || branch_delim))\n> \n> This is my image, not a real code. However, if branchstr->data includes\n> branch_delim, my image will not be perfect.\n\nGood point. Thank you Masaru for the suggested fix.\n\nAttached is a patch to fix the bug found by Masaru. His example now produces:\n\nregression=# SELECT * FROM connectby('connectby_tree', 'keyid', \n'parent_keyid', '11', 0, '-') AS t(keyid int, parent_keyid int, level int, \nbranch text);\n keyid | parent_keyid | level | branch\n-------+--------------+-------+----------\n 11 | | 0 | 11\n 10 | 11 | 1 | 11-10\n 111 | 11 | 1 | 11-111\n 1 | 111 | 2 | 11-111-1\n(4 rows)\n\nWhile making the patch I also realized that the \"no show branch\" form of the \nfunction was not going to work very well for recursion detection. Therefore \nthere is now a default branch delimiter ('~') that is used internally, for \nthat case, to enable recursion detection to work. If you need a different \ndelimiter for your specific data, you will have to use the \"show branch\" form \nof the function.\n\nIf there are no objections, please apply. Thanks,\n\nJoe", "msg_date": "Thu, 26 Sep 2002 16:32:08 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: About connectby() again" }, { "msg_contents": "On Thu, 26 Sep 2002 16:32:08 -0700\nJoe Conway <mail@joeconway.com> wrote:\n\n\n> Masaru Sugawara wrote:\n> > The previous patch fixed an infinite recursion bug in \n> > contrib/tablefunc/tablefunc.c:connectby. 
But, other unmanageable error\n> > seems to occur even if a table has commonplace tree data(see below).\n> > \n> > I would think the patch, ancestor check, should be\n> > \n> > if (strstr(branch_delim || branchstr->data || branch_delim,\n> > branch_delim || current_key || branch_delim))\n> > \n> > This is my image, not a real code. However, if branchstr->data includes\n> > branch_delim, my image will not be perfect.\n> \n> Good point. Thank you Masaru for the suggested fix.\n> \n> Attached is a patch to fix the bug found by Masaru. His example now produces:\n> \n> regression=# SELECT * FROM connectby('connectby_tree', 'keyid', \n> 'parent_keyid', '11', 0, '-') AS t(keyid int, parent_keyid int, level int, \n> branch text);\n> keyid | parent_keyid | level | branch\n> -------+--------------+-------+----------\n> 11 | | 0 | 11\n> 10 | 11 | 1 | 11-10\n> 111 | 11 | 1 | 11-111\n> 1 | 111 | 2 | 11-111-1\n> (4 rows)\n> \n> While making the patch I also realized that the \"no show branch\" form of the \n> function was not going to work very well for recursion detection. Therefore \n> there is now a default branch delimiter ('~') that is used internally, for \n> that case, to enable recursion detection to work. If you need a different \n> delimiter for your specific data, you will have to use the \"show branch\" form \n> of the function.\n> \n> If there are no objections, please apply. 
Thanks,\n\n\n I have no objection to your internally adding strings to detect a recursion.\nAnd I agree with your definition--the default delimiter is a tilde.\nThanks a lot.\n\n\n\nRegards,\nMasaru Sugawara\n\n\n", "msg_date": "Fri, 27 Sep 2002 23:53:04 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": true, "msg_subject": "Re: About connectby() again" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Masaru Sugawara wrote:\n> > The previous patch fixed an infinite recursion bug in \n> > contrib/tablefunc/tablefunc.c:connectby. But, other unmanageable error\n> > seems to occur even if a table has commonplace tree data(see below).\n> > \n> > I would think the patch, ancestor check, should be\n> > \n> > if (strstr(branch_delim || branchstr->data || branch_delim,\n> > branch_delim || current_key || branch_delim))\n> > \n> > This is my image, not a real code. However, if branchstr->data includes\n> > branch_delim, my image will not be perfect.\n> \n> Good point. Thank you Masaru for the suggested fix.\n> \n> Attached is a patch to fix the bug found by Masaru. His example now produces:\n> \n> regression=# SELECT * FROM connectby('connectby_tree', 'keyid', \n> 'parent_keyid', '11', 0, '-') AS t(keyid int, parent_keyid int, level int, \n> branch text);\n> keyid | parent_keyid | level | branch\n> -------+--------------+-------+----------\n> 11 | | 0 | 11\n> 10 | 11 | 1 | 11-10\n> 111 | 11 | 1 | 11-111\n> 1 | 111 | 2 | 11-111-1\n> (4 rows)\n> \n> While making the patch I also realized that the \"no show branch\" form of the \n> function was not going to work very well for recursion detection. 
Therefore \n> there is now a default branch delimiter ('~') that is used internally, for \n> that case, to enable recursion detection to work. If you need a different \n> delimiter for your specific data, you will have to use the \"show branch\" form \n> of the function.\n> \n> If there are no objections, please apply. Thanks,\n> \n> Joe\n\n> Index: contrib/tablefunc/README.tablefunc\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/tablefunc/README.tablefunc,v\n> retrieving revision 1.3\n> diff -c -r1.3 README.tablefunc\n> *** contrib/tablefunc/README.tablefunc\t2 Sep 2002 05:44:04 -0000\t1.3\n> --- contrib/tablefunc/README.tablefunc\t26 Sep 2002 22:57:27 -0000\n> ***************\n> *** 365,371 ****\n> \n> branch_delim\n> \n> ! if optional branch value is desired, this string is used as the delimiter\n> \n> Outputs\n> \n> --- 365,373 ----\n> \n> branch_delim\n> \n> ! If optional branch value is desired, this string is used as the delimiter.\n> ! When not provided, a default value of '~' is used for internal \n> ! recursion detection only, and no \"branch\" field is returned.\n> \n> Outputs\n> \n> ***************\n> *** 388,394 ****\n> the level value output\n> \n> 3. If the branch field is not desired, omit both the branch_delim input\n> ! parameter *and* the branch field in the query column definition\n> \n> 4. If the branch field is desired, it must be the forth column in the query\n> column definition, and it must be type TEXT\n> --- 390,399 ----\n> the level value output\n> \n> 3. If the branch field is not desired, omit both the branch_delim input\n> ! parameter *and* the branch field in the query column definition. Note\n> ! that when branch_delim is not provided, a default value of '~' is used\n> ! for branch_delim for internal recursion detection, even though the branch\n> ! field is not returned.\n> \n> 4. 
If the branch field is desired, it must be the forth column in the query\n> column definition, and it must be type TEXT\n> Index: contrib/tablefunc/tablefunc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/tablefunc/tablefunc.c,v\n> retrieving revision 1.9\n> diff -c -r1.9 tablefunc.c\n> *** contrib/tablefunc/tablefunc.c\t14 Sep 2002 19:32:54 -0000\t1.9\n> --- contrib/tablefunc/tablefunc.c\t26 Sep 2002 23:09:27 -0000\n> ***************\n> *** 652,657 ****\n> --- 652,660 ----\n> \t\tbranch_delim = GET_STR(PG_GETARG_TEXT_P(5));\n> \t\tshow_branch = true;\n> \t}\n> + \telse\n> + \t\t/* default is no show, tilde for the delimiter */\n> + \t\tbranch_delim = pstrdup(\"~\");\n> \n> \tper_query_ctx = rsinfo->econtext->ecxt_per_query_memory;\n> \toldcontext = MemoryContextSwitchTo(per_query_ctx);\n> ***************\n> *** 798,807 ****\n> --- 801,816 ----\n> \t\tchar\t *current_branch;\n> \t\tchar\t **values;\n> \t\tStringInfo\tbranchstr = NULL;\n> + \t\tStringInfo\tchk_branchstr = NULL;\n> + \t\tStringInfo\tchk_current_key = NULL;\n> \n> \t\t/* start a new branch */\n> \t\tbranchstr = makeStringInfo();\n> \n> + \t\t/* need these to check for recursion */\n> + \t\tchk_branchstr = makeStringInfo();\n> + \t\tchk_current_key = makeStringInfo();\n> + \n> \t\tif (show_branch)\n> \t\t\tvalues = (char **) palloc(CONNECTBY_NCOLS * sizeof(char *));\n> \t\telse\n> ***************\n> *** 854,875 ****\n> \t\t{\n> \t\t\t/* initialize branch for this pass */\n> \t\t\tappendStringInfo(branchstr, \"%s\", branch);\n> \n> \t\t\t/* get the next sql result tuple */\n> \t\t\tspi_tuple = tuptable->vals[i];\n> \n> \t\t\t/* get the current key and parent */\n> \t\t\tcurrent_key = SPI_getvalue(spi_tuple, spi_tupdesc, 1);\n> \t\t\tcurrent_key_parent = pstrdup(SPI_getvalue(spi_tuple, spi_tupdesc, 2));\n> \n> - \t\t\t/* check to see if this key is also an ancestor */\n> - \t\t\tif (strstr(branchstr->data, current_key))\n> - 
\t\t\t\telog(ERROR, \"infinite recursion detected\");\n> - \n> \t\t\t/* get the current level */\n> \t\t\tsprintf(current_level, \"%d\", level);\n> \n> ! \t\t\t/* extend the branch */\n> \t\t\tappendStringInfo(branchstr, \"%s%s\", branch_delim, current_key);\n> \t\t\tcurrent_branch = branchstr->data;\n> \n> --- 863,886 ----\n> \t\t{\n> \t\t\t/* initialize branch for this pass */\n> \t\t\tappendStringInfo(branchstr, \"%s\", branch);\n> + \t\t\tappendStringInfo(chk_branchstr, \"%s%s%s\", branch_delim, branch, branch_delim);\n> \n> \t\t\t/* get the next sql result tuple */\n> \t\t\tspi_tuple = tuptable->vals[i];\n> \n> \t\t\t/* get the current key and parent */\n> \t\t\tcurrent_key = SPI_getvalue(spi_tuple, spi_tupdesc, 1);\n> + \t\t\tappendStringInfo(chk_current_key, \"%s%s%s\", branch_delim, current_key, branch_delim);\n> \t\t\tcurrent_key_parent = pstrdup(SPI_getvalue(spi_tuple, spi_tupdesc, 2));\n> \n> \t\t\t/* get the current level */\n> \t\t\tsprintf(current_level, \"%d\", level);\n> \n> ! \t\t\t/* check to see if this key is also an ancestor */\n> ! \t\t\tif (strstr(chk_branchstr->data, chk_current_key->data))\n> ! \t\t\t\telog(ERROR, \"infinite recursion detected\");\n> ! \n> ! 
\t\t\t/* OK, extend the branch */\n> \t\t\tappendStringInfo(branchstr, \"%s%s\", branch_delim, current_key);\n> \t\t\tcurrent_branch = branchstr->data;\n> \n> ***************\n> *** 913,918 ****\n> --- 924,935 ----\n> \t\t\t/* reset branch for next pass */\n> \t\t\txpfree(branchstr->data);\n> \t\t\tinitStringInfo(branchstr);\n> + \n> + \t\t\txpfree(chk_branchstr->data);\n> + \t\t\tinitStringInfo(chk_branchstr);\n> + \n> + \t\t\txpfree(chk_current_key->data);\n> + \t\t\tinitStringInfo(chk_current_key);\n> \t\t}\n> \t}\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 29 Sep 2002 00:21:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: About connectby() again" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Masaru Sugawara wrote:\n> > The previous patch fixed an infinite recursion bug in \n> > contrib/tablefunc/tablefunc.c:connectby. But, other unmanageable error\n> > seems to occur even if a table has commonplace tree data(see below).\n> > \n> > I would think the patch, ancestor check, should be\n> > \n> > if (strstr(branch_delim || branchstr->data || branch_delim,\n> > branch_delim || current_key || branch_delim))\n> > \n> > This is my image, not a real code. However, if branchstr->data includes\n> > branch_delim, my image will not be perfect.\n> \n> Good point. Thank you Masaru for the suggested fix.\n> \n> Attached is a patch to fix the bug found by Masaru. 
His example now produces:\n> \n> regression=# SELECT * FROM connectby('connectby_tree', 'keyid', \n> 'parent_keyid', '11', 0, '-') AS t(keyid int, parent_keyid int, level int, \n> branch text);\n> keyid | parent_keyid | level | branch\n> -------+--------------+-------+----------\n> 11 | | 0 | 11\n> 10 | 11 | 1 | 11-10\n> 111 | 11 | 1 | 11-111\n> 1 | 111 | 2 | 11-111-1\n> (4 rows)\n> \n> While making the patch I also realized that the \"no show branch\" form of the \n> function was not going to work very well for recursion detection. Therefore \n> there is now a default branch delimiter ('~') that is used internally, for \n> that case, to enable recursion detection to work. If you need a different \n> delimiter for your specific data, you will have to use the \"show branch\" form \n> of the function.\n> \n> If there are no objections, please apply. Thanks,\n> \n> Joe\n\n> Index: contrib/tablefunc/README.tablefunc\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/tablefunc/README.tablefunc,v\n> retrieving revision 1.3\n> diff -c -r1.3 README.tablefunc\n> *** contrib/tablefunc/README.tablefunc\t2 Sep 2002 05:44:04 -0000\t1.3\n> --- contrib/tablefunc/README.tablefunc\t26 Sep 2002 22:57:27 -0000\n> ***************\n> *** 365,371 ****\n> \n> branch_delim\n> \n> ! if optional branch value is desired, this string is used as the delimiter\n> \n> Outputs\n> \n> --- 365,373 ----\n> \n> branch_delim\n> \n> ! If optional branch value is desired, this string is used as the delimiter.\n> ! When not provided, a default value of '~' is used for internal \n> ! recursion detection only, and no \"branch\" field is returned.\n> \n> Outputs\n> \n> ***************\n> *** 388,394 ****\n> the level value output\n> \n> 3. If the branch field is not desired, omit both the branch_delim input\n> ! parameter *and* the branch field in the query column definition\n> \n> 4. 
If the branch field is desired, it must be the forth column in the query\n> column definition, and it must be type TEXT\n> --- 390,399 ----\n> the level value output\n> \n> 3. If the branch field is not desired, omit both the branch_delim input\n> ! parameter *and* the branch field in the query column definition. Note\n> ! that when branch_delim is not provided, a default value of '~' is used\n> ! for branch_delim for internal recursion detection, even though the branch\n> ! field is not returned.\n> \n> 4. If the branch field is desired, it must be the forth column in the query\n> column definition, and it must be type TEXT\n> Index: contrib/tablefunc/tablefunc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/tablefunc/tablefunc.c,v\n> retrieving revision 1.9\n> diff -c -r1.9 tablefunc.c\n> *** contrib/tablefunc/tablefunc.c\t14 Sep 2002 19:32:54 -0000\t1.9\n> --- contrib/tablefunc/tablefunc.c\t26 Sep 2002 23:09:27 -0000\n> ***************\n> *** 652,657 ****\n> --- 652,660 ----\n> \t\tbranch_delim = GET_STR(PG_GETARG_TEXT_P(5));\n> \t\tshow_branch = true;\n> \t}\n> + \telse\n> + \t\t/* default is no show, tilde for the delimiter */\n> + \t\tbranch_delim = pstrdup(\"~\");\n> \n> \tper_query_ctx = rsinfo->econtext->ecxt_per_query_memory;\n> \toldcontext = MemoryContextSwitchTo(per_query_ctx);\n> ***************\n> *** 798,807 ****\n> --- 801,816 ----\n> \t\tchar\t *current_branch;\n> \t\tchar\t **values;\n> \t\tStringInfo\tbranchstr = NULL;\n> + \t\tStringInfo\tchk_branchstr = NULL;\n> + \t\tStringInfo\tchk_current_key = NULL;\n> \n> \t\t/* start a new branch */\n> \t\tbranchstr = makeStringInfo();\n> \n> + \t\t/* need these to check for recursion */\n> + \t\tchk_branchstr = makeStringInfo();\n> + \t\tchk_current_key = makeStringInfo();\n> + \n> \t\tif (show_branch)\n> \t\t\tvalues = (char **) palloc(CONNECTBY_NCOLS * sizeof(char *));\n> \t\telse\n> ***************\n> *** 854,875 ****\n> \t\t{\n> 
\t\t\t/* initialize branch for this pass */\n> \t\t\tappendStringInfo(branchstr, \"%s\", branch);\n> \n> \t\t\t/* get the next sql result tuple */\n> \t\t\tspi_tuple = tuptable->vals[i];\n> \n> \t\t\t/* get the current key and parent */\n> \t\t\tcurrent_key = SPI_getvalue(spi_tuple, spi_tupdesc, 1);\n> \t\t\tcurrent_key_parent = pstrdup(SPI_getvalue(spi_tuple, spi_tupdesc, 2));\n> \n> - \t\t\t/* check to see if this key is also an ancestor */\n> - \t\t\tif (strstr(branchstr->data, current_key))\n> - \t\t\t\telog(ERROR, \"infinite recursion detected\");\n> - \n> \t\t\t/* get the current level */\n> \t\t\tsprintf(current_level, \"%d\", level);\n> \n> ! \t\t\t/* extend the branch */\n> \t\t\tappendStringInfo(branchstr, \"%s%s\", branch_delim, current_key);\n> \t\t\tcurrent_branch = branchstr->data;\n> \n> --- 863,886 ----\n> \t\t{\n> \t\t\t/* initialize branch for this pass */\n> \t\t\tappendStringInfo(branchstr, \"%s\", branch);\n> + \t\t\tappendStringInfo(chk_branchstr, \"%s%s%s\", branch_delim, branch, branch_delim);\n> \n> \t\t\t/* get the next sql result tuple */\n> \t\t\tspi_tuple = tuptable->vals[i];\n> \n> \t\t\t/* get the current key and parent */\n> \t\t\tcurrent_key = SPI_getvalue(spi_tuple, spi_tupdesc, 1);\n> + \t\t\tappendStringInfo(chk_current_key, \"%s%s%s\", branch_delim, current_key, branch_delim);\n> \t\t\tcurrent_key_parent = pstrdup(SPI_getvalue(spi_tuple, spi_tupdesc, 2));\n> \n> \t\t\t/* get the current level */\n> \t\t\tsprintf(current_level, \"%d\", level);\n> \n> ! \t\t\t/* check to see if this key is also an ancestor */\n> ! \t\t\tif (strstr(chk_branchstr->data, chk_current_key->data))\n> ! \t\t\t\telog(ERROR, \"infinite recursion detected\");\n> ! \n> ! 
\t\t\t/* OK, extend the branch */\n> \t\t\tappendStringInfo(branchstr, \"%s%s\", branch_delim, current_key);\n> \t\t\tcurrent_branch = branchstr->data;\n> \n> ***************\n> *** 913,918 ****\n> --- 924,935 ----\n> \t\t\t/* reset branch for next pass */\n> \t\t\txpfree(branchstr->data);\n> \t\t\tinitStringInfo(branchstr);\n> + \n> + \t\t\txpfree(chk_branchstr->data);\n> + \t\t\tinitStringInfo(chk_branchstr);\n> + \n> + \t\t\txpfree(chk_current_key->data);\n> + \t\t\tinitStringInfo(chk_current_key);\n> \t\t}\n> \t}\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 13:11:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: About connectby() again" } ]
[ { "msg_contents": "\n\nfixing a problem with the aliases, just want to make sure it goes through\npropelry ...\n\n\n\n", "msg_date": "Thu, 26 Sep 2002 14:39:06 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "ignore" } ]
[ { "msg_contents": "\n> > Further research prooved, that the AIX linker eliminates functions on a per\n> > c file basis if none of them is referenced elsewhere (declared extern or not).\n> > Thus it eliminates the whole conv.c file from the postgres executable since \n> > those functions are only used by the conversion shared objects.\n> \n> Yipes. Surely there is a linker switch to suppress that behavior?\n\n-brtl , but that does a lot more that we don't want and does not work :-(\n\nI think the best thing to do would be to do the following:\n\nlink a postgres.so from all SUBSYS.o's\ncreate postgres.imp from postgres.so (since it is a lib it has all symbols)\nlink postgres with postgres.imp and the SUBSYS.o's\n\nCurrently it does\n\nlink postgres\ncreate postgres.imp from postgres\nlink postgres again using postgres.imp \n\nThis is not so good anyways, since it would actually require a cyclic dependency.\nA remake currently requires to manually remove postgres and postgres.imp .\n\nNot sure how to do this in the Makefiles however :-(\nShould this be done in src/backend/Makefile with a if portname ? I don't like that.\nCan sombody help, please ?\n\nAndreas\n", "msg_date": "Thu, 26 Sep 2002 20:51:04 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...) " }, { "msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> -brtl , but that does a lot more that we don't want and does not work :-(\n\nI think -bnogc is the switch you want.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 28 Sep 2002 13:10:37 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...) " } ]
[ { "msg_contents": "I think I've identified a primary cause for the \"no such pg_clog file\"\nproblem that we've seen reported several times.\n\nA look at htup.h shows that the WAL only stores the low 8 bits of a\ntuple's t_infomask (see xl_heap_header struct). There is some fooling\naround in heapam.c's WAL redo routines to try to reconstitute some of\nthe high-order bits, for example this:\n\n htup->t_infomask = HEAP_XMAX_INVALID | xlhdr.mask;\n\nBut this is implicitly assuming that we can reconstruct the\nXMIN_COMMITTED bit at will. That was true when the WAL code was\nwritten, but with 7.2's ability to recycle allegedly-no-longer-needed\npg_clog data, we cannot simply drop commit status bits.\n\nThe only scenario I've been able to identify in which this actually\ncauses a failure is when VACUUM FULL moves an old tuple and then shortly\nafterwards (before the next checkpoint) there is a crash. Post-crash,\nthe tuple move will be redone from WAL, and the moved tuple will be\ninserted with zeroed-out commit status bits. When we next examine the\ntuple, we have to try to retrieve its commit status from pg_clog ... but\nit's not there anymore.\n\nAs far as I can see, the only realistic solution is to store the full 16\nbits of t_infomask in the WAL. We could do this without increasing WAL\nsize by dropping the t_hoff field from xl_heap_header --- t_hoff is\ncomputable given the number of attributes and the HASNULL/HASOID bits,\nboth of which are available. (Actually, we could save some space now\nby getting rid of t_oid in xl_heap_header; it's not necessary given that\nOID isn't in the fixed tuple header anymore.)\n\nThis will require a WAL format change of course. Fortunately we can do\nthat without forcing a complete initdb (people will have to run\npg_resetxlog if they want to update a 7.3beta2 database without initdb).\n\nI see no way to fix the problem in the context of 7.2. 
Perhaps we\nshould put out a bulletin warning people to avoid VACUUM FULL in 7.2,\nor at least to do CHECKPOINT as soon as possible after one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 16:27:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "WAL shortcoming causes missing-pg_clog-segment problem" }, { "msg_contents": "\n[ Subject changed.]\n\nMarc, please hold on announcing beta2 until we get this resolved. Thanks.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> I think I've identified a primary cause for the \"no such pg_clog file\"\n> problem that we've seen reported several times.\n> \n> A look at htup.h shows that the WAL only stores the low 8 bits of a\n> tuple's t_infomask (see xl_heap_header struct). There is some fooling\n> around in heapam.c's WAL redo routines to try to reconstitute some of\n> the high-order bits, for example this:\n> \n> htup->t_infomask = HEAP_XMAX_INVALID | xlhdr.mask;\n> \n> But this is implicitly assuming that we can reconstruct the\n> XMIN_COMMITTED bit at will. That was true when the WAL code was\n> written, but with 7.2's ability to recycle allegedly-no-longer-needed\n> pg_clog data, we cannot simply drop commit status bits.\n> \n> The only scenario I've been able to identify in which this actually\n> causes a failure is when VACUUM FULL moves an old tuple and then shortly\n> afterwards (before the next checkpoint) there is a crash. Post-crash,\n> the tuple move will be redone from WAL, and the moved tuple will be\n> inserted with zeroed-out commit status bits. When we next examine the\n> tuple, we have to try to retrieve its commit status from pg_clog ... but\n> it's not there anymore.\n> \n> As far as I can see, the only realistic solution is to store the full 16\n> bits of t_infomask in the WAL. 
We could do this without increasing WAL\n> size by dropping the t_hoff field from xl_heap_header --- t_hoff is\n> computable given the number of attributes and the HASNULL/HASOID bits,\n> both of which are available. (Actually, we could save some space now\n> by getting rid of t_oid in xl_heap_header; it's not necessary given that\n> OID isn't in the fixed tuple header anymore.)\n> \n> This will require a WAL format change of course. Fortunately we can do\n> that without forcing a complete initdb (people will have to run\n> pg_resetxlog if they want to update a 7.3beta2 database without initdb).\n> \n> I see no way to fix the problem in the context of 7.2. Perhaps we\n> should put out a bulletin warning people to avoid VACUUM FULL in 7.2,\n> or at least to do CHECKPOINT as soon as possible after one.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 16:32:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "HOLD ON BETA2" }, { "msg_contents": "\nOops, I see beta2's on the web site with yesterday's date. Have they\nbeen announced?\n\n-rw-r--r-- 1 70 70 1072573 Sep 25 10:15 postgresql-test-7.3b2.tar.gz\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> I think I've identified a primary cause for the \"no such pg_clog file\"\n> problem that we've seen reported several times.\n> \n> A look at htup.h shows that the WAL only stores the low 8 bits of a\n> tuple's t_infomask (see xl_heap_header struct). 
There is some fooling\n> around in heapam.c's WAL redo routines to try to reconstitute some of\n> the high-order bits, for example this:\n> \n> htup->t_infomask = HEAP_XMAX_INVALID | xlhdr.mask;\n> \n> But this is implicitly assuming that we can reconstruct the\n> XMIN_COMMITTED bit at will. That was true when the WAL code was\n> written, but with 7.2's ability to recycle allegedly-no-longer-needed\n> pg_clog data, we cannot simply drop commit status bits.\n> \n> The only scenario I've been able to identify in which this actually\n> causes a failure is when VACUUM FULL moves an old tuple and then shortly\n> afterwards (before the next checkpoint) there is a crash. Post-crash,\n> the tuple move will be redone from WAL, and the moved tuple will be\n> inserted with zeroed-out commit status bits. When we next examine the\n> tuple, we have to try to retrieve its commit status from pg_clog ... but\n> it's not there anymore.\n> \n> As far as I can see, the only realistic solution is to store the full 16\n> bits of t_infomask in the WAL. We could do this without increasing WAL\n> size by dropping the t_hoff field from xl_heap_header --- t_hoff is\n> computable given the number of attributes and the HASNULL/HASOID bits,\n> both of which are available. (Actually, we could save some space now\n> by getting rid of t_oid in xl_heap_header; it's not necessary given that\n> OID isn't in the fixed tuple header anymore.)\n> \n> This will require a WAL format change of course. Fortunately we can do\n> that without forcing a complete initdb (people will have to run\n> pg_resetxlog if they want to update a 7.3beta2 database without initdb).\n> \n> I see no way to fix the problem in the context of 7.2. 
Perhaps we\n> should put out a bulletin warning people to avoid VACUUM FULL in 7.2,\n> or at least to do CHECKPOINT as soon as possible after one.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 16:33:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "HOLD ON BETA2" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Oops, I see beta2's on the web site with yesterday's date. Have they\n> been announced?\n\n> -rw-r--r-- 1 70 70 1072573 Sep 25 10:15 postgresql-test-7.3b2.tar.gz\n\nNo, but they've been out there for more than a day. I think it's too\nlate to retract beta2. We could fix this problem and then do a quick\nbeta3, though...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 16:38:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: HOLD ON BETA2 " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Oops, I see beta2's on the web site with yesterday's date. Have they\n> > been announced?\n> \n> > -rw-r--r-- 1 70 70 1072573 Sep 25 10:15 postgresql-test-7.3b2.tar.gz\n> \n> No, but they've been out there for more than a day. I think it's too\n> late to retract beta2. We could fix this problem and then do a quick\n> beta3, though...\n\nYes, that's the only solution. 
Let's not announce beta2 and make things\nworse.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 26 Sep 2002 16:46:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HOLD ON BETA2" } ]
[ { "msg_contents": "\nI am so glad that postgres now keeps track of relationships between rule,\nviews, functions, tables, etc. I've had to re-do all my creation and drop\nscripts but this is definitely for the better.\n\nDuring my testing of my scripts, I have come across this message:\npsql:/u1/cvs73/DataBase/Config/Schema/logconfig.sql:142: WARNING: Relcache reference leak: relation \"positions\" has refcnt 1 instead of 0\n\nWhat does this indicate?\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n-----------------------------------\nNextBus say: \nRiders prefer to arrive just minute \nbefore bus than just minute after.\n\n", "msg_date": "Thu, 26 Sep 2002 13:46:33 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "pg7.3b1" }, { "msg_contents": "On Thu, 2002-09-26 at 16:46, Laurette Cisneros wrote:\n> \n> I am so glad that postgres now keeps track of relationships between rule,\n> views, functions, tables, etc. 
I've had to re-do all my creation and drop\n> scripts but this is definitely for the better.\n> \n> During my testing of my scripts, I have come across this message:\n> psql:/u1/cvs73/DataBase/Config/Schema/logconfig.sql:142: WARNING: Relcache reference leak: relation \"positions\" has refcnt 1 instead of 0\n> \n> What does this indicate?\n\nSomeone (probably me) made a mistake and forgot to release a cache\nhandle.\n\nDo you happen to have a sequence of commands that can reproduce this?\n\n-- \n Rod Taylor\n\n", "msg_date": "26 Sep 2002 16:52:57 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: pg7.3b1" }, { "msg_contents": "I'll see if I can pare down my scripts (they are long) to reproduce this\neasier.\n\nL.\nOn 26 Sep 2002, Rod Taylor wrote:\n\n> On Thu, 2002-09-26 at 16:46, Laurette Cisneros wrote:\n> > \n> > I am so glad that postgres now keeps track of relationships between rule,\n> > views, functions, tables, etc. I've had to re-do all my creation and drop\n> > scripts but this is definitely for the better.\n> > \n> > During my testing of my scripts, I have come across this message:\n> > psql:/u1/cvs73/DataBase/Config/Schema/logconfig.sql:142: WARNING: Relcache reference leak: relation \"positions\" has refcnt 1 instead of 0\n> > \n> > What does this indicate?\n> \n> Someone (probably me) made a mistake and forgot to release a cache\n> handle.\n> \n> Do you happen to have a sequence of commands that can reproduce this?\n> \n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n-----------------------------------\nNextBus say: \nRiders prefer to arrive just minute \nbefore bus than just minute after.\n\n", "msg_date": "Thu, 26 Sep 2002 13:55:53 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: pg7.3b1" }, { "msg_contents": "Ok, finally had time to narrow this down.\n\nHere's the simplified script 
that will reproduce this (this sequence\nreproduces on my system using 7.3b2):\n\n\\echo BEGIN tst.sql\n\ncreate table pp\n ( x integer\n , i\t\ttext\n );\n\ncreate view p as\n select * from pp where i is null;\n \ncomment on view p is\n'This is a comment.';\n \ncreate rule p_ins as on insert to p do instead\n insert into pp\n values ( new.x\n , null\n );\n \ncomment on rule p_ins is 'insert to p goes to pp';\n\n\\echo END tst.sql\n\n\nOn 26 Sep 2002, Rod Taylor wrote:\n\n> On Thu, 2002-09-26 at 16:46, Laurette Cisneros wrote:\n> > \n> > I am so glad that postgres now keeps track of relationships between rule,\n> > views, functions, tables, etc. I've had to re-do all my creation and drop\n> > scripts but this is definitely for the better.\n> > \n> > During my testing of my scripts, I have come across this message:\n> > psql:/u1/cvs73/DataBase/Config/Schema/logconfig.sql:142: WARNING: Relcache reference leak: relation \"positions\" has refcnt 1 instead of 0\n> > \n> > What does this indicate?\n> \n> Someone (probably me) made a mistake and forgot to release a cache\n> handle.\n> \n> Do you happen to have a sequence of commands that can reproduce this?\n> \n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n------------------------------\nDo you know where your bus is?\n\n", "msg_date": "Wed, 2 Oct 2002 16:03:34 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: pg7.3b1" }, { "msg_contents": "Same rule patch, right email this time :)\n\nLock on the rule relation wasn't removed after adding the comment.\n\nOn Wed, 2002-10-02 at 19:03, Laurette Cisneros wrote:\n> Ok, finally had time to narrow this down.\n> \n> Here's the simplified script that will reproduce this (this sequence\n> reproduces on my system using 7.3b2):\n> \n> \\echo BEGIN tst.sql\n> \n> create table pp\n> ( x integer\n> , i\t\ttext\n> );\n> \n> create view p as\n> select * from 
pp where i is null;\n> \n> comment on view p is\n> 'This is a comment.';\n> \n> create rule p_ins as on insert to p do instead\n> insert into pp\n> values ( new.x\n> , null\n> );\n> \n> comment on rule p_ins is 'insert to p goes to pp';\n> \n> \\echo END tst.sql\n> \n> \n> On 26 Sep 2002, Rod Taylor wrote:\n> \n> > On Thu, 2002-09-26 at 16:46, Laurette Cisneros wrote:\n> > > \n> > > I am so glad that postgres now keeps track of relationships between rule,\n> > > views, functions, tables, etc. I've had to re-do all my creation and drop\n> > > scripts but this is definitely for the better.\n> > > \n> > > During my testing of my scripts, I have come across this message:\n> > > psql:/u1/cvs73/DataBase/Config/Schema/logconfig.sql:142: WARNING: Relcache reference leak: relation \"positions\" has refcnt 1 instead of 0\n> > > \n> > > What does this indicate?\n> > \n> > Someone (probably me) made a mistake and forgot to release a cache\n> > handle.\n> > \n> > Do you happen to have a sequence of commands that can reproduce this?\n> > \n> > \n> \n> -- \n> Laurette Cisneros\n> The Database Group\n> (510) 420-3137\n> NextBus Information Systems, Inc.\n> www.nextbus.com\n> ------------------------------\n> Do you know where your bus is?\n> \n> \n-- \n Rod Taylor", "msg_date": "02 Oct 2002 20:16:12 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg7.3b1" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nRod Taylor wrote:\n> Same rule patch, right email this time :)\n> \n> Lock on the rule relation wasn't removed after adding the comment.\n> \n> On Wed, 2002-10-02 at 19:03, Laurette Cisneros wrote:\n> > Ok, finally had time to narrow this down.\n> > \n> > Here's the simplified script that will reproduce 
this (this sequence\n> > reproduces on my system using 7.3b2):\n> > \n> > \\echo BEGIN tst.sql\n> > \n> > create table pp\n> > ( x integer\n> > , i\t\ttext\n> > );\n> > \n> > create view p as\n> > select * from pp where i is null;\n> > \n> > comment on view p is\n> > 'This is a comment.';\n> > \n> > create rule p_ins as on insert to p do instead\n> > insert into pp\n> > values ( new.x\n> > , null\n> > );\n> > \n> > comment on rule p_ins is 'insert to p goes to pp';\n> > \n> > \\echo END tst.sql\n> > \n> > \n> > On 26 Sep 2002, Rod Taylor wrote:\n> > \n> > > On Thu, 2002-09-26 at 16:46, Laurette Cisneros wrote:\n> > > > \n> > > > I am so glad that postgres now keeps track of relationships between rule,\n> > > > views, functions, tables, etc. I've had to re-do all my creation and drop\n> > > > scripts but this is definitely for the better.\n> > > > \n> > > > During my testing of my scripts, I have come across this message:\n> > > > psql:/u1/cvs73/DataBase/Config/Schema/logconfig.sql:142: WARNING: Relcache reference leak: relation \"positions\" has refcnt 1 instead of 0\n> > > > \n> > > > What does this indicate?\n> > > \n> > > Someone (probably me) made a mistake and forgot to release a cache\n> > > handle.\n> > > \n> > > Do you happen to have a sequence of commands that can reproduce this?\n> > > \n> > > \n> > \n> > -- \n> > Laurette Cisneros\n> > The Database Group\n> > (510) 420-3137\n> > NextBus Information Systems, Inc.\n> > www.nextbus.com\n> > ------------------------------\n> > Do you know where your bus is?\n> > \n> > \n> -- \n> Rod Taylor\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 22:29:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg7.3b1" }, { "msg_contents": "\nPatch applied with Tom's fix for heap_close(NoLock). Thanks.\n\n---------------------------------------------------------------------------\n\n\nRod Taylor wrote:\n> Same rule patch, right email this time :)\n> \n> Lock on the rule relation wasn't removed after adding the comment.\n> \n> On Wed, 2002-10-02 at 19:03, Laurette Cisneros wrote:\n> > Ok, finally had time to narrow this down.\n> > \n> > Here's the simplified script that will reproduce this (this sequence\n> > reproduces on my system using 7.3b2):\n> > \n> > \\echo BEGIN tst.sql\n> > \n> > create table pp\n> > ( x integer\n> > , i\t\ttext\n> > );\n> > \n> > create view p as\n> > select * from pp where i is null;\n> > \n> > comment on view p is\n> > 'This is a comment.';\n> > \n> > create rule p_ins as on insert to p do instead\n> > insert into pp\n> > values ( new.x\n> > , null\n> > );\n> > \n> > comment on rule p_ins is 'insert to p goes to pp';\n> > \n> > \\echo END tst.sql\n> > \n> > \n> > On 26 Sep 2002, Rod Taylor wrote:\n> > \n> > > On Thu, 2002-09-26 at 16:46, Laurette Cisneros wrote:\n> > > > \n> > > > I am so glad that postgres now keeps track of relationships between rule,\n> > > > views, functions, tables, etc. 
I've had to re-do all my creation and drop\n> > > > scripts but this is definitely for the better.\n> > > > \n> > > > During my testing of my scripts, I have come across this message:\n> > > > psql:/u1/cvs73/DataBase/Config/Schema/logconfig.sql:142: WARNING: Relcache reference leak: relation \"positions\" has refcnt 1 instead of 0\n> > > > \n> > > > What does this indicate?\n> > > \n> > > Someone (probably me) made a mistake and forgot to release a cache\n> > > handle.\n> > > \n> > > Do you happen to have a sequence of commands that can reproduce this?\n> > > \n> > > \n> > \n> > -- \n> > Laurette Cisneros\n> > The Database Group\n> > (510) 420-3137\n> > NextBus Information Systems, Inc.\n> > www.nextbus.com\n> > ------------------------------\n> > Do you know where your bus is?\n> > \n> > \n> -- \n> Rod Taylor\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 12:27:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg7.3b1" } ]
[ { "msg_contents": "Patrick Welche's recent problems (see pgsql-general) point out that the\nold CREATE CONSTRAINT TRIGGER syntax that optionally omits a \"FROM\ntable\" clause doesn't work anymore --- the system *needs* tgconstrrelid\nto be set in an RI constraint trigger record, because the RI triggers\nnow use that OID to find the referenced table. (The table name in the\ntgargs field isn't used anymore, mainly because it's not schema-aware.)\n\nThis means that RI trigger definitions dating back to 7.0 (or whenever\nit was that we fixed the pg_dump bug about not dumping tgconstrrelid)\ndon't work anymore.\n\nThere are a couple things I think we should do. One: modify the CREATE\nCONSTRAINT TRIGGER code to try to extract a foreign relation name from\nthe tgargs if FROM is missing. Without this, we have no hope of loading\nworking FK trigger definitions from old dumps. Two: modify pg_dump to\nextract a name from the tgargs in the same fashion. I'd rather have\npg_dump do this than the backend, and this will at least make things\nbetter in the case where you're using a 7.3 pg_dump against an older\ndatabase.\n\nHowever, if we are going to put that kind of knowledge into pg_dump,\nit would only be a small further step to have it dump these triggers\nas ALTER TABLE ADD CONSTRAINT commands instead. 
Which would be a lot\nbetter for forward compatibility than dumping the raw triggers.\n\nThoughts?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 16:57:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Reconstructing FKs in pg_dump" }, { "msg_contents": "On Thu, 2002-09-26 at 16:57, Tom Lane wrote:\n> This means that RI trigger definitions dating back to 7.0 (or whenever\n> it was that we fixed the pg_dump bug about not dumping tgconstrrelid)\n> don't work anymore.\n\nI thought 7.0 introduced foreign keys in the first place, so perhaps\n7.1?\n\nHowever, if they're coming from 7.0 or earlier would it be appropriate\nto have them bounce through 7.2 / 7.1 first?\n\nPain in the ass to dump and reload twice to get to the latest, but since\nthey only upgrade once every 2 to 3 years...\n\nIs this the only problem that 7.0 people are going to experience (server\nside, SQL changes are abundant)?\n\n> However, if we are going to put that kind of knowledge into pg_dump,\n> it would only be a small further step to have it dump these triggers\n> as ALTER TABLE ADD CONSTRAINT commands instead. Which would be a lot\n> better for forward compatibility than dumping the raw triggers.\n\nIf this type of stuff has to be done, then this is probably the best way\nto go.\n\n-- \n Rod Taylor\n\n", "msg_date": "26 Sep 2002 17:12:09 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Reconstructing FKs in pg_dump" }, { "msg_contents": "On Thu, 2002-09-26 at 16:57, Tom Lane wrote:\n<snip>\n> There are a couple things I think we should do. One: modify the CREATE\n> CONSTRAINT TRIGGER code to try to extract a foreign relation name from\n> the tgargs if FROM is missing. Without this, we have no hope of loading\n> working FK trigger definitions from old dumps. Two: modify pg_dump to\n> extract a name from the tgargs in the same fashion. 
I'd rather have\n> pg_dump do this than the backend, and this will at least make things\n> better in the case where you're using a 7.3 pg_dump against an older\n> database.\n<snip>\n> \n> Thoughts?\n> \n\nI'm trying to think of the cases where this extraction might fail, but\nmaybe more important is what happens if it does fail? \n\nRobert Treat\n\n", "msg_date": "26 Sep 2002 17:15:05 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Reconstructing FKs in pg_dump" }, { "msg_contents": "Rod Taylor <rbt@rbt.ca> writes:\n> However, if they're coming from 7.0 or earlier would it be appropriate\n> to have them bounce through 7.2 / 7.1 first?\n\nWon't help. 7.2 will dump 'em out without a FROM clause, just like they\nwere loaded.\n\n> Is this the only problem that 7.0 people are going to experience (server\n> side, SQL changes are abundant)?\n\nYou're missing the point. Welche was upgrading *from 7.2*. But his\ntrigger definitions had a dump/reload history going back to 7.0.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 17:20:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reconstructing FKs in pg_dump " }, { "msg_contents": "Robert Treat <xzilla@users.sourceforge.net> writes:\n> I'm trying to think of the cases where this extraction might fail, but\n> maybe more important is what happens if it does fail? \n\nThen you have broken RI triggers ... which is the problem now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 17:22:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reconstructing FKs in pg_dump " }, { "msg_contents": "> > Is this the only problem that 7.0 people are going to experience (server\n> > side, SQL changes are abundant)?\n> \n> You're missing the point. Welche was upgrading *from 7.2*. But his\n> trigger definitions had a dump/reload history going back to 7.0.\n\nOh.. 
I certainly did.\n\n-- \n Rod Taylor\n\n", "msg_date": "26 Sep 2002 17:35:48 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: Reconstructing FKs in pg_dump" }, { "msg_contents": "On Thu, 2002-09-26 at 17:22, Tom Lane wrote:\n> Robert Treat <xzilla@users.sourceforge.net> writes:\n> > I'm trying to think of the cases where this extraction might fail, but\n> > maybe more important is what happens if it does fail? \n> \n> Then you have broken RI triggers ... which is the problem now.\n> \n\nUh...yeah, I got that part. I meant what will be done if/when it fails?\nThrow a WARNING and keep going? Throw an ERROR and die? \n\nRobert Treat \n\n", "msg_date": "26 Sep 2002 17:39:50 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Reconstructing FKs in pg_dump" }, { "msg_contents": "On Thu, 26 Sep 2002, Tom Lane wrote:\n\n> Patrick Welche's recent problems (see pgsql-general) point out that the\n> old CREATE CONSTRAINT TRIGGER syntax that optionally omits a \"FROM\n> table\" clause doesn't work anymore --- the system *needs* tgconstrrelid\n> to be set in an RI constraint trigger record, because the RI triggers\n> now use that OID to find the referenced table. (The table name in the\n> tgargs field isn't used anymore, mainly because it's not schema-aware.)\n>\n> This means that RI trigger definitions dating back to 7.0 (or whenever\n> it was that we fixed the pg_dump bug about not dumping tgconstrrelid)\n> don't work anymore.\n>\n> There are a couple things I think we should do. One: modify the CREATE\n> CONSTRAINT TRIGGER code to try to extract a foreign relation name from\n> the tgargs if FROM is missing. Without this, we have no hope of loading\n> working FK trigger definitions from old dumps. Two: modify pg_dump to\n> extract a name from the tgargs in the same fashion. 
I'd rather have\n> pg_dump do this than the backend, and this will at least make things\n> better in the case where you're using a 7.3 pg_dump against an older\n> database.\n\nI'd worry about doing things only to pg_dump since that'd still leave\npeople that did use the old dump in the dark and there'd be nothing even\nindicating a problem until they did something that used the constraint.\nEven a notice for a missing FROM would be better (although at that\npoint how far is it to just fixing the problem). I can look at it this\nweekend (since it probably was my bug in the first place) unless you'd\nrather do it.\n\n> However, if we are going to put that kind of knowledge into pg_dump,\n> it would only be a small further step to have it dump these triggers\n> as ALTER TABLE ADD CONSTRAINT commands instead. Which would be a lot\n> better for forward compatibility than dumping the raw triggers.\n\nWasn't there still some question about the fact that ATAC causes a\ncheck of the constraint which for large tables is not insignificant.\nI don't remember if there was any consensus on how to deal with that.\n\n\n", "msg_date": "Thu, 26 Sep 2002 15:10:35 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Reconstructing FKs in pg_dump" }, { "msg_contents": "Robert Treat <xzilla@users.sourceforge.net> writes:\n> On Thu, 2002-09-26 at 17:22, Tom Lane wrote:\n>> Robert Treat <xzilla@users.sourceforge.net> writes:\n> I'm trying to think of the cases where this extraction might fail, but\n> maybe more important is what happens if it does fail? \n>> \n>> Then you have broken RI triggers ... which is the problem now.\n\n> Uh...yeah, I got that part. I meant what will be done if/when it fails?\n> Throw a WARNING and keep going? Throw an ERROR and die? 
\n\nWhat I was thinking of was to do the following in CREATE CONSTRAINT\nTRIGGER:\n\n\tif (no FROM clause)\n\t{\n\t\ttry to extract table name from given tgargs;\n\t\ttry to look up table OID;\n\t\tif successful, insert table OID into tgconstrrelid;\n\t}\n\nIf the lookup fails, you'd be left creating a constraint trigger with\nzero tgconstrrelid, which is what's happening now. That would error\nout upon use (if it's really an RI trigger), thus alerting you that\nyou have a broken trigger. (We could add a couple of lines in the\nRI triggers to cause the error message to be more helpful than\n\"Relation 0 not found\".)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 18:22:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reconstructing FKs in pg_dump " }, { "msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n>> However, if we are going to put that kind of knowledge into pg_dump,\n>> it would only be a small further step to have it dump these triggers\n>> as ALTER TABLE ADD CONSTRAINT commands instead. Which would be a lot\n>> better for forward compatibility than dumping the raw triggers.\n\n> Wasn't there still some question about the fact that ATAC causes a\n> check of the constraint which for large tables is not insignificant.\n> I don't remember if there was any consensus on how to deal with that.\n\nHmm, good point. 
That's probably why we didn't go ahead and do it\nalready...\n\nMaybe we should just put the lookup hack into the backend's CREATE\nCONSTRAINT TRIGGER code and leave it at that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 18:25:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reconstructing FKs in pg_dump " }, { "msg_contents": "\nOn Thu, 26 Sep 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> >> However, if we are going to put that kind of knowledge into pg_dump,\n> >> it would only be a small further step to have it dump these triggers\n> >> as ALTER TABLE ADD CONSTRAINT commands instead. Which would be a lot\n> >> better for forward compatibility than dumping the raw triggers.\n>\n> > Wasn't there still some question about the fact that ATAC causes a\n> > check of the constraint which for large tables is not insignificant.\n> > I don't remember if there was any consensus on how to deal with that.\n>\n> Hmm, good point. That's probably why we didn't go ahead and do it\n> already...\n>\n> Maybe we should just put the lookup hack into the backend's CREATE\n> CONSTRAINT TRIGGER code and leave it at that.\n\nThat seems reasonable. And probably not too hard. There might still\nbe cases where we can't get it, and I think we probably should at least\nthrow a notice on the create in that case, the admin will *probably*\nignore it, but if they want to fix the situation right away they can.\n\n\n\n", "msg_date": "Thu, 26 Sep 2002 15:44:43 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Reconstructing FKs in pg_dump " }, { "msg_contents": "From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> However, if we are going to put that kind of knowledge into pg_dump,\n> it would only be a small further step to have it dump these triggers\n> as ALTER TABLE ADD CONSTRAINT commands instead. 
Which would be a lot\n> better for forward compatibility than dumping the raw triggers.\n\nThere was some talk of adding Rod Taylor's identifies upgrade script to\ncontrib, or mentioning it in the release. I think that it upgrades Foreign\nkey, Unique, and Serial constraints, is that relevant here? Could it be\nused (or modified) to handle this situation? Just a thought.\n", "msg_date": "Fri, 27 Sep 2002 23:03:25 -0400", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: Reconstructing FKs in pg_dump" }, { "msg_contents": "\nBoth are done, and in CVS in /contrib/adddepend.\n\n---------------------------------------------------------------------------\n\nMatthew T. O'Connor wrote:\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> > However, if we are going to put that kind of knowledge into pg_dump,\n> > it would only be a small further step to have it dump these triggers\n> > as ALTER TABLE ADD CONSTRAINT commands instead. Which would be a lot\n> > better for forward compatibility than dumping the raw triggers.\n> \n> There was some talk of adding Rod Taylor's identifies upgrade script to\n> contrib, or mentioning it in the release. I think that it upgrades Foreign\n> key, Unique, and Serial constraints, is that relevant here? Could it be\n> used (or modified) to handle this situation? Just a thought.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 28 Sep 2002 01:04:11 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Reconstructing FKs in pg_dump" } ]
[ { "msg_contents": "\nthe following was sent to the php developer's list, and they came back with:\n\n> > Isn't it generally better (where \"better\" means more secure,\n> > efficient, and easily maintained) to handle database access\n> > control using PostgreSQL's native access mappings?\n>\n> I would think so, and IMHO, that's where pgsql access control\n> belongs, with pgsql.\n\nas best i can understand, there is no way to get apach/php/pgsql configured\n(using \"PostgreSQL's native access mappings\") that would disallow php code\nin one virtual host from connecting to any database on the system.\n\ni understand that on a user level, we can control which tables they have\naccess to (disregarding that almost all access will be by the \"web\" user).\n\ncan anyone make some valid arguments for/against the patch in php?\nor any suggestions on how to do it in pgsql?\n\n----- Forwarded message from Jim Mercer <jim@reptiles.org> -----\n\nDate: Thu, 26 Sep 2002 14:54:45 -0400\nFrom: Jim Mercer <jim@reptiles.org>\nTo: pgsql-general@postgreSQL.org\nSubject: PHP-4.2.3 patch to allow restriction of database access\n\n\nthe patch is at:\nftp://ftp.reptiles.org/pub/php/php-pgsql.patch\n\nthis patch adds the config variable pgsql.allowed_dblist\n\nby default it has no value, meaning all databases are accessible\n\nit can contain a colon delimited list of databases that are accessible.\n\nif the database accessed is not in the list, and the list is not null,\nthen an error is returned as if the database did not exist\n\nthis patch is relative to php-4.2.3\n\nthis function would be very useful to apache/virtual hosting.\n\ni have tested with the following in my apache httpd.conf:\n\n<Directory /home/www/htdocs/jim>\n php_admin_value pgsql.allowed_dblist \"jim:billing\"\n</Directory>\n\nalthough it can be accomplished by other means, setting the variable to a\nvalue of \":\" effectively locks the code out of pgsql.\n\nalso, a special tag of \"-all-\" will allow access to all 
databases.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n\n----- End forwarded message -----\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Thu, 26 Sep 2002 20:26:51 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": true, "msg_subject": "hacker help: PHP-4.2.3 patch to allow restriction of database access" }, { "msg_contents": "On Thu, 26 Sep 2002, Jim Mercer wrote:\n\n> \n> the following was sent to the php developer's list, and they came back with:\n> \n> > > Isn't it generally better (where \"better\" means more secure,\n> > > efficient, and easily maintained) to handle database access\n> > > control using PostgreSQL's native access mappings?\n> >\n> > I would think so, and IMHO, that's where pgsql access control\n> > belongs, with pgsql.\n\nI totally disagree. It is a language level restriction, not a database\nlevel one, so why back it into Postgres? Just parse 'conninfo' when it is \npg_(p)connect() and check it against the configuration setting.\n\nThe patch seems fine. I am unsure as to how useful it is.\n\nsystem(\"/usr/local/pgsql/bin/psql -U jim -c \\\"select 'i got\n\t\t\tin';\\\" template1\");\n\nGavin\n\n\n", "msg_date": "Fri, 27 Sep 2002 11:15:35 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of" }, { "msg_contents": "On Fri, Sep 27, 2002 at 11:15:35AM +1000, Gavin Sherry wrote:\n> On Thu, 26 Sep 2002, Jim Mercer wrote:\n> > > I would think so, and IMHO, that's where pgsql access control\n> > > belongs, with pgsql.\n> \n> I totally disagree. It is a language level restriction, not a database\n> level one, so why back it into Postgres? 
Just parse 'conninfo' when it is \n> pg_(p)connect() and check it against the configuration setting.\n\nwhich is effectively what my code does, except i was lazy, and i let the\nconnection proceed, then check if PQdb() is in the auth list, and fail\nif it isn't. (i figured that way if there was any silliness in the conninfo\nstring, PQconnect would figure it out).\n\n> The patch seems fine. I am unsure as to how useful it is.\n> \n> system(\"/usr/local/pgsql/bin/psql -U jim -c \\\"select 'i got\n> \t\t\tin';\\\" template1\");\n\nthat wouldn't work so well in safe_mode. which is the area i'm playing with.\n\nmaybe not _totally_ secure, but much moreso than nothing.\n\nand retricting virtual hosts to their own data sets relieves me of worry\nabout \"GRANT all ON blah TO public;\".\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Thu, 26 Sep 2002 21:49:54 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": true, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of database\n\taccess" }, { "msg_contents": "On Thu, 26 Sep 2002, Jim Mercer wrote:\n\n> On Fri, Sep 27, 2002 at 11:15:35AM +1000, Gavin Sherry wrote:\n> > On Thu, 26 Sep 2002, Jim Mercer wrote:\n> > > > I would think so, and IMHO, that's where pgsql access control\n> > > > belongs, with pgsql.\n> > \n> > I totally disagree. It is a language level restriction, not a database\n> > level one, so why back it into Postgres? Just parse 'conninfo' when it is \n> > pg_(p)connect() and check it against the configuration setting.\n> \n> which is effectively what my code does, except i was lazy, and i let the\n> connection proceed, then check if PQdb() is in the auth list, and fail\n\nAhh yes. I meant to say this. No point being lazy when it comes to\nsecurity.\n\n> maybe not _totally_ secure, but much moreso than nothing.\n> \n\nI was basically just suggesting that its effect needs to be\ndocumented. 
\"This needs to be used in conjunction with other forms of\nsecurity....\"\n\nGavin\n\n\n", "msg_date": "Fri, 27 Sep 2002 12:06:43 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of" }, { "msg_contents": "On Fri, Sep 27, 2002 at 12:06:43PM +1000, Gavin Sherry wrote:\n> On Thu, 26 Sep 2002, Jim Mercer wrote:\n> > maybe not _totally_ secure, but much moreso than nothing.\n> \n> I was basically just suggesting that its effect needs to be\n> documented. \"This needs to be used in conjunction with other forms of\n> security....\"\n\ndocumentation? what's that? 8^)\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Thu, 26 Sep 2002 22:08:29 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": true, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of database\n\taccess" }, { "msg_contents": "Jim Mercer <jim@reptiles.org> writes:\n> as best i can understand, there is no way to get apach/php/pgsql configured\n> (using \"PostgreSQL's native access mappings\") that would disallow php code\n> in one virtual host from connecting to any database on the system.\n\nBetraying my ignorance of PHP here: what does a server supporting\nmultiple virtual hosts look like from the database's end? Can we\ntell the difference at all between connections initiated on behalf\nof one virtual host from those initiated on behalf of another?\n\nIf we can tell 'em apart (for instance, if they differ in apparent\nclient IP address) then it'd make sense to put enforcement on the\ndatabase side. 
If we can't tell 'em apart, then we need some help\nfrom the PHP interface code so that we can tell 'em apart.\n\nProceeding on the assumption that we do need some help ...\n\n> this patch adds the config variable pgsql.allowed_dblist\n> by default it has no value, meaning all databases are accessible\n> it can contain a colon delimited list of databases that are accessible.\n\nSeems like this hard-wires a rather narrow view of what sorts of\nprotection restrictions you need. Might I suggest instead that\nan appropriate config variable would be a list of Postgres user ids\nthat the virtual host is allowed to connect as? Then the database's\nusual protection mechanisms could be used to allow/disallow connection\nto particular databases, if that's what you want. But this does more:\nit lets different virtual hosts connect to the same database as\ndifferent users, and then access within that DB can be controlled using\nthe regular Postgres access-control mechanisms.\n\nEssentially, the idea here is to ensure that the DB can tell virtual\nhosts apart as different users.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 23:42:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of database\n\taccess" }, { "msg_contents": "On Thu, Sep 26, 2002 at 11:42:44PM -0400, Tom Lane wrote:\n> Jim Mercer <jim@reptiles.org> writes:\n> > as best i can understand, there is no way to get apach/php/pgsql configured\n> > (using \"PostgreSQL's native access mappings\") that would disallow php code\n> > in one virtual host from connecting to any database on the system.\n> \n> Betraying my ignorance of PHP here: what does a server supporting\n> multiple virtual hosts look like from the database's end? 
Can we\n> tell the difference at all between connections initiated on behalf\n> of one virtual host from those initiated on behalf of another?\n\nnormally (in my experience) php is linked into apache, and pgsql is linked into\nphp.\n\napache runs as the same user (unless you use suexec and a cgi version of php).\n\npgsql's knowledge of the php process is only what is passed on by the user.\n\nsince there is no IP addr specific to the process, we can't easily use\nhost-based authentication.\n\nfor domain sockets, pgsql only gets the UID of the httpd process.\n\nsince all of the virtual hosts are run by the same uid, there is no way\nto differentiate between the virtual hosts.\n\none could attempt to use a specific username and hardcoded password, but\nthat leaves the password in plain text in the php code.\n\nand that does not stop someone from writing code to browse the available\ndatabases for tables set with \"GRANT ALL ON blah to PUBLIC;\".\n\nmy patch is an attempt to put an immutable list of databases in the apache\nconfig (safe from modification by normal users). and to have PQconnect()\ncheck against that list before allowing access. 
the list would be specific\nto a virtual host (and/or the directort hierarchy of the pages).\n\nit is possible to pass such a list to pgsql through environment variables,\nbut those can be overridden by users.\n\nthe php-dev people are giving me a hard time saying that this level of\nsecurity should be managed internally by pgsql.\n\ni'm trying to explain to them that it isn't, and that my patch allows this\nsecurity to happen.\n\nif libpq had an additional facility where PQconnect checked against a list\npassed to it in some fashion, then we could probably just pass that through\nin the php modules, and they'd probably be more content with that as it is\njust part of the pgsql API.\n\ni'm thinking something like a wrapper function like:\n\nPGconn *PQconnect_restricted(char *conninfo, char *restricted_list)\n{\n\t// break out conninfo\n\t...\n\n\tif (restricted_list != NULL) {\n\t\t// check to see if dbName is in the list\n\t\t....\n\t\tif (not in list) {\n\t\t\tfail as if dbName did not exist\n\t\t}\n\t}\n\treturn(PQconnect(conninfo);\n}\n\n(i'm sure someone more familiar with the code could come up with a more\n refined way of doing this)\n\n> > this patch adds the config variable pgsql.allowed_dblist\n> > by default it has no value, meaning all databases are accessible\n> > it can contain a colon delimited list of databases that are accessible.\n> \n> Seems like this hard-wires a rather narrow view of what sorts of\n> protection restrictions you need. Might I suggest instead that\n> an appropriate config variable would be a list of Postgres user ids\n> that the virtual host is allowed to connect as? Then the database's\n> usual protection mechanisms could be used to allow/disallow connection\n> to particular databases, if that's what you want. 
But this does more:\n> it lets different virtual hosts connect to the same database as\n> different users, and then access within that DB can be controlled using\n> the regular Postgres access-control mechanisms.\n\nideally, i'd like to have users-per-database, as opposed to the global\nmodel we have now. i'm tired of maintaining seperate pgsql userlists and\napplication userlists. probably a pipe dream, but, well, there you are. 8^)\n\nif we are willing to modify libpq to support a \"white-list\", then what you\nare suggesting is quite possible.\n\nto satisfy the php-dev people, we just need to extend the API to require/allow\nsuch a white-list to be processed.\n\npassing the white-list from httpd.conf -> php -> libpq is an easy enough\ntweak.\n\ni suspect the php-dev people are unhappy with my patch because it is including\nlogic (ie. parsing the white-list) which they don't think php should be\nresponsible for.\n\npersonally, i think such an attitude is too rigid, but i'm also thinking a\nwhite-list mechanism would be useful in other contexts as well.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Fri, 27 Sep 2002 00:31:48 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": true, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of database\n\taccess" }, { "msg_contents": "On Thu, 2002-09-26 at 22:42, Tom Lane wrote:\n> Jim Mercer <jim@reptiles.org> writes:\n> > as best i can understand, there is no way to get apach/php/pgsql configured\n> > (using \"PostgreSQL's native access mappings\") that would disallow php code\n> > in one virtual host from connecting to any database on the system.\n> \n> Betraying my ignorance of PHP here: what does a server supporting\n> multiple virtual hosts look like from the database's end? 
Can we\n> tell the difference at all between connections initiated on behalf\n> of one virtual host from those initiated on behalf of another?\nNope, you can't to the best of my knowledge. You just get a standard\nconnect string. (Assuming NAME based vHosts here, which is what most\nshould be, modulo SSL based ones). \n\n[snip]\n \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "27 Sep 2002 05:49:27 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of" }, { "msg_contents": "Jim Mercer writes:\n\n> ideally, i'd like to have users-per-database, as opposed to the global\n> model we have now.\n\nThat's in the works. Some form of this will be in 7.3.\n\n> if we are willing to modify libpq to support a \"white-list\", then what you\n> are suggesting is quite possible.\n\nHow would you store such a list and prevent users from simply unsetting\nit?\n\n> i suspect the php-dev people are unhappy with my patch because it is including\n> logic (ie. parsing the white-list) which they don't think php should be\n> responsible for.\n\n From my reading of the discussion, I think they have not understood that\nthe PostgreSQL server has no way to distinguish different virtual host\nidentities. I think your feature is quite reasonable, if you list users\ninstead of databases.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sat, 28 Sep 2002 13:08:36 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of" }, { "msg_contents": "On Sat, Sep 28, 2002 at 01:08:36PM +0200, Peter Eisentraut wrote:\n> Jim Mercer writes:\n> > ideally, i'd like to have users-per-database, as opposed to the global\n> > model we have now.\n> \n> That's in the works. 
Some form of this will be in 7.3.\n\ncool!\n\n> > if we are willing to modify libpq to support a \"white-list\", then what you\n> > are suggesting is quite possible.\n> \n> How would you store such a list and prevent users from simply unsetting\n> it?\n\nthe list is something determined by the client, effectively, in this scenario.\n\nbasically, i'm just looking at a libpq function that will take a white-list,\nand only allow connections through PQconnect() based on that list.\n\nthe reasoning for this is that postmaster has no ability to differentiate\nbetween incoming sessions, and as such, storing the list in the server makes\nno sense, the server won't know how to apply the list.\n\nin the scenario i'm working with, apache/php/libpq are safe from change by\nthe users. apache has the ability to pass values through php to libpq which\nthe user cannot change.\n\nso apache would tell libpq what tables _this_ instance of apache/php/libpq\ncan access.\n\nsimply using environment variables is not good enough, as the user can\nchange their values in their php scripts.\n\n> > i suspect the php-dev people are unhappy with my patch because it is including\n> > logic (ie. parsing the white-list) which they don't think php should be\n> > responsible for.\n> \n> From my reading of the discussion, I think they have not understood that\n> the PostgreSQL server has no way to distinguish different virtual host\n> identities. I think your feature is quite reasonable, if you list users\n> instead of databases.\n\nwell, for my purposes, it is _databases_ i'm more concerned about. each\nvirtual host should be restricted to specific databases. 
this way each user\nis entitled to mess up their own world, but not mess with other people's.\n\nas it currently stands, virtual hosts can trample all over other databases,\nand with the nature of a single uid for all apache/php/libpq proceses,\nthey are generally doing it with the same pgsql user.\n\nvigilience over the user-level permissions is not something i trust the users\nto do. 8^(\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Sat, 28 Sep 2002 09:23:34 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": true, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of database\n\taccess" }, { "msg_contents": "Jim Mercer <jim@reptiles.org> wrote:\n\n> as it currently stands, virtual hosts can trample all over other\ndatabases,\n> and with the nature of a single uid for all apache/php/libpq proceses,\n> they are generally doing it with the same pgsql user.\n\nI haven't followed the whole thread, so perhaps I missed something. But why\nnot just use password authentication to the database with a different user\nfor each database? Ok, one has to store the plain-text passwords in the php\nfiles. 
You have to protect your users from reading each others files anyway;\nthis can be done.\n\nAt least you can set up different users per database, so that it doesn't\nmatter if the proposed restriction setting is by database or by user.\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Sat, 28 Sep 2002 15:57:27 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of database\n\taccess" }, { "msg_contents": "On Sat, Sep 28, 2002 at 03:57:27PM +0200, Michael Paesold wrote:\n> Jim Mercer <jim@reptiles.org> wrote:\n> > as it currently stands, virtual hosts can trample all over other\n> databases,\n> > and with the nature of a single uid for all apache/php/libpq proceses,\n> > they are generally doing it with the same pgsql user.\n> \n> I haven't followed the whole thread, so perhaps I missed something. But why\n> not just use password authentication to the database with a different user\n> for each database? Ok, one has to store the plain-text passwords in the php\n> files. You have to protect your users from reading each others files anyway;\n> this can be done.\n\nthat can be done, but plain-text passwords are not good. also, it doesn't\nstop users from cruising other databases for unprotected data.\nmy patch will control that, at least in the context of apach/php/libpq.\n\n> At least you can set up different users per database, so that it doesn't\n> matter if the proposed restriction setting is by database or by user.\n\nmost of the databases have one user, that of the httpd process.\nfrom what i've seen, this is fairly standard with virtual hosting.\n\nuntil we have per-database users, generally what you end up doing is creating\na per-database user/password table, and then write applications that control\nthings based on that table, as opposed to the system table. 
this means that\nall of the tables in a database need to be read/write by one central user.\n\ni've always found this hokey, but necessary.\n\nthe up-coming per-table userlist will help this alot.\n\n-- \n[ Jim Mercer jim@reptiles.org +1 416 410-5633 ]\n[ I want to live forever, or die trying. ]\n", "msg_date": "Sat, 28 Sep 2002 10:02:36 -0400", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": true, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of database\n\taccess" }, { "msg_contents": "Jim Mercer writes:\n\n> the reasoning for this is that postmaster has no ability to differentiate\n> between incoming sessions, and as such, storing the list in the server makes\n> no sense, the server won't know how to apply the list.\n\nRight, but libpq also has no concept of what you call \"incoming session\".\nPHP is the first interface that came up with that notion. If we have more\nclients requesting that kind of support, we can think about it, but for\nnow you should think of putting it into PHP first.\n\n> well, for my purposes, it is _databases_ i'm more concerned about.\n\nOK, so *you* put\n\nlocal sameuser ...\n\ninto pg_hba.conf and be done. The rest of the user community can decide\nfor themselves. This is especially important since with the arrival of\nschemas there is a whole new way to manage multiple users on a server,\nwhich other users might be interested in.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 29 Sep 2002 14:26:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: hacker help: PHP-4.2.3 patch to allow restriction of" } ]
[ { "msg_contents": "I heard people asking for listen/notify support in the jdbc driver from different pgsql lists. I don't know what the status is about that, but here's one solution.\n\nI did this quick hack for our own purposes, maybe someone would like to pick this code up and use it as we do...\n\nThis code is not well tested, but it will be in the near future, if anyone have any comments/spot any bugs, please tell me.\n\nThe code is in c++ & java, and was tested/works on win32 and linux/alpha (64bit).\nIt needs the jdk JNI headers, since it uses the java native interface ...\n\nJust to be clear, this way we're using one separate connection to look at notifys, and do the work (if any) in a regular JDBC connection when we get the notify.\n\nMagnus\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\n", "msg_date": "Fri, 27 Sep 2002 02:47:52 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "[ANNOUNCE] PQNotify java listen / notify hack" } ]
[ { "msg_contents": "... And maybe i should attach the code aswell :)\n\nI'm not subscribed to pgsql-jdbc or pgsql-announce, so please CC me if your responding...\n\nMagnus\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-", "msg_date": "Fri, 27 Sep 2002 02:53:52 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "[ANNOUNCE] PQNotify java listen / notify hack" } ]
[ { "msg_contents": "When a cascaded column drop is removing the last column, drop the table\ninstead. Regression tests via domains.\n\nReported by Tim Knowles.\n\n-- \n Rod Taylor", "msg_date": "26 Sep 2002 21:00:16 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": true, "msg_subject": "Cascaded Column Drop" }, { "msg_contents": "Rod Taylor <rbt@rbt.ca> writes:\n> When a cascaded column drop is removing the last column, drop the table\n> instead. Regression tests via domains.\n\nIs that a good idea, or should we refuse the drop entirely? A table\ndrop zaps a lot more stuff than a column drop.\n\nWhat I was actually wondering about after reading Tim's report was\nwhether we could support zero-column tables, which would eliminate the\nneed for the special case altogether. I have not looked to see how\nextensive are the places that assume tuples have > 0 columns ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 26 Sep 2002 23:47:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cascaded Column Drop " }, { "msg_contents": "Tom Lane wrote:\n> Rod Taylor <rbt@rbt.ca> writes:\n> > When a cascaded column drop is removing the last column, drop the table\n> > instead. Regression tests via domains.\n> \n> Is that a good idea, or should we refuse the drop entirely? A table\n> drop zaps a lot more stuff than a column drop.\n\nI think we should refuse the drop. It is just too strange. You can\nsuggest if they want the column dropped, just drop the table.\n\n> What I was actually wondering about after reading Tim's report was\n> whether we could support zero-column tables, which would eliminate the\n> need for the special case altogether. I have not looked to see how\n> extensive are the places that assume tuples have > 0 columns ...\n\nZero-width tables do sound interesting. 
Is it somehow non-relational?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 27 Sep 2002 00:00:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cascaded Column Drop" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n>>What I was actually wondering about after reading Tim's report was\n>>whether we could support zero-column tables, which would eliminate the\n>>need for the special case altogether. I have not looked to see how\n>>extensive are the places that assume tuples have > 0 columns ...\n> \n> Zero-width tables do sound interesting. Is it somehow non-relational?\n> \n\nI can see a zero column table as a transition state maybe, but what else would \nthey be useful for?\n\nJoe\n\n", "msg_date": "Thu, 26 Sep 2002 21:11:28 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Cascaded Column Drop" }, { "msg_contents": "Joe Conway wrote:\n> Bruce Momjian wrote:\n> > Tom Lane wrote:\n> >>What I was actually wondering about after reading Tim's report was\n> >>whether we could support zero-column tables, which would eliminate the\n> >>need for the special case altogether. I have not looked to see how\n> >>extensive are the places that assume tuples have > 0 columns ...\n> > \n> > Zero-width tables do sound interesting. Is it somehow non-relational?\n> > \n> \n> I can see a zero column table as a transition state maybe, but what else would \n> they be useful for?\n\nIt's the /dev/null of the SQL world, but I'm not sure what that means. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 27 Sep 2002 00:16:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cascaded Column Drop" }, { "msg_contents": "Bruce Momjian dijo: \n\n> Tom Lane wrote:\n> > Rod Taylor <rbt@rbt.ca> writes:\n> > > When a cascaded column drop is removing the last column, drop the table\n> > > instead. Regression tests via domains.\n> > \n> > Is that a good idea, or should we refuse the drop entirely? A table\n> > drop zaps a lot more stuff than a column drop.\n> \n> I think we should refuse the drop. It is just too strange. You can\n> suggest if they want the column dropped, just drop the table.\n\nYeah... you can't have triggers, you can't have constraints. Hey, you\ncan create a view using it, and possibly you can inherit the table...\nbut what's that good for?\n\nBut think about the inheritance case again: suppose\n\ncreate table p (f1 int);\ncreate table c (f2 int) inherits (p);\n\nNow you just change your mind and want to drop p but not c. You can't\ndo it because f1 is the last column on it, and c inherits it. So a way\nto drop the last column inherited (thus freeing the dependency on p)\nmakes c independent, and you can drop p.\n\nBut note that this drop of p is not just drop-cascade: the inheritance\ntree has to get out of the dependency info first (it's not drop-restrict\neither, is it?)\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La espina, desde que nace, ya pincha\" (Proverbio africano)\n\n", "msg_date": "Fri, 27 Sep 2002 00:18:46 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: Cascaded Column Drop" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> What I was actually wondering about after reading Tim's report was\n>> whether we could support zero-column tables, which would eliminate the\n>> need for the special case altogether. 
I have not looked to see how\n>> extensive are the places that assume tuples have > 0 columns ...\n\n> Zero-width tables do sound interesting. Is it somehow non-relational?\n\nDunno. I wasn't really thinking that zero-width tables are all that\nuseful by themselves, but it does seem natural that you should be able\nto redefine a column by\n\tALTER TABLE mytab DROP COLUMN foo;\n\tALTER TABLE mytab ADD COLUMN foo ...;\neven if foo is the only column in mytab. Right now we reject that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Sep 2002 00:28:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cascaded Column Drop " }, { "msg_contents": "> > Is that a good idea, or should we refuse the drop entirely? A table\n> > drop zaps a lot more stuff than a column drop.\n> \n> I think we should refuse the drop. It is just too strange. You can\n> suggest if they want the column dropped, just drop the table.\n\nLeaving a zero-width table would be best, even if its not so useful. I\ndon't like rejecting a CASCADE as it kinda defeats the purpose of having\nCASCADE.\n\nThe patch may not work right in complex cases anyway. It seems the\nswitch to remove the table should be earlier.\n\n-- \n Rod Taylor\n\n", "msg_date": "27 Sep 2002 00:29:21 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": true, "msg_subject": "Re: Cascaded Column Drop" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> But think about the inheritance case again: suppose\n\n> create table p (f1 int);\n> create table c (f2 int) inherits (p);\n\n> Now you just change your mind and want to drop p but not c. You can't\n> do it because f1 is the last column on it, and c inherits it. So a way\n> to drop the last column inherited (thus freeing the dependency on p)\n> makes c independent, and you can drop p.\n\nHmm, no I don't think so. 
Parent-to-child dependence is a property of\nthe two tables, not of their columns, and should not go away just\nbecause you reduce the parent to zero columns. I would expect that if\nI dropped p.f1 (assuming this were allowed) and then added p.g1, that\nc would also now have c.g1. So the parent/child relationship outlives\nany specific column ... IMHO anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Sep 2002 01:43:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cascaded Column Drop " }, { "msg_contents": "Rod Taylor <rbt@rbt.ca> writes:\n> Leaving a zero-width table would be best, even if its not so useful. I\n> don't like rejecting a CASCADE as it kinda defeats the purpose of having\n> CASCADE.\n\nI did something about this --- as of CVS tip, you can do\n\nregression=# create table foo (f1 int);\nCREATE TABLE\nregression=# alter table foo drop column f1;\nALTER TABLE\nregression=# select * from foo;\n\n--\n(0 rows)\n\nI fixed the places that were exposed by the regression tests as not\ncoping with zero-column tables, but it is likely that there are some\nmore places that will give odd errors with such a table. Feel free\nto beat on it.\n\npsql seems to have some minor issues with a zero-column select.\nYou can do this:\n\nregression=# insert into foo default values;\nINSERT 720976 1\nregression=# select * from foo;\n\n--\n(1 row)\n\nregression=# insert into foo default values;\nINSERT 720977 1\nregression=# select * from foo;\n\n--\n(2 rows)\n\nregression=# \n\nSeems like nothing's being printed for an empty row.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 16:38:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Cascaded Column Drop " }, { "msg_contents": "On Sat, 2002-09-28 at 16:38, Tom Lane wrote:\n> Rod Taylor <rbt@rbt.ca> writes:\n> > Leaving a zero-width table would be best, even if its not so useful. 
I\n> > don't like rejecting a CASCADE as it kinda defeats the purpose of having\n> > CASCADE.\n> \n> I did something about this --- as of CVS tip, you can do\n> \n> regression=# create table foo (f1 int);\n> CREATE TABLE\n> regression=# alter table foo drop column f1;\n> ALTER TABLE\n> regression=# select * from foo;\n\nWhich of course would dump as 'create table foo ();'.\n\nI don't think relcache would like a table without any columns, which is\nwhy the above is rejected.\n\n\nAnyway, should pg_dump ignore the table entirely? Or do we try to allow\ncreate table () without any attributes?\n\n-- \n Rod Taylor\n\n", "msg_date": "28 Sep 2002 22:19:50 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Cascaded Column Drop" }, { "msg_contents": "Rod Taylor <rbt@rbt.ca> writes:\n>> I did something about this --- as of CVS tip, you can do\n>> \n>> regression=# create table foo (f1 int);\n>> CREATE TABLE\n>> regression=# alter table foo drop column f1;\n>> ALTER TABLE\n>> regression=# select * from foo;\n\n> Which of course would dump as 'create table foo ();'.\n\nTrue. I didn't say that everything would be happy with it ;-). I think\nthat a zero-column table is only useful as a transient state, and so I'm\nhappy as long as the backend doesn't core dump.\n\n> I don't think relcache would like a table without any columns, which is\n> why the above is rejected.\n\nRelcache doesn't seem to have a problem with it.\n\n> Anyway, should pg_dump ignore the table entirely? Or do we try to allow\n> create table () without any attributes?\n\nI feel no strong need to do either. 
But it likely would only take\nremoval of this error check:\n\nregression=# create table foo ();\nERROR: DefineRelation: please inherit from a relation or define an attribute\n\nat least as far as the backend goes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 22:41:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Cascaded Column Drop " }, { "msg_contents": "\n> regression=# create table foo ();\n> ERROR: DefineRelation: please inherit from a relation or define an attribute\n> \n> at least as far as the backend goes.\n\nFound in relcache.c earlier:\n\tAssertArg(natts > 0);\n\nDidn't look too hard to see what it protects, because it's more effort\nthan it's worth.\n\n-- \n Rod Taylor\n\n", "msg_date": "28 Sep 2002 22:50:32 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] Cascaded Column Drop" }, { "msg_contents": "Rod Taylor <rbt@rbt.ca> writes:\n> Found in relcache.c earlier:\n> \tAssertArg(natts > 0);\n\nOkay, one other place to change ... there are probably more such ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 22:58:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] Cascaded Column Drop " } ]
[ { "msg_contents": "Hi,\n\nI have a set of points defined in two columns x,y and...\nhow to convert it to PATH data type using pgplsql ?\n\nThanks for help\nThomas\n\n\n", "msg_date": "Fri, 27 Sep 2002 13:57:25 +0200", "msg_from": "\"Tomasz Zdybicki\" <zdybicki@ambient.com.pl>", "msg_from_op": true, "msg_subject": "How to convert" } ]
[ { "msg_contents": "Yesterday I reported a WAL problem that could lead to tuples not being\nmarked as committed-good or committed-dead after we'd already removed\nthe pg_clog segment that had their transaction's commit status.\nI wasn't completely satisfied with that, though, because on further\nreflection it seemed a very low-probability mechanism. I kept digging,\nand finally came to the kind of bug that qualifies as a big DOH :-(\n\nIf you run a database-wide VACUUM (one with no specific target table\nmentioned) as a non-superuser, then the VACUUM doesn't process tables\nthat don't belong to you. But it will advance pg_database.datvacuumxid\nanyway, which means that pg_clog could be truncated while old transaction\nreferences still remain unmarked in those other tables.\n\nIn words of one syllable: running VACUUM as a non-superuser can cause\nirrecoverable data loss in any 7.2.* release.\n\nI think this qualifies as a \"must fix\" bug. I recommend we back-patch\na fix for this into the REL7_2 branch and put out a 7.2.3 release.\nWe should also fix the \"can't wait without a PROC\" bug that was solved\na few days ago.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 27 Sep 2002 18:40:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Cause of missing pg_clog files" }, { "msg_contents": "Tom Lane wrote:\n> Yesterday I reported a WAL problem that could lead to tuples not being\n> marked as committed-good or committed-dead after we'd already removed\n> the pg_clog segment that had their transaction's commit status.\n> I wasn't completely satisfied with that, though, because on further\n> reflection it seemed a very low-probability mechanism. I kept digging,\n> and finally came to the kind of bug that qualifies as a big DOH :-(\n> \n> If you run a database-wide VACUUM (one with no specific target table\n> mentioned) as a non-superuser, then the VACUUM doesn't process tables\n> that don't belong to you. 
But it will advance pg_database.datvacuumxid\n> anyway, which means that pg_clog could be truncated while old transaction\n> references still remain unmarked in those other tables.\n> \n> In words of one syllable: running VACUUM as a non-superuser can cause\n> irrecoverable data loss in any 7.2.* release.\n> \n> I think this qualifies as a \"must fix\" bug. I recommend we back-patch\n> a fix for this into the REL7_2 branch and put out a 7.2.3 release.\n> We should also fix the \"can't wait without a PROC\" bug that was solved\n> a few days ago.\n\nWow, you sure have found some good bugs in the past few days. Nice job.\n\nYes, I agree we should push out a 7.2.3, and I think we are now ready\nfor beta3. I will work on docs and packaging now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 27 Sep 2002 20:08:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cause of missing pg_clog files" }, { "msg_contents": "\nOK, we need a decision on whether we are going to do a 7.2,3 or just\nhave it in beta3. If it is in 7.2.3, I would not mention it in the\nbeta3 release notes.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Yesterday I reported a WAL problem that could lead to tuples not being\n> marked as committed-good or committed-dead after we'd already removed\n> the pg_clog segment that had their transaction's commit status.\n> I wasn't completely satisfied with that, though, because on further\n> reflection it seemed a very low-probability mechanism. 
I kept digging,\n> and finally came to the kind of bug that qualifies as a big DOH :-(\n> \n> If you run a database-wide VACUUM (one with no specific target table\n> mentioned) as a non-superuser, then the VACUUM doesn't process tables\n> that don't belong to you. But it will advance pg_database.datvacuumxid\n> anyway, which means that pg_clog could be truncated while old transaction\n> references still remain unmarked in those other tables.\n> \n> In words of one syllable: running VACUUM as a non-superuser can cause\n> irrecoverable data loss in any 7.2.* release.\n> \n> I think this qualifies as a \"must fix\" bug. I recommend we back-patch\n> a fix for this into the REL7_2 branch and put out a 7.2.3 release.\n> We should also fix the \"can't wait without a PROC\" bug that was solved\n> a few days ago.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 27 Sep 2002 20:30:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cause of missing pg_clog files" }, { "msg_contents": "On Fri, Sep 27, 2002 at 08:30:38PM -0400, Bruce Momjian wrote:\n> \n> OK, we need a decision on whether we are going to do a 7.2,3 or just\n> have it in beta3. If it is in 7.2.3, I would not mention it in the\n> beta3 release notes.\n\nIf there won't be any 7.2.3, could a note be put up on the website at\nleast? This is a pretty serious problem, and new users right now\nwill be using 7.2.2, which has this bug. 
They should be warned.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 30 Sep 2002 10:31:38 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Cause of missing pg_clog files" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, we need a decision on whether we are going to do a 7.2,3 or just\n> have it in beta3. If it is in 7.2.3, I would not mention it in the\n> beta3 release notes.\n\nWe definitely should have a 7.2.3. If we can release a 7.2.2 to fix\nbugs and a security flaw, then we should definitely have a 7.2.3 to\nensure the usability of the 7.2.x series.\n\nSome places will still be using 7.2.x for 2 years to come, just because\n7.2.x was what their projects started developing against, etc.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> ---------------------------------------------------------------------------\n> \n> Tom Lane wrote:\n> > Yesterday I reported a WAL problem that could lead to tuples not being\n> > marked as committed-good or committed-dead after we'd already removed\n> > the pg_clog segment that had their transaction's commit status.\n> > I wasn't completely satisfied with that, though, because on further\n> > reflection it seemed a very low-probability mechanism. I kept digging,\n> > and finally came to the kind of bug that qualifies as a big DOH :-(\n> >\n> > If you run a database-wide VACUUM (one with no specific target table\n> > mentioned) as a non-superuser, then the VACUUM doesn't process tables\n> > that don't belong to you. 
But it will advance pg_database.datvacuumxid\n> > anyway, which means that pg_clog could be truncated while old transaction\n> > references still remain unmarked in those other tables.\n> >\n> > In words of one syllable: running VACUUM as a non-superuser can cause\n> > irrecoverable data loss in any 7.2.* release.\n> >\n> > I think this qualifies as a \"must fix\" bug. I recommend we back-patch\n> > a fix for this into the REL7_2 branch and put out a 7.2.3 release.\n> > We should also fix the \"can't wait without a PROC\" bug that was solved\n> > a few days ago.\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 01 Oct 2002 00:37:13 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Cause of missing pg_clog files" }, { "msg_contents": "Justin Clift wrote:\n> Bruce Momjian wrote:\n> > \n> > OK, we need a decision on whether we are going to do a 7.2,3 or just\n> > have it in beta3. If it is in 7.2.3, I would not mention it in the\n> > beta3 release notes.\n> \n> We definitely should have a 7.2.3. 
If we can release a 7.2.2 to fix\n> bugs and a security flaw, then we should definitely have a 7.2.3 to\n> ensure the usability of the 7.2.x series.\n> \n> Some places will still be using 7.2.x for 2 years to come, just because\n> 7.2.x was what their projects started developing against, etc.\n\nThere will be a 7.2.3. Tom is going to back-port his fixes, then I am\ngoing to brand it, and then Marc will release it; it is in-process.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 11:09:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Cause of missing pg_clog files" }, { "msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> On Fri, Sep 27, 2002 at 08:30:38PM -0400, Bruce Momjian wrote:\n>> OK, we need a decision on whether we are going to do a 7.2,3 or just\n>> have it in beta3. If it is in 7.2.3, I would not mention it in the\n>> beta3 release notes.\n\n> If there won't be any 7.2.3,\n\nThere will be; I will backport the fixes today, and Marc promised to\nroll the tarball tonight.\n\nOne thing I am undecided about: I am more than half tempted to put in\nthe fix that makes us able to cope with mktime's broken-before-1970\nbehavior in recent glibc versions (e.g., Red Hat 7.3). This seems like\na good idea considering that other Linux distros will surely be updating\nglibc soon too. On the other hand, it's hard to call it a critical bug\nfix --- it ain't on a par with the vacuum/clog problem, for sure. And\nthe patch has received only limited testing (basically just whatever\nuse 7.3beta1 has had). 
On the third hand, the patch only does something\nif mktime() has already failed, so it's hard to see how it could make\nlife worse even if it's buggy.\n\nAny votes on whether to fix that or leave it alone in 7.2.3? I need\nsome input in the next few hours ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Sep 2002 11:18:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "7.2.3 fixes (was Re: Cause of missing pg_clog files)" }, { "msg_contents": "Tom Lane wrote:\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> > On Fri, Sep 27, 2002 at 08:30:38PM -0400, Bruce Momjian wrote:\n> >> OK, we need a decision on whether we are going to do a 7.2,3 or just\n> >> have it in beta3. If it is in 7.2.3, I would not mention it in the\n> >> beta3 release notes.\n> \n> > If there won't be any 7.2.3,\n> \n> There will be; I will backport the fixes today, and Marc promised to\n> roll the tarball tonight.\n> \n> One thing I am undecided about: I am more than half tempted to put in\n> the fix that makes us able to cope with mktime's broken-before-1970\n> behavior in recent glibc versions (e.g., Red Hat 7.3). This seems like\n> a good idea considering that other Linux distros will surely be updating\n> glibc soon too. On the other hand, it's hard to call it a critical bug\n> fix --- it ain't on a par with the vacuum/clog problem, for sure. And\n> the patch has received only limited testing (basically just whatever\n> use 7.3beta1 has had). On the third hand, the patch only does something\n> if mktime() has already failed, so it's hard to see how it could make\n> life worse even if it's buggy.\n> \n> Any votes on whether to fix that or leave it alone in 7.2.3? I need\n> some input in the next few hours ...\n\nI think it should be put in. 
You work for Red Hat, and that's the least\nwe can do for them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 11:29:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 fixes (was Re: Cause of missing pg_clog files)" }, { "msg_contents": "Tom Lane wrote:\n> One thing I am undecided about: I am more than half tempted to put in\n> the fix that makes us able to cope with mktime's broken-before-1970\n> behavior in recent glibc versions (e.g., Red Hat 7.3). This seems like\n> a good idea considering that other Linux distros will surely be updating\n> glibc soon too. On the other hand, it's hard to call it a critical bug\n> fix --- it ain't on a par with the vacuum/clog problem, for sure. And\n> the patch has received only limited testing (basically just whatever\n> use 7.3beta1 has had). On the third hand, the patch only does something\n> if mktime() has already failed, so it's hard to see how it could make\n> life worse even if it's buggy.\n> \n> Any votes on whether to fix that or leave it alone in 7.2.3? I need\n> some input in the next few hours ...\n> \n\n+1 for fixing it\n\nJoe\n\n", "msg_date": "Mon, 30 Sep 2002 08:30:48 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 fixes (was Re: Cause of missing pg_clog files)" }, { "msg_contents": "Tom Lane wrote:\n<snip>\n> Any votes on whether to fix that or leave it alone in 7.2.3? 
I need\n> some input in the next few hours ...\n\nIncluding it sounds like a good idea.\n\n'Yes' from me.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Tue, 01 Oct 2002 01:40:49 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 fixes (was Re: Cause of missing pg_clog files)" }, { "msg_contents": "\nNothing against including it from me ...\n\n\nOn Mon, 30 Sep 2002, Tom Lane wrote:\n\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> > On Fri, Sep 27, 2002 at 08:30:38PM -0400, Bruce Momjian wrote:\n> >> OK, we need a decision on whether we are going to do a 7.2,3 or just\n> >> have it in beta3. If it is in 7.2.3, I would not mention it in the\n> >> beta3 release notes.\n>\n> > If there won't be any 7.2.3,\n>\n> There will be; I will backport the fixes today, and Marc promised to\n> roll the tarball tonight.\n>\n> One thing I am undecided about: I am more than half tempted to put in\n> the fix that makes us able to cope with mktime's broken-before-1970\n> behavior in recent glibc versions (e.g., Red Hat 7.3). This seems like\n> a good idea considering that other Linux distros will surely be updating\n> glibc soon too. On the other hand, it's hard to call it a critical bug\n> fix --- it ain't on a par with the vacuum/clog problem, for sure. And\n> the patch has received only limited testing (basically just whatever\n> use 7.3beta1 has had). 
On the third hand, the patch only does something\n> if mktime() has already failed, so it's hard to see how it could make\n> life worse even if it's buggy.\n>\n> Any votes on whether to fix that or leave it alone in 7.2.3? I need\n> some input in the next few hours ...\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Mon, 30 Sep 2002 13:56:23 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 fixes (was Re: Cause of missing pg_clog files)" }, { "msg_contents": "On Mon, Sep 30, 2002 at 11:18:27AM -0400, Tom Lane wrote:\n> use 7.3beta1 has had). On the third hand, the patch only does something\n> if mktime() has already failed, so it's hard to see how it could make\n> life worse even if it's buggy.\n\nOn those grounds alone, it seems worth putting in. As you say, it's\nhard to see how it can be worse than not putting it in, even if it\nturns out to be buggy. It's probably worth noting prominently in the\nrelease notes that it has received minimal testing, though.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Mon, 30 Sep 2002 17:07:18 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 fixes (was Re: Cause of missing pg_clog files)" } ]
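The fallback Tom describes for glibc's broken-before-1970 mktime() comes down to doing the epoch arithmetic directly once the library call has reported failure. A minimal sketch of that arithmetic — in Python rather than the backend's C, with an invented function name; this is an illustration of the idea, not the actual 7.2.3 patch:

```python
from datetime import date

EPOCH_ORDINAL = date(1970, 1, 1).toordinal()

def fallback_mktime_utc(year, month, day, hour=0, minute=0, second=0):
    """Seconds since 1970-01-01 00:00:00 UTC, valid for pre-1970 dates.

    This is what a fallback must compute when the C library's mktime()
    returns -1 for a date before the epoch: count days relative to
    1970-01-01 (negative before it) using proleptic-Gregorian ordinals,
    which do the leap-year bookkeeping, then add the clock time.
    """
    days = date(year, month, day).toordinal() - EPOCH_ORDINAL
    return days * 86400 + hour * 3600 + minute * 60 + second

# Midnight UTC on 1 Jan 1960 -- ten years, three of them leap, before
# the epoch -- is 3653 * 86400 = 315619200 seconds before it.
print(fallback_mktime_utc(1960, 1, 1))  # -315619200
```

The real fix only takes a path like this after mktime() has already failed, which is why, as Tom notes, it is hard for the patch to make life worse even if it turns out to be buggy.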
[ { "msg_contents": "Hackers,\n\nSeems the functionality to detect old versions of the postmaster with\nnewer psql doesn't work. Here, server is 7.2.1:\n\n$ psql alvherre\nERROR: parser: parse error at or near \".\"\nWelcome to psql 7.3b1, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nalvherre=> select version();\n version \n-------------------------------------------------------------\n PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\nalvherre=> \n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Nunca confiar� en un traidor. Ni siquiera si el traidor lo he creado yo\"\n(Bar�n Vladimir Harkonnen)\n", "msg_date": "Sat, 28 Sep 2002 00:39:27 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "version mismatch detection doesn't work" }, { "msg_contents": "\nI didn't think we were supposed to throw an error on a mismatch, were\nwe?\n\n---------------------------------------------------------------------------\n\nAlvaro Herrera wrote:\n> Hackers,\n> \n> Seems the functionality to detect old versions of the postmaster with\n> newer psql doesn't work. Here, server is 7.2.1:\n> \n> $ psql alvherre\n> ERROR: parser: parse error at or near \".\"\n> Welcome to psql 7.3b1, the PostgreSQL interactive terminal.\n> \n> Type: \\copyright for distribution terms\n> \\h for help with SQL commands\n> \\? for help on internal slash commands\n> \\g or terminate with semicolon to execute query\n> \\q to quit\n> \n> alvherre=> select version();\n> version \n> -------------------------------------------------------------\n> PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96\n> (1 row)\n> \n> alvherre=> \n> \n> -- \n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> \"Nunca confiar? en un traidor. 
Ni siquiera si el traidor lo he creado yo\"\n> (Bar?n Vladimir Harkonnen)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 28 Sep 2002 01:03:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: version mismatch detection doesn't work" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> Seems the functionality to detect old versions of the postmaster with\n> newer psql doesn't work.\n\nWhat functionality? psql has never had such a test. I think you\nare thinking of pg_dump.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 12:14:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: version mismatch detection doesn't work " }, { "msg_contents": "Tom Lane dijo: \n\n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > Seems the functionality to detect old versions of the postmaster with\n> > newer psql doesn't work.\n> \n> What functionality? psql has never had such a test. I think you\n> are thinking of pg_dump.\n\nNo, I was thinking of psql. There was a discussion some time ago about\nmismatching versions; I don't know where I got the idea that the\nconclusion had been that if versions mismatched, psql would barf. (The\nconclusion was to add the version number to psql.)\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"No hay ausente sin culpa ni presente sin disculpa\" (Prov. 
frances)\n\n", "msg_date": "Sat, 28 Sep 2002 12:28:11 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": true, "msg_subject": "Re: version mismatch detection doesn't work " }, { "msg_contents": "It was I that originally brought the topic up. I don't really remember\nthe exact details but I do seem to recall that the author thought it was\na horrid idea. Basically and poorly paraphrased the response was that\neveryone should use select version() after they connect and if they\ndon't know to do that or simply forget, that's tough. I also seem to\nrecall that even the prospect of having some slash command that showed\npsql and back end version was considered a waste and a bad/redundant\nidea. So, as it stands, only the psql version is displayed.\n\nI still think it makes so much more sense to simply state something\nlike, \"Welcome to psql 7.3b1, the PostgreSQL interactive terminal. You\nare currently connected with a 7.1.1 server named 'foobar'\". It's\nsimple and makes the information very obvious. It also helps re-enforce\nthe name of the server that you've connected with. I should clarify,\nthe host name par is not something I originally asked about but does\nseem to make sense. I honestly could care less about the exact text as\nmaking the information obviously available is all that I care really\nabout.\n\nPersonally, I never understood how making even marginally redundant\ninformation readily and obviously available, especially when it can\nprevent some potential peril, is a bad idea. But, for each is own. ;)\n\nGreg\n\n\n\nOn Sat, 2002-09-28 at 11:28, Alvaro Herrera wrote:\n> Tom Lane dijo: \n> \n> > Alvaro Herrera <alvherre@atentus.com> writes:\n> > > Seems the functionality to detect old versions of the postmaster with\n> > > newer psql doesn't work.\n> > \n> > What functionality? psql has never had such a test. I think you\n> > are thinking of pg_dump.\n> \n> No, I was thinking of psql. 
There was a discussion some time ago about\n> mismatching versions; I don't know where I got the idea that the\n> conclusion had been that if versions mismatched, psql would barf. (The\n> conclusion was to add the version number to psql.)\n> \n> -- \n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> \"No hay ausente sin culpa ni presente sin disculpa\" (Prov. frances)\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly", "msg_date": "28 Sep 2002 13:48:21 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: version mismatch detection doesn't work" } ]
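The banner check Greg argues for in the thread above reduces to parsing the server's version() string and comparing major.minor numbers between client and server. A sketch of that comparison — Python for brevity, and both function names and the warning wording are invented for illustration; this is not psql's actual behavior:

```python
import re

def server_major_minor(version_string):
    """Extract (major, minor) from the output of ``SELECT version()``,
    e.g. 'PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96'."""
    m = re.match(r"PostgreSQL (\d+)\.(\d+)", version_string)
    if m is None:
        raise ValueError("unrecognized version string: %r" % version_string)
    return int(m.group(1)), int(m.group(2))

def mismatch_warning(client, server):
    """Return a warning line when the client's and server's major.minor
    versions differ, or None when they match."""
    if client != server:
        return ("WARNING: psql %d.%d connected to a %d.%d server; "
                "behavior may differ" % (client + server))
    return None

print(server_major_minor("PostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96"))  # (7, 2)
```

A client would issue SELECT version() right after connecting (the only option against a 7.2-era server, which predates any version report in the startup exchange) and either print the warning or fold the numbers into the welcome banner, as Greg suggests.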
[ { "msg_contents": "Hi all,\n\nAm experimenting to find out what kind of performance gain are achieved\nfrom moving indexes to a different scsi drives than the WAL files, than\nthe data itself, etc.\n\nHave come across an interesting problem.\n\nHave moved the indexes to another drive, then created symlinks to them.\n\nRan a benchmark against the database, REINDEX'd the tables, VACUUM FULL\nANALYZE'd, prepared to re-run the benchmark again and guess what?\n\nThe indexes were back on the original drive.\n\nThe process of REINDEX-ing obviously creates another file then drops the\noriginal.\n\nIs there a way to allow REINDEX to work without having this side affect?\n\nPre-creating a bunch of dangling symlinks doesn't work (tried that, it\ngives a \"ERROR: cannot create accounts_pkey: File exists\" on FreeBSD\n4.6.2 when using the REINDEX).\n\nAny suggestions?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 28 Sep 2002 17:08:30 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "How to REINDEX in high volume environments?" 
}, { "msg_contents": "On 28 Sep 2002 at 17:08, Justin Clift wrote:\n\n> Have moved the indexes to another drive, then created symlinks to them.\n> Ran a benchmark against the database, REINDEX'd the tables, VACUUM FULL\n> ANALYZE'd, prepared to re-run the benchmark again and guess what?\n> \n> The indexes were back on the original drive.\n> Is there a way to allow REINDEX to work without having this side affect?\n> \n> Pre-creating a bunch of dangling symlinks doesn't work (tried that, it\n> gives a \"ERROR: cannot create accounts_pkey: File exists\" on FreeBSD\n> 4.6.2 when using the REINDEX).\n\nLooks like we should have a subdirectory in database directory which stores \nindex.\n\nMay be transaction logs, indexes goes in separte directory which can be \nsymlinked. Linking a directory is much simpler solution than linking a file.\n\nI suggest we have per database transaction log and indexes created in separate \nsubdirectories for each database. Furhter given that large tables are segmented \nafter one GB size, a table should have it's own subdirectory optionally..\n\nAt the cost of few inodes, postgresql would gain much more flexibility and \nhence tunability..\n\nMay be TODO for 7.4? Anyone?\n\nBye\n Shridhar\n\n--\nSoftware, n.:\tFormal evening attire for female computer analysts.\n\n", "msg_date": "Sat, 28 Sep 2002 12:46:04 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: How to REINDEX in high volume environments?" 
}, { "msg_contents": "Shridhar Daithankar wrote:\n<snip>\n> Looks like we should have a subdirectory in database directory which stores\n> index.\n\nThat was my first thought also, but an alternative/additional approach\nwould be this (not sure if it's workable):\n\n - As each index already has a bunch of information stored stored for\nit, would it be possible to have an additional column added called\n'idxpath' or something?\n\n - This would mean that the index location would be stable per index,\nand would allow for *really* high volume environments to keep different\nindexes on different drives.\n\nNot sure what the default value would be, maybe the PGDATA directory,\nmaybe something as a GUC variable, etc, but that's the concept.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> May be transaction logs, indexes goes in separte directory which can be\n> symlinked. Linking a directory is much simpler solution than linking a file.\n> \n> I suggest we have per database transaction log and indexes created in separate\n> subdirectories for each database. Furhter given that large tables are segmented\n> after one GB size, a table should have it's own subdirectory optionally..\n> \n> At the cost of few inodes, postgresql would gain much more flexibility and\n> hence tunability..\n> \n> May be TODO for 7.4? Anyone?\n> \n> Bye\n> Shridhar\n> \n> --\n> Software, n.: Formal evening attire for female computer analysts.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sat, 28 Sep 2002 17:51:15 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: How to REINDEX in high volume environments?" 
}, { "msg_contents": "On 28 Sep 2002 at 17:51, Justin Clift wrote:\n\n> Shridhar Daithankar wrote:\n> <snip>\n> > Looks like we should have a subdirectory in database directory which stores\n> > index.\n> \n> That was my first thought also, but an alternative/additional approach\n> would be this (not sure if it's workable):\n> \n> - As each index already has a bunch of information stored stored for\n> it, would it be possible to have an additional column added called\n> 'idxpath' or something?\n> \n> - This would mean that the index location would be stable per index,\n> and would allow for *really* high volume environments to keep different\n> indexes on different drives.\n\nI have to disagree.. Completely.. This is like turning PG-Metadata into \nregistry...\n\nAnd what happens when index starts splitting when it grows beyond 1GB in size?\n\nPutting indexes into a separate subdirectoy and mount/link that directory on a \ndevice that is on a separate SCSI channel is what I can think of as last drop \nof performance out of it..\n\nJust a thought, as usual..\n\nI don't know how much efforts it would take but if we have pg_xlog in separte \nconfigurable dir. now, putting indexes as well and having per database pg_xlog \nshould be on the same line. The later aspect is also important IMO..\n\nBye\n Shridhar\n\n--\nVMS, n.:\tThe world's foremost multi-user adventure game.\n\n", "msg_date": "Sat, 28 Sep 2002 13:47:00 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: How to REINDEX in high volume environments?" }, { "msg_contents": "Shridhar Daithankar wrote:\n<snip>\n> And what happens when index starts splitting when it grows beyond 1GB in size?\n\nHaving an index directory:\n\ni.e. 
$PGDATA/data/<oid>/indexes/\n\n(that's the kind of thing you mean isn't it?)\n\nSounds workable, and sounds better than the present approach.\n\nThe reason that I was thinking of having a different path per index\nwould be for high volume situations like this:\n\n/dev/dsk1 : /pgdata <- data here\n/dev/dsk2 : /pgindexes1 <- some indexes here\n/dev/dsk3 : /pgindexes2 <- some ultra-high volume activity here\n\nLet's say that there's a bunch of data on /dev/dsk1, and for performance\nreasons it's been decided to move the indexes to another drive\n/dev/dsk2.\n\nNow, if just one of those indexes is getting *a lot* of the drive\nactivity, it would make sense to move it to it's own dedicated drive. \nHaving an um... PGINDEX (that's just an identifier for this example, not\nan environment variable suggestion) directory location defined would\nmean that each time a REINDEX operation occurs, then all new indexes\nwould be created in the same spot. That sounds better than the present\napproach thus far, but wouldn't work for situations where indexes are\nspread across multiple disk drives.\n\nThe suggestion of having some kind of path info for each index is merely\na thought of how to meet that potential future need, not necessarily the\nbest method anyone has ever thought of. Like someone might pipe up and\nsay \"Nah, it could be done better XYZ way\", etc.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> Putting indexes into a separate subdirectoy and mount/link that directory on a\n> device that is on a separate SCSI channel is what I can think of as last drop\n> of performance out of it..\n> \n> Just a thought, as usual..\n> \n> I don't know how much efforts it would take but if we have pg_xlog in separte\n> configurable dir. now, putting indexes as well and having per database pg_xlog\n> should be on the same line. 
The later aspect is also important IMO..\n> \n> Bye\n> Shridhar\n> \n> --\n> VMS, n.: The world's foremost multi-user adventure game.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 29 Sep 2002 00:43:12 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: How to REINDEX in high volume environments?" }, { "msg_contents": "Justin Clift dijo: \n\nHi,\n\n> Ran a benchmark against the database, REINDEX'd the tables, VACUUM FULL\n> ANALYZE'd, prepared to re-run the benchmark again and guess what?\n> \n> The indexes were back on the original drive.\n\nYes, this is expected. Same for CLUSTER. They create a different\nfilenode and point the relation (table or index) at it.\n\nI think the separate space for indexes is a good idea. However, and\nthis is orthogonal, I feel the way REINDEX works now is not the best,\nbecause it precludes you from using the index while you are doing it.\n\nI'm trying to implement a way to concurrently compact the indexes.\nI hope to have it for 7.4.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nY una voz del caos me hablo y me dijo\n\"Sonrie y se feliz, podria ser peor\".\nY sonrei. Y fui feliz.\nY fue peor.\n\n", "msg_date": "Sat, 28 Sep 2002 11:02:30 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: How to REINDEX in high volume environments?" 
}, { "msg_contents": "On Sat, 2002-09-28 at 02:16, Shridhar Daithankar wrote:\n> On 28 Sep 2002 at 17:08, Justin Clift wrote:\n> \n> > Have moved the indexes to another drive, then created symlinks to them.\n> > Ran a benchmark against the database, REINDEX'd the tables, VACUUM FULL\n> > ANALYZE'd, prepared to re-run the benchmark again and guess what?\n> > \n> > The indexes were back on the original drive.\n> > Is there a way to allow REINDEX to work without having this side affect?\n> > \n> > Pre-creating a bunch of dangling symlinks doesn't work (tried that, it\n> > gives a \"ERROR: cannot create accounts_pkey: File exists\" on FreeBSD\n> > 4.6.2 when using the REINDEX).\n> \n> Looks like we should have a subdirectory in database directory which stores \n> index.\n> \n> May be transaction logs, indexes goes in separte directory which can be \n> symlinked. Linking a directory is much simpler solution than linking a file.\n> \n> I suggest we have per database transaction log and indexes created in separate \n> subdirectories for each database. Furhter given that large tables are segmented \n> after one GB size, a table should have it's own subdirectory optionally..\n> \n> At the cost of few inodes, postgresql would gain much more flexibility and \n> hence tunability..\n> \n> May be TODO for 7.4? Anyone?\n\n\nVery neat idea! Sounds like an excellent way of gaining lots of\ngranularity!\n\nI can't even think of a reason not to use the directory per table scheme\nall the time. Perhaps simply allowing for a script/tool that will\nautomatically perform such a physical table migration to a distinct \ndirectory would be in order too.\n\nEither way, sounds like a good idea.\n\nGreg", "msg_date": "28 Sep 2002 10:44:02 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: How to REINDEX in high volume environments?" 
}, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> Shridhar Daithankar wrote:\n>> Looks like we should have a subdirectory in database directory which stores\n>> index.\n\n> That was my first thought also, but an alternative/additional approach\n> would be this (not sure if it's workable):\n\nSee the tablespaces TODO item. I'm not excited about building\nhalf-baked versions of tablespaces before we get around to doing the\nreal thing ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 28 Sep 2002 12:18:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to REINDEX in high volume environments? " }, { "msg_contents": "On 28 Sep 2002 at 12:18, Tom Lane wrote:\n\n> Justin Clift <justin@postgresql.org> writes:\n> > Shridhar Daithankar wrote:\n> >> Looks like we should have a subdirectory in database directory which stores\n> >> index.\n> \n> > That was my first thought also, but an alternative/additional approach\n> > would be this (not sure if it's workable):\n> \n> See the tablespaces TODO item. I'm not excited about building\n> half-baked versions of tablespaces before we get around to doing the\n> real thing ...\n\nI wen thr. the messages posted regarding tablespaces. It looks like\n\nTablesspaces should provide\n\n1. Managability \n2. Performance tuning\n3. Better Administration..\n\nCreating a directory for each object or object type would allow to do same \nthing.\n\nWhy directory?\n\n1. You can mount it someplace else.\n2. You can symlink it without worrying about postgresql creating new files \ninstead of symlink while drop/recreate.\n\nWhether to choose directory or tablespaces? I say directory. Why?\n\n1. PostgreSQL philosphy has always been using facilities provided by OS and not \nto duplicate that work. Tablespaces directly violates that. Directory mounting \ndoes not.\n\n2. Tablespaces combines objects on them, adding a layer of abstraction. 
and \nthen come ideas like vacuuming a tablespace. Frankly given what vacuum does, I \ncan't imagine what vacuuming tablespace would exactly do.\n\n3. Tablespace would be a single file or structure of directories? How do we \nconfigure it? What tuning option do we provide?\n\nBasically table spaces I feel is a layer of abstraction that can be avoided if \nwe layout the DB in a directory tree with sufficient levels. That would be easy \nto deal with as configuration and maitainance delegated to OS and it would be \nflexible enough to.\n\nAnyway if we have a directory per object/object type, how much different it's \ngoing to be from a table space? \n\nFrankly I am wary of table spaces because I have seen them in oracle and not \neaxctly convinced that's the best way of doing things. \n\nIf we introdude word tablespace, users will be expecting all those idiocies \nlike taking a table space offline/online, adding data files aka pre-claiming \nspace etc. All these are responsibilities of OS. Let OS handle it. PostgreSQL \nshould just create a file structure which would grow as and when required.\n\nThe issue looks similimar to having raw disk I/O. Oracle might have good reason \nto do it but are we sure postgresql needs this? Just another policy decision \nwaiting..\n\nHere are some links I found in archive. Would like to know more about this \nissue..\n\nhttp://candle.pha.pa.us/mhonarc/todo.detail/tablespaces/msg00006.html\nhttp://candle.pha.pa.us/mhonarc/todo.detail/tablespaces/msg00007.html\n\nJust a thought..\n\nBye\n Shridhar\n\n--\nThe sooner our happiness together begins, the longer it will last.\t\t-- \nMiramanee, \"The Paradise Syndrome\", stardate 4842.6\n\n", "msg_date": "Sun, 29 Sep 2002 13:09:26 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: How to REINDEX in high volume environments? 
" }, { "msg_contents": "On 29 Sep 2002 at 0:43, Justin Clift wrote:\n\n> Shridhar Daithankar wrote:\n> The reason that I was thinking of having a different path per index\n> would be for high volume situations like this:\n> \n> /dev/dsk1 : /pgdata <- data here\n> /dev/dsk2 : /pgindexes1 <- some indexes here\n> /dev/dsk3 : /pgindexes2 <- some ultra-high volume activity here\n\nI would say this would look better..\n\n/pgdata\n-indexes\n--index1\n---indexfiles\n--index2\n---indexfiles\n\nWhere index1 and index2 are two different indexes. Just like each table gets \nit's own directory, each index gets it's own directory as well. \n\nSo the admin would/can tune on per object basis rather than worrying about \ncreating right group of objects and then tuning about that group.\n\nIf required throwing per database transaction log there as well might prove a \ngood idea. It would insulate one db from load of other, as far as I/O is \nconcerned..\n\nThis possiblity is not lost with this scheme but it just gets something simpler \nIMO..\n\nJust illustration of my another post on hackers on this topic.. \n\n\nBye\n Shridhar\n\n--\nYou're too beautiful to ignore. Too much woman.\t\t-- Kirk to Yeoman Rand, \"The \nEnemy Within\", stardate unknown\n\n", "msg_date": "Sun, 29 Sep 2002 13:16:19 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: How to REINDEX in high volume environments?" }, { "msg_contents": "Am Samstag, 28. September 2002 10:17 schrieb Shridhar Daithankar:\n(snip)\n> I have to disagree.. Completely.. 
This is like turning PG-Metadata into\n> registry...\n>\n> And what happens when index starts splitting when it grows beyond 1GB in\n> size?\n>\n> Putting indexes into a separate subdirectoy and mount/link that directory\n> on a device that is on a separate SCSI channel is what I can think of as\n> last drop of performance out of it..\n(snip)\n\nI think a good approach would be the introduction of tablespaces like oracle has, and assigning locations to that tablespace.\n\nBest regards,\n\tMario Weilguni\n", "msg_date": "Sun, 29 Sep 2002 10:12:56 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: How to REINDEX in high volume environments?" }, { "msg_contents": "Just wanted to pipe in here. I am still very interested in tablespaces ( I have many database systems that are over\n500GB and growing) and am willing to port my tablespace patch to 7.4. I have everything (but only tested here) working\nin 7.2 but the patch was not accepted. I didn't see a great speed improvement but the patch helps with storage management.\n\nRecap. the patch would enable the following\n\n a database to have a default data tablespace and index tablespace\n a user to have a default data and index tablespace\n a table to have a specific tablespace\n an index to have a specfic tablespace\n\n I would like to also add namespace (schema) to have a default data and index tablespaces\n\nJim\n\n\n\n\n> Justin Clift <justin@postgresql.org> writes:\n> > Shridhar Daithankar wrote:\n> >> Looks like we should have a subdirectory in database directory which stores\n> >> index.\n> \n> > That was my first thought also, but an alternative/additional approach\n> > would be this (not sure if it's workable):\n> \n> See the tablespaces TODO item. 
I'm not excited about building\n> half-baked versions of tablespaces before we get around to doing the\n> real thing ...\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n\n\n\n", "msg_date": "Mon, 30 Sep 2002 09:24:28 -0400", "msg_from": "\"Jim Buttafuoco\" <jim@contactbda.com>", "msg_from_op": false, "msg_subject": "Re: How to REINDEX in high volume environments? " }, { "msg_contents": "\nJim, glad you are still around. Yes, we would love to get tablespaces\nin 7.4. I think we need to think bigger and get something where we can\nname tablespaces and place tables/indexes into these named spaces. I\ncan reread the TODO.detail stuff and give you an outline. How does that\nsound? Thomas Lockhart is also interested in this feature.\n\n---------------------------------------------------------------------------\n\nJim Buttafuoco wrote:\n> Just wanted to pipe in here. I am still very interested in tablespaces ( I have many database systems that are over\n> 500GB and growing) and am willing to port my tablespace patch to 7.4. I have everything (but only tested here) working\n> in 7.2 but the patch was not accepted. I didn't see a great speed improvement but the patch helps with storage management.\n> \n> Recap. 
the patch would enable the following\n> \n> a database to have a default data tablespace and index tablespace\n> a user to have a default data and index tablespace\n> a table to have a specific tablespace\n> an index to have a specfic tablespace\n> \n> I would like to also add namespace (schema) to have a default data and index tablespaces\n> \n> Jim\n> \n> \n> \n> \n> > Justin Clift <justin@postgresql.org> writes:\n> > > Shridhar Daithankar wrote:\n> > >> Looks like we should have a subdirectory in database directory which stores\n> > >> index.\n> > \n> > > That was my first thought also, but an alternative/additional approach\n> > > would be this (not sure if it's workable):\n> > \n> > See the tablespaces TODO item. I'm not excited about building\n> > half-baked versions of tablespaces before we get around to doing the\n> > real thing ...\n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 22:23:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to REINDEX in high volume environments?" } ]
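The behavior Justin reports — indexes landing back on the original drive after REINDEX — is what you would expect if REINDEX writes the rebuilt index under the original filename rather than writing through the existing symlink. A minimal file-system sketch of that effect (hypothetical paths and file names, not the real PostgreSQL storage layout):

```python
import os
import tempfile

# Hypothetical directories standing in for $PGDATA and a faster disk.
base = tempfile.mkdtemp()
pgdata = os.path.join(base, "pgdata")
fastdisk = os.path.join(base, "fastdisk")
os.makedirs(pgdata)
os.makedirs(fastdisk)

# Admin moves the index file to the fast disk and symlinks it back.
index_path = os.path.join(pgdata, "accounts_pkey")
moved_path = os.path.join(fastdisk, "accounts_pkey")
with open(moved_path, "w") as f:
    f.write("index data")
os.symlink(moved_path, index_path)
assert os.path.islink(index_path)

# A REINDEX-style rebuild: write a fresh file, then swap it into place
# under the original name.  rename(2) replaces the symlink itself, so the
# rebuilt index ends up as a regular file back on the original disk.
tmp_path = index_path + ".new"
with open(tmp_path, "w") as f:
    f.write("rebuilt index data")
os.rename(tmp_path, index_path)
assert not os.path.islink(index_path)  # the symlink is gone
```

The "File exists" error with pre-created dangling symlinks fits the same picture: creating the new index file with `O_CREAT|O_EXCL` would fail with `EEXIST` when the name is already taken by a symlink, even a dangling one.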
[ { "msg_contents": "Hello, \n\nI did a vacuum from within a function, and it went sig11 on me.\nIs it illegal to do that?\n\nThe function:\n\ndrop function xorder1_cleanup();\ncreate function xorder1_cleanup() RETURNS integer AS '\ndeclare\n x record;\n c integer;\nbegin\n c:=0;\n FOR x IN SELECT order_id,count(*) as cnt FROM xorder1_updates group by order_id LOOP\n if x.cnt > 1 then\n c:=c+x.cnt; \n delete from xorder1_updates where order_id = x.order_id;\n insert into xorder1_updates(order_id) values (x.order_id);\n end if;\n END LOOP;\n execute ''vacuum full analyse xorder1_updates;'';\n return c;\nend;\n' LANGUAGE 'plpgsql';\n\n\nMagnus\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Sat, 28 Sep 2002 17:02:00 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "Vacuum from within a function crashes backend" }, { "msg_contents": "Magnus Naeslund(f) dijo: \n\n> Hello, \n> \n> I did a vacuum from within a function, and it went sig11 on me.\n> Is it illegal to do that?\n\nHuh... what version is this? In current sources, VACUUM cannot be run\ninside a function (it will throw an ERROR). In 7.2[.1] I see there is\nno protection against this.\n\nMaybe the fix for this should be backported to 7.2 also.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Hay quien adquiere la mala costumbre de ser infeliz\" (M. A. Evans)\n\n\n", "msg_date": "Sat, 28 Sep 2002 11:09:24 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: Vacuum from within a function crashes backend" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> wrote:\n> Magnus Naeslund(f) dijo:\n> \n>> Hello,\n>> \n>> I did a vacuum from within a function, and it went sig11 on me.\n>> Is it illegal to do that?\n> \n> Huh... what version is this? 
In current sources, VACUUM cannot be\n> run inside a function (it will throw an ERROR). In 7.2[.1] I see\n> there is no protection against this.\n> \n> Maybe the fix for this should be backported to 7.2 also.\n\nArgh!\nSorry i forgot the version, it's as you say 7.2.1..\nThen i'll just not do that :)\n\nMagnus\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Sat, 28 Sep 2002 18:12:09 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "Re: Vacuum from within a function crashes backend" } ]
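As Alvaro notes, current sources reject VACUUM inside a function with an ERROR rather than crashing; the underlying rule is that VACUUM cannot run inside an already-open transaction, and a function body always executes inside one. SQLite enforces an analogous restriction, which makes for a self-contained, runnable illustration of the rule without a PostgreSQL server — this is an analogue only, not PostgreSQL's own code path:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE xorder1_updates (order_id INTEGER)")
conn.execute("INSERT INTO xorder1_updates VALUES (1)")
conn.commit()

# Inside an explicit transaction, VACUUM is rejected -- the same class of
# restriction PostgreSQL applies to VACUUM called from a function.
conn.execute("BEGIN")
try:
    conn.execute("VACUUM")
    vacuum_failed = False
except sqlite3.OperationalError:
    vacuum_failed = True
conn.rollback()
assert vacuum_failed

# With no transaction open, the same command succeeds.
conn.execute("VACUUM")
conn.close()
```

For PostgreSQL 7.2 itself, the practical workaround is to issue the VACUUM from the client, outside any function — e.g. from psql or vacuumdb after the cleanup function returns.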
[ { "msg_contents": "\nAs was previously discussed (and now that I'm mostly back from the dead\n... damn colds) I've just branched off REL7_3_STABLE ... all future beta's\nwill be made based off of that branch, so that development may resume on\nthe main branch ...\n\nSo, for those doing commits or anoncvs, remember that the 'stable' branch\nrequires you to use:\n\n\t-rREL7_3_STABLE\n\nwhile the development branch is 'as per normal' ...\n\n\n", "msg_date": "Sat, 28 Sep 2002 13:47:50 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "v7.3 Branched ..." }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> As was previously discussed (and now that I'm mostly back from the dead\n> ... damn colds) I've just branched off REL7_3_STABLE ... all future beta's\n> will be made based off of that branch, so that development may resume on\n> the main branch ...\n\nWhat is the attitude towards getting stuff from Gborg to the main\nPostgreSQL distribution (contrib or otherwise)?\n\nFor example, the pg_autotune utility recently started on GBorg. It's an\nongoing project, useful to many installations, and the additional size\nwould be barely noticeable.\n\nNot saying it's ready right now, but am hoping that maybe 7.4 would be\nable to include it.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> So, for those doing commits or anoncvs, remember that the 'stable' branch\n> requires you to use:\n> \n> -rREL7_3_STABLE\n> \n> while the development branch is 'as per normal' ...\n\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 29 Sep 2002 02:58:46 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: v7.3 Branched ..." }, { "msg_contents": "\nNot going to happen ... 
there are oodles of \"not big, but useful\" pieces\nof software out there that we could include ... but th epoint of Gborg is\nyou download the main repository, and then you go to gborg to look for the\nadd-ons you might like to have ...\n\n\n\n\nOn Sun, 29 Sep 2002, Justin Clift wrote:\n\n> \"Marc G. Fournier\" wrote:\n> >\n> > As was previously discussed (and now that I'm mostly back from the dead\n> > ... damn colds) I've just branched off REL7_3_STABLE ... all future beta's\n> > will be made based off of that branch, so that development may resume on\n> > the main branch ...\n>\n> What is the attitude towards getting stuff from Gborg to the main\n> PostgreSQL distribution (contrib or otherwise)?\n>\n> For example, the pg_autotune utility recently started on GBorg. It's an\n> ongoing project, useful to many installations, and the additional size\n> would be barely noticeable.\n>\n> Not saying it's ready right now, but am hoping that maybe 7.4 would be\n> able to include it.\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>\n> > So, for those doing commits or anoncvs, remember that the 'stable' branch\n> > requires you to use:\n> >\n> > -rREL7_3_STABLE\n> >\n> > while the development branch is 'as per normal' ...\n>\n>\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n\n", "msg_date": "Sat, 28 Sep 2002 17:24:43 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: v7.3 Branched ..." }, { "msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> Not going to happen ... there are oodles of \"not big, but useful\" pieces\n> of software out there that we could include ... but th epoint of Gborg is\n> you download the main repository, and then you go to gborg to look for the\n> add-ons you might like to have ...\n\nOk. 
Wonder if it's worth someone creating a \"PostgreSQL Powertools\"\ntype of package, that includes in one download all of these nifty tools\n(pg_autotune, oid2name, etc) that would be beneficial to have compiled\nand already available. Kind of like \"contrib\" is (oid2name is already\nthere I know), but so people don't have to go hunting all over GBorg to\nfind the bits that they'd want.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 29 Sep 2002 14:54:30 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: v7.3 Branched ..." }, { "msg_contents": "\n[ I am starting to change subject headings to make things easier for\npeople.]\n\nI don't think we want a branch for 7.4 yet. We still have lots of open\nissues and the branch will require double-patching.\n\nMarc, I know we said branch after beta2 but I think we need another week\nor two before we can start using that branch effectively. Even if we\nstarted using it, like adding PITR, the code would drift so much that\nthe double-patching would start to fail when applied.\n\nCan the branch be undone, or can we not use it and just apply a\nmega-patch later to make it match HEAD?\n\n---------------------------------------------------------------------------\n\nMarc G. Fournier wrote:\n> \n> As was previously discussed (and now that I'm mostly back from the dead\n> ... damn colds) I've just branched off REL7_3_STABLE ... 
all future beta's\n> will be made based off of that branch, so that development may resume on\n> the main branch ...\n> \n> So, for those doing commits or anoncvs, remember that the 'stable' branch\n> requires you to use:\n> \n> \t-rREL7_3_STABLE\n> \n> while the development branch is 'as per normal' ...\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 29 Sep 2002 01:43:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Do we want a CVS branch now?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Marc, I know we said branch after beta2 but I think we need another week\n> or two before we can start using that branch effectively. Even if we\n> started using it, like adding PITR, the code would drift so much that\n> the double-patching would start to fail when applied.\n\nAnother problem is that with all the open issues, we still really need\nto focus on 7.3, not on 7.4 development. 
I don't want to see massive\npatches like PITR or the Windows-port stuff coming in just yet, because\nwe don't have the bandwidth to review them now.\n\n> Can the branch be undone, or can we not use it and just apply a\n> mega-patch later to make it match HEAD?\n\nAFAIK there's no convenient way to undo the branch creation.\n\nI concur with treating HEAD as the active 7.3 area for the next week or\nso and then doing a bulk merge into the REL7_3 branch, so as to avoid\nthe labor of individual double-patches.\n\nMarc previously proposed releasing beta3 in about a week --- will that\nbe a good time to open HEAD for 7.4 work, or will we need to delay still\nlonger? (I'm not sure yet, myself.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Sep 2002 11:30:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we want a CVS branch now? " }, { "msg_contents": "Tom Lane wrote:\n<snip>\n> Marc previously proposed releasing beta3 in about a week --- will that\n> be a good time to open HEAD for 7.4 work, or will we need to delay still\n> longer? (I'm not sure yet, myself.)\n\nPerhaps it's too early to be able to effectively say when a\nreal+effective branch is likely to be really needed? Stuff still feels\na bit too chaotic.\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 30 Sep 2002 02:15:18 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Do we want a CVS branch now?" }, { "msg_contents": "Bruce Momjian writes:\n\n> I don't think we want a branch for 7.4 yet. We still have lots of open\n> issues and the branch will require double-patching.\n\nMerge the changes on the 7.3 branch into the 7.4 branch after 7.3 is\nreleased.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 29 Sep 2002 22:00:59 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Do we want a CVS branch now?" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I don't think we want a branch for 7.4 yet. We still have lots of open\n> > issues and the branch will require double-patching.\n> \n> Merge the changes on the 7.3 branch into the 7.4 branch after 7.3 is\n> released.\n\nYes, there is something to be said for this idea. We can single-patch\ninto 7.3 and make one mega-patch to bring 7.4 up to 7.3. I think that\nwill work _if_ 7.4 doesn't drift too much, and even then, I just need to\nspend some time manually doing it. However, there is the danger that\n7.4 changes will not hit all the areas coming in from the 7.3 patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 29 Sep 2002 17:53:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we want a CVS branch now?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bruce Momjian writes:\n>> I don't think we want a branch for 7.4 yet. 
We still have lots of open\n>> issues and the branch will require double-patching.\n\n> Merge the changes on the 7.3 branch into the 7.4 branch after 7.3 is\n> released.\n\nWhy is that better than the other direction?\n\nWe can't afford to allow much divergence between the two branches so\nlong as we are engaged in wholesale double-patching, so I think it\nreally comes down to the same thing in the end: we are not ready for 7.4\ndevelopment to start in earnest, whether there's a CVS branch for it or\nnot.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Sep 2002 00:20:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we want a CVS branch now? " }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Bruce Momjian writes:\n> >> I don't think we want a branch for 7.4 yet. We still have lots of open\n> >> issues and the branch will require double-patching.\n> \n> > Merge the changes on the 7.3 branch into the 7.4 branch after 7.3 is\n> > released.\n> \n> Why is that better than the other direction?\n> \n> We can't afford to allow much divergence between the two branches so\n> long as we are engaged in wholesale double-patching, so I think it\n> really comes down to the same thing in the end: we are not ready for 7.4\n> development to start in earnest, whether there's a CVS branch for it or\n> not.\n\nYes. We need a decision now because I don't know which branch to touch.\nMarc, I need your feedback on these ideas. There is discussion about\nfixing earthdistance. Perhaps we fix that and remove the 7.3 tag and\njust have everyone CVS checkout again.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 10:48:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we want a CVS branch now?" }, { "msg_contents": "On Mon, 30 Sep 2002, Bruce Momjian wrote:\n\n> Tom Lane wrote:\n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > Bruce Momjian writes:\n> > >> I don't think we want a branch for 7.4 yet. We still have lots of open\n> > >> issues and the branch will require double-patching.\n> >\n> > > Merge the changes on the 7.3 branch into the 7.4 branch after 7.3 is\n> > > released.\n> >\n> > Why is that better than the other direction?\n> >\n> > We can't afford to allow much divergence between the two branches so\n> > long as we are engaged in wholesale double-patching, so I think it\n> > really comes down to the same thing in the end: we are not ready for 7.4\n> > development to start in earnest, whether there's a CVS branch for it or\n> > not.\n>\n> Yes. We need a decision now because I don't know which branch to touch.\n> Marc, I need your feedback on these ideas. There is discussion about\n> fixing earthdistance. Perhaps we fix that and remove the 7.3 tag and\n> just have everyone CVS checkout again.\n\nGo with Peter's suggestion about committing on one of the branches (v7.3\nor v7.4, doesn't matter, unless Peter knows something I don't insofar as\nmerging from branch->trunk vs trunk->branch?) ... then when we are ready\nto start letting it all diverge, we can just re-sync the opposite branch\nand keep on with development ...\n\n", "msg_date": "Mon, 30 Sep 2002 13:55:11 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Do we want a CVS branch now?" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> On Mon, 30 Sep 2002, Bruce Momjian wrote:\n>> Yes. 
We need a decision now because I don't know which branch to touch.\n>> Marc, I need your feedback on these ideas. There is discussion about\n>> fixing earthdistance. Perhaps we fix that and remove the 7.3 tag and\n>> just have everyone CVS checkout again.\n\n> Go with Peter's suggestion about committing on one of the branches (v7.3\n> or v7.4, doesn't matter,\n\nLet's go with committing to HEAD then. It's just easier (don't need a\nbranch-tagged checkout tree to work in).\n\nWe'll sync up the REL7_3 branch when we're ready to put out beta3.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Sep 2002 14:00:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Do we want a CVS branch now? " }, { "msg_contents": "Tom Lane writes:\n\n> Why is that better than the other direction?\n\nIt isn't. Let's just keep committing to the head and merge it into 7.3\nlater.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 30 Sep 2002 22:07:24 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Do we want a CVS branch now? " }, { "msg_contents": "\nJust a reminder, we are not using this tag. All 7.3 patches are going\nto HEAD. Once we decide to split the tree for 7.4, we will update this\nbranch and announce it is ready to be used.\n\n---------------------------------------------------------------------------\n\nMarc G. Fournier wrote:\n> \n> As was previously discussed (and now that I'm mostly back from the dead\n> ... damn colds) I've just branched off REL7_3_STABLE ... 
all future beta's\n> will be made based off of that branch, so that development may resume on\n> the main branch ...\n> \n> So, for those doing commits or anoncvs, remember that the 'stable' branch\n> requires you to use:\n> \n> \t-rREL7_3_STABLE\n> \n> while the development branch is 'as per normal' ...\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 21:15:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.3 Branched ..." }, { "msg_contents": "\n\n\n\n\n\nJustin Clift wrote:\n\n\"Marc G. Fournier\" wrote:\n \n\nNot going to happen ... there are oodles of \"not big, but useful\" pieces\nof software out there that we could include ... but th epoint of Gborg is\nyou download the main repository, and then you go to gborg to look for the\nadd-ons you might like to have ...\n \n\n\nOk. Wonder if it's worth someone creating a \"PostgreSQL Powertools\"\ntype of package, that includes in one download all of these nifty tools\n(pg_autotune, oid2name, etc) that would be beneficial to have compiled\nand already available. 
Kind of like \"contrib\" is (oid2name is already\nthere I know), but so people don't have to go hunting all over GBorg to\nfind the bits that they'd want.\n\nThat would be wonderful if it included some of the more stable tools / add-ons\nthat have been removed from the main distribution or have existed independent\nof the main PostgreSQL development.\n\n\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n\n\n\n\n\n", "msg_date": "Wed, 16 Oct 2002 14:55:14 -0500", "msg_from": "Thomas Swan <tswan@idigx.com>", "msg_from_op": false, "msg_subject": "Re: v7.3 Branched ..." }, { "msg_contents": "> Thomas Swan wrote:\n> \n> Justin Clift wrote:\n<snip>\n> > Ok. Wonder if it's worth someone creating a \"PostgreSQL Powertools\"\n> > type of package, that includes in one download all of these nifty\n> > tools (pg_autotune, oid2name, etc) that would be beneficial to have\n> > compiled and already available. Kind of like \"contrib\" is (oid2name is\n> > already there I know), but so people don't have to go hunting all over GBorg\n> > to find the bits that they'd want.\n> >\n> That would be wonderful if it included some of the more stable tools /\n> add-ons that have been removed from the main distribution or have\n> existed independent of the main PostgreSQL development.\n\nHi Thomas,\n\nWant to get it together?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 17 Oct 2002 06:00:59 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: v7.3 Branched ..." }, { "msg_contents": "Perhaps one could just create a \"PostgreSQL Powertools\" section on\ntechdocs, naming the packages and where to get them. 
This would\neliminate the need to maintain a package that just duplicates other\npackages...\n\nRobert Treat\n\nOn Wed, 2002-10-16 at 16:00, Justin Clift wrote:\n> > Thomas Swan wrote:\n> > \n> > Justin Clift wrote:\n> <snip>\n> > > Ok. Wonder if it's worth someone creating a \"PostgreSQL Powertools\"\n> > > type of package, that includes in one download all of these nifty\n> > > tools (pg_autotune, oid2name, etc) that would be beneficial to have\n> > > compiled and already available. Kind of like \"contrib\" is (oid2name is\n> > > already there I know), but so people don't have to go hunting all over GBorg\n> > > to find the bits that they'd want.\n> > >\n> > That would be wonderful if it included some of the more stable tools /\n> > add-ons that have been removed from the main distribution or have\n> > existed independent of the main PostgreSQL development.\n> \n> Hi Thomas,\n> \n> Want to get it together?\n> \n> :-)\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> \n> -- \n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n\n", "msg_date": "16 Oct 2002 16:56:42 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: v7.3 Branched ..." }, { "msg_contents": "On Wed, 2002-10-16 at 16:56, Robert Treat wrote:\n> Perhaps one could just create a \"PostgreSQL Powertools\" section on\n> techdocs, naming the packages and where to get them. 
This would\n> eliminate the need to maintain a package that just duplicates other\n> packages...\n\nLet ye-old package managers make a shell package which simply points to\nthe others as dependencies.\n\nI'd be willing to do this for FreeBSD (think Sean? would help as well)\nif someone comes up with the list.\n\n> On Wed, 2002-10-16 at 16:00, Justin Clift wrote:\n> > > Thomas Swan wrote:\n> > > \n> > > Justin Clift wrote:\n> > <snip>\n> > > > Ok. Wonder if it's worth someone creating a \"PostgreSQL Powertools\"\n> > > > type of package, that includes in one download all of these nifty\n> > > > tools (pg_autotune, oid2name, etc) that would be beneficial to have\n> > > > compiled and already available. Kind of like \"contrib\" is (oid2name is\n> > > > already there I know), but so people don't have to go hunting all over GBorg\n> > > > to find the bits that they'd want.\n> > > >\n> > > That would be wonderful if it included some of the more stable tools /\n> > > add-ons that have been removed from the main distribution or have\n> > > existed independent of the main PostgreSQL development.\n> > \n> > Hi Thomas,\n> > \n> > Want to get it together?\n> > \n> > :-)\n> > \n> > Regards and best wishes,\n> > \n> > Justin Clift\n> > \n> > \n> > -- \n> > \"My grandfather once told me that there are two kinds of people: those\n> > who work and those who take the credit. He told me to try to be in the\n> > first group; there was less competition there.\"\n> > - Indira Gandhi\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n-- \n Rod Taylor\n\n", "msg_date": "16 Oct 2002 17:05:09 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Re: v7.3 Branched ..." 
}, { "msg_contents": "On Wednesday 16 October 2002 05:05 pm, Rod Taylor wrote:\n> On Wed, 2002-10-16 at 16:56, Robert Treat wrote:\n> > Perhaps one could just create a \"PostgreSQL Powertools\" section on\n> > techdocs, naming the packages and where to get them. This would\n> > eliminate the need to maintain a package that just duplicates other\n> > packages...\n\n> Let ye-old package managers make a shell package which simply points to\n> the others as dependencies.\n\nI'm going to attempt to do up RPMs of all those.... :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 16 Oct 2002 17:11:42 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: v7.3 Branched ..." }, { "msg_contents": "On Wed, 2002-10-16 at 16:05, Rod Taylor wrote:\n> On Wed, 2002-10-16 at 16:56, Robert Treat wrote:\n> > Perhaps one could just create a \"PostgreSQL Powertools\" section on\n> > techdocs, naming the packages and where to get them. This would\n> > eliminate the need to maintain a package that just duplicates other\n> > packages...\n> \n> Let ye-old package managers make a shell package which simply points to\n> the others as dependencies.\nSort of like a meta-port? \n> \n> I'd be willing to do this for FreeBSD (think Sean? would help as well)\n> if someone comes up with the list.\nThat would be useful, and port(s) for the rest of contrib as well (like\ncontrib/tsearch). \n\n:-)\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "16 Oct 2002 16:13:00 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: v7.3 Branched ..." }, { "msg_contents": "On Thu, 17 Oct 2002, Justin Clift wrote:\n\n> > Thomas Swan wrote:\n> >\n> > Justin Clift wrote:\n> <snip>\n> > > Ok. 
Wonder if it's worth someone creating a \"PostgreSQL Powertools\"\n> > > type of package, that includes in one download all of these nifty\n> > > tools (pg_autotune, oid2name, etc) that would be beneficial to have\n> > > compiled and already available. Kind of like \"contrib\" is (oid2name is\n> > > already there I know), but so people don't have to go hunting all over GBorg\n> > > to find the bits that they'd want.\n> > >\n> > That would be wonderful if it included some of the more stable tools /\n> > add-ons that have been removed from the main distribution or have\n> > existed independent of the main PostgreSQL development.\n>\n> Hi Thomas,\n>\n> Want to get it together?\n\nJust a thought, and I've included chris in this ... is there some way of\nsetting up maybe a 'meta package' on Gborg that would auto-pull in and\npackage stuff like this?\n\nFor instance, in FreeBSD ports, you can make such that when you type in\n'make', it just goes to other ports and builds/installs those ...\n\nBaring that, how about the ability to create a new category that is\nmaintained by someone that various project maintains could 'cross-link'\ntheir projects into?\n\n\n\n", "msg_date": "Fri, 18 Oct 2002 00:30:30 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: v7.3 Branched ..." }, { "msg_contents": "\nOn a different note ... if anyone out there would like to maintain/package\nup binaries for various OS similar to what Lamar does with RPMs, I'd love\nto see us extend our binaries section on the ftp server ...\n\nOn 16 Oct 2002, Larry Rosenman wrote:\n\n> On Wed, 2002-10-16 at 16:05, Rod Taylor wrote:\n> > On Wed, 2002-10-16 at 16:56, Robert Treat wrote:\n> > > Perhaps one could just create a \"PostgreSQL Powertools\" section on\n> > > techdocs, naming the packages and where to get them. 
This would\n> > > eliminate the need to maintain a package that just duplicates other\n> > > packages...\n> >\n> > Let ye-old package managers make a shell package which simply points to\n> > the others as dependencies.\n> Sort of like a meta-port?\n> >\n> > I'd be willing to do this for FreeBSD (think Sean? would help as well)\n> > if someone comes up with the list.\n> That would be useful, and port(s) for the rest of contrib as well (like\n> contrib/tsearch).\n>\n> :-)\n>\n>\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>\n>\n\n", "msg_date": "Fri, 18 Oct 2002 00:33:54 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Various OS Binaries (Was: Re: v7.3 Branched ...)" }, { "msg_contents": "> > Perhaps one could just create a \"PostgreSQL Powertools\" section on\n> > techdocs, naming the packages and where to get them. This would\n> > eliminate the need to maintain a package that just duplicates other\n> > packages...\n> \n> Let ye-old package managers make a shell package which simply points to\n> the others as dependencies.\n> \n> I'd be willing to do this for FreeBSD (think Sean? would help as well)\n> if someone comes up with the list.\n\nThere is a postgresql-devel port in FreeBSD now that I am maintaining\nthat is where DBAs and anxious developers can cut their teeth on the\nnew features/bugs/interactions in PostgreSQL. As soon as we get out\nof beta here, I'm going to likely get in the habbit of updating the\nport once a month or so with snapshots from the tree.\n\nFWIW, at some point I'm going to SPAM the CVS tree with a\nPOSTGRESQL_PORT tunable that will let users decide which PostgreSQL\ninstance they want (stable version vs -devel). I've been really busy\nrecently and haven't gotten around to double checking things since I\nmade the changes a month ago during the freeze. 
Maybe this weekend\nI'll get around to touching down on all of the various files.... no\npromises, I'm getting ready to move. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 21 Oct 2002 23:56:02 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: v7.3 Branched ..." } ]
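The "meta-port" idea discussed above can be sketched as a FreeBSD port Makefile that installs nothing itself and only pulls in the individual tool ports as run-time dependencies. This is an illustrative sketch only — the port name, origins, and maintainer address below are assumptions, not an actual ports-tree entry:

```makefile
# Hypothetical "powertools" meta-port: builds and installs nothing of
# its own, just depends on the individual tool ports.  Origins below
# are made up for illustration.
PORTNAME=	postgresql-powertools
PORTVERSION=	1.0
CATEGORIES=	databases
MASTER_SITES=	# none -- meta-port only
DISTFILES=	# none

MAINTAINER=	someone@example.org

RUN_DEPENDS=	oid2name:${PORTSDIR}/databases/pg-oid2name \
		pg_autotune:${PORTSDIR}/databases/pg_autotune

NO_BUILD=	yes

do-install:	# empty; everything comes from the dependencies

.include <bsd.port.mk>
```

Installing such a port would then drag in each listed tool, which is exactly the "shell package which simply points to the others as dependencies" suggested in the thread.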
[ { "msg_contents": "Was there a workaround for the errors in time handling for rh7.3 dist?\n\nI get there regression failures:\n abstime ... FAILED\n tinterval ... FAILED\ntest horology ... FAILED\n\nI remember the discussion about old dates, but not if there was any fix for it...\n\nMagnus\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Sat, 28 Sep 2002 20:07:44 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "The rh7.3 time errors" }, { "msg_contents": "Magnus Naeslund(f) wrote:\n> Was there a workaround for the errors in time handling for rh7.3 dist?\n> \n> I get there regression failures:\n> abstime ... FAILED\n> tinterval ... FAILED\n> test horology ... FAILED\n> \n> I remember the discussion about old dates, but not if there was any fix for it...\n> \n\nTom fixed this just before we went into beta. Are you using a recent snapshot?\n\nJoe\n\n\n\n", "msg_date": "Sat, 28 Sep 2002 11:08:22 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: The rh7.3 time errors" }, { "msg_contents": "Joe Conway <mail@joeconway.com> wrote:\n> Magnus Naeslund(f) wrote:\n>> Was there a workaround for the errors in time handling for rh7.3\n>> dist? \n>> \n>> I get there regression failures:\n>> abstime ... FAILED\n>> tinterval ... FAILED\n>> test horology ... FAILED\n>> \n>> I remember the discussion about old dates, but not if there was any\n>> fix for it... \n>> \n> \n> Tom fixed this just before we went into beta. Are you using a recent\n> snapshot? 
\n> \n> Joe\n\nAs usual, i never remember to supply version information.\nI'm using latest stable, 7.2.2.\nIs there a quick workaround for this version, or must there be code ?\n\nMagnus\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n\nMagnus\n", "msg_date": "Sat, 28 Sep 2002 20:20:59 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "Re: The rh7.3 time errors" }, { "msg_contents": "Magnus Naeslund(f) wrote:\n> Joe Conway <mail@joeconway.com> wrote:\n> > Magnus Naeslund(f) wrote:\n> >> Was there a workaround for the errors in time handling for rh7.3\n> >> dist? \n> >> \n> >> I get there regression failures:\n> >> abstime ... FAILED\n> >> tinterval ... FAILED\n> >> test horology ... FAILED\n> >> \n> >> I remember the discussion about old dates, but not if there was any\n> >> fix for it... \n> >> \n> > \n> > Tom fixed this just before we went into beta. Are you using a recent\n> > snapshot? \n> > \n> > Joe\n> \n> As usual, i never remember to supply version information.\n> I'm using latest stable, 7.2.2.\n> Is there a quick workaround for this version, or must there be code ?\n\nThe change was to use localtime() rather than mktime() in the code. \nThere is no workaround available for 7.2.X, and I don't see that anyone\nbackpatched it to 7.2 CVS. However, we are considering a 7.2.3 and a\nbackpatch of that fix may be worthwhile.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 28 Sep 2002 14:44:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: The rh7.3 time errors" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n> \n> The change was to use localtime() rather than mktime() in the code.\n> There is no workaround available for 7.2.X, and I don't see that\n> anyone backpatched it to 7.2 CVS. However, we are considering a\n> 7.2.3 and a backpatch of that fix may be worthwhile.\n> \n\nThat would be excellent, because it feels awkward installing stuff that doesn't pass the regression tests, as all our new linux boxes will be rh7.3.\nBut right now in our apps we're not relying on the time being right (isn't that the issue?) only the years...\n\nIf it's a simple fix, i think we should include that in the next 7.2.X .\n\nMagnus\n\n-- \n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n Programmer/Networker [|] Magnus Naeslund\n-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-\n", "msg_date": "Sat, 28 Sep 2002 21:05:31 +0200", "msg_from": "\"Magnus Naeslund(f)\" <mag@fbab.net>", "msg_from_op": true, "msg_subject": "Re: The rh7.3 time errors" } ]
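The fix Bruce refers to — "use localtime() rather than mktime() in the code" — amounts to reading the zone offset out of a localtime() result instead of asking mktime() to reverse-map a broken-down time, which misbehaved for out-of-range dates on Red Hat 7.3's glibc. The following is a minimal sketch of the localtime() side only, not the actual backend patch; `zone_offset_at` is a made-up name, and `tm_gmtoff` is a glibc/BSD extension to struct tm:

```c
#define _GNU_SOURCE          /* expose tm_gmtoff in struct tm on glibc */
#include <assert.h>
#include <time.h>

/* Return the local zone's offset from UTC, in seconds, at the given
 * instant.  localtime() fills the offset in directly, so no reverse
 * mapping through mktime() is needed. */
long
zone_offset_at(time_t when)
{
    struct tm *tm = localtime(&when);
    return tm ? tm->tm_gmtoff : 0;
}
```

With TZ set to UTC, `zone_offset_at(time(NULL))` returns 0; other zones return their offset in seconds.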
[ { "msg_contents": "Hi all,\n\nWould it be beneficial for us to extend \"pg_config\" to update the\npostgresql.conf file?\n\ni.e.\n\npg_config --sort_mem 16384 --shared_buffers 800\n\npg_config -d /some/datadir --sort_mem 16384 --shared_buffers 800\n\netc?\n\nNot sure if it should trigger a restart of postmaster, etc, but the\nconcept sounds useful.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 29 Sep 2002 15:07:46 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "pg_config : postgresql.conf adjustments?" }, { "msg_contents": "Justin Clift writes:\n\n> Would it be beneficial for us to extend \"pg_config\" to update the\n> postgresql.conf file?\n\nThat has nothing to do with pg_config's functions.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 29 Sep 2002 14:29:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_config : postgresql.conf adjustments?" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Justin Clift writes:\n> \n> > Would it be beneficial for us to extend \"pg_config\" to update the\n> > postgresql.conf file?\n> \n> That has nothing to do with pg_config's functions.\n\nAt present, sure. Was thinking a tool for command line changes of\npostgresql.conf parameters would be useful, then thought about what such\na tool would be named. \"pg_cfg\" was a thought, as was \"pg_config\".\n\nHowever we already have a pg_config. At present it's purpose is in the\nrealm of reporting the installation configuration of PostgreSQL. 
Was\nthinking that adding the ability to do more than \"report\" stuff, but\nalso to \"make changes\" isn't that bad an idea.\n\n?\n\nRegards and best wishes,\n\nJustin Clift\n \n> --\n> Peter Eisentraut peter_e@gmx.net\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Sun, 29 Sep 2002 22:35:55 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: pg_config : postgresql.conf adjustments?" }, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> Would it be beneficial for us to extend \"pg_config\" to update the\n> postgresql.conf file?\n\nThis seems far outside pg_config's charter. It is a simple\ninformation reporter that can be run by anybody. Making it able\nto mess with (or even look at) postgresql.conf introduces a host\nof permissions problems and logistical issues.\n\nI don't really see what's wrong with using a text editor anyway ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 29 Sep 2002 11:03:30 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_config : postgresql.conf adjustments? " }, { "msg_contents": "On Sun, 29 Sep 2002, Tom Lane wrote:\n\n> Justin Clift <justin@postgresql.org> writes:\n> > Would it be beneficial for us to extend \"pg_config\" to update the\n> > postgresql.conf file?\n> \n> This seems far outside pg_config's charter. It is a simple\n> information reporter that can be run by anybody. Making it able\n> to mess with (or even look at) postgresql.conf introduces a host\n> of permissions problems and logistical issues.\n> \n> I don't really see what's wrong with using a text editor anyway ;-)\n\nObviously he wants a tool that allows setting parameters from a shell\nscript or something for use within pg_autotune. 
I don't see why it is\nbad to have a tool to do this; if someone can use it (and modify\npostgresql.conf) obviously he has permission to read (and write)\npostgresql.conf.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Hoy es el primer dia del resto de mi vida\"\n\n", "msg_date": "Sun, 29 Sep 2002 12:34:45 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: pg_config : postgresql.conf adjustments? " }, { "msg_contents": "Alvaro Herrera wrote:\n> Obviously he wants a tool that allows setting parameters from a shell\n> script or something for use within pg_autotune. I don't see why it is\n> bad to have a tool to do this; if someone can use it (and modify\n> postgresql.conf) obviously he has permission to read (and write)\n> postgresql.conf.\n> \n\nBut, if that's the case, why not just:\n\n1. send e.g. \"set sort_mem=8192\" as an SQL statement for runtime changeable\n parameters\n2. use e.g. \"pg_ctl restart -D $PGDATA -o '--shared_buffers=10000'\" for those\n parameters requiring a restart\n\n\nJoe\n\n\n\n", "msg_date": "Sun, 29 Sep 2002 10:55:02 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: pg_config : postgresql.conf adjustments?" }, { "msg_contents": "Joe Conway wrote:\n> \n> Alvaro Herrera wrote:\n> > Obviously he wants a tool that allows setting parameters from a shell\n> > script or something for use within pg_autotune. I don't see why it is\n> > bad to have a tool to do this; if someone can use it (and modify\n> > postgresql.conf) obviously he has permission to read (and write)\n> > postgresql.conf.\n> >\n> \n> But, if that's the case, why not just:\n> \n> 1. send e.g. \"set sort_mem=8192\" as an SQL statement for runtime changeable\n> parameters\n> 2. use e.g. \"pg_ctl restart -D $PGDATA -o '--shared_buffers=10000'\" for those\n> parameters requiring a restart\n\nDoesn't allow for scriptable permanent changes, only runtime ones. 
Was\njust trying to think of a \"optimal\" end user solution. Totally hadn't\nthought of the 'set sort_mem=xxx' option either, but it might work for\nthe next version of pg_autotune (am going to have to re-write it\nanyway).\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> Joe\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 30 Sep 2002 04:08:37 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: pg_config : postgresql.conf adjustments?" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> On Sun, 29 Sep 2002, Tom Lane wrote:\n>> This seems far outside pg_config's charter.\n\n> Obviously he wants a tool that allows setting parameters from a shell\n> script or something for use within pg_autotune. I don't see why it is\n> bad to have a tool to do this; if someone can use it (and modify\n> postgresql.conf) obviously he has permission to read (and write)\n> postgresql.conf.\n\nWell, you could do that with a sed command (twinkle). But I wasn't\nnecessarily objecting to the abstract notion of having such a tool ...\nI just don't think it's in pg_config's scope. You could more easily\nmake a case for adding the functionality to pg_ctl. Or make a new tool.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Sep 2002 00:13:04 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_config : postgresql.conf adjustments? " } ]
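For script-driven permanent changes of the kind discussed above, Tom's sed suggestion is enough in practice. A throwaway sketch — the file name and values are illustrative, and a real script would point at the live $PGDATA/postgresql.conf:

```shell
# Work on a scratch copy of postgresql.conf for illustration.
CONF=postgresql.conf.demo
cat > "$CONF" <<'EOF'
shared_buffers = 64
sort_mem = 512
EOF

# Rewrite one parameter from the command line, sed-style: write to a
# temporary copy, then move it into place.
sed 's/^sort_mem *=.*/sort_mem = 8192/' "$CONF" > "$CONF.tmp" && mv "$CONF.tmp" "$CONF"

grep '^sort_mem' "$CONF"
```

After editing the real file, `pg_ctl reload` makes runtime-changeable settings take effect, while parameters such as shared_buffers still need a restart, as noted in the thread.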
[ { "msg_contents": "Hi, all\n\nDoes 7.3 support \"SETOF RECORD\" in plpgsql ?\nAs far as I test it, a function using it in plpgsql always seems to return\nno row. On the other hand, a sql function returns correct rows. \n\nIf 7.3 doesn't support it in plpgsql, I would think plpgsql needs to raise\nan error rather than return \"0 rows\" message. Am I misunderstanding\nhow to use? \n\n\n------------------------------------------------------\nCREATE TABLE test (a integer, b text);\nINSERT INTO test VALUES(1, 'function1');\nINSERT INTO test VALUES(2, 'function2');\nINSERT INTO test VALUES(1, 'function11');\nINSERT INTO test VALUES(2, 'function22');\n\n\nCREATE OR REPLACE FUNCTION myfunc(integer) RETURNS SETOF record AS '\n DECLARE\n rec record;\n BEGIN\n FOR rec IN SELECT * FROM test WHERE a = $1 LOOP\n RAISE NOTICE ''a = %, b = %'',rec.a, rec.b;\n END LOOP; \n RETURN rec;\n END;\n' LANGUAGE 'plpgsql';\n\nSELECT * FROM myfunc(1) AS t(a integer, b text);\n\nNOTICE: a = 1, b = function1\nNOTICE: a = 1, b = function11\n a | b \n---+---\n(0 rows)\n\n\n\nCREATE OR REPLACE FUNCTION myfunc(integer) RETURNS SETOF record AS '\n SELECT * FROM test WHERE a = $1;\n' LANGUAGE 'sql';\n\nSELECT * FROM myfunc(1) AS t(a integer, b text);\n\n a | b \n---+------------\n 1 | function1\n 1 | function11\n(2 rows)\n\n\n\nRegards,\nMasaru Sugawara\n\n\n", "msg_date": "Sun, 29 Sep 2002 19:51:01 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": true, "msg_subject": "Does setof record in plpgsql work well in 7.3?" 
}, { "msg_contents": "CREATE OR REPLACE FUNCTION myfunc(integer) RETURNS SETOF record AS '\n DECLARE\n rec record;\n BEGIN\n FOR rec IN SELECT * FROM test WHERE a = $1 LOOP\n RAISE NOTICE ''a = %, b = %'',rec.a, rec.b;\n RETURN NEXT rec;\n END LOOP;\n\n RETURN null;\n END;\n ' LANGUAGE 'plpgsql';\n\n SELECT * FROM myfunc(1) AS t(a integer, b text);\n\nNote the use of the \"RETURN NEXT rec\" line in the body\nof the for loop, and also the \"RETURN null\" at the end.\n\nIt is also possible to create typed returns, so in this\ncase, in the declare body, the following would be valid.\nDECLARE\n rec test%ROWTYPE;\n\nThe function definition then becomes:-\n CREATE OR REPLACE FUNCTION myfunc(integer) RETURNS SETOF test ...\n\nOne can also create your own return type in the following\nmanner.\n\ncreate type my_return_type as (\n foo integer,\n bar text\n);\n\nNow, the declare block has the following:-\nDECLARE\n rec my_return_type%ROWTYPE\n\nThe function definition then becomes:-\n CREATE OR REPLACE FUNCTION myfunc(integer) RETURNS SETOF my_return_type ...\n\nRegards,\nGrant Finnemore\n\nMasaru Sugawara wrote:\n> Hi, all\n> \n> Does 7.3 support \"SETOF RECORD\" in plpgsql ?\n> As far as I test it, a function using it in plpgsql always seems to return\n> no row. On the other hand, a sql function returns correct rows. \n> \n> If 7.3 doesn't support it in plpgsql, I would think plpgsql needs to raise\n> an error rather than return \"0 rows\" message. Am I misunderstanding\n> how to use? 
\n> \n> \n> ------------------------------------------------------\n> CREATE TABLE test (a integer, b text);\n> INSERT INTO test VALUES(1, 'function1');\n> INSERT INTO test VALUES(2, 'function2');\n> INSERT INTO test VALUES(1, 'function11');\n> INSERT INTO test VALUES(2, 'function22');\n> \n> \n> CREATE OR REPLACE FUNCTION myfunc(integer) RETURNS SETOF record AS '\n> DECLARE\n> rec record;\n> BEGIN\n> FOR rec IN SELECT * FROM test WHERE a = $1 LOOP\n> RAISE NOTICE ''a = %, b = %'',rec.a, rec.b;\n> END LOOP; \n> RETURN rec;\n> END;\n> ' LANGUAGE 'plpgsql';\n> \n> SELECT * FROM myfunc(1) AS t(a integer, b text);\n> \n> NOTICE: a = 1, b = function1\n> NOTICE: a = 1, b = function11\n> a | b \n> ---+---\n> (0 rows)\n> \n> \n> \n> CREATE OR REPLACE FUNCTION myfunc(integer) RETURNS SETOF record AS '\n> SELECT * FROM test WHERE a = $1;\n> ' LANGUAGE 'sql';\n> \n> SELECT * FROM myfunc(1) AS t(a integer, b text);\n> \n> a | b \n> ---+------------\n> 1 | function1\n> 1 | function11\n> (2 rows)\n> \n> \n> \n> Regards,\n> Masaru Sugawara\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n>", "msg_date": "Sun, 29 Sep 2002 13:42:43 +0200", "msg_from": "Grant Finnemore <grantf@guruhut.co.za>", "msg_from_op": false, "msg_subject": "Re: Does setof record in plpgsql work well in 7.3?" }, { "msg_contents": "On Sun, 29 Sep 2002 13:42:43 +0200\nGrant Finnemore <grantf@guruhut.co.za> wrote:\n\n> Note the use of the \"RETURN NEXT rec\" line in the body\n> of the for loop, and also the \"RETURN null\" at the end.\n> \n> It is also possible to create typed returns, so in this\n> case, in the declare body, the following would be valid.\n> DECLARE\n> rec test%ROWTYPE;\n> \n> The function definition then becomes:-\n> CREATE OR REPLACE FUNCTION myfunc(integer) RETURNS SETOF test ...\n\n\nThank you for your useful info. 
the previous function turned out to work\ncorrectly by using \"RETURN NEXT rec.\" And, I found out that plpgsql was\nable to nest one.\n\n\n-- for example\nCREATE OR REPLACE FUNCTION myfunc(integer) RETURNS SETOF record AS '\n DECLARE\n rec1 record;\n rec2 record;\n rec3 record;\n BEGIN\n SELECT INTO rec1 max(a) AS max_a FROM test;\n \n FOR rec2 IN SELECT * FROM test WHERE a = $1 LOOP\n SELECT INTO rec3 * FROM\n (SELECT 1::integer AS a, ''test''::text AS b) AS t;\n RETURN NEXT rec3;\n rec2.a = rec2.a + rec3.a + rec1.max_a;\n RETURN NEXT rec2;\n END LOOP;\n RETURN NEXT rec3;\n \n RETURN;\n END;\n' LANGUAGE 'plpgsql';\n\nSELECT * FROM myfunc(1) AS t(a integer, b text);\n\n\n a | b \n---+------------\n 1 | test\n 5 | function1\n 1 | test\n 5 | function11\n 1 | test\n(5 rows)\n\n\n\nRegards,\nMasaru Sugawara\n\n\n", "msg_date": "Mon, 30 Sep 2002 00:58:02 +0900", "msg_from": "Masaru Sugawara <rk73@sea.plala.or.jp>", "msg_from_op": true, "msg_subject": "Re: Does setof record in plpgsql work well in 7.3?" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: CoL [mailto:col@mportal.hu] \n> Sent: 24 September 2002 13:23\n> To: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Web site\n> \n> \n> Hi,\n> \n> >>So, why not just redirect people to one of the mirrors listed? This \n> >>could be done based on IP (yes it is inaccurate but it is \n> close enough \n> >>and has the same net effect: pushing people off the main \n> web server) \n> >>or it could be done by simply redirecting to a random mirror.\n> I think it would be stupid, I am, who wants to decide where \n> to go. If I \n> feel that .co.uk is better than others I'll chose that, and \n> bookmark if \n> I want.\n> (random??? brbrbrbrbr) :)\n\nI think it's safe to say we will *not* be doing this...\n\nRegards, Dave.\n", "msg_date": "Sun, 29 Sep 2002 19:51:44 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Web site" } ]
[ { "msg_contents": "Hi,\n\nNow that the ODBC driver has moved from the main distro to\nhttp://gborg.postgresql.org/project/psqlodbc/, we can no longer use the\nmain build system under *nix.\n\nCan someone who knows make better than I (which is probably the vast\nmajority of you!) knock up a makefile so the driver will build\nstandalone on *nix systems please? There should be no dependencies on\nany of the rest of the code - certainly there isn't for the Win32 build.\n\nThere are changes to support 7.3 so this is fairly urgent... (maybe it\nshould be added to the open items list Bruce?).\n\nThanks, Dave\n", "msg_date": "Sun, 29 Sep 2002 20:11:53 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "psqlODBC *nix Makefile (new 7.3 open item?)" }, { "msg_contents": "Dave Page writes:\n\n> Can someone who knows make better than I (which is probably the vast\n> majority of you!) knock up a makefile so the driver will build\n> standalone on *nix systems please? There should be no dependencies on\n> any of the rest of the code - certainly there isn't for the Win32 build.\n\nI'm working something out. I'll send it to you tomorrow.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 30 Sep 2002 22:10:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: psqlODBC *nix Makefile (new 7.3 open item?)" } ]
[ { "msg_contents": "\nPeter, the author is questioning why his Makefile changes were wrong. \nWould you elaborate?\n\n---------------------------------------------------------------------------\n\npgman wrote:\n> \n> Done.\n> \n> ---------------------------------------------------------------------------\n> \n> Peter Eisentraut wrote:\n> > Please revert the Makefile part of this patch.\n> > \n> > Bruce Momjian - CVS writes:\n> > \n> > > CVSROOT:\t/cvsroot\n> > > Module name:\tpgsql\n> > > Changes by:\tmomjian@postgresql.org\t02/03/06 15:41:38\n> > >\n> > > Modified files:\n> > > \tcontrib/rserv : ApplySnapshot.in CleanLog.in GetSyncID.in\n> > > \t Makefile MasterSync.in PrepareSnapshot.in\n> > > \t Replicate.in\n> > >\n> > > Log message:\n> > > \tThis simple patch fixes broken Makefile, broken ApplySnapshot and\n> > > \tmakes all utilities honour --verbose command line option.\n> > >\n> > > \t--\n> > > \tYours, Alexey V. Borzov, Webmaster of RDW.ru\n> > >\n> > >\n> > \n> > -- \n> > Peter Eisentraut peter_e@gmx.net\n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 29 Sep 2002 18:32:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [COMMITTERS] pgsql/contrib/rserv ApplySnapshot.in CleanLog. ..." 
}, { "msg_contents": "Bruce Momjian writes:\n\n> Peter, the author is questioning why his Makefile changes were wrong.\n> Would you elaborate?\n\nBecause we rely on the built-in library lookup functionality instead of\nhardcoding the full file name.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 30 Sep 2002 23:42:46 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/contrib/rserv ApplySnapshot.in CleanLog. ..." }, { "msg_contents": "Hello Peter,\n\nTuesday, October 01, 2002, 1:42:46 AM, you wrote:\n\nPE> Bruce Momjian writes:\n\n>> Peter, the author is questioning why his Makefile changes were wrong.\n>> Would you elaborate?\n\nPE> Because we rely on the built-in library lookup functionality instead of\nPE> hardcoding the full file name.\n\n Agh! I finally read up on module loading\nhttp://developer.postgresql.org/docs/postgres/xfunc-c.html#XFUNC-C-DYNLOAD\n and now I seem to understand. You see, the problem with the current\n Makefile is as follows: it substitutes '$libdir' into both .sql and\n perl files. While this is good enough for sql, $libdir is consumed\n by Perl and thus perl scripts do NOT work.\n Thanks for elaborating, I'll try to produce a better patch ASAP.\n\n-- \nAlexey V. Borzov\nhttp://www.rdw.ru\nhttp://www.vashdosug.ru\n\n", "msg_date": "Tue, 1 Oct 2002 09:00:29 +0400", "msg_from": "\"Alexey V. Borzov\" <borz_off@rdw.ru>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/contrib/rserv ApplySnapshot.in CleanLog. ..." }, { "msg_contents": "Alexey V. Borzov writes:\n\n> Agh! I finally read up on module loading\n> http://developer.postgresql.org/docs/postgres/xfunc-c.html#XFUNC-C-DYNLOAD\n> and now I seem to understand. 
You see, the problem with the current\n> Makefile is as follows: it substitutes '$libdir' into both .sql and\n> perl files. While this is good enough for sql, $libdir is consumed\n> by Perl and thus perl scripts do NOT work.\n\nThen fix the Perl scripts. Keep the bizarre code close to the cause, so\nit's easier to maintain.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 1 Oct 2002 22:03:40 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql/contrib/rserv ApplySnapshot.in CleanLog." } ]
[ { "msg_contents": "I read a good article about the problem Intel is having with the 64-bit\nItanium. I think there are some good leasons in the article:\n\n\thttp://www.nytimes.com/2002/09/29/technology/circuits/29CHIP.html\n\nThere is also a Slashdot discussion about the article:\n\n\thttp://slashdot.org/article.pl?sid=02/09/29/1752204&mode=nested&tid=118\n\nAlso, here is an article describing the x86 4MB page sizes used by the\nLinux TLB code:\n\n\thttp://www.rcollins.org/ddj/May96/May96.html\n\nx86 usually uses two levels of directory/page tables, while the 4MB\nversion uses only the page directory.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 29 Sep 2002 20:26:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Intel Itanium, TLB" } ]
[ { "msg_contents": "I am using postgres 7.2, and have rule on a table which causes a notify if\nan insert/update/delete is performed on the table.\nThe table is very very small.\nWhen performing a simple (very simple) update on the table this takes about\n3 secs, when I remove the rule it is virtually instantaneous.\nThe rest of the database seems to perform fine, have you any ideas or come\nacross this before??\n\nRegards\nSteve\n\n", "msg_date": "Mon, 30 Sep 2002 09:14:00 +0100", "msg_from": "Steve King <steve.king@ecmsys.co.uk>", "msg_from_op": true, "msg_subject": "Bad rules" }, { "msg_contents": "Steve King <steve.king@ecmsys.co.uk> writes:\n> I am using postgres 7.2, and have rule on a table which causes a notify if\n> an insert/update/delete is performed on the table.\n> The table is very very small.\n> When performing a simple (very simple) update on the table this takes about\n> 3 secs, when I remove the rule it is virtually instantaneous.\n> The rest of the database seems to perform fine, have you any ideas or come\n> across this before??\n\nLet's see the rule exactly? NOTIFY per se is not slow in my experience.\n\n(One thing to ask: have you done a VACUUM FULL on pg_listener in recent\nmemory? Heavy use of LISTEN/NOTIFY does tend to bloat that table if you\ndon't keep after it with VACUUM.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 10:48:10 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bad rules " } ]
[ { "msg_contents": "Hi all,\nI'm trying the following:\nBEGIN;\nselect current_timestamp into mytable;\n.\nsome queries\n.\ninsert current timestamp into mytable;\nCOMMIT;\nWhen I call this with the \\i <filename> command, all is working fine,\nbut the two current_timestamp entries are the same, there is no\ndifference between them but there should. So I've tried:\nBEGIN;\nselect current_timestamp into mytable;\n.\nsome queries\n.\nCOMMIT;\nBEGIN;\ninsert current_timestamp into mytable;\nCOMMIT;\nand now the entries are different.\nI think that the accuracy is not good enough because I've started two\nBEGIN statements and some time is elapsing between them. Am I right?\nOr does anybody know a better solution to store the elapsed time after\nsome queries without writing some code in C or JAVA?\n\nThanks in advance\nGuido Staub\n\n\nHi all,\nI'm trying the following:\nBEGIN;\nselect current_timestamp into mytable;\n.\nsome queries\n.\ninsert current timestamp into mytable;\nCOMMIT;\nWhen I call this with the \\i <filename> command, all is working\nfine, but the two current_timestamp entries are the same, there is no difference\nbetween them but there should. So I've tried:\nBEGIN;\nselect current_timestamp into mytable;\n.\nsome queries\n.\nCOMMIT;\nBEGIN;\ninsert current_timestamp into mytable;\nCOMMIT;\nand now the entries are different.\nI think that the accuracy is not good enough because I've started two\nBEGIN statements and some time is elapsing between them. 
Am I right?\nOr does anybody know a better solution to store the elapsed time after\nsome queries without writing some code in C or JAVA?\nThanks in advance\nGuido Staub", "msg_date": "Mon, 30 Sep 2002 11:44:17 +0200", "msg_from": "Guido Staub <staub@gik.uni-karlsruhe.de>", "msg_from_op": true, "msg_subject": "current_timestamp after queries" }, { "msg_contents": "On Mon, Sep 30, 2002 at 11:44:17AM +0200, Guido Staub wrote:\n\n[some current_timestamp stuff] \n\n> I think that the accuracy is not good enough because I've started two\n> BEGIN statements and some time is elapsing between them. Am I right?\n> Or does anybody know a better solution to store the elapsed time after\n> some queries without writing some code in C or JAVA?\n\nPerhaps you're looking for timeofday()?\n\nkleptog=# begin; select timeofday(); select timeofday(); commit;\nBEGIN\n timeofday \n-------------------------------------\n Mon Sep 30 19:54:41.559605 2002 EST\n(1 row)\n\n timeofday \n-------------------------------------\n Mon Sep 30 19:54:41.560018 2002 EST\n(1 row)\n\nCOMMIT\n\nHope this helps,\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Mon, 30 Sep 2002 19:55:26 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: current_timestamp after queries" }, { "msg_contents": "\nCURRENT_TIMESTAMP returns the time of the transaction start, not the\nstatement start. We are currently discussing on hackers whether this is\ncorrect or not. We don't currently allow you to access the statement\nstart time. 
Sorry.\n\n---------------------------------------------------------------------------\n\nGuido Staub wrote:\n> Hi all,\n> I'm trying the following:\n> BEGIN;\n> select current_timestamp into mytable;\n> .\n> some queries\n> .\n> insert current timestamp into mytable;\n> COMMIT;\n> When I call this with the \\i <filename> command, all is working fine,\n> but the two current_timestamp entries are the same, there is no\n> difference between them but there should. So I've tried:\n> BEGIN;\n> select current_timestamp into mytable;\n> .\n> some queries\n> .\n> COMMIT;\n> BEGIN;\n> insert current_timestamp into mytable;\n> COMMIT;\n> and now the entries are different.\n> I think that the accuracy is not good enough because I've started two\n> BEGIN statements and some time is elapsing between them. Am I right?\n> Or does anybody know a better solution to store the elapsed time after\n> some queries without writing some code in C or JAVA?\n> \n> Thanks in advance\n> Guido Staub\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 11:02:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: current_timestamp after queries" }, { "msg_contents": "\nYes, timeofday() will work, but it can change during the statement, right?\n\n---------------------------------------------------------------------------\n\nMartijn van Oosterhout wrote:\n> On Mon, Sep 30, 2002 at 11:44:17AM +0200, Guido Staub wrote:\n> \n> [some current_timestamp stuff] \n> \n> > I think that the accuracy is not good enough because I've started two\n> > BEGIN statements and some time is elapsing between them. 
Am I right?\n> > Or does anybody know a better solution to store the elapsed time after\n> > some queries without writing some code in C or JAVA?\n> \n> Perhaps you're looking for timeofday()?\n> \n> kleptog=# begin; select timeofday(); select timeofday(); commit;\n> BEGIN\n> timeofday \n> -------------------------------------\n> Mon Sep 30 19:54:41.559605 2002 EST\n> (1 row)\n> \n> timeofday \n> -------------------------------------\n> Mon Sep 30 19:54:41.560018 2002 EST\n> (1 row)\n> \n> COMMIT\n> \n> Hope this helps,\n> -- \n> Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> > There are 10 kinds of people in the world, those that can do binary\n> > arithmetic and those that can't.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 11:02:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: current_timestamp after queries" }, { "msg_contents": "Is this because of time stamp caching and/or transaction coherency\nissues?\n\nGreg\n\n\nOn Mon, 2002-09-30 at 10:02, Bruce Momjian wrote:\n> \n> CURRENT_TIMESTAMP returns the time of the transaction start, not the\n> statement start. We are currently discussing on hackers whether this is\n> correct or not. We don't currently allow you to access the statement\n> start time. 
Sorry.\n> \n> ---------------------------------------------------------------------------\n> \n> Guido Staub wrote:\n> > Hi all,\n> > I'm trying the following:\n> > BEGIN;\n> > select current_timestamp into mytable;\n> > .\n> > some queries\n> > .\n> > insert current timestamp into mytable;\n> > COMMIT;\n> > When I call this with the \\i <filename> command, all is working fine,\n> > but the two current_timestamp entries are the same, there is no\n> > difference between them but there should. So I've tried:\n> > BEGIN;\n> > select current_timestamp into mytable;\n> > .\n> > some queries\n> > .\n> > COMMIT;\n> > BEGIN;\n> > insert current_timestamp into mytable;\n> > COMMIT;\n> > and now the entries are different.\n> > I think that the accuracy is not good enough because I've started two\n> > BEGIN statements and some time is elapsing between them. Am I right?\n> > Or does anybody know a better solution to store the elapsed time after\n> > some queries without writing some code in C or JAVA?\n> > \n> > Thanks in advance\n> > Guido Staub\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org", "msg_date": "30 Sep 2002 10:10:09 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] current_timestamp after queries" }, { "msg_contents": "Greg Copeland wrote:\n-- Start of PGP signed section.\n> Is this because of time stamp caching and/or transaction coherency\n> issues?\n\nIt is because we thought that is what the standard required; now we are\nnot sure.\n\n> \n> Greg\n> \n> \n> On Mon, 2002-09-30 at 10:02, Bruce Momjian wrote:\n> > \n> > CURRENT_TIMESTAMP returns the time of the transaction start, not the\n> > statement start. We are currently discussing on hackers whether this is\n> > correct or not. We don't currently allow you to access the statement\n> > start time. Sorry.\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > Guido Staub wrote:\n> > > Hi all,\n> > > I'm trying the following:\n> > > BEGIN;\n> > > select current_timestamp into mytable;\n> > > .\n> > > some queries\n> > > .\n> > > insert current timestamp into mytable;\n> > > COMMIT;\n> > > When I call this with the \\i <filename> command, all is working fine,\n> > > but the two current_timestamp entries are the same, there is no\n> > > difference between them but there should. So I've tried:\n> > > BEGIN;\n> > > select current_timestamp into mytable;\n> > > .\n> > > some queries\n> > > .\n> > > COMMIT;\n> > > BEGIN;\n> > > insert current_timestamp into mytable;\n> > > COMMIT;\n> > > and now the entries are different.\n> > > I think that the accuracy is not good enough because I've started two\n> > > BEGIN statements and some time is elapsing between them. 
Am I right?\n> > > Or does anybody know a better solution to store the elapsed time after\n> > > some queries without writing some code in C or JAVA?\n> > > \n> > > Thanks in advance\n> > > Guido Staub\n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 11:17:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] current_timestamp after queries" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, timeofday() will work, but it can change during the statement, right?\n\nFor what he was doing, it seemed perfectly acceptable.\n\nThis comes back to the point I've been making during the pghackers\ndiscussion: start-of-transaction time has clear uses, and\nexact-current-time has clear uses, but it's not nearly as obvious\nwhy you'd need start-of-statement time in preference to either of\nthe others.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Sep 2002 11:29:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: current_timestamp after queries " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, timeofday() will work, but it can change during the statement, right?\n> \n> For what he was doing, it seemed perfectly acceptable.\n> \n> This comes back to the point I've been 
making during the pghackers\n> discussion: start-of-transaction time has clear uses, and\n> exact-current-time has clear uses, but it's not nearly as obvious\n> why you'd need start-of-statement time in preference to either of\n> the others.\n\nHow about:\n\n\tBEGIN;\n\tLOCK tab; -- could block\n\tINSERT INTO tab VALUES (..., CURRENT_TIMESTAMP);\n\nIf this is an order-entry application, you would want the statement\nstart time, not the transaction start time. However, if you were\ninserting this into several tables, we would want transaction timestamp\nso it is always the same.\n\nIs someone running Oracle 9 that can test this? We need:\n\n\tBEGIN;\n\tSELECT CURRENT_TIMESTAMP;\n\t-- wait 5 seconds\n\tSELECT CURRENT_TIMESTAMP;\n\nAre those two timestamps the same?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 11:34:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Need Oracle 9 tester. 
was Re: current_timestamp after queries" }, { "msg_contents": "timeofday() works better than current_timestamp, but I have to use a\nCAST-Statement to make it possible to calculate the elapsed time.\nWhat do you mean with change during the statement?\nThe only change I see after using timeofday() is that the calculated time for\ndoing the query the first time and a second time again differ a llittle bit,\nalthough that query1 and query2 are equal in the transaction and that I've used\nthe same relation.\nGuido Staub\n\nBruce Momjian schrieb:\n\n> Yes, timeofday() will work, but it can change during the statement, right?\n>\n> ---------------------------------------------------------------------------\n>\n> Martijn van Oosterhout wrote:\n> > On Mon, Sep 30, 2002 at 11:44:17AM +0200, Guido Staub wrote:\n> >\n> > [some current_timestamp stuff]\n> >\n> > > I think that the accuracy is not good enough because I've started two\n> > > BEGIN statements and some time is elapsing between them. Am I right?\n> > > Or does anybody know a better solution to store the elapsed time after\n> > > some queries without writing some code in C or JAVA?\n> >\n> > Perhaps you're looking for timeofday()?\n> >\n> > kleptog=# begin; select timeofday(); select timeofday(); commit;\n> > BEGIN\n> > timeofday\n> > -------------------------------------\n> > Mon Sep 30 19:54:41.559605 2002 EST\n> > (1 row)\n> >\n> > timeofday\n> > -------------------------------------\n> > Mon Sep 30 19:54:41.560018 2002 EST\n> > (1 row)\n> >\n> > COMMIT\n> >\n> > Hope this helps,\n> > --\n> > Martijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> > > There are 10 kinds of people in the world, those that can do binary\n> > > arithmetic and those that can't.\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> 
--\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n\ntimeofday() works better than current_timestamp, but I have to use a CAST-Statement\nto make it possible to calculate the elapsed time.\nWhat do you mean with change during the statement?\nThe only change I see after using timeofday() is that the calculated\ntime for doing the query the first time and a second time again differ\na llittle bit, although that query1 and query2 are equal in the transaction\nand that I've used the same relation.\nGuido Staub\nBruce Momjian schrieb:\nYes, timeofday() will work, but it can change during\nthe statement, right?\n---------------------------------------------------------------------------\nMartijn van Oosterhout wrote:\n> On Mon, Sep 30, 2002 at 11:44:17AM +0200, Guido Staub wrote:\n>\n> [some current_timestamp stuff]\n>\n> > I think that the accuracy is not good enough because I've started\ntwo\n> > BEGIN statements and some time is elapsing between them. 
Am I right?\n> > Or does anybody know a better solution to store the elapsed time\nafter\n> > some queries without writing some code in C or JAVA?\n>\n> Perhaps you're looking for timeofday()?\n>\n> kleptog=# begin; select timeofday(); select timeofday(); commit;\n> BEGIN\n>              \ntimeofday\n> -------------------------------------\n>  Mon Sep 30 19:54:41.559605 2002 EST\n> (1 row)\n>\n>              \ntimeofday\n> -------------------------------------\n>  Mon Sep 30 19:54:41.560018 2002 EST\n> (1 row)\n>\n> COMMIT\n>\n> Hope this helps,\n> --\n> Martijn van Oosterhout   <kleptog@svana.org>  \nhttp://svana.org/kleptog/\n> > There are 10 kinds of people in the world, those that can do binary\n> > arithmetic and those that can't.\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n>     (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n>\n--\n  Bruce Momjian                       \n|  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us              \n|  (610) 359-1001\n  +  If your life is a hard drive,    \n|  13 Roberts Road\n  +  Christ can be your backup.       
\n|  Newtown Square, Pennsylvania 19073", "msg_date": "Tue, 01 Oct 2002 12:25:02 +0200", "msg_from": "Guido Staub <staub@gik.uni-karlsruhe.de>", "msg_from_op": true, "msg_subject": "Re: current_timestamp after queries" }, { "msg_contents": "Guido Staub wrote:\n> timeofday() works better than current_timestamp, but I have to use a\n> CAST-Statement to make it possible to calculate the elapsed time.\n> What do you mean with change during the statement?\n> The only change I see after using timeofday() is that the calculated time for\n> doing the query the first time and a second time again differ a llittle bit,\n> although that query1 and query2 are equal in the transaction and that I've used\n> the same relation.\n> Guido Staub\n\nHere is an example of timeofday() changing during a query:\n\t\n\ttest=> CREATE TEMP TABLE xx AS select timeofday() UNION select\n\ttimeofday();\n\tSELECT\n\ttest=> SELECT * FROM xx;\n\t timeofday \n\t--------------------------------------\n\t Tue Oct 01 17:09:23.381304 2002 CEST\n\t Tue Oct 01 17:09:23.381393 2002 CEST\n\t(2 rows)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 1 Oct 2002 11:13:31 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: current_timestamp after queries" } ]
[ { "msg_contents": "> > > > and mb conversions (pg_ascii2mic and pg_mic2ascii not\n> > > > found in the postmaster and not included from elsewhere)\n> >\n> > shared libs on AIX need to be able to resolve all symbols at linkage time.\n> > Those two symbols are in backend/utils/SUBSYS.o but not in the postgres\n> > executable.\n> \n> They are defined in backend/utils/mb/conv.c and declared in\n> include/mb/pg_wchar.h. They're also linked into the \n> postmaster. I don't see anything unusual.\n\nAttached is a patch to fix the mb linking problems on AIX. As a nice side effect \nit reduces the duplicate symbol warnings to linking libpq.so and libecpg.so \n(all shlibs that are not postmaster loadable modules).\n\nPlease apply to current (only affects AIX).\n\nThe _LARGE_FILES problem is unfortunately still open, unless Peter \nhas fixed it per his recent idea.\n\nThanx\nAndreas", "msg_date": "Mon, 30 Sep 2002 13:59:54 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "AIX compilation problems (was Re: Proposal ...)" }, { "msg_contents": "Zeugswetter Andreas SB SD writes:\n\n> Attached is a patch to fix the mb linking problems on AIX. As a nice side effect\n> it reduces the duplicate symbol warnings to linking libpq.so and libecpg.so\n> (all shlibs that are not postmaster loadable modules).\n\nCan you explain the method behind your patch? 
Have you tried -bnogc?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 30 Sep 2002 23:42:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nZeugswetter Andreas SB SD wrote:\n> \n> > > > > and mb conversions (pg_ascii2mic and pg_mic2ascii not\n> > > > > found in the postmaster and not included from elsewhere)\n> > >\n> > > shared libs on AIX need to be able to resolve all symbols at linkage time.\n> > > Those two symbols are in backend/utils/SUBSYS.o but not in the postgres\n> > > executable.\n> > \n> > They are defined in backend/utils/mb/conv.c and declared in\n> > include/mb/pg_wchar.h. They're also linked into the \n> > postmaster. I don't see anything unusual.\n> \n> Attached is a patch to fix the mb linking problems on AIX. As a nice side effect \n> it reduces the duplicate symbol warnings to linking libpq.so and libecpg.so \n> (all shlibs that are not postmaster loadable modules).\n> \n> Please apply to current (only affects AIX).\n> \n> The _LARGE_FILES problem is unfortunately still open, unless Peter \n> has fixed it per his recent idea.\n> \n> Thanx\n> Andreas\n\nContent-Description: mb_link_patch4.gz\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 15:17:42 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: [HACKERS] Proposal ...)" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nZeugswetter Andreas SB SD wrote:\n> \n> > > > > and mb conversions (pg_ascii2mic and pg_mic2ascii not\n> > > > > found in the postmaster and not included from elsewhere)\n> > >\n> > > shared libs on AIX need to be able to resolve all symbols at linkage time.\n> > > Those two symbols are in backend/utils/SUBSYS.o but not in the postgres\n> > > executable.\n> > \n> > They are defined in backend/utils/mb/conv.c and declared in\n> > include/mb/pg_wchar.h. They're also linked into the \n> > postmaster. I don't see anything unusual.\n> \n> Attached is a patch to fix the mb linking problems on AIX. As a nice side effect \n> it reduces the duplicate symbol warnings to linking libpq.so and libecpg.so \n> (all shlibs that are not postmaster loadable modules).\n> \n> Please apply to current (only affects AIX).\n> \n> The _LARGE_FILES problem is unfortunately still open, unless Peter \n> has fixed it per his recent idea.\n> \n> Thanx\n> Andreas\n\nContent-Description: mb_link_patch4.gz\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 12:21:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: [HACKERS] Proposal ...)" } ]
[ { "msg_contents": "\nOK, I just received this answer from an Oracle 9 tester. It shows\nCURRENT_TIMESTAMP changing during the transaction. Thanks, Dan.\n\nDan, it wasn't clear if this was in a transaction or not. Does Oracle\nhave autocommit off by default so you are always in a transaction?\n\n---------------------------------------------------------------------------\n\nDan Langille wrote:\n> A very quick answer:\n> \n> ------- Forwarded message follows -------\n> Date: Mon, 30 Sep 2002 13:03:51 -0400 (EDT)\n> From: Agent Drek <drek@smashpow.net>\n> To: \"freebsd-database@freebsd.org\" <freebsd-database@freebsd.org>\n> Cc: \"freebsd-chat@freebsd.org\" <freebsd-chat@freebsd.org>\n> Subject: Re: Any Oracle 9 users? A test please...\n> In-Reply-To: <3D984877.19685.801EEC30@localhost>\n> Message-ID: <Pine.BSF.4.44.0209301303030.50384-\n> 100000@bang.smashpow.net>\n> MIME-Version: 1.0\n> Content-Type: TEXT/PLAIN; charset=US-ASCII\n> Sender: owner-freebsd-database@FreeBSD.ORG\n> \n> On Mon, 30 Sep 2002, Dan Langille wrote:\n> \n> > Date: Mon, 30 Sep 2002 12:49:59 -0400\n> > From: Dan Langille <dan@langille.org>\n> > Reply-To: \"freebsd-database@freebsd.org\"\n> > <freebsd-database@freebsd.org> To: \"freebsd-database@freebsd.org\"\n> > <freebsd-database@freebsd.org> Cc: \"freebsd-chat@freebsd.org\"\n> > <freebsd-chat@freebsd.org> Subject: Any Oracle 9 users? 
A test\n> > please...\n> >\n> > Followups to freebsd-database@freebsd.org please!\n> >\n> > Any Oracle 9 users out there?\n> >\n> > I need this run:\n> >\n> > BEGIN;\n> > SELECT CURRENT_TIMESTAMP;\n> > -- wait 5 seconds\n> > SELECT CURRENT_TIMESTAMP;\n> >\n> > Are those two timestamps the same?\n> >\n> > Thanks\n> >\n> \n> Our DBA says:\n> \n> <snip from irc>\n> \n> <data> SQL> SELECT current_timestamp FROM DUAL;\n> <data> CURRENT_TIMESTAMP\n> <data>\n> ----------------------------------------------------------------------\n> \n> ----- <data> 30-SEP-02 01.06.42.660969 PM -04:00 <data> SQL> SELECT\n> current_timestamp FROM DUAL; <data> CURRENT_TIMESTAMP <data>\n> ----------------------------------------------------------------------\n> \n> ----- <data> 30-SEP-02 01.06.48.837372 PM -04:00 <data> (you have to\n> include 'from dual' for 'non-table' selects)\n> \n> --\n> Derek Marshall\n> \n> Smash and Pow Inc > 'digital plumber'\n> http://www.smashpow.net\n> \n> \n> To Unsubscribe: send mail to majordomo@FreeBSD.org\n> with \"unsubscribe freebsd-database\" in the body of the message\n> \n> ------- End of forwarded message -------\n> -- \n> Dan Langille\n> I'm looking for a computer job:\n> http://www.freebsddiary.org/dan_langille.php\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 13:17:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "\nHowdy All,\n\nYou have to explicitly commit transactions in oracle using SQL*Plus.\nHowever, DUAL (eg. SELECT current_timestamp FROM DUAL;) is special in this\ncase. 
It is a table in the sys schema, used for selecting constants,\npseudo-columns, etc.\n\nI'm not sure if this helps but see:\n\nhttp://download-east.oracle.com/otndoc/oracle9i/901_doc/server.901/a90125/queries2.htm#2054162http://download-east.oracle.com/otndoc/oracle9i/901_doc/server.901/a90125/queries2.htm#2054162\n\nrob\n'Oracle 9 tester' :P\n\nOn Mon, 30 Sep 2002, Bruce Momjian wrote:\n\n>\n> OK, I just received this answer from an Oracle 9 tester. It shows\n> CURRENT_TIMESTAMP changing during the transaction. Thanks, Dan.\n>\n> Dan, it wasn't clear if this was in a transaction or not. Does Oracle\n> have autocommit off by default so you are always in a transaction?\n>\n> ---------------------------------------------------------------------------\n>\n> Dan Langille wrote:\n> > A very quick answer:\n> >\n> > ------- Forwarded message follows -------\n> > Date: Mon, 30 Sep 2002 13:03:51 -0400 (EDT)\n> > From: Agent Drek <drek@smashpow.net>\n> > To: \"freebsd-database@freebsd.org\" <freebsd-database@freebsd.org>\n> > Cc: \"freebsd-chat@freebsd.org\" <freebsd-chat@freebsd.org>\n> > Subject: Re: Any Oracle 9 users? A test please...\n> > In-Reply-To: <3D984877.19685.801EEC30@localhost>\n> > Message-ID: <Pine.BSF.4.44.0209301303030.50384-\n> > 100000@bang.smashpow.net>\n> > MIME-Version: 1.0\n> > Content-Type: TEXT/PLAIN; charset=US-ASCII\n> > Sender: owner-freebsd-database@FreeBSD.ORG\n> >\n> > On Mon, 30 Sep 2002, Dan Langille wrote:\n> >\n> > > Date: Mon, 30 Sep 2002 12:49:59 -0400\n> > > From: Dan Langille <dan@langille.org>\n> > > Reply-To: \"freebsd-database@freebsd.org\"\n> > > <freebsd-database@freebsd.org> To: \"freebsd-database@freebsd.org\"\n> > > <freebsd-database@freebsd.org> Cc: \"freebsd-chat@freebsd.org\"\n> > > <freebsd-chat@freebsd.org> Subject: Any Oracle 9 users? 
A test\n> > > please...\n> > >\n> > > Followups to freebsd-database@freebsd.org please!\n> > >\n> > > Any Oracle 9 users out there?\n> > >\n> > > I need this run:\n> > >\n> > > BEGIN;\n> > > SELECT CURRENT_TIMESTAMP;\n> > > -- wait 5 seconds\n> > > SELECT CURRENT_TIMESTAMP;\n> > >\n> > > Are those two timestamps the same?\n> > >\n> > > Thanks\n> > >\n> >\n> > Our DBA says:\n> >\n> > <snip from irc>\n> >\n> > <data> SQL> SELECT current_timestamp FROM DUAL;\n> > <data> CURRENT_TIMESTAMP\n> > <data>\n> > ----------------------------------------------------------------------\n> >\n> > ----- <data> 30-SEP-02 01.06.42.660969 PM -04:00 <data> SQL> SELECT\n> > current_timestamp FROM DUAL; <data> CURRENT_TIMESTAMP <data>\n> > ----------------------------------------------------------------------\n> >\n> > ----- <data> 30-SEP-02 01.06.48.837372 PM -04:00 <data> (you have to\n> > include 'from dual' for 'non-table' selects)\n> >\n> > --\n> > Derek Marshall\n> >\n> > Smash and Pow Inc > 'digital plumber'\n> > http://www.smashpow.net\n> >\n> >\n> > To Unsubscribe: send mail to majordomo@FreeBSD.org\n> > with \"unsubscribe freebsd-database\" in the body of the message\n> >\n> > ------- End of forwarded message -------\n> > --\n> > Dan Langille\n> > I'm looking for a computer job:\n> > http://www.freebsddiary.org/dan_langille.php\n> >\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n>\n\n", "msg_date": "Mon, 30 Sep 2002 13:41:11 -0400 (EDT)", "msg_from": "Rob Fullerton <robf@home.samurai.com>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." } ]
[ { "msg_contents": "On Tue, 2002-10-01 at 01:10, Bruce Momjian wrote:\n> \n> > Given what Tom has posted regarding the standard, I think Oracle \n> > is wrong. I'm wondering how the others handle multiple \n> > references in CURRENT_TIMESTAMP in a single stored \n> > procedure/function invocation. It seems to me that the lower \n> > bound is #4, not #5, and the upper bound is implementation \n> > dependent. Therefore PostgreSQL is in compliance, but its \n> > compliance is not very popular.\n> \n> I don't see how we can be compliant if SQL92 says:\n> \n> \tThe time of evaluation of the <datetime value function> during the\n> \texecution of the SQL-statement is implementation-dependent.\n> \n> It says it has to be \"during the SQL statement\", or is SQL statement\n> also ambiguous? \n\nIt can be, as \"during the SQL statement\" can mean either the single\nstatement inside the PL/SQL function (SELECT CURRENT_TIMESTAMP INTO\ntime1 FROM DUAL;) or the whole invocation of the Pl/SQL funtion (the /\ncommand in Mikes sample, i believe)\n\n--------------\nHannu\n\n\n", "msg_date": "30 Sep 2002 23:35:57 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "\nIt is not clear to me; is this its own transaction or a function call?\n\n---------------------------------------------------------------------------\n\nDan Langille wrote:\n> And just for another opinion, which supports the first.\n> \n> >From now, unless you indicate otherwise, I'll only report tests which \n> have both values the same.\n> \n> From: \"Shawn O'Connor\" <soconnor@mail.e-perception.com>\n> To: Dan Langille <dan@langille.org>\n> Subject: Re: Any Oracle 9 users? 
A test please...\n> In-Reply-To: <3D985663.24174.80554E83@localhost>\n> Message-ID: <20020930114241.E45374-100000@mail.e-perception.com>\n> MIME-Version: 1.0\n> Content-Type: TEXT/PLAIN; charset=US-ASCII\n> X-PMFLAGS: 35127424 0 1 P2A7A0.CNM\n> \n> Okay, here you are:\n> ----------------------------------\n> \n> DECLARE\n> time1 TIMESTAMP;\n> time2 TIMESTAMP;\n> sleeptime NUMBER;\n> BEGIN\n> sleeptime := 5;\n> SELECT CURRENT_TIMESTAMP INTO time1 FROM DUAL;\n> DBMS_LOCK.SLEEP(sleeptime);\n> SELECT CURRENT_TIMESTAMP INTO time2 FROM DUAL;\n> DBMS_OUTPUT.PUT_LINE(TO_CHAR(time1));\n> DBMS_OUTPUT.PUT_LINE(TO_CHAR(time2));\n> END;\n> /\n> 30-SEP-02 11.54.09.583576 AM\n> 30-SEP-02 11.54.14.708333 AM\n> \n> PL/SQL procedure successfully completed.\n> \n> ----------------------------------\n> \n> Hope this helps!\n> \n> -Shawn\n> \n> \n> On Mon, 30 Sep 2002, Dan Langille wrote:\n> \n> > We're testing this just to see what Oracle does. What you are\n> > saying is what we expect to happen. But could you do that test for\n> > us from the command line? 
Thanks.\n> >\n> > On 30 Sep 2002 at 10:31, Shawn O'Connor wrote:\n> >\n> > > I'm assuming your doing this as some sort of anonymous\n> > > PL/SQL function:\n> > >\n> > > Don't you need to do something like:\n> > >\n> > > SELECT CURRENT_TIMESTAMP FROM DUAL INTO somevariable?\n> > >\n> > > and to wait five seconds probably:\n> > >\n> > > EXECUTE DBMS_LOCK.SLEEP(5);\n> > >\n> > > But to answer your question-- When this PL/SQL function\n> > > is run the values of current_timestamp are not the same, they will\n> > > be sepearated by five seconds or so.\n> > >\n> > > Hope this helps!\n> > >\n> > > -Shawn\n> > >\n> > > On Mon, 30 Sep 2002, Dan Langille wrote:\n> > >\n> > > > Followups to freebsd-database@freebsd.org please!\n> > > >\n> > > > Any Oracle 9 users out there?\n> > > >\n> > > > I need this run:\n> > > >\n> > > > BEGIN;\n> > > > SELECT CURRENT_TIMESTAMP;\n> > > > -- wait 5 seconds\n> > > > SELECT CURRENT_TIMESTAMP;\n> > > >\n> > > > Are those two timestamps the same?\n> > > >\n> > > > Thanks\n> > > > --\n> > > > Dan Langille\n> > > > I'm looking for a computer job:\n> > > > http://www.freebsddiary.org/dan_langille.php\n> > > >\n> > > >\n> > > > To Unsubscribe: send mail to majordomo@FreeBSD.org\n> > > > with \"unsubscribe freebsd-database\" in the body of the message\n> > > >\n> > >\n> > >\n> >\n> >\n> > --\n> > Dan Langille\n> > I'm looking for a computer job:\n> > http://www.freebsddiary.org/dan_langille.php\n> >\n> \n> \n> ------- End of forwarded message -------\n> -- \n> Dan Langille\n> I'm looking for a computer job:\n> http://www.freebsddiary.org/dan_langille.php\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 15:07:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "Bruce Momjian wrote:\n> It is not clear to me; is this its own transaction or a function call?\n> \n\nThat looks like an anonymous PL/SQL procedure to me. Another \nquestion might be, given:\n\n\"more than one reference to one or more <datetime value \nfunction>s, then all such references are effectively evaluated \nsimultaneously\"\n\nunder what conditions does Oracle report *the same* value for \nCURRENT_TIMESTAMP? So far, in this discussion, we have the \nfollowing scenarios:\n\n1. RDBMS start: No one\n2. Session start: No one\n3. Transaction start: PostgreSQL\n4. Statement start: ???\n5. CURRENT_TIMESTAMP evaluation: Oracle 9, ???\n\nGiven what Tom has posted regarding the standard, I think Oracle \nis wrong. I'm wondering how the others handle multiple \nreferences in CURRENT_TIMESTAMP in a single stored \nprocedure/function invocation. It seems to me that the lower \nbound is #4, not #5, and the upper bound is implementation \ndependent. Therefore PostgreSQL is in compliance, but its \ncompliance is not very popular.\n\nMike Mascari\nmascarm@mascari.com\n\n> Dan Langille wrote:\n>>\n>>\n>>DECLARE\n>> time1 TIMESTAMP;\n>> time2 TIMESTAMP;\n>> sleeptime NUMBER;\n>>BEGIN\n>> sleeptime := 5;\n>> SELECT CURRENT_TIMESTAMP INTO time1 FROM DUAL;\n>> DBMS_LOCK.SLEEP(sleeptime);\n>> SELECT CURRENT_TIMESTAMP INTO time2 FROM DUAL;\n>> DBMS_OUTPUT.PUT_LINE(TO_CHAR(time1));\n>> DBMS_OUTPUT.PUT_LINE(TO_CHAR(time2));\n>>END;\n>>/\n>>30-SEP-02 11.54.09.583576 AM\n>>30-SEP-02 11.54.14.708333 AM\n>>\n>>PL/SQL procedure successfully completed.\n\n\n", "msg_date": "Mon, 30 Sep 2002 15:29:07 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? 
A test please..." }, { "msg_contents": "\nI am starting to see Tom's issue here. If you have a PL/pgSQL function\nthat does:\n\n> >>DECLARE\n\n> >>BEGIN\n> >> SELECT CURRENT_TIMESTAMP INTO time1 FROM DUAL;\n\n> >> SELECT CURRENT_TIMESTAMP INTO time2 FROM DUAL;\n> >>END;\n\nYou would want those two to be the same because they are in the same\nfunction, but by looking at it, they look the same as interactive\nqueries. In a sense if we change CURRENT_TIMESTAMP, we are scoping the\nvariable to match the users/client's perspective.\n\nHowever, we have added statement_timeout, so it does seem we have had to\nmove to a more user-centered perspective on some of these things. The\nbig question is whether a variable that would be inserted into the\ndatabase should have such scoping. I can see cases where people would\nwant that, and others where they wouldn't.\n\n> 1. RDBMS start: No one\n> 2. Session start: No one\n> 3. Transaction start: PostgreSQL\n> 4. Statement start: ???\n> 5. CURRENT_TIMESTAMP evaluation: Oracle 9, ???\n\nThis is a nice chart. Oracle already has transaction start reported by\nsysdate:\n\n> SQL> begin\n> 2 insert into rbr_foo select sysdate from dual;\n> [...wait about 10 seconds...]\n> 3 insert into rbr_foo select sysdate from dual;\n> 4 end;\n> 5 /\n> \n> PL/SQL procedure successfully completed.\n> \n> SQL> select * from rbr_foo;\n> \n> A\n> ---------------------\n> SEP 27, 2002 12:57:27\n> SEP 27, 2002 12:57:27\n\nso for CURRENT_TIMESTAMP it seems they have evaluation-time, while\nMSSQL/Interbase have statement time.\n\n> Given what Tom has posted regarding the standard, I think Oracle \n> is wrong. I'm wondering how the others handle multiple \n> references in CURRENT_TIMESTAMP in a single stored \n> procedure/function invocation. It seems to me that the lower \n> bound is #4, not #5, and the upper bound is implementation \n> dependent. 
Therefore PostgreSQL is in compliance, but its \n> compliance is not very popular.\n\nI don't see how we can be compliant if SQL92 says:\n\n\tThe time of evaluation of the <datetime value function> during the\n\texecution of the SQL-statement is implementation-dependent.\n\nIt says it has to be \"during the SQL statement\", or is SQL statement\nalso ambiguous? Is that why Oracle did what they did?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 16:10:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "Hannu Krosing wrote:\n> On Tue, 2002-10-01 at 01:10, Bruce Momjian wrote:\n> > \n> > > Given what Tom has posted regarding the standard, I think Oracle \n> > > is wrong. I'm wondering how the others handle multiple \n> > > references in CURRENT_TIMESTAMP in a single stored \n> > > procedure/function invocation. It seems to me that the lower \n> > > bound is #4, not #5, and the upper bound is implementation \n> > > dependent. Therefore PostgreSQL is in compliance, but its \n> > > compliance is not very popular.\n> > \n> > I don't see how we can be compliant if SQL92 says:\n> > \n> > \tThe time of evaluation of the <datetime value function> during the\n> > \texecution of the SQL-statement is implementation-dependent.\n> > \n> > It says it has to be \"during the SQL statement\", or is SQL statement\n> > also ambiguous? \n> \n> It can be, as \"during the SQL statement\" can mean either the single\n> statement inside the PL/SQL function (SELECT CURRENT_TIMESTAMP INTO\n> time1 FROM DUAL;) or the whole invocation of the Pl/SQL funtion (the /\n> command in Mikes sample, i believe)\n\nWhich is what Oracle may have done. 
SQL99 talks about triggers seeing\nthe same date/time, but then again if your trigger is a function, it has\nto see the same values for all of its calls. This doesn't match Oracle,\nunless they have some switch that returns consistent values when the\nfunction is called as a trigger (yuck).\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 16:37:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "Bruce Momjian wrote:\n> Hannu Krosing wrote:\n> \n>>It can be, as \"during the SQL statement\" can mean either the single\n>>statement inside the PL/SQL function (SELECT CURRENT_TIMESTAMP INTO\n>>time1 FROM DUAL;) or the whole invocation of the Pl/SQL funtion (the /\n>>command in Mikes sample, i believe)\n> \n> \n> Which is what Oracle may have done. SQL99 talks about triggers seeing\n> the same date/time, but then again if your trigger is a function, it has\n> to see the same values for all of its calls. This doesn't match Oracle,\n> unless they have some switch that returns consistent values when the\n> function is called as a trigger (yuck).\n> \n\nI think there is a #6 level in that chart. For example:\n\nINSERT INTO foo(field1, field2, field3)\nSELECT CURRENT_TIMESTAMP, (some time-intensive subquery), \nCURRENT_TIMESTAMP\nFROM bar;\n\nI'd bet Oracle inserts the same value for CURRENT_TIMESTAMP for \nboth fields for every row. And that is what they view as a \"SQL \nStatement\". I've only got 8, so I can't test. Also, as you point \nout, Oracle may distinguish between PL/SQL created anonymously \nor with CREATE PROCEDURE vs. PL/SQL code created with CREATE \nFUNCTION. 
It may be that UDFs return a single CURRENT_TIMESTAMP \nfor the life of the invocation, while stored procedures don't. \nIt is PostgreSQL, after all, that has merged the two concepts \ninto one.\n\nMaybe someone could test version 9 with a FUNCTION that executes \nthe same PL/SQL code and returns the difference between the two \ntimes.\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n\n\n\n", "msg_date": "Mon, 30 Sep 2002 16:53:33 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "On Mon, 30 Sep 2002 15:29:07 -0400, Mike Mascari <mascarm@mascari.com>\nwrote:\n> I'm wondering how the others handle multiple \n>references in CURRENT_TIMESTAMP in a single stored \n>procedure/function invocation.\n\nMSSQL 7 seems to evaluate CURRENT_TIMESTAMP for each statement,\nInterbase 6 once per procedure call. Here are my test procedures:\n\nMSSQL 7\ncreate table tst (i integer, d datetime not null)\ngo\ncreate procedure tstInsert \nas begin\n delete from tst\n insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\n from tst a, tst b, tst c, tst d, tst e\n insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\n from tst a, tst b, tst c, tst d, tst e\n insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\n from tst a, tst b, tst c, tst d, tst e\n insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\n from tst a, tst b, tst c, tst d, tst e\n insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\n from tst a, tst b, tst c, tst d, tst e\n insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\n from tst a, tst b, tst c, tst d, tst e\n insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\n from tst a, tst b, tst c, tst d, tst e\n insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\n from tst a, tst b, tst c, tst d, tst e\n insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\n from tst a, tst b, tst c, tst d, tst e\n insert into tst(i, d) select 
count(*),CURRENT_TIMESTAMP\n from tst a, tst b, tst c, tst d, tst e\nend\ngo\nbegin transaction\nexec tstInsert\ncommit transaction\nselect * from tst\ni d \n----------- --------------------------- \n0 2002-09-30 22:26:06.540\n1 2002-09-30 22:26:06.540\n32 2002-09-30 22:26:06.540\n243 2002-09-30 22:26:06.540\n1024 2002-09-30 22:26:06.550\n3125 2002-09-30 22:26:06.550\n7776 2002-09-30 22:26:06.550\n16807 2002-09-30 22:26:06.560\n32768 2002-09-30 22:26:06.570\n59049 2002-09-30 22:26:06.590\n\n(10 row(s) affected)\n\n\nInterbase 6\nSQL> create table tst(i integer, d timestamp);\nSQL> commit;\nSQL> set term !!;\nSQL> create procedure tstInsert as begin\nCON> delete from tst;\nCON> insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\nCON> from tst a, tst b, tst c, tst d, tst e;\nCON> insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\nCON> from tst a, tst b, tst c, tst d, tst e;\nCON> insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\nCON> from tst a, tst b, tst c, tst d, tst e;\nCON> insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\nCON> from tst a, tst b, tst c, tst d, tst e;\nCON> insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\nCON> from tst a, tst b, tst c, tst d, tst e;\nCON> insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\nCON> from tst a, tst b, tst c, tst d, tst e;\nCON> insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\nCON> from tst a, tst b, tst c, tst d, tst e;\nCON> insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\nCON> from tst a, tst b, tst c, tst d, tst e;\nCON> insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\nCON> from tst a, tst b, tst c, tst d, tst e;\nCON> insert into tst(i, d) select count(*),CURRENT_TIMESTAMP\nCON> from tst a, tst b, tst c, tst d, tst e;\nCON> end;\nCON> !!\n\nSQL> set term ; !!\nSQL> commit;\nSQL> execute procedure tstInsert; -- takes approx. 
5 seconds.\nSQL> select * from tst;\n\n I D\n============ =========================\n\n 0 1858-11-17 00:00:00.0000\n 1 2002-09-30 22:37:54.0000\n 32 2002-09-30 22:37:54.0000\n 243 2002-09-30 22:37:54.0000\n 1024 2002-09-30 22:37:54.0000\n 3125 2002-09-30 22:37:54.0000\n 7776 2002-09-30 22:37:54.0000\n 16807 2002-09-30 22:37:54.0000\n 32768 2002-09-30 22:37:54.0000\n 59049 2002-09-30 22:37:54.0000\n\nSQL> commit;\n\n\nBTW, it's interesting (but OT) how they handle\n\n\tselect count(*), current_timestamp, 1 from tst where 0=1;\n\ndifferently.\n\nMSSQL: 0 2002-09-30 22:53:55.920 1\nInterbase: 0 1858-11-17 00:00:00.0000 0 <--- bug here?\nPostgres: 0 2002-09-30 21:10:35.660781+02 1\n\nServus\n Manfred\n", "msg_date": "Mon, 30 Sep 2002 23:04:34 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "The original tester says \"this is an anonymous procedure\".\n\nOn 30 Sep 2002 at 15:07, Bruce Momjian wrote:\n\n> \n> It is not clear to me; is this its own transaction or a function\n> call?\n> \n> ----------------------------------------------------------------------\n> -----\n> \n> Dan Langille wrote:\n> > And just for another opinion, which supports the first.\n> > \n> > >From now, unless you indicate otherwise, I'll only report tests\n> > >which \n> > have both values the same.\n> > \n> > From: \"Shawn O'Connor\" <soconnor@mail.e-perception.com>\n> > To: Dan Langille <dan@langille.org>\n> > Subject: Re: Any Oracle 9 users? 
A test please...\n> > In-Reply-To: <3D985663.24174.80554E83@localhost>\n> > Message-ID: <20020930114241.E45374-100000@mail.e-perception.com>\n> > MIME-Version: 1.0 Content-Type: TEXT/PLAIN; charset=US-ASCII\n> > X-PMFLAGS: 35127424 0 1 P2A7A0.CNM\n> > \n> > Okay, here you are:\n> > ----------------------------------\n> > \n> > DECLARE\n> > time1 TIMESTAMP;\n> > time2 TIMESTAMP;\n> > sleeptime NUMBER;\n> > BEGIN\n> > sleeptime := 5;\n> > SELECT CURRENT_TIMESTAMP INTO time1 FROM DUAL;\n> > DBMS_LOCK.SLEEP(sleeptime);\n> > SELECT CURRENT_TIMESTAMP INTO time2 FROM DUAL;\n> > DBMS_OUTPUT.PUT_LINE(TO_CHAR(time1));\n> > DBMS_OUTPUT.PUT_LINE(TO_CHAR(time2));\n> > END;\n> > /\n> > 30-SEP-02 11.54.09.583576 AM\n> > 30-SEP-02 11.54.14.708333 AM\n> > \n> > PL/SQL procedure successfully completed.\n> > \n> > ----------------------------------\n> > \n> > Hope this helps!\n> > \n> > -Shawn\n> > \n> > \n> > On Mon, 30 Sep 2002, Dan Langille wrote:\n> > \n> > > We're testing this just to see what Oracle does. What you are\n> > > saying is what we expect to happen. But could you do that test\n> > > for us from the command line? 
Thanks.\n> > >\n> > > On 30 Sep 2002 at 10:31, Shawn O'Connor wrote:\n> > >\n> > > > I'm assuming your doing this as some sort of anonymous\n> > > > PL/SQL function:\n> > > >\n> > > > Don't you need to do something like:\n> > > >\n> > > > SELECT CURRENT_TIMESTAMP FROM DUAL INTO somevariable?\n> > > >\n> > > > and to wait five seconds probably:\n> > > >\n> > > > EXECUTE DBMS_LOCK.SLEEP(5);\n> > > >\n> > > > But to answer your question-- When this PL/SQL function\n> > > > is run the values of current_timestamp are not the same, they\n> > > > will be sepearated by five seconds or so.\n> > > >\n> > > > Hope this helps!\n> > > >\n> > > > -Shawn\n> > > >\n> > > > On Mon, 30 Sep 2002, Dan Langille wrote:\n> > > >\n> > > > > Followups to freebsd-database@freebsd.org please!\n> > > > >\n> > > > > Any Oracle 9 users out there?\n> > > > >\n> > > > > I need this run:\n> > > > >\n> > > > > BEGIN;\n> > > > > SELECT CURRENT_TIMESTAMP;\n> > > > > -- wait 5 seconds\n> > > > > SELECT CURRENT_TIMESTAMP;\n> > > > >\n> > > > > Are those two timestamps the same?\n> > > > >\n> > > > > Thanks\n> > > > > --\n> > > > > Dan Langille\n> > > > > I'm looking for a computer job:\n> > > > > http://www.freebsddiary.org/dan_langille.php\n> > > > >\n> > > > >\n> > > > > To Unsubscribe: send mail to majordomo@FreeBSD.org\n> > > > > with \"unsubscribe freebsd-database\" in the body of the message\n> > > > >\n> > > >\n> > > >\n> > >\n> > >\n> > > --\n> > > Dan Langille\n> > > I'm looking for a computer job:\n> > > http://www.freebsddiary.org/dan_langille.php\n> > >\n> > \n> > \n> > ------- End of forwarded message -------\n> > -- \n> > Dan Langille\n> > I'm looking for a computer job:\n> > http://www.freebsddiary.org/dan_langille.php\n> > \n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001 + If your\n> life is a hard drive, | 13 Roberts Road + Christ can be your\n> backup. 
| Newtown Square, Pennsylvania 19073\n> \n\n\n-- \nDan Langille\nI'm looking for a computer job:\nhttp://www.freebsddiary.org/dan_langille.php\n\n", "msg_date": "Mon, 30 Sep 2002 17:33:23 -0400", "msg_from": "\"Dan Langille\" <dan@langille.org>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't see how we can be compliant if SQL92 says:\n> \tThe time of evaluation of the <datetime value function> during the\n> \texecution of the SQL-statement is implementation-dependent.\n> It says it has to be \"during the SQL statement\", or is SQL statement\n> also ambiguous? Is that why Oracle did what they did?\n\nYes, you're finally seeing my issue: \"SQL statement\" isn't all that\nwell-defined a concept.\n\nISTM that the reported behavior of Oracle's pl/sql is *clearly* in\nviolation of SQL92: the body of a pl/sql function is a single <SQL\nprocedure statement> per SQL92 4.17, so how can they allow\ncurrent_timestamp to change within it?\n\nIt would be even more interesting to try the same function called\nfrom another pl/sql function --- in that scenario, hardly anyone\ncould deny that the whole execution of the inner function is contained\nwithin one statement of the outer function, and therefore\ncurrent_timestamp should not be changing within it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Sep 2002 18:40:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please... 
" }, { "msg_contents": "Hello!\n\nOn Mon, 30 Sep 2002, Bruce Momjian wrote:\n\n> It is not clear to me; is this its own transaction or a function call?\n\nBTW.\nAs reported by my friend:\nOracle 8.1.7 (ver.9 behaves the same way):\n\n--- cut ---\nSQL> SET TRANSACTION READ WRITE;\n\nTransaction set.\n\nSQL> SELECT TO_CHAR(SYSDATE, 'DD-MM-YYYY HH24:MI:SS') FROM DUAL;\n\nTO_CHAR(SYSDATE,'MM\n-------------------\n02-10-2002 10:04:19\n\nSQL> -- wait a lot\n\nSQL> SELECT TO_CHAR(SYSDATE, 'DD-MM-YYYY HH24:MI:SS') FROM DUAL;\n\nTO_CHAR(SYSDATE,'MM\n-------------------\n02-10-2002 10:04:27\n\nSQL> COMMIT;\n\nCommit complete.\n--- cut ---\n\n\n> > > > > Any Oracle 9 users out there?\n> > > > >\n> > > > > I need this run:\n> > > > >\n> > > > > BEGIN;\n> > > > > SELECT CURRENT_TIMESTAMP;\n> > > > > -- wait 5 seconds\n> > > > > SELECT CURRENT_TIMESTAMP;\n> > > > >\n> > > > > Are those two timestamps the same?\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n", "msg_date": "Wed, 2 Oct 2002 14:33:29 +0700 (NOVST)", "msg_from": "Yury Bokhoncovich <byg@center-f1.ru>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "Yury Bokhoncovich <byg@center-f1.ru> writes:\n> As reported by my friend:\n> Oracle 8.1.7 (ver.9 behaves the same way):\n> [ to_char(sysdate) advances in a transaction ]\n\nNow I'm really confused; this directly contradicts the report of Oracle\n8's behavior that we had earlier from Roland Roberts. Can someone\nexplain why the different results?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Oct 2002 09:17:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please... 
" }, { "msg_contents": "Tom Lane wrote:\n> Yury Bokhoncovich <byg@center-f1.ru> writes:\n> \n>>As reported by my friend:\n>>Oracle 8.1.7 (ver.9 behaves the same way):\n>>[ to_char(sysdate) advances in a transaction ]\n> \n> \n> Now I'm really confused; this directly contradicts the report of Oracle\n> 8's behavior that we had earlier from Roland Roberts. Can someone\n> explain why the different results?\n\nRoland used an anonymous PL/SQL procedure:\n\nSQL> begin\n 2 insert into rbr_foo select sysdate from dual;\n[...wait about 10 seconds...]\n 3 insert into rbr_foo select sysdate from dual;\n 4 end;\n 5 /\n\nPL/SQL procedure successfully completed.\n\nSQL> select * from rbr_foo;\n\nOracle isn't processing those statements interactively. SQL*Plus \nis waiting on the \"/\" to send the PL/SQL block to the database. \nI suspect its not going to take Oracle more than a second to \ninsert a row...\n\nMike Mascari\nmascarm@mascari.com\n\n", "msg_date": "Wed, 02 Oct 2002 10:08:34 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": ">>>>> \"Mike\" == Mike Mascari <mascarm@mascari.com> writes:\n\n Mike> Tom Lane wrote:\n >> Yury Bokhoncovich <byg@center-f1.ru> writes:\n\n >>> As reported by my friend: Oracle 8.1.7 (ver.9 behaves the same way):\n\n >>> [ to_char(sysdate) advances in a transaction ]\n\n >> Now I'm really confused; this directly contradicts the report\n >> of Oracle 8's behavior that we had earlier from Roland Roberts.\n >> Can someone explain why the different results?\n\n Mike> Roland used an anonymous PL/SQL procedure:\n\nYou're right and I didn't think enough about what was happening. This\nalso explains why I so often see the same timestamp throughout a\ntransaction---the transaction is all taking place inside a PL/SQL\nprocedure.\n\nroland\n-- \n\t\t PGP Key ID: 66 BC 3B CD\nRoland B. 
Roberts, PhD RL Enterprises\nroland@rlenter.com 76-15 113th Street, Apt 3B\nroland@astrofoto.org Forest Hills, NY 11375\n", "msg_date": "02 Oct 2002 10:48:45 -0400", "msg_from": "Roland Roberts <roland@astrofoto.org>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "Mike Mascari wrote:\n> Tom Lane wrote:\n> > Yury Bokhoncovich <byg@center-f1.ru> writes:\n> > \n> >>As reported by my friend:\n> >>Oracle 8.1.7 (ver.9 behaves the same way):\n> >>[ to_char(sysdate) advances in a transaction ]\n> > \n> > \n> > Now I'm really confused; this directly contradicts the report of Oracle\n> > 8's behavior that we had earlier from Roland Roberts. Can someone\n> > explain why the different results?\n> \n> Roland used an anonymous PL/SQL procedure:\n> \n> SQL> begin\n> 2 insert into rbr_foo select sysdate from dual;\n> [...wait about 10 seconds...]\n> 3 insert into rbr_foo select sysdate from dual;\n> 4 end;\n> 5 /\n> \n> PL/SQL procedure successfully completed.\n> \n> SQL> select * from rbr_foo;\n> \n> Oracle isn't processing those statements interactively. SQL*Plus \n> is waiting on the \"/\" to send the PL/SQL block to the database. \n> I suspect its not going to take Oracle more than a second to \n> insert a row...\n\nOh, I understand now. He delayed when entering the function body, but\nthat has no effect when he sends it. Can someone add an explicit sleep\nin the function body and try that?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Oct 2002 11:14:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." 
}, { "msg_contents": "Bruce Momjian wrote:\n> Mike Mascari wrote:\n >>\n>>Oracle isn't processing those statements interactively. SQL*Plus \n>>is waiting on the \"/\" to send the PL/SQL block to the database. \n>>I suspect its not going to take Oracle more than a second to \n>>insert a row...\n> \n> \n> Oh, I understand now. He delayed when entering the function body, but\n> that has no effect when he sends it. Can someone add an explicit sleep\n> in the function body and try that?\n> \n\nSQL> create table foo (a date);\n\nTable created.\n\nSQL> begin\n 2 insert into foo select sysdate from dual;\n 3 dbms_lock.sleep(5);\n 4 insert into foo select sysdate from dual;\n 5 end;\n 6 /\n\nPL/SQL procedure successfully completed.\n\nSQL> select to_char(a, 'HH24:MI:SS') from foo;\n\nTO_CHAR(\n--------\n11:31:02\n11:31:07\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n\n", "msg_date": "Wed, 02 Oct 2002 11:29:29 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "Mike Mascari wrote:\n> Bruce Momjian wrote:\n> > Mike Mascari wrote:\n> >>\n> >>Oracle isn't processing those statements interactively. SQL*Plus \n> >>is waiting on the \"/\" to send the PL/SQL block to the database. \n> >>I suspect its not going to take Oracle more than a second to \n> >>insert a row...\n> > \n> > \n> > Oh, I understand now. He delayed when entering the function body, but\n> > that has no effect when he sends it. Can someone add an explicit sleep\n> > in the function body and try that?\n> > \n> \n> SQL> create table foo (a date);\n> \n> Table created.\n> \n> SQL> begin\n> 2 insert into foo select sysdate from dual;\n> 3 dbms_lock.sleep(5);\n> 4 insert into foo select sysdate from dual;\n> 5 end;\n> 6 /\n> \n> PL/SQL procedure successfully completed.\n> \n> SQL> select to_char(a, 'HH24:MI:SS') from foo;\n> \n> TO_CHAR(\n> --------\n> 11:31:02\n> 11:31:07\n\nOK, two requests. 
First, would you create a _named_ PL/SQL function\nwith those contents and try it again. Also, would you test\nCURRENT_TIMESTAMP too?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Oct 2002 11:41:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "Bruce Momjian wrote:\n> \n> OK, two requests. First, would you create a _named_ PL/SQL function\n> with those contents and try it again. Also, would you test\n> CURRENT_TIMESTAMP too?\n> \n\nSQL> CREATE TABLE foo(a date);\n\nTable created.\n\nAs a PROCEDURE:\n\nSQL> CREATE PROCEDURE test\n 2 AS\n 3 BEGIN\n 4 INSERT INTO foo SELECT SYSDATE FROM dual;\n 5 dbms_lock.sleep(5);\n 6 INSERT INTO foo SELECT SYSDATE FROM dual;\n 7 END;\n 8 /\n\nProcedure created.\n\nSQL> execute test;\n\nPL/SQL procedure successfully completed.\n\nSQL> select to_char(a, 'HH24:MI:SS') from foo;\n\nTO_CHAR(\n--------\n12:01:07\n12:01:12\n\nAs a FUNCTION:\n\nSQL> CREATE FUNCTION mydiff\n 2 RETURN NUMBER\n 3 IS\n 4 time1 DATE;\n 5 time2 DATE;\n 6 c NUMBER;\n 7 BEGIN\n 8 SELECT SYSDATE\n 9 INTO time1\n 10 FROM DUAL;\n 11 SELECT COUNT(*)\n 12 INTO c\n 13 FROM bar, bar, bar, bar, bar, bar, bar, bar;\n 14 SELECT SYSDATE\n 15 INTO time2\n 16 FROM DUAL;\n 17 RETURN (time2 - time1);\n 18 END;\n 19 /\n\nFunction created.\n\nSQL> select mydiff FROM dual;\n\n MYDIFF\n----------\n.000034722\n\nI can't test the use of CURRENT_TIMESTAMP because I have Oracle \n8, not 9.\n\nMike Mascari\nmascarm@mascari.com\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "Wed, 02 Oct 2002 12:20:01 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." 
}, { "msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n> SQL> CREATE PROCEDURE test\n> 2 AS\n> 3 BEGIN\n> 4 INSERT INTO foo SELECT SYSDATE FROM dual;\n> 5 dbms_lock.sleep(5);\n> 6 INSERT INTO foo SELECT SYSDATE FROM dual;\n> 7 END;\n> 8 /\n\n> Procedure created.\n\n> SQL> execute test;\n\n> PL/SQL procedure successfully completed.\n\n> SQL> select to_char(a, 'HH24:MI:SS') from foo;\n\n> TO_CHAR(\n> --------\n> 12:01:07\n> 12:01:12\n\n\nWhat fun. So in reality, SYSDATE on Oracle behaves like timeofday():\ntrue current time. That's certainly not a spec-compliant interpretation\nfor CURRENT_TIMESTAMP :-(\n\nHas anyone done the corresponding experiments on the other DBMSes to\nidentify exactly when they allow CURRENT_TIMESTAMP to advance?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Oct 2002 13:19:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please... " }, { "msg_contents": "Mike Mascari <mascarm@mascari.com> wrote:\n\n\n> I can't test the use of CURRENT_TIMESTAMP because I have Oracle\n> 8, not 9.\n\nWhat about NOW()? It should be available in Oracle 8? Is it the same as\nSYSDATE?\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Thu, 3 Oct 2002 00:43:55 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "Michael Paesold wrote:\n\n> What about NOW()? It should be available in Oracle 8? Is it the same as\n> SYSDATE?\n> \n\nUnless I'm missing something, NOW() neither works in Oracle 8 \nnor appears in the Oracle 9i online documentation:\n\nhttp://download-west.oracle.com/otndoc/oracle9i/901_doc/server.901/a90125/functions2.htm#80856\n\nMike Mascari\nmascarm@mascari.com\n\n", "msg_date": "Wed, 02 Oct 2002 18:52:34 -0400", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? 
A test please..." }, { "msg_contents": "Mike Mascari <mascarm@mascari.com> wrote:\n\n> Michael Paesold wrote:\n>\n> > What about NOW()? It should be available in Oracle 8? Is it the same as\n> > SYSDATE?\n> >\n>\n> Unless I'm missing something, NOW() neither works in Oracle 8\n> nor appears in the Oracle 9i online documentation:\n>\n>\nhttp://download-west.oracle.com/otndoc/oracle9i/901_doc/server.901/a90125/fu\nnctions2.htm#80856\n>\n> Mike Mascari\n\nI am sorry, if that is so. I thought it was available, but obviously, I was\nwrong.\n\nRegards,\nMichael\n\n", "msg_date": "Thu, 3 Oct 2002 01:00:56 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." }, { "msg_contents": "Tom Lane wrote:\n\n> \n>\n>Has anyone done the corresponding experiments on the other DBMSes to\n>identify exactly when they allow CURRENT_TIMESTAMP to advance ?\n>\n\nI have Db2 on hand and examined CURRENT TIMESTAMP in an sql procedure.\n(IBM have implemented it without the \"_\" ....)\n\nThe short of it is that CURRENT TIMESTAMP is the not frozen to the \ntransaction start,\nbut reflects time movement within the transaction.\n\nNote that \"db2 +c\" is equivalent to issueing BEGIN in Pg,\nand the command line tool (db2) keeps (the same) connection open until\nthe TERMINATE is issued :\n\n\n$ cat stamp.sql\n\ncreate procedure stamp()\nlanguage sql\nbegin\n insert into test values(1,current timestamp);\n insert into test values(2,current timestamp);\n insert into test values(3,current timestamp);\n insert into test values(4,current timestamp);\n insert into test values(5,current timestamp);\n insert into test values(6,current timestamp);\n insert into test values(7,current timestamp);\n insert into test values(8,current timestamp);\n insert into test values(9,current timestamp);\nend\n@\n\n$ db2 connect to dss\n Database Connection Information\n\n Database server = DB2/LINUX 7.2.3\n SQL authorization 
ID = DB2\n Local database alias = DSS\n\n$ db2 -td@ -f stamp.sql\nDB20000I The SQL command completed successfully.\n\n$ db2 +c\ndb2 => call stamp();\n\n\"STAMP\" RETURN_STATUS: \"0\"\n\ndb2 => commit;\n\nDB20000I The SQL command completed successfully.\n\ndb2 => select * from test;\n\nID VAL\n----------- --------------------------\n 1 2002-10-03-19.35.16.286019\n 2 2002-10-03-19.35.16.286903\n 3 2002-10-03-19.35.16.287549\n 4 2002-10-03-19.35.16.288235\n 5 2002-10-03-19.35.16.288925\n 6 2002-10-03-19.35.16.289571\n 7 2002-10-03-19.35.16.290209\n 8 2002-10-03-19.35.16.290884\n 9 2002-10-03-19.35.16.291522\n\n 9 record(s) selected.\n\ndb2 => terminate;\n\n\n\nregards\n\nMark\n\n", "msg_date": "Thu, 03 Oct 2002 19:56:47 +1200", "msg_from": "Mark Kirkwood <markir@paradise.net.nz>", "msg_from_op": false, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." } ]
[ { "msg_contents": "On Tue, 2002-10-01 at 03:31, Tom Lane wrote:\n> Offhand this seems kinda inconsistent to me --- I'd expect \n> \n> regression=# select extract(week from date '2002-09-30');\n> date_part\n> -----------\n> 40\n> (1 row)\n> \n> to produce 39, not 40, on the grounds that the first day of Week 40\n> is tomorrow not today. Alternatively, if today is the first day of\n> Week 40 (as EXTRACT(week) seems to think), then ISTM that the to_date\n> expression should produce today not tomorrow.\n> \n> I notice that 2001-12-31 is considered part of the first week of 2002,\n> which is also pretty surprising:\n\nThere are at least 3 different ways to start week numbering:\n\n1. from first week with any days in current year\n\n2. from first full week in current year\n\n3. from first week with thursday in current year\n\nperhaps more...\n\nI suspect it depends on locale which should be used.\n\n---------------\nHannu\n\n\n", "msg_date": "01 Oct 2002 01:46:38 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Postgresql likes Tuesday..." }, { "msg_contents": "On Tue, 2002-10-01 at 03:49, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > On Tue, 2002-10-01 at 03:31, Tom Lane wrote:\n> >> I notice that 2001-12-31 is considered part of the first week of 2002,\n> >> which is also pretty surprising:\n> \n> > There are at least 3 different ways to start week numbering:\n> > ...\n> > I suspect it depends on locale which should be used.\n> \n> Perhaps. But I think there are two distinct issues here. One is\n> whether EXTRACT(week) is assigning reasonable week numbers to dates;\n> this depends on your convention for which day is the first of a week\n> as well as your convention for the first week of a year (both possibly\n> should depend on locale as Hannu suggests). The other issue is what\n> to_date(...,'WWYYYY') should do to produce a date representing a week\n> number. 
Shouldn't it always produce the first date of that week?\n\nProducing middle-of-the week date is least likely to get a date in last\nyear.\n\nAlso should \n\nselect to_timestamp('01102002','DDMMYYYY');\n\nalso produce midday (12:00) for time, instead of current 00:00 ?\n\n-----------------\nHannu\n\n\n", "msg_date": "01 Oct 2002 02:11:23 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": true, "msg_subject": "Re: Postgresql likes Tuesday..." }, { "msg_contents": "select to_char(\n to_date(\n CAST(extract(week from CURRENT_TIMESTAMP) as text)\n || CAST(extract(year from CURRENT_TIMESTAMP) as text)\n , 'WWYYYY')\n , 'FMDay, D');\n\n to_char \n------------\n Tuesday, 3\n(1 row)\n\n\nNot that it matters for me at the moment (I care that it's in the week\nof..), but why does it pick Tuesday?\n\n-- \n Rod Taylor\n\n", "msg_date": "30 Sep 2002 17:37:47 -0400", "msg_from": "Rod Taylor <rbt@rbt.ca>", "msg_from_op": false, "msg_subject": "Postgresql likes Tuesday..." }, { "msg_contents": "Rod Taylor <rbt@rbt.ca> writes:\n> select to_char(\n> to_date(\n> CAST(extract(week from CURRENT_TIMESTAMP) as text)\n> || CAST(extract(year from CURRENT_TIMESTAMP) as text)\n> , 'WWYYYY')\n> , 'FMDay, D');\n\n> to_char \n> ------------\n> Tuesday, 3\n> (1 row)\n\n> Not that it matters for me at the moment (I care that it's in the week\n> of..), but why does it pick Tuesday?\n\nThe middle part of that boils down (as of today) to\n\nregression=# select to_date('402002', 'WWYYYY');\n to_date\n------------\n 2002-10-01\n(1 row)\n\nand Oct 1 (tomorrow) is Tuesday. 
As to why it picks that day to\nrepresent Week 40 of 2002, it's probably related to the fact that Week 1\nof 2002 is converted to\n\nregression=# select to_date('012002', 'WWYYYY');\n to_date\n------------\n 2002-01-01\n(1 row)\n\nwhich was a Tuesday.\n\nOffhand this seems kinda inconsistent to me --- I'd expect \n\nregression=# select extract(week from date '2002-09-30');\n date_part\n-----------\n 40\n(1 row)\n\nto produce 39, not 40, on the grounds that the first day of Week 40\nis tomorrow not today. Alternatively, if today is the first day of\nWeek 40 (as EXTRACT(week) seems to think), then ISTM that the to_date\nexpression should produce today not tomorrow.\n\nI notice that 2001-12-31 is considered part of the first week of 2002,\nwhich is also pretty surprising:\n\nregression=# select extract(week from date '2001-12-31');\n date_part\n-----------\n 1\n(1 row)\n\n\nAnyone able to check this stuff on Oracle? What exactly are the\nboundary points for EXTRACT(week), and does to_date() agree?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Sep 2002 18:31:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql likes Tuesday... " }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> On Tue, 2002-10-01 at 03:31, Tom Lane wrote:\n>> I notice that 2001-12-31 is considered part of the first week of 2002,\n>> which is also pretty surprising:\n\n> There are at least 3 different ways to start week numbering:\n> ...\n> I suspect it depends on locale which should be used.\n\nPerhaps. But I think there are two distinct issues here. One is\nwhether EXTRACT(week) is assigning reasonable week numbers to dates;\nthis depends on your convention for which day is the first of a week\nas well as your convention for the first week of a year (both possibly\nshould depend on locale as Hannu suggests). The other issue is what\nto_date(...,'WWYYYY') should do to produce a date representing a week\nnumber. 
Shouldn't it always produce the first date of that week?\nIf not, what other conventions make sense?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 30 Sep 2002 18:49:34 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql likes Tuesday... " }, { "msg_contents": "On Mon, Sep 30, 2002 at 06:49:34PM -0400, Tom Lane wrote:\n| The other issue is what\n| to_date(...,'WWYYYY') should do to produce a date representing a week\n| number. Shouldn't it always produce the first date of that week?\n| If not, what other conventions make sense?\n\nIMHO, it should choose the \"Week Ending\" date. This is\nusually what all of the companies that I've worked with\nwant to see for the \"day\" column. For example, the \ndefect^H^H^H^H^H^H quality reports at Ford Motor in 1993\nused a Pareto of part by defect by week-ending. Where\nweek ending date was the Sunday following the work \nweek (monday-sunday). In various project data in\ncompanies that I've worked with before and after 1993\nI've yet to see a \"weekly\" report that didn't give\nthe week ending... although some did use Friday or\nSaturday for the week ending.\n\nOne hiccup with this choice is that you'd probably \nwant the time portion to be 23:59:59.999 so that it\nincludes everything up to the end of the day. Hmm.\n\nClark\n", "msg_date": "Tue, 1 Oct 2002 03:07:53 +0000", "msg_from": "\"Clark C. Evans\" <cce@clarkevans.com>", "msg_from_op": false, "msg_subject": "Re: Postgresql likes Tuesday..." }, { "msg_contents": "On Mon, Sep 30, 2002 at 06:31:15PM -0400, Tom Lane wrote:\n> The middle part of that boils down (as of today) to\n> \n> regression=# select to_date('402002', 'WWYYYY');\n> to_date\n> ------------\n> 2002-10-01\n> (1 row)\n> \n> and Oct 1 (tomorrow) is Tuesday. 
As to why it picks that day to\n> represent Week 40 of 2002, it's probably related to the fact that Week 1\n> of 2002 is converted to\n> \n> regression=# select to_date('012002', 'WWYYYY');\n> to_date\n> ------------\n> 2002-01-01\n> (1 row)\n> \n> which was a Tuesday.\n> \n> Offhand this seems kinda inconsistent to me --- I'd expect \n> \n> regression=# select extract(week from date '2002-09-30');\n> date_part\n> -----------\n> 40\n> (1 row)\n> \n> to produce 39, not 40, on the grounds that the first day of Week 40\n> is tomorrow not today. Alternatively, if today is the first day of\n> Week 40 (as EXTRACT(week) seems to think), then ISTM that the to_date\n> expression should produce today not tomorrow.\n> \n> I notice that 2001-12-31 is considered part of the first week of 2002,\n> which is also pretty surprising:\n> \n> regression=# select extract(week from date '2001-12-31');\n> date_part\n> -----------\n> 1\n> (1 row)\n> \n> \n> Anyone able to check this stuff on Oracle? What exactly are the\n> boundary points for EXTRACT(week), and does to_date() agree?\n\n Please, read docs -- to_() functions know two versions of \"number of\n week\" \n IW = iso-week\n WW = \"oracle\" week\n\ntest=# select to_date('402002', 'WWYYYY');\n to_date \n------------\n 2002-10-01\n(1 row)\n\ntest=# select to_date('402002', 'IWYYYY');\n to_date \n------------\n 2002-09-30\n(1 row)\n\ntest=# select to_date('012002', 'WWYYYY');\n to_date \n------------\n 2002-01-01\n(1 row)\n\ntest=# select to_date('012002', 'IWYYYY');\n to_date \n------------\n 2001-12-31\n(1 row)\n \n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 1 Oct 2002 08:26:55 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Postgresql likes Tuesday..." 
}, { "msg_contents": "On Mon, Sep 30, 2002 at 05:37:47PM -0400, Rod Taylor wrote:\n> select to_char(\n> to_date(\n> CAST(extract(week from CURRENT_TIMESTAMP) as text)\n> || CAST(extract(year from CURRENT_TIMESTAMP) as text)\n> , 'WWYYYY')\n> , 'FMDay, D');\n> \n> to_char \n> ------------\n> Tuesday, 3\n> (1 row)\n> \n\n The PostgreSQL not loves Thuesday, but WW for year 2002 loves it. Why?\n\n Because 'WW' = (day_of_year - 1) / 7 + 1, other words this year\n start on Thuesday (see 01-JAN-2002) and WW start weeks each 7 days\n after this first day of year.\n\n If you need \"human\" week you must use IW (iso-week) that start every\n Monday. \n \n I know there're countries where week start on Sunday, but it's not supported \n -- the problem is with 'D' it returns day-of-week for Sunday-based-week.\n\n Your example (I use to_xxx () only, it's more readable):\n\n If you need correct for Sunday-based-week:\n\nselect to_char( to_date(to_char(now(), 'IWYYYY'), 'IWYYYY')-'1d'::interval, 'FMDay, D');\n to_char \n-----------\n Sunday, 1\n\n\n If you need Monday-based-week (ISO week):\n \ntest=# select to_char( to_date(to_char(now(), 'IWYYYY'), 'IWYYYY'), 'FMDay, D');\n to_char \n-----------\n Monday, 2\n \n\n '2' is problem -- maybe add to to_xxx() functions 'ID' as day-of-isoweek.\n It's really small change I think we can do it for 7.3 too. \n\n What think about it our Toms?\n\n\n In the Oracle it's same (means WW vs. IW vs. 
D)\n\n SVRMGR> select to_char(to_date('30-SEP-02'), 'WW IW Day D') from dual;\n TO_CHAR(TO_DATE('\n -----------------\n 39 40 Monday 2\n\n test=# select to_char('30-SEP-02'::date, 'WW IW Day D');\n to_char \n -------------------\n 39 40 Monday 2\n\n\n SVRMGR> select to_char(to_date('29-SEP-02'), 'WW IW Day D') from dual;\n TO_CHAR(TO_DATE('\n -----------------\n 39 39 Sunday 1\n\n test=# select to_char('29-SEP-02'::date, 'WW IW Day D');\n to_char \n -------------------\n 39 39 Sunday 1\n\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 1 Oct 2002 09:54:49 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Postgresql likes Tuesday..." }, { "msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> What think about it our Toms?\n> ...\n> In the Oracle it's same (means WW vs. IW vs. D)\n\nIf it works the same as Oracle then I'm happy with it; that's what it's\nsupposed to do.\n\nThe real point here seems to be that EXTRACT(week) corresponds to\nto_date's IW conversion, not WW conversion. This is indeed implied by\nthe docs, but it's not stated plainly (there's just a reference to ISO\nin each of the relevant pages). Perhaps we need more documentation, or\na different layout that would offer a place to put notes like this one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Oct 2002 09:51:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgresql likes Tuesday... " } ]
[ { "msg_contents": "I'm done making back-patches for 7.2.3. Over to you, Bruce ...\nattached is the REL7_2_STABLE branch history since 7.2.2.\n\n\t\t\tregards, tom lane\n\n2002-09-30 16:57 tgl\n\n\t* src/backend/utils/adt/: date.c, datetime.c (REL7_2_STABLE):\n\tBack-patch fixes to work around broken mktime() in recent glibc\n\treleases.\n\n2002-09-30 16:47 tgl\n\n\t* src/backend/: commands/async.c, tcop/postgres.c (REL7_2_STABLE):\n\tBack-patch fix for bad SIGUSR2 interrupt handling during backend\n\tshutdown.\n\n2002-09-30 16:24 tgl\n\n\t* src/: backend/storage/lmgr/s_lock.c, include/storage/s_lock.h\n\t(REL7_2_STABLE): Back-patch fix for correct TAS operation on\n\tmulti-CPU PPC machines.\n\n2002-09-30 16:18 tgl\n\n\t* src/backend/: bootstrap/bootstrap.c, storage/buffer/buf_init.c,\n\tstorage/lmgr/lwlock.c, storage/lmgr/proc.c (REL7_2_STABLE):\n\tBack-patch fix for 'can't wait without a PROC structure' failures:\n\tremove separate ShutdownBufferPoolAccess exit callback, and do the\n\twork in ProcKill instead, before we delete MyProc.\n\n2002-09-30 15:55 tgl\n\n\t* src/: backend/access/transam/clog.c,\n\tbackend/access/transam/xlog.c, backend/bootstrap/bootstrap.c,\n\tbackend/tcop/utility.c, include/access/xlog.h (REL7_2_STABLE):\n\tBack-patch fix to ensure a checkpoint occurs before truncating\n\tCLOG, even if no recent WAL activity has occurred.\n\n2002-09-30 15:45 tgl\n\n\t* src/backend/commands/vacuum.c (REL7_2_STABLE): Back-patch fix to\n\tnot change pg_database.datvacuumxid or truncate CLOG when an\n\tunprivileged user runs VACUUM.\n\n2002-09-20 17:37 tgl\n\n\t* src/backend/utils/adt/ruleutils.c (REL7_2_STABLE): Back-patch fix\n\tfor failure to dump views containing FULL JOIN USING. 
The bug is\n\tnot present in CVS tip due to cleanup of JOIN handling, but 7.2.*\n\tis broken.\n", "msg_date": "Mon, 30 Sep 2002 17:02:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "7.2.3 patching done" }, { "msg_contents": "\nOK, 7.2.3 is all branded and ready to go. HISTORY/release.sgml shows:\n\n---------------------------------------------------------------------------\n\n\n Release Notes\n\n\n Release 7.2.3\n\n Release date: 2002-10-01\n\n This has a variety of fixes from 7.2.2, including fixes to prevent\n possible data loss.\n\n ----------------------------------------------------------------------\n\nMigration to version 7.2.3\n\n A dump/restore is *not* required for those running 7.2.X.\n\n ----------------------------------------------------------------------\n\nChanges\n\n Prevent possible compressed transaction log loss (Tom)\n Prevent non-superuser from increasing most recent vacuum info (Tom)\n Handle pre-1970 date values in newer versions of glibc (Tom)\n Fix possible hang during server shutdown\n Prevent spinlock hangs on SMP PPC machines (Tomoyuki Niijima)\n Fix pg_dump to properly dump FULL JOIN USING (Tom)\n\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> I'm done making back-patches for 7.2.3. 
Over to you, Bruce ...\n> attached is the REL7_2_STABLE branch history since 7.2.2.\n> \n> \t\t\tregards, tom lane\n> \n> 2002-09-30 16:57 tgl\n> \n> \t* src/backend/utils/adt/: date.c, datetime.c (REL7_2_STABLE):\n> \tBack-patch fixes to work around broken mktime() in recent glibc\n> \treleases.\n> \n> 2002-09-30 16:47 tgl\n> \n> \t* src/backend/: commands/async.c, tcop/postgres.c (REL7_2_STABLE):\n> \tBack-patch fix for bad SIGUSR2 interrupt handling during backend\n> \tshutdown.\n> \n> 2002-09-30 16:24 tgl\n> \n> \t* src/: backend/storage/lmgr/s_lock.c, include/storage/s_lock.h\n> \t(REL7_2_STABLE): Back-patch fix for correct TAS operation on\n> \tmulti-CPU PPC machines.\n> \n> 2002-09-30 16:18 tgl\n> \n> \t* src/backend/: bootstrap/bootstrap.c, storage/buffer/buf_init.c,\n> \tstorage/lmgr/lwlock.c, storage/lmgr/proc.c (REL7_2_STABLE):\n> \tBack-patch fix for 'can't wait without a PROC structure' failures:\n> \tremove separate ShutdownBufferPoolAccess exit callback, and do the\n> \twork in ProcKill instead, before we delete MyProc.\n> \n> 2002-09-30 15:55 tgl\n> \n> \t* src/: backend/access/transam/clog.c,\n> \tbackend/access/transam/xlog.c, backend/bootstrap/bootstrap.c,\n> \tbackend/tcop/utility.c, include/access/xlog.h (REL7_2_STABLE):\n> \tBack-patch fix to ensure a checkpoint occurs before truncating\n> \tCLOG, even if no recent WAL activity has occurred.\n> \n> 2002-09-30 15:45 tgl\n> \n> \t* src/backend/commands/vacuum.c (REL7_2_STABLE): Back-patch fix to\n> \tnot change pg_database.datvacuumxid or truncate CLOG when an\n> \tunprivileged user runs VACUUM.\n> \n> 2002-09-20 17:37 tgl\n> \n> \t* src/backend/utils/adt/ruleutils.c (REL7_2_STABLE): Back-patch fix\n> \tfor failure to dump views containing FULL JOIN USING. 
The bug is\n> \tnot present in CVS tip due to cleanup of JOIN handling, but 7.2.*\n> \tis broken.\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 23:35:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 patching done" }, { "msg_contents": "Hello!\n\nBTW, is it possible to have just patch against previous version (to reduce \ntraffic and CPU)? I.e. something like 7.2.2-7.2.3.diff.gz?\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n", "msg_date": "Tue, 1 Oct 2002 15:01:08 +0700 (NOVST)", "msg_from": "Yury Bokhoncovich <byg@center-f1.ru>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 patching done" }, { "msg_contents": "Yury Bokhoncovich wrote:\n> Hello!\n> \n> BTW, is it possible to have just patch against previous version (to reduce \n> traffic and CPU)? I.e. something like 7.2.2-7.2.3.diff.gz?\n\nIn some releases, it is possible, in others we add/remove files and it\nisn't possible. I think because it isn't always possible we normally\ndon't do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 1 Oct 2002 11:17:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 patching done" }, { "msg_contents": "Bruce Momjian writes:\n\n> > BTW, is it possible to have just patch against previous version (to reduce\n> > traffic and CPU)? I.e. 
something like 7.2.2-7.2.3.diff.gz?\n>\n> In some releases, it is possible, in others we add/remove files and it\n> isn't possible. I think because it isn't always possible we normally\n> don't do it.\n\nAdding or removing files isn't the problem (see -N option). Binary files\nare the problem.\n\nUsing xdelta would be safe, though.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 1 Oct 2002 22:04:04 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 patching done" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > > BTW, is it possible to have just patch against previous version (to reduce\n> > > traffic and CPU)? I.e. something like 7.2.2-7.2.3.diff.gz?\n> >\n> > In some releases, it is possible, in others we add/remove files and it\n> > isn't possible. I think because it isn't always possible we normally\n> > don't do it.\n> \n> Adding or removing files isn't the problem (see -N option). Binary files\n> are the problem.\n\nDo we change any binary files in minor releases, or even major ones?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 1 Oct 2002 16:08:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 patching done" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Peter Eisentraut wrote:\n>> Adding or removing files isn't the problem (see -N option). 
Binary files\n>> are the problem.\n\n> Do we change any binary files in minor releases, or even major ones?\n\nBut the source distribution hasn't *got* any binary files.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Oct 2002 16:12:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.2.3 patching done " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Peter Eisentraut wrote:\n> >> Adding or removing files isn't the problem (see -N option). Binary files\n> >> are the problem.\n> \n> > Do we change any binary files in minor releases, or even major ones?\n> \n> But the source distribution hasn't *got* any binary files.\n\nYea, that was sort of my point.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 1 Oct 2002 18:10:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 patching done" }, { "msg_contents": "Hello!\n\nOn Tue, 1 Oct 2002, Bruce Momjian wrote:\n\n> In some releases, it is possible, in others we add/remove files and it\n> isn't possible. I think because it isn't always possible we normally\n> don't do it.\n\nI think it's enough to do diffs for minor release (i.e. 7.2.2->7.2.3, \n7.3.0->7.3.1 and so on). BTW, I had no problems with patching Linux kernel \nthis way (e.g. 
having vanilla 2.2.16 then sequentially patch for 2.2.17, \n.18, .19, .20, .21, .22) though there were added directories.\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n", "msg_date": "Wed, 2 Oct 2002 08:57:45 +0700 (NOVST)", "msg_from": "Yury Bokhoncovich <byg@center-f1.ru>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 patching done" }, { "msg_contents": "Tom Lane writes:\n\n> But the source distribution hasn't *got* any binary files.\n\nThere are some under doc/src/graphics, and then there are\ndoc/postgres.tar.gz and doc/man.tar.gz.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 7 Oct 2002 21:22:38 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 patching done " }, { "msg_contents": "On Mon, Oct 07, 2002 at 09:22:38PM +0200, Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > But the source distribution hasn't *got* any binary files.\n> \n> There are some under doc/src/graphics, and then there are\n> doc/postgres.tar.gz and doc/man.tar.gz.\n\nAnd what about publishing xdelta patches?\n\n-- \nAlvaro Herrera (<alvherre[a]dcc.uchile.cl>)\n\"Escucha y olvidar�s; ve y recordar�s; haz y entender�s\" (Confucio)\n", "msg_date": "Mon, 7 Oct 2002 17:45:57 -0400", "msg_from": "Alvaro Herrera <alvherre@dcc.uchile.cl>", "msg_from_op": false, "msg_subject": "Re: 7.2.3 patching done" } ]
[ { "msg_contents": "\nHere is an email from Gavin showing a problem with subqueries and casts\ncausing errors when they shouldn't.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> Hi Bruce,\n> \n> Thanks to a user query (handle: lltd, IRC) I came across a bug in the\n> planner. The query was:\n> \n> ---\n> select o1.timestamp::date as date, count(*), (select sum(oi.price) from\n> \"order\" o2, \"order_item\" oi where oi.order_id = o2.id and\n> o2.timestamp::date = o1.timestamp::date and o2.timestamp is not\n> null) as total from \"order\" o1 where o1.timestamp is not null\n> group by o1.timestamp::date order by o1.timestamp::date desc;\n> ---\n> \n> The error he was receiving:\n> \n> ---\n> ERROR: Sub-SELECT uses un-GROUPed attribute o1.timestamp from outer query\n> ---\n> \n> After a bit of looking around, I determined that the cast in the order by\n> clause was causing the error, not the fact that the query had an ungrouped\n> attribute in the outer query.\n> \n> I have come up with a simpler demonstration which works under 7.1 and CVS.\n> \n> create table a ( i int);\n> insert into a values(1);\n> insert into a values(1);\n> insert into a values(1);\n> insert into a values(1);\n> insert into a values(1);\n> insert into a values(2);\n> insert into a values(2);\n> insert into a values(2);\n> insert into a values(2);\n> insert into a values(3);\n> insert into a values(3);\n> insert into a values(3);\n> insert into a values(3);\n> \n> --- NO ERROR ---\n> \n> select o1.i::smallint,count(*),(select sum(o2.i) from a o2 where\n> o2.i=o1.i::smallint) as sum from a o1 group by o1.i;\n> \n> --- ERROR ---\n> select o1.i::smallint,count(*),(select sum(o2.i) from a o2 where\n> o2.i=o1.i::smallint) as sum from a o1 group by o1.i::smallint;\n> \n> ----\n> \n> Notice that the difference is only the cast in the order by clause. 
Here\n> are my results:\n> \n> template1=# select version();\n> version\n> ------------------------------------------------------------------------\n> PostgreSQL 7.3devel on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n> (1 row)\n> \n> template1=# \\d a\n> Table \"public.a\"\n> Column | Type | Modifiers\n> --------+---------+-----------\n> i | integer |\n> \n> template1=# select * from a;\n> i\n> ---\n> 1\n> 1\n> 1\n> 1\n> 1\n> 2\n> 2\n> 2\n> 2\n> 3\n> 3\n> 3\n> 3\n> (13 rows)\n> \n> te1=# select o1.i::smallint,count(*),(select sum(o2.i) from a o2 where\n> o2.i=o1.i::smallint) as sum from a o1 group by o1.i;\n> i | count | sum\n> ---+-------+-----\n> 1 | 5 | 5\n> 2 | 4 | 8\n> 3 | 4 | 12\n> (3 rows)\n> \n> template1=# select o1.i::smallint,count(*),(select sum(o2.i) from a o2\n> where o2.i=o1.i::smallint) as sum from a o1 group by o1.i::smallint;\n> ERROR: Sub-SELECT uses un-GROUPed attribute o1.i from outer query\n> \n> [under patched version]\n> \n> template1=# select o1.i::smallint,count(*),(select sum(o2.i) from a o2\n> where o2.i=o1.i::smallint) as sum from a o1 group by o1.i;\n> i | count | sum\n> ---+-------+-----\n> 1 | 5 | 5\n> 2 | 4 | 8\n> 3 | 4 | 12\n> (3 rows)\n> \n> template1=# select o1.i::smallint,count(*),(select sum(o2.i) from a o2\n> where o2.i=o1.i::smallint) as sum from a o1 group by o1.i::smallint;\n> i | count | sum\n> ---+-------+-----\n> 1 | 5 | 5\n> 2 | 4 | 8\n> 3 | 4 | 12\n> (3 rows)\n> \n> \n> As it works out, the bug is caused by these lines in\n> optimizer/util/clauses.c\n> \n> if (equal(thisarg, lfirst(gl)))\n> {\n> contained_in_group_clause = true;\n> break;\n> }\n> \n> 'thisarg' is an item from the args list used by Expr *. We only access\n> this code inside check_subplans_for_ungrouped_vars_walker() if the node is\n> a subplan. 
The problem is that equal() is not sufficiently intelligent to\n> consider the equality of 'thisarg' and lfirst(gl) (an arg from the group\n> by clause) considering that thisarg and lfirst(gl) are not necessarily of\n> the same node type. This means we fail out in equal():\n> \n> /*\n> * are they the same type of nodes?\n> */\n> if (nodeTag(a) != nodeTag(b))\n> return false;\n> \n> \n> The patch below 'fixes' this (and possibly breaks everything else). I\n> haven't tested it rigorously and it *just* special cases group by\n> clauses with functions in them. Here's the patch:\n> \n> Index: ./src/backend/optimizer/util/clauses.c\n> ===================================================================\n> RCS\n> file: /projects/cvsroot/pgsql-server/src/backend/optimizer/util/clauses.c,v\n> retrieving revision 1.107\n> diff -2 -c -r1.107 clauses.c\n> *** ./src/backend/optimizer/util/clauses.c 2002/08/31 22:10:43 1.107\n> --- ./src/backend/optimizer/util/clauses.c 2002/09/30 15:02:47\n> ***************\n> *** 703,706 ****\n> --- 703,718 ----\n> contained_in_group_clause = true;\n> break;\n> + } else {\n> + if(IsA(lfirst(gl),Expr) &&\n> + length(((Expr *)lfirst(gl))->args) == 1 &&\n> + IsA(lfirst(((Expr *)lfirst(gl))->args),Var) ) {\n> +\n> + Var *tvar = (Var *) lfirst(((Expr *)lfirst(gl))->args);\n> + if(var->varattno == tvar->varattno) {\n> + contained_in_group_clause = true;\n> + break;\n> + }\n> +\n> + }\n> }\n> }\n> \n> ----\n> \n> There are two assumptions here: 1) the only time this bug occurs is when\n> the group by clause argument is an expression and a function at that (even\n> though I do no test for this correctly) 2) We can see whether thisarg ==\n> lfirst(gl) by looking at the varattno of each and comparing. It occurs to\n> me that this is just plain wrong and works only for the specific query.\n> \n> The reason why I've sent this email to you and not to the list is I do not\n> have time to follow through on this -- as much as I would like to. 
I\n> simply do no have the time. :-(\n> \n> Gavin\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 30 Sep 2002 23:48:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Postgres Planner Bug" }, { "msg_contents": "\n> > Thanks to a user query (handle: lltd, IRC) I came across a bug in the\n> > planner. The query was:\n> > \n> > ---\n> > select o1.timestamp::date as date, count(*), (select sum(oi.price) from\n> > \"order\" o2, \"order_item\" oi where oi.order_id = o2.id and\n> > o2.timestamp::date = o1.timestamp::date and o2.timestamp is not\n> > null) as total from \"order\" o1 where o1.timestamp is not null\n> > group by o1.timestamp::date order by o1.timestamp::date desc;\n> > ---\n> > \n> > The error he was receiving:\n> > \n> > ---\n> > ERROR: Sub-SELECT uses un-GROUPed attribute o1.timestamp from outer query\n> > ---\n> > \n> > After a bit of looking around, I determined that the cast in the order by\n> > clause was causing the error, not the fact that the query had an ungrouped\n\nMistake here. It relates to the group by clause, not order by.\n\nGavin\n\n", "msg_date": "Tue, 1 Oct 2002 14:34:03 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Postgres Planner Bug" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> The patch below 'fixes' this (and possibly breaks everything else). I\n>> haven't tested it rigorously and it *just* special cases group by\n>> clauses with functions in them.\n\nSurely this cure is worse than the disease.\n\nThe general problem is that we don't attempt to match\narbitrary-expression GROUP BY clauses against arbitrary subexpressions\nof sub-SELECTs. 
While that could certainly be done, I'm concerned about\nthe cycles that we'd expend trying to match everything against\neverything else. This would be an exponential cost imposed on every\ngroup-by-with-subselect query whether it needed the feature or not.\n\nGiven that GROUP BY is restricted to a simple column reference in both\nSQL92 and SQL99, is it worth a large performance hit on unrelated\nqueries to support this feature? What other DBs support it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Oct 2002 00:40:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres Planner Bug " }, { "msg_contents": "The following patch completes the open item:\n\n\tChange log_min_error_statement to be off by default (Gavin)\n\nGavin was busy so I did the work. Basically, it allows fatal/panic as a\nvalue, and defaults it to panic so it is effectively OFF by default.\n\nThere was agreement that we can allow these values as a way of turning\nthis option off. Because of this, we can continue using the same\nvalidation routines for all the server message level GUC parameters.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.141\ndiff -c -c -r1.141 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t27 Sep 2002 02:04:39 -0000\t1.141\n--- doc/src/sgml/runtime.sgml\t2 Oct 2002 16:05:57 -0000\n***************\n*** 1036,1050 ****\n <term><varname>LOG_MIN_ERROR_STATEMENT</varname> (<type>string</type>)</term>\n <listitem>\n <para>\n! This controls which log messages are accompanied by the original\n! query which generated the message. All queries matching the setting\n! 
or which are of a higher severity than the setting are logged. The\n! default is <literal>ERROR</literal>. Valid values are\n! <literal>DEBUG5</literal>, <literal>DEBUG4</literal>, \n! <literal>DEBUG3</literal>, <literal>DEBUG2</literal>, \n <literal>DEBUG1</literal>, <literal>INFO</literal>,\n! <literal>NOTICE</literal>, <literal>WARNING</literal>\n! and <literal>ERROR</literal>.\n </para>\n <para>\n It is recommended you enable <literal>LOG_PID</literal> as well\n--- 1036,1050 ----\n <term><varname>LOG_MIN_ERROR_STATEMENT</varname> (<type>string</type>)</term>\n <listitem>\n <para>\n! This controls which message types output the original query to\n! the server logs. All queries matching the setting or higher are\n! logged. The default is <literal>PANIC</literal>. Valid values\n! are <literal>DEBUG5</literal>, <literal>DEBUG4</literal>,\n! <literal>DEBUG3</literal>, <literal>DEBUG2</literal>,\n <literal>DEBUG1</literal>, <literal>INFO</literal>,\n! <literal>NOTICE</literal>, <literal>WARNING</literal>,\n! <literal>ERROR</literal>, <literal>FATAL</literal>, and\n! \t<literal>PANIC</literal>.\n </para>\n <para>\n It is recommended you enable <literal>LOG_PID</literal> as well\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/guc.c,v\nretrieving revision 1.96\ndiff -c -c -r1.96 guc.c\n*** src/backend/utils/misc/guc.c\t22 Sep 2002 19:52:38 -0000\t1.96\n--- src/backend/utils/misc/guc.c\t2 Oct 2002 16:06:09 -0000\n***************\n*** 104,110 ****\n \n int\t\t\tlog_min_error_statement = ERROR;\n char\t *log_min_error_statement_str = NULL;\n! const char\tlog_min_error_statement_str_default[] = \"error\";\n \n int\t\t\tserver_min_messages = NOTICE;\n char\t *server_min_messages_str = NULL;\n--- 104,110 ----\n \n int\t\t\tlog_min_error_statement = ERROR;\n char\t *log_min_error_statement_str = NULL;\n! 
const char\tlog_min_error_statement_str_default[] = \"panic\";\n \n int\t\t\tserver_min_messages = NOTICE;\n char\t *server_min_messages_str = NULL;\n***************\n*** 2999,3004 ****\n--- 2999,3015 ----\n \t{\n \t\tif (doit)\n \t\t\t(*var) = ERROR;\n+ \t}\n+ \t/* We allow FATAL/PANIC for client-side messages too. */\n+ \telse if (strcasecmp(newval, \"fatal\") == 0)\n+ \t{\n+ \t\tif (doit)\n+ \t\t\t(*var) = FATAL;\n+ \t}\n+ \telse if (strcasecmp(newval, \"panic\") == 0)\n+ \t{\n+ \t\tif (doit)\n+ \t\t\t(*var) = PANIC;\n \t}\n \telse\n \t\treturn NULL;\t\t\t/* fail */", "msg_date": "Wed, 2 Oct 2002 12:26:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "GUC log_min_error_statement" } ]
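The tag check Gavin quotes from equal() is the crux of the planner report above: equal() bails out as soon as the two nodes carry different tags, so the bare Var the walker finds inside the sub-SELECT can never match the cast expression sitting in the GROUP BY list. A minimal standalone sketch of that behavior, with toy node definitions that only loosely mirror the backend structs (the field names here are illustrative assumptions, not PostgreSQL source):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy analogues of the backend's tagged nodes -- not the real definitions. */
typedef enum { T_Var, T_Expr } NodeTag;
typedef struct Node { NodeTag tag; } Node;
typedef struct Var  { NodeTag tag; int varattno; } Var;
typedef struct Expr { NodeTag tag; Node *arg; } Expr;   /* e.g. a cast */

/* Like equal(): nodes with different tags are never equal, so a GROUP BY
 * entry that is a cast expression never matches the underlying bare Var. */
static bool node_equal(const Node *a, const Node *b)
{
    if (a->tag != b->tag)
        return false;
    if (a->tag == T_Var)
        return ((const Var *) a)->varattno == ((const Var *) b)->varattno;
    return node_equal(((const Expr *) a)->arg, ((const Expr *) b)->arg);
}
```

This is also why Tom's objection holds: to accept such a query, the walker would have to match the Var against arbitrary subexpressions of every GROUP BY entry, not merely compare whole nodes tag-for-tag.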
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Peter Eisentraut [mailto:peter_e@gmx.net] \n> Sent: 30 September 2002 21:11\n> To: Dave Page\n> Cc: pgsql-odbc@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] psqlODBC *nix Makefile (new 7.3 open item?)\n> \n> \n> Dave Page writes:\n> \n> > Can someone who knows make better than I (which is probably \n> the vast \n> > majority of you!) knock up a makefile so the driver will build \n> > standalone on *nix systems please? There should be no \n> dependencies on \n> > any of the rest of the code - certainly there isn't for the Win32 \n> > build.\n> \n> I'm working something out. I'll send it to you tomorrow.\n\nThanks Peter.\n", "msg_date": "Tue, 1 Oct 2002 08:08:58 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psqlODBC *nix Makefile (new 7.3 open item?)" }, { "msg_contents": "Dave Page writes:\n\n> > > majority of you!) knock up a makefile so the driver will build\n> > > standalone on *nix systems please? There should be no\n> > dependencies on\n> > > any of the rest of the code - certainly there isn't for the Win32\n> > > build.\n> >\n> > I'm working something out. I'll send it to you tomorrow.\n\nHah. I tried to put something together based on Automake and Libtool, but\nI must conclude that Libtool is just completely utterly broken. I also\nconsidered copying over Makefile.shlib, but that would draw in too many\nauxiliary files and create a different kind of mess. 
So what I would\nsuggest right now as the course of action is to copy your local psqlodbc\nsubtree to its old location under interfaces/ and try to hook things\ntogether that way.\n\nPerhaps one of these days we should convert Makefile.shlib into a shell\nscript that we can deploy more easily to different projects.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 1 Oct 2002 22:04:57 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] psqlODBC *nix Makefile (new 7.3 open item?)" } ]
[ { "msg_contents": "\n> > Attached is a patch to fix the mb linking problems on AIX. As a nice side effect\n> > it reduces the duplicate symbol warnings to linking libpq.so and libecpg.so\n> > (all shlibs that are not postmaster loadable modules).\n> \n> Can you explain the method behind your patch? Have you tried -bnogc?\n\n-bnogc would (probably) have been the correct switch reading the man page,\nbut the method was previously not good since it involved the following:\n\n1. create a static postgres executable from the SUBSYS.o's\n2. create an exports file from above\n3. recreate a shared postgres executable\n\nThis naturally had a cyclic dependency, that could not properly be \nreflected in the Makefile (thus a second make sometimes left you with \na static postgres unless you manually removed postgres.imp).\n\nNow it does:\npostgres.imp: $(OBJS)\ncreate a temporary SUBSYS.o from all $(OBJS)\ncreate a postgres.imp from SUBSYS.o\nrm temporary SUBSYS.o\n\npostgres: postgres.imp\nlink a shared postgres\n\nA second change was to move the import and export files to the end of the link line,\nthen the linker knows not to throw a duplicate symbol warning, and keeps all symbols\nthat are mentioned in the exports file (== -bnogc restricted to $(OBJS) symbols).\n\nThus now only libpq.so and libecpg.so still show the duplicate symbol warnings since their\nlink line should actually not include postgres.imp . 
I did not see how to make a difference \nbetween loadable modules (need postgres.imp) and interface libraries (do not need postgres.imp),\nbut since the resulting libs are ok, I left it at that.\n\nI tested both gcc and xlc including regression tests.\n\nAndreas\n", "msg_date": "Tue, 1 Oct 2002 10:23:13 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)" }, { "msg_contents": "has this patched been applied to the CVS yet?\n\n\nOn Tue, 1 Oct 2002, Zeugswetter \nAndreas SB SD wrote:\n\n> Date: Tue, 1 Oct 2002 10:23:13 +0200\n> From: Zeugswetter Andreas SB SD <ZeugswetterA@spardat.at>\n> To: Peter Eisentraut <peter_e@gmx.net>\n> Cc: PostgreSQL Development <pgsql-hackers@postgresql.org>\n> Subject: Re: AIX compilation problems (was Re: [HACKERS] Proposal ...)\n> \n> \n> > > Attached is a patch to fix the mb linking problems on AIX. As a nice side effect\n> > > it reduces the duplicate symbol warnings to linking libpq.so and libecpg.so\n> > > (all shlibs that are not postmaster loadable modules).\n> > \n> > Can you explain the method behind your patch? Have you tried -bnogc?\n> \n> -bnogc would (probably) have been the correct switch reading the man page,\n> but the method was previously not good since it involved the following:\n> \n> 1. create a static postgres executable from the SUBSYS.o's\n> 2. create an exports file from above\n> 3. 
recreate a shared postgres executable\n> \n> This naturally had a cyclic dependency, that could not properly be \n> reflected in the Makefile (thus a second make sometimes left you with \n> a static postgres unless you manually removed postgres.imp).\n> \n> Now it does:\n> postgres.imp: $(OBJS)\n> create a temporary SUBSYS.o from all $(OBJS)\n> create a postgres.imp from SUBSYS.o\n> rm temporary SUBSYS.o\n> \n> postgres: postgres.imp\n> link a shared postgres\n> \n> A second change was to move the import and export files to the end of the link line,\n> then the linker knows not to throw a duplicate symbol warning, and keeps all symbols\n> that are mentioned in the exports file (== -bnogc restricted to $(OBJS) symbols).\n> \n> Thus now only libpq.so and libecpg.so still show the duplicate symbol warnings since their\n> link line should actually not include postgres.imp . I did not see how to make a difference \n> between loadable modules (need postgres.imp) and interface libraries (do not need postgres.imp),\n> but since the resulting libs are ok, I left it at that.\n> \n> I tested both gcc and xlc including regression tests.\n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\nhorwitz@argoscomp.com (Samuel A Horwitz)\n\n\n", "msg_date": "Thu, 3 Oct 2002 12:06:41 -0400 (EDT)", "msg_from": "Samuel A Horwitz <horwitz@argoscomp.com>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)" }, { "msg_contents": "Samuel A Horwitz wrote:\n> has this patched been applied to the CVS yet?\n\n\nNo, I was waiting to see if there were any negative comments, but seeing\nnone, I will add it to the patch queue today.\n\n---------------------------------------------------------------------------\n\n> \n> 
\n> On Tue, 1 Oct 2002, Zeugswetter \n> Andreas SB SD wrote:\n> \n> > Date: Tue, 1 Oct 2002 10:23:13 +0200\n> > From: Zeugswetter Andreas SB SD <ZeugswetterA@spardat.at>\n> > To: Peter Eisentraut <peter_e@gmx.net>\n> > Cc: PostgreSQL Development <pgsql-hackers@postgresql.org>\n> > Subject: Re: AIX compilation problems (was Re: [HACKERS] Proposal ...)\n> > \n> > \n> > > > Attached is a patch to fix the mb linking problems on AIX. As a nice side effect\n> > > > it reduces the duplicate symbol warnings to linking libpq.so and libecpg.so\n> > > > (all shlibs that are not postmaster loadable modules).\n> > > \n> > > Can you explain the method behind your patch? Have you tried -bnogc?\n> > \n> > -bnogc would (probably) have been the correct switch reading the man page,\n> > but the method was previously not good since it involved the following:\n> > \n> > 1. create a static postgres executable from the SUBSYS.o's\n> > 2. create an exports file from above\n> > 3. recreate a shared postgres executable\n> > \n> > This naturally had a cyclic dependency, that could not properly be \n> > reflected in the Makefile (thus a second make sometimes left you with \n> > a static postgres unless you manually removed postgres.imp).\n> > \n> > Now it does:\n> > postgres.imp: $(OBJS)\n> > create a temporary SUBSYS.o from all $(OBJS)\n> > create a postgres.imp from SUBSYS.o\n> > rm temporary SUBSYS.o\n> > \n> > postgres: postgres.imp\n> > link a shared postgres\n> > \n> > A second change was to move the import and export files to the end of the link line,\n> > then the linker knows not to throw a duplicate symbol warning, and keeps all symbols\n> > that are mentioned in the exports file (== -bnogc restricted to $(OBJS) symbols).\n> > \n> > Thus now only libpq.so and libecpg.so still show the duplicate symbol warnings since their\n> > link line should actually not include postgres.imp . 
I did not see how to make a difference \n> > between loadable modules (need postgres.imp) and interface libraries (do not need postgres.imp),\n> > but since the resulting libs are ok, I left it at that.\n> > \n> > I tested both gcc and xlc including regression tests.\n> > \n> > Andreas\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> > \n> \n> \n> horwitz@argoscomp.com (Samuel A Horwitz)\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 13:23:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)" }, { "msg_contents": "Has this fix been applied to the cvs yet, I am still getting the same \nerror\n\n\nOn Tue, 1 Oct 2002, Zeugswetter Andreas SB SD wrote:\n\n> Date: Tue, 1 Oct 2002 10:23:13 +0200\n> From: Zeugswetter Andreas SB SD <ZeugswetterA@spardat.at>\n> To: Peter Eisentraut <peter_e@gmx.net>\n> Cc: PostgreSQL Development <pgsql-hackers@postgresql.org>\n> Subject: Re: AIX compilation problems (was Re: [HACKERS] Proposal ...)\n> \n> \n> > > Attached is a patch to fix the mb linking problems on AIX. As a nice side effect\n> > > it reduces the duplicate symbol warnings to linking libpq.so and libecpg.so\n> > > (all shlibs that are not postmaster loadable modules).\n> > \n> > Can you explain the method behind your patch? 
Have you tried -bnogc?\n> \n> -bnogc would (probably) have been the correct switch reading the man page,\n> but the method was previously not good since it involved the following:\n> \n> 1. create a static postgres executable from the SUBSYS.o's\n> 2. create an exports file from above\n> 3. recreate a shared postgres executable\n> \n> This naturally had a cyclic dependency, that could not properly be \n> reflected in the Makefile (thus a second make sometimes left you with \n> a static postgres unless you manually removed postgres.imp).\n> \n> Now it does:\n> postgres.imp: $(OBJS)\n> create a temporary SUBSYS.o from all $(OBJS)\n> create a postgres.imp from SUBSYS.o\n> rm temporary SUBSYS.o\n> \n> postgres: postgres.imp\n> link a shared postgres\n> \n> A second change was to move the import and export files to the end of the link line,\n> then the linker knows not to throw a duplicate symbol warning, and keeps all symbols\n> that are mentioned in the exports file (== -bnogc restricted to $(OBJS) symbols).\n> \n> Thus now only libpq.so and libecpg.so still show the duplicate symbol warnings since their\n> link line should actually not include postgres.imp . 
I did not see how to make a difference \n> between loadable modules (need postgres.imp) and interface libraries (do not need postgres.imp),\n> but since the resulting libs are ok, I left it at that.\n> \n> I tested both gcc and xlc including regression tests.\n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\nhorwitz@argoscomp.com (Samuel A Horwitz)\n\n\n", "msg_date": "Wed, 9 Oct 2002 08:26:26 -0400 (EDT)", "msg_from": "Samuel A Horwitz <horwitz@argoscomp.com>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)" }, { "msg_contents": "\nStill in queue. I will apply today.\n\n---------------------------------------------------------------------------\n\nSamuel A Horwitz wrote:\n> Has this fix been applied to the cvs yet, I am still getting the same \n> error\n> \n> \n> On Tue, 1 Oct 2002, Zeugswetter Andreas SB SD wrote:\n> \n> > Date: Tue, 1 Oct 2002 10:23:13 +0200\n> > From: Zeugswetter Andreas SB SD <ZeugswetterA@spardat.at>\n> > To: Peter Eisentraut <peter_e@gmx.net>\n> > Cc: PostgreSQL Development <pgsql-hackers@postgresql.org>\n> > Subject: Re: AIX compilation problems (was Re: [HACKERS] Proposal ...)\n> > \n> > \n> > > > Attached is a patch to fix the mb linking problems on AIX. As a nice side effect\n> > > > it reduces the duplicate symbol warnings to linking libpq.so and libecpg.so\n> > > > (all shlibs that are not postmaster loadable modules).\n> > > \n> > > Can you explain the method behind your patch? Have you tried -bnogc?\n> > \n> > -bnogc would (probably) have been the correct switch reading the man page,\n> > but the method was previously not good since it involved the following:\n> > \n> > 1. create a static postgres executable from the SUBSYS.o's\n> > 2. 
create an exports file from above\n> > 3. recreate a shared postgres executable\n> > \n> > This naturally had a cyclic dependency, that could not properly be \n> > reflected in the Makefile (thus a second make sometimes left you with \n> > a static postgres unless you manually removed postgres.imp).\n> > \n> > Now it does:\n> > postgres.imp: $(OBJS)\n> > create a temporary SUBSYS.o from all $(OBJS)\n> > create a postgres.imp from SUBSYS.o\n> > rm temporary SUBSYS.o\n> > \n> > postgres: postgres.imp\n> > link a shared postgres\n> > \n> > A second change was to move the import and export files to the end of the link line,\n> > then the linker knows not to throw a duplicate symbol warning, and keeps all symbols\n> > that are mentioned in the exports file (== -bnogc restricted to $(OBJS) symbols).\n> > \n> > Thus now only libpq.so and libecpg.so still show the duplicate symbol warnings since their\n> > link line should actually not include postgres.imp . I did not see how to make a difference \n> > between loadable modules (need postgres.imp) and interface libraries (do not need postgres.imp),\n> > but since the resulting libs are ok, I left it at that.\n> > \n> > I tested both gcc and xlc including regression tests.\n> > \n> > Andreas\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> > \n> \n> \n> horwitz@argoscomp.com (Samuel A Horwitz)\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 9 Oct 2002 11:23:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AIX compilation problems (was Re: Proposal ...)" } ]
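The dependency structure Andreas describes amounts to two Makefile rules in which the export list is derived from the object files themselves, so there is no longer a cycle through a previously linked postgres binary. This is only an illustrative fragment -- the helper name `mkldexport` and the exact linker flags are assumptions standing in for whatever the AIX port Makefile really uses:

```makefile
# postgres.imp depends only on $(OBJS): a second make finds it up to
# date instead of leaving behind a statically linked postgres.
postgres.imp: $(OBJS)
	ld -r -o SUBSYS.tmp.o $(OBJS)      # temporary combined object
	./mkldexport SUBSYS.tmp.o > $@     # exports taken from the objects
	rm -f SUBSYS.tmp.o

# Putting the import/export files at the END of the link line is what
# silences the duplicate-symbol warnings and keeps every exported
# symbol -- the effect of -bnogc restricted to the $(OBJS) symbols.
postgres: $(OBJS) postgres.imp
	$(CC) -o $@ $(OBJS) $(LDFLAGS) -Wl,-bE:postgres.imp
```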
[ { "msg_contents": "\nIs it my imagination, or is there a problem with the way pg_dump uses off_t \netc. My understanding is that off_t may be 64 bits on systems with 32 bit \nints. But it looks like pg_dump writes them as 4 byte values in all cases. \nIt also reads them as 4 byte values. Does this seem like a problem to \nanybody else?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Tue, 01 Oct 2002 22:13:04 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "pg_dump and large files - is this a problem?" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Is it my imagination, or is there a problem with the way pg_dump uses off_t \n> etc. My understanding is that off_t may be 64 bits on systems with 32 bit \n> ints. But it looks like pg_dump writes them as 4 byte values in all cases. \n> It also reads them as 4 byte values. Does this seem like a problem to \n> anybody else?\n\nYes, it does --- the implication is that the custom format, at least,\ncan't support dumps > 4Gb. What exactly is pg_dump writing off_t's\ninto files for; maybe there's not really a problem?\n\nIf there is a problem, seems like we'd better fix it. Perhaps there\nneeds to be something in the header to tell the reader the sizeof\noff_t.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Oct 2002 09:59:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? 
" }, { "msg_contents": "Tom Lane wrote:\n> Philip Warner <pjw@rhyme.com.au> writes:\n> > Is it my imagination, or is there a problem with the way pg_dump uses off_t \n> > etc. My understanding is that off_t may be 64 bits on systems with 32 bit \n> > ints. But it looks like pg_dump writes them as 4 byte values in all cases. \n> > It also reads them as 4 byte values. Does this seem like a problem to \n> > anybody else?\n> \n> Yes, it does --- the implication is that the custom format, at least,\n> can't support dumps > 4Gb. What exactly is pg_dump writing off_t's\n> into files for; maybe there's not really a problem?\n> \n> If there is a problem, seems like we'd better fix it. Perhaps there\n> needs to be something in the header to tell the reader the sizeof\n> off_t.\n\nBSD/OS has 64-bit off_t's so it does support large files. Is there\nsomething I can test?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 1 Oct 2002 11:20:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "Tom Lane writes:\n\n> Yes, it does --- the implication is that the custom format, at least,\n> can't support dumps > 4Gb. What exactly is pg_dump writing off_t's\n> into files for; maybe there's not really a problem?\n\nThat's kind of what I was wondering, too.\n\nNot that it's an excuse, but I think that large file access through zlib\nwon't work anyway. Zlib uses the integer types in fairly random ways.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 1 Oct 2002 22:03:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? 
" }, { "msg_contents": "At 09:59 AM 1/10/2002 -0400, Tom Lane wrote:\n>If there is a problem, seems like we'd better fix it. Perhaps there\n>needs to be something in the header to tell the reader the sizeof\n>off_t.\n\nYes, and do the peripheral stuff to support old archives etc. We also need \nto be careful about the places where we do file-position-arithmetic - if \nthere are any, I can't recall.\n\nI am not sure we need to worry about whether zlib supports large files \nsince I am pretty sure we don't use zlib for file IO - we just pass it \nin-memory blocks; so it should work no matter how much data is in the stream.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Wed, 02 Oct 2002 09:42:16 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "At 11:20 AM 1/10/2002 -0400, Bruce Momjian wrote:\n>BSD/OS has 64-bit off_t's so it does support large files. Is there\n>something I can test?\n\nNot really since it saves only the first 32 bits of the 64 bit positions it \nwill do no worse than a version that supports 32 bits only. It might even \ndo slightly better. 
When this is sorted out, we need to verify that:\n\n- large dump files are restorable\n\n- dump files with 32 bit off_t restore properly on systems with 64 bit off_t\n\n- dump files with 64 bit off_t restore properly on systems with 32 bit off_t AS\nLONG AS the offsets are less than 32 bits.\n\n- old dump files restore properly.\n\n- new dump files have a new version number so that old pg_restore will not \ntry to restore them.\n\nWe probably need to add Read/WriteOffset to pg_backup_archiver.c to read \nthe appropriate sized value from a dump file, in the same way that \nRead/WriteInt works now.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 03 Oct 2002 00:07:16 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 09:42 AM 2/10/2002 +1000, Philip Warner wrote:\n>> Yes, and do the peripheral stuff to support old archives etc.\n\n> Does silence mean people agree? Does it also mean someone is doing this \n> (eg. whoever did the off_t support)? Or does it mean somebody else needs to \n> do it?\n\nIt needs to get done; AFAIK no one has stepped up to do it. Do you want\nto?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Oct 2002 11:06:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem? " }, { "msg_contents": "Philip Warner wrote:\n> At 09:42 AM 2/10/2002 +1000, Philip Warner wrote:\n> >Yes, and do the peripheral stuff to support old archives etc.\n> \n> Does silence mean people agree? Does it also mean someone is doing this \n> (eg. whoever did the off_t support)? Or does it mean somebody else needs to \n> do it?\n\nAdded to open items:\n\n\tFix pg_dump to handle 64-bit off_t offsets for custom format\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n P O S T G R E S Q L\n\n 7 . 3 O P E N I T E M S\n\n\nCurrent at ftp://momjian.postgresql.org/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nSchema handling - ready? interfaces? 
client apps?\nDrop column handling - ready for all clients, apps?\nFix BeOS, QNX4 ports\nFix AIX large file compile failure of 2002-09-11 (Andreas)\nGet bison upgrade on postgresql.org for ecpg only (Marc)\nFix vacuum btree bug (Tom)\nFix client apps for autocommit = off\nChange log_min_error_statement to be off by default (Gavin)\nFix return tuple counts/oid/tag for rules, SPI\nAdd schema dump option to pg_dump\nMake SET not start a transaction with autocommit off, document it\nRemove GRANT EXECUTE to all /contrib functions?\nChange NUMERIC to have 16 digit precision\nHandle CREATE CONSTRAINT TRIGGER without FROM in loads from old db's\nFix pg_dump to handle 64-bit off_t offsets for custom format\n\nOn Going\n--------\nSecurity audit\n\n\nDocumentation Changes\n---------------------\nDocument need to add permissions to loaded functions and languages\nMove documentation to gborg for moved projects\n\n\n7.2.X\n-----\nCLOG\nWAL checkpoint\nLinux mktime()", "msg_date": "Wed, 2 Oct 2002 11:45:40 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump and large files - is this a problem?" }, { "msg_contents": "At 11:06 AM 2/10/2002 -0400, Tom Lane wrote:\n>It needs to get done; AFAIK no one has stepped up to do it. Do you want\n>to?\n\nI'll have a look; my main concern at the moment is that off_t and size_t \nare totally non-committal as to structure; in particular I can probably \nsafely assume that they are unsigned, but can I assume that they have the \nsame endian-ness as int etc?\n\nIf so, then will it be valid to just read/write each byte in endian order? \nHow likely is it that the 64 bit value will actually be implemented as a \nstructure like:\n\noff_t { int lo; int hi; }\n\nwhich effectively ignores endian-ness at the 32 bit scale?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n", "msg_date": "Thu, 03 Oct 2002 14:39:13 +1000", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "Re: pg_dump and large files - is this a problem? " } ]
[ { "msg_contents": "I noticed that some of my queries don't work anymore because they're using\nthe floor function:\ne.g.: select type, floor(date_part('epoch', dataend)) as ts from\nlast_modified\n\nWhy is floor not working anymore? I guess I can use round too, but I don't\nwant to modify semantics.\n\nRegards,\n\tMario Weilguni\n\n icomedias <communication solutions/> graz . berlin\n-------------------------------------------------------\n icomedias ist Hersteller von ico>>cms: Information-\n und Content Management System für Inter- UND INTRAnet\n-------------------------------------------------------\n\nMario Weilguni icomedias gmbh\nsoftware engineering entenplatz 1\ntel: +43-316-721.671-17 8020 graz, austria\nfax: +43-316-721.671-26 http://www.icomedias.com\ne-mail: mario.weilguni@icomedias.com\n", "msg_date": "Tue, 1 Oct 2002 16:17:01 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "floor function in 7.3b2" }, { "msg_contents": "\"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> I noticed that some of my queries don't work anymore because they're using\n> the floor function:\n> e.g.: select type, floor(date_part('epoch', dataend)) as ts from\n> last_modified\n> Why is floor not working anymore?\n\nMph. Seems we have floor(numeric) but not floor(float8), and the latter\nis what you need here.\n\nYou could cast date_part's result to numeric; or perhaps you could use\ntrunc() which exists in both numeric and float8 flavors. 
It's got\ndifferent semantics for negative inputs though.\n\nFor 7.4 we should take another look at the operator/function set and\nfill in this hole and any others like it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Oct 2002 10:31:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: floor function in 7.3b2 " }, { "msg_contents": "I noticed some other minor differences between 7.2 and 7.3:\n* 7.2: select now() + '1 minute'::timespan => works\n* 7.2: select now() + '1 minute'::reltime => works\n* 7.3: select now() + '1 minute'::timespan => does not work (Type \"timespan\" does not exist)\n* 7.3 select now() + '1 minute'::reltime => works\n\nSo timespan is no longer supported I guess, but reltime will work as well. Is there a compatibility or migration section in the documentation that might help users to handle this?\nMaybe we can collect such reports and prepare a upgrade tutorial?\n\nRegards,\n\tMario\n\n", "msg_date": "Wed, 2 Oct 2002 08:02:46 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "some more minor incompatibilties 7.2 <-> 7.3" }, { "msg_contents": "Mario Weilguni <mweilguni@sime.com> writes:\n> So timespan is no longer supported I guess, but reltime will work as\n> well. 
Is there a compatibility or migration section in the\n> documentation that might help users to handle this?\n\nThe release notes are still in a pretty crude state, but they do mention\nthis issue:\n\n: The last vestiges of support for type names datetime and timespan are\n: gone; use timestamp and interval instead\n\nSee\nhttp://developer.postgresql.org/docs/postgres/release.html\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Oct 2002 09:07:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: some more minor incompatibilties 7.2 <-> 7.3 " }, { "msg_contents": "\nAdded to TODO:\n\n\t* Add floor(float8) and other missing functions\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > I noticed that some of my queries don't work anymore because they're using\n> > the floor function:\n> > e.g.: select type, floor(date_part('epoch', dataend)) as ts from\n> > last_modified\n> > Why is floor not working anymore?\n> \n> Mph. Seems we have floor(numeric) but not floor(float8), and the latter\n> is what you need here.\n> \n> You could cast date_part's result to numeric; or perhaps you could use\n> trunc() which exists in both numeric and float8 flavors. It's got\n> different semantics for negative inputs though.\n> \n> For 7.4 we should take another look at the operator/function set and\n> fill in this hole and any others like it.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 22:26:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: floor function in 7.3b2" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n> \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > Why is floor not working anymore?\n> \n> Mph. Seems we have floor(numeric) but not floor(float8), and the latter\n> is what you need here.\n\nSorry, I missed much of the casting discussion -- but is there a\nreason why we can't cast from float8 -> numeric implicitely? IIRC the\nidea was to allow implicit casts from lower precision types to higher\nprecision ones.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "04 Oct 2002 02:31:22 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: floor function in 7.3b2" }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> Sorry, I missed much of the casting discussion -- but is there a\n> reason why we can't cast from float8 -> numeric implicitely? IIRC the\n> idea was to allow implicit casts from lower precision types to higher\n> precision ones.\n\nThe implicit casting hierarchy is now\n\n\tint2 -> int4 -> int8 -> numeric -> float4 -> float8\n\nMoving to the left requires an explicit cast (or at least an assignment\nto a column). I know this looks strange to someone who knows that our\nnumeric type beats float4/float8 on both range and precision, but it's\neffectively mandated by the SQL spec. Any combination of \"exact\" and\n\"inexact\" numeric types is supposed to yield an \"inexact\" result per\nspec, thus numeric + float8 yields float8 not numeric. Another reason\nfor doing it this way is that a numeric literal like \"123.456\" can be\ninitially typed as numeric, and later implicitly promoted to float4 or\nfloat8 if context demands it. 
Doing that the other way 'round would\nintroduce problems with precision loss. We had speculated about\nintroducing an \"unknown_numeric\" pseudo-type to avoid that problem, but\nthe above hierarchy eliminates the need for \"unknown_numeric\". We can\ninitially type a literal as the smallest thing it will fit in, and then\ndo implicit promotion as needed. (7.3 is not all the way there on that\nplan, but 7.4 will be.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 09:37:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "numeric hierarchy again (was Re: floor function in 7.3b2)" }, { "msg_contents": "Tom Lane wrote:\n> Moving to the left requires an explicit cast (or at least an assignment\n> to a column). I know this looks strange to someone who knows that our\n> numeric type beats float4/float8 on both range and precision, but it's\n> effectively mandated by the SQL spec. Any combination of \"exact\" and\n> \"inexact\" numeric types is supposed to yield an \"inexact\" result per\n> spec, thus numeric + float8 yields float8 not numeric. Another reason\n> for doing it this way is that a numeric literal like \"123.456\" can be\n> initially typed as numeric, and later implicitly promoted to float4 or\n> float8 if context demands it. Doing that the other way 'round would\n> introduce problems with precision loss. We had speculated about\n> introducing an \"unknown_numeric\" pseudo-type to avoid that problem, but\n> the above hierarchy eliminates the need for \"unknown_numeric\". We can\n> initially type a literal as the smallest thing it will fit in, and then\n> do implicit promotion as needed. (7.3 is not all the way there on that\n> plan, but 7.4 will be.)\n\nDo we know that defaulting floating constants will not be a performance\nhit?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 12:01:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: numeric hierarchy again (was Re: floor function in 7.3b2)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Do we know that defaulting floating constants will not be a performance\n> hit?\n\nUh ... what's your concern exactly? The datatype coercion (if any) will\nhappen once at parse time, not at runtime.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 12:10:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: numeric hierarchy again (was Re: floor function in 7.3b2) " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Do we know that defaulting floating constants will not be a performance\n> > hit?\n> \n> Uh ... what's your concern exactly? The datatype coercion (if any) will\n> happen once at parse time, not at runtime.\n\nYes, I realize it is during parsing. I was just wondering if making\nconstants coming in from the parser NUMERIC is a performance hit? I see\nin gram.y that FCONST comes in as a Float so I don't even see were we\nmake it NUMERIC.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 13:45:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: numeric hierarchy again (was Re: floor function in 7.3b2)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, I realize it is during parsing. 
I was just wondering if making\n> constants coming in from the parser NUMERIC is a performance hit?\n\nOffhand I don't see a reason to think that coercing to NUMERIC (and then\nsomething else) is slower than coercing to FLOAT (and then something else).\nYeah, you would come out a little behind when the final destination type\nis FLOAT, but on the other hand you win a little when it's NUMERIC.\nI see no reason to think this isn't a wash overall.\n\n> I see\n> in gram.y that FCONST comes in as a Float so I don't even see were we\n> make it NUMERIC.\n\nIt's make_const in parse_node.c that has the first contact with the\ngrammar's output. Up to that point the value's just a string, really.\nThe grammar does *not* coerce it to float8.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 15:07:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: numeric hierarchy again (was Re: floor function in 7.3b2) " } ]
[ { "msg_contents": "If it's of any use the following link gives some info on different schemes\nand details on an ISO week numbering standard.\n\nhttp://www.merlyn.demon.co.uk/weekinfo.htm#WkNo\n\nBest Regards,\n\nTim Knowles\n\n", "msg_date": "Tue, 1 Oct 2002 16:36:09 +0100", "msg_from": "\"Tim Knowles\" <tim@ametco.co.uk>", "msg_from_op": true, "msg_subject": "Re: Postgresql likes Tuesday..." } ]
[ { "msg_contents": "what's the default lock in pgsql?\n\nif I issued insert(copy)/or update processed\non the same table but on different records\nthe same time, how those processes will\naffect each other? \n\nthanks.\n\njohnl\n\n", "msg_date": "Tue, 1 Oct 2002 15:30:38 -0500", "msg_from": "\"John Liu\" <johnl@synthesys.com>", "msg_from_op": true, "msg_subject": "table lock and record lock" }, { "msg_contents": "On Tue, Oct 01, 2002 at 03:30:38PM -0500, John Liu wrote:\n> what's the default lock in pgsql?\n> \n> if I issued insert(copy)/or update processed\n> on the same table but on different records\n> the same time, how those processes will\n> affect each other? \n\nYou might want to check out the docs at\n\n<http://developer.postgresql.org/docs/pgsql/src/tools/backend/index.html>\n\nand \n\n<http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/mvcc.html>\n\nto learn the answers to these questions. There's no general answer\nto your question, exactly, since you talk about insert, copy, and\nupdate.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Tue, 1 Oct 2002 16:37:10 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: table lock and record lock" }, { "msg_contents": "On Tue, 1 Oct 2002, John Liu wrote:\n\n> what's the default lock in pgsql?\n> \n> if I issued insert(copy)/or update processed\n> on the same table but on different records\n> the same time, how those processes will\n> affect each other? \n\npostgresql does not do \"locking\" in the sense of how most database do \nlocking. It uses a system called multi-version concurrency control that \nprevents writers from blocking readers and vice versa. 
It has advantages \nand disadvantages over the row locking methodology used by most other \ndatabases, but you can read for yourself by looking in the docs at:\n\nhttp://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/mvcc.html\n\nGood luck.\n\n", "msg_date": "Tue, 1 Oct 2002 15:24:07 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: table lock and record lock" } ]
[ { "msg_contents": "What is the reason for maintaining separate rscale and dscale values in\nnumeric variables?\n\nI am finding that this arrangement leads to some odd results, for\nexample this:\n\nregression=# select (exp(ln(2.0)) - 2.0);\n ?column?\n---------------------\n -0.0000000000000000\n(1 row)\n\nregression=# select (exp(ln(2.0)) - 2.0) * 100000;\n ?column?\n---------------------\n -0.0000000000000010\n(1 row)\n\nThe difference between rscale and dscale allows some \"hidden\" digits to\nbe carried along in an expression result, and then possibly exposed\nlater. This seems pretty nonintuitive for an allegedly exact\ncalculational datatype. ISTM the policy should be \"what you see is what\nyou get\" - no hidden digits. That would mean there's no need for\nseparating rscale and dscale, so I'm wondering why they were put in\nto begin with.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 01 Oct 2002 17:24:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Purpose of rscale/dscale in NUMERIC?" }, { "msg_contents": "Tom Lane wrote:\n> \n> What is the reason for maintaining separate rscale and dscale values in\n> numeric variables?\n> \n> I am finding that this arrangement leads to some odd results, for\n> example this:\n> \n> regression=# select (exp(ln(2.0)) - 2.0);\n> ?column?\n> ---------------------\n> -0.0000000000000000\n> (1 row)\n> \n> regression=# select (exp(ln(2.0)) - 2.0) * 100000;\n> ?column?\n> ---------------------\n> -0.0000000000000010\n> (1 row)\n> \n> The difference between rscale and dscale allows some \"hidden\" digits to\n> be carried along in an expression result, and then possibly exposed\n> later. This seems pretty nonintuitive for an allegedly exact\n> calculational datatype. ISTM the policy should be \"what you see is what\n> you get\" - no hidden digits. 
That would mean there's no need for\n> separating rscale and dscale, so I'm wondering why they were put in\n> to begin with.\n\nYou need to carry around a decent number of digits when you divide\nalready. Exposing them in a manner that numericcol(15,2) / 3.0 all of\nthe sudden displays 16 or more digits isn't much more intuitive. But\ncarrying around only 2 here leads to nonintuitively fuzzy results on the\nother hand.\n\nIt only applies to division and higher functions, and these are not\n\"exact\" if you calculate the result and represent it decimal. They never\nhave been.\n\nSo to answer your question, they are there to make the NUMERIC datatype\nuseful for non-exact stuff too. You can expect an exact result where an\nexact representation in decimal can be expected. Where this is not the\ncase, you get a good approximation.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Wed, 02 Oct 2002 08:47:11 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Purpose of rscale/dscale in NUMERIC?" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> Tom Lane wrote:\n>> What is the reason for maintaining separate rscale and dscale values in\n>> numeric variables?\n\n> You need to carry around a decent number of digits when you divide\n> already. Exposing them in a manner that numericcol(15,2) / 3.0 all of\n> the sudden displays 16 or more digits isn't much more intuitive. But\n> carrying around only 2 here leads to nonintuitively fuzzy results on the\n> other hand.\n\nCertainly you need extra guard digits while you do the calculation.\nWhat I'm wondering is why the delivered result would have hidden digits\nin it. If they're accurate, why not show them? 
If they're not accurate\n(which they're not, at least in the case I showed) why is it a good idea\nto let them escape?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Oct 2002 09:29:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Purpose of rscale/dscale in NUMERIC? " }, { "msg_contents": "Tom Lane wrote:\n> \n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > Tom Lane wrote:\n> >> What is the reason for maintaining separate rscale and dscale values in\n> >> numeric variables?\n> \n> > You need to carry around a decent number of digits when you divide\n> > already. Exposing them in a manner that numericcol(15,2) / 3.0 all of\n> > the sudden displays 16 or more digits isn't much more intuitive. But\n> > carrying around only 2 here leads to nonintuitively fuzzy results on the\n> > other hand.\n> \n> Certainly you need extra guard digits while you do the calculation.\n> What I'm wondering is why the delivered result would have hidden digits\n> in it. If they're accurate, why not show them? If they're not accurate\n> (which they're not, at least in the case I showed) why is it a good idea\n> to let them escape?\n\nSo we need them in the calculation, and if it's a nested tree of\nfunction calls, they have to travel around too. What do you think is a\ngood place to kill these critters then?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Wed, 02 Oct 2002 11:09:10 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Purpose of rscale/dscale in NUMERIC?" } ]
[ { "msg_contents": "Hi all,\n\nOver the last few weeks we've put together a new \"Advocacy and\nMarketing\" website for PostgreSQL:\n\nhttp://advocacy.postgresql.org\n\nIt's now ready for public release. It has the first few case studies,\nlists the major advantages to PostgreSQL, and provides a place you can\npoint your CIO, CTO, and CEO's at, etc.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Wed, 02 Oct 2002 10:34:49 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "New PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "* Justin Clift <justin@postgresql.org> [2002-10-02 10:34 +1000]:\n> Over the last few weeks we've put together a new \"Advocacy and\n> Marketing\" website for PostgreSQL:\n> \n> http://advocacy.postgresql.org\n\nCool :-)\n\nA few remarks:\n\n* [http://advocacy.postgresql.org/advantages/]\n \"\"\"\n ...\n A point list for some technical features that PostgreSQL offers:\n ...\n * Replication (available commercially) allowing the duplication of\n the database on multiple machines\n \"\"\"\n\n IIRC there is now a replication solution in contrib/ I've never used\n it though. So you can perhaps cut the \"available commercially\" There\n might be other commercial offerings I know nothing about.\n\n* \"PostgreSQL : The worlds most advanced Open Source database\"\n This probably isn't entirely true any more, considering the\n availability of SAP DB. I personally still stick with PostgreSQL,\n however, as I like it and it seems to have much momentum.\n\n* Allows you to win Bullshit-Bingo in 10 seconds. But that's by design\n ;-)\n\n* I don't like serif-fonts like \"Times New Roman\" on web pages. 
What\n about using a font declaration like (from my homepage):\n\n body { font-family: Verdana, Arial, Helvetica, sans-serif;\n background-color: #FFFFEE}\n\n throughout the site?\n\n-- Gerhard\n", "msg_date": "Wed, 2 Oct 2002 03:52:37 +0200", "msg_from": "Gerhard Häring <gerhard.haering@gmx.de>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "Gerhard Häring wrote:\n> \n> * Justin Clift <justin@postgresql.org> [2002-10-02 10:34 +1000]:\n> > Over the last few weeks we've put together a new \"Advocacy and\n> > Marketing\" website for PostgreSQL:\n> >\n> > http://advocacy.postgresql.org\n> \n> Cool :-)\n\n:-)\n\n> A few remarks:\n\nThanks Gerhard. :)\n \n> * [http://advocacy.postgresql.org/advantages/]\n> \"\"\"\n> ...\n> A point list for some technical features that PostgreSQL offers:\n> ...\n> * Replication (available commercially) allowing the duplication of\n> the database on multiple machines\n> \"\"\"\n> \n> IIRC there is now a replication solution in contrib/ I've never used\n> it though. So you can perhaps cut the \"available commercially\" There\n> might be other commercial offerings I know nothing about.\n\nGood point. Although the /contrib/rserv version doesn't work for\nPostgreSQL 7.2.x (have tried), there is still Usogres and stuff. Will\nupdate that. :)\n\n> * \"PostgreSQL : The worlds most advanced Open Source database\"\n> This probably isn't entirely true any more, considering the\n> availability of SAP DB. I personally still stick with PostgreSQL,\n> however, as I like it and it seems to have much momentum.\n\nHmmm... suggested new tag line? :)\n\n> * Allows you to win Bullshit-Bingo in 10 seconds. But that's by design\n> ;-)\n\nHeh Heh Heh\n\n> * I don't like serif-fonts like \"Times New Roman\" on web pages. What\n> about using a font declaration like (from my homepage):\n\nThere aren't any typeface changing elements in the page (unless\nsomething got past me). 
And we don't use CSS anywhere. Seems to be\nreally cross-browser this way. :)\n\n> body { font-family: Verdana, Arial, Helvetica, sans-serif;\n> background-color: #FFFFEE}\n> \n> throughout the site?\n\nUm... not yet. Haven't learnt CSS yet, don't have the spare time too,\nand for the moment \"this works\". Hope that doesn't sound like a bad\nattitude, it's just there's so much stuff going on at the moment.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n> -- Gerhard\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Wed, 02 Oct 2002 12:07:46 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> Gerhard Häring wrote:\n> > IIRC there is now a replication solution in contrib/ I've never\n> > used it though. So you can perhaps cut the \"available\n> > commercially\" There might be other commercial offerings I know\n> > nothing about.\n> \n> Good point. Although the /contrib/rserv version doesn't work for\n> PostgreSQL 7.2.x (have tried), there is still Usogres and\n> stuff. Will update that. :)\n\nI think we should only advertise the features that we actually\nsupport. IMHO, there is no open-source code that supports replication\nin a way that is suitable for widespread production use -- so, at\nleast, you should keep the \"commercially available\" caveat.\n\nAlso, the feature list doesn't include foreign keys / ref int.,\ncursors, ANSI outer joins, and MVCC (which you should probably explain\nin terms a PHB can understand).\n\nThe list of interfaces should probably also include ecpg. 
If we're\nincluding interfaces for languages not bundled with the database, Ruby\nand PHP are probably worth including.\n\nYou can also elaborate on the support for stored procedures --\ne.g. the languages in which they can be defined.\n\nYou might want to be a little more verbose on the support for\nsubqueries (don't just say UNION, etc.)\n\nThe paragraph titled \"Extensible\" seems to end in mid-sentence.\n\nMentioning the limitations of PostgreSQL might be good -- \"we support\ntables up to x gigabytes, with y rows, z columns, etc.\". There's an\nFAQ entry on this which you can probably adapt.\n\nA brief bit on the history of Postgres might be good (it's been around\nfor a while, therefore it's not likely to die off any time soon).\n\nThe SapDB feature list is here:\n\n http://www.sapdb.org/sap_db_features.htm\n\nMight be worth looking at...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "01 Oct 2002 22:37:16 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "Hi Neil,\n\nNeil Conway wrote:\n> \n> Justin Clift <justin@postgresql.org> writes:\n> > Gerhard Häring wrote:\n> > > IIRC there is now a replication solution in contrib/ I've never\n> > > used it though. So you can perhaps cut the \"available\n> > > commercially\" There might be other commercial offerings I know\n> > > nothing about.\n> >\n> > Good point. Although the /contrib/rserv version doesn't work for\n> > PostgreSQL 7.2.x (have tried), there is still Usogres and\n> > stuff. Will update that. :)\n> \n> I think we should only advertise the features that we actually\n> support. IMHO, there is no open-source code that supports replication\n> in a way that is suitable for widespread production use -- so, at\n> least, you should keep the \"commercially available\" caveat.\n\nDoes anyone know how usable Usogres is? 
Haven't had the time to\nproperly implement it and test it. :-/\n \n> Also, the feature list doesn't include foreign keys / ref int.,\n> cursors, ANSI outer joins, and MVCC (which you should probably explain\n> in terms a PHB can understand).\n\nJust added them (MVCC was already there though).\n \n> The list of interfaces should probably also include ecpg. If we're\n> including interfaces for languages not bundled with the database, Ruby\n> and PHP are probably worth including.\n\nAdded these too. :)\n\n> You can also elaborate on the support for stored procedures --\n> e.g. the languages in which they can be defined.\n\nNot yet. There's only so much that should be on the page, and although\nwe have tonnes of features I'm not sure that it's not overdone already.\n\n> You might want to be a little more verbose on the support for\n> subqueries (don't just say UNION, etc.)\n> \n> The paragraph titled \"Extensible\" seems to end in mid-sentence.\n\nOops. Thanks for pointing that out. Just fixed it. :)\n \n> Mentioning the limitations of PostgreSQL might be good -- \"we support\n> tables up to x gigabytes, with y rows, z columns, etc.\". There's an\n> FAQ entry on this which you can probably adapt.\n\nGood point, but not just yet. Let's see how this goes first. :)\n \n> A brief bit on the history of Postgres might be good (it's been around\n> for a while, therefore it's not likely to die off any time soon).\n\nAny suggestions to add to the presently-small-piece on the About page? \nIf anyone can put together something that sounds good, then that would\nbe an excellent improvement.\n \n> The SapDB feature list is here:\n> \n> http://www.sapdb.org/sap_db_features.htm\n> \n> Might be worth looking at...\n\nThanks heaps Neil. Just added the stuff they mention, that we also\nhave.\n\nSubtransactions was the only thing they seem to have that we don't (from\nthat list).\n\n\"Most Advanced Open Source Database\" might still be correct after all. 
\nMVCC is only optional for them, and we have a bunch more available\ninterfaces... :)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> Cheers,\n> \n> Neil\n> \n> --\n> Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Wed, 02 Oct 2002 13:27:11 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "On Tue, Oct 01, 2002 at 10:37:16PM -0400, Neil Conway wrote:\n> Mentioning the limitations of PostgreSQL might be good -- \"we support\n> tables up to x gigabytes, with y rows, z columns, etc.\". There's an\n> FAQ entry on this which you can probably adapt.\n\nYou're looking at it wrong. PostgreSQL doesn't have limitations, it has\ncapabilities. For example: PostgreSQL supports multi-terabyte databases with\nbillions of rows and thousands of columns. Examples in <big company a>, <big\ncompany b>.\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Wed, 2 Oct 2002 15:43:22 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "Greetings, Justin!\n\nAt 02.10.2002, 6:07, you wrote:\n\n>> IIRC there is now a replication solution in contrib/ I've never used\n>> it though. So you can perhaps cut the \"available commercially\" There\n>> might be other commercial offerings I know nothing about.\n\nJC> Good point. Although the /contrib/rserv version doesn't work for\nJC> PostgreSQL 7.2.x (have tried), there is still Usogres and stuff. Will\nJC> update that. 
:)\n\n In fact, contrib/rserv works quite well, I've been using it for half a year\n already. There are gotchas, of course:\n 1) It does not work out of the box. Its build process was broken\n some time ago. I tried to submit a patch for this, but had hard\n time pushing it through the same developer who broke it. Well,\n maybe I will at last be able to produce something that will please\n him and it will be in 7.3.x\n 2) There is no coherent documentation for the package. Well, maybe\n I should write some based on my experience. :]\n 3) If you dump/restore a DB with contrib/rserv, you'll need to do\n some manual tweaking to its tables.\n\n\n-- \n� ���������, ������� ������\n����� ��������-�������� ��� \"���-�����\"\nhttp://www.rdw.ru \nhttp://www.vashdosug.ru\n\n\n", "msg_date": "Wed, 2 Oct 2002 11:31:41 +0400", "msg_from": "Alexey Borzov <borz_off@rdw.ru>", "msg_from_op": false, "msg_subject": "concerning rserv Re[2]: PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "Justin,\n\nwhat does world map with fuzzy points supposed to show ?\n\n\tOleg\nOn Wed, 2 Oct 2002, Justin Clift wrote:\n\n> Hi all,\n>\n> Over the last few weeks we've put together a new \"Advocacy and\n> Marketing\" website for PostgreSQL:\n>\n> http://advocacy.postgresql.org\n>\n> It's now ready for public release. 
It has the first few case studies,\n> lists the major advantages to PostgreSQL, and provides a place you can\n> point your CIO, CTO, and CEO's at, etc.\n>\n> :-)\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 2 Oct 2002 11:26:52 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] New PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "Hi Oleg,\n\nIt's supposed to show roughly where everyone is.\n\nBased mostly on Vince's map from the developer site, but this one is\nreally easy to update.\n\nIf you're not located on the map correctly (probably hard to tell, but\nif you're wrong on Vince's map then you're wrong on this one) it can be\nupdated pronto.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nOleg Bartunov wrote:\n> \n> Justin,\n> \n> what does world map with fuzzy points supposed to show ?\n> \n> Oleg\n> On Wed, 2 Oct 2002, Justin Clift wrote:\n> \n> > Hi all,\n> >\n> > Over the last few weeks we've put together a new \"Advocacy and\n> > Marketing\" website for PostgreSQL:\n> >\n> > http://advocacy.postgresql.org\n> >\n> > It's now ready for public release. 
It has the first few case studies,\n> > lists the major advantages to PostgreSQL, and provides a place you can\n> > point your CIO, CTO, and CEO's at, etc.\n> >\n> > :-)\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> >\n> >\n> \n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Wed, 02 Oct 2002 21:01:56 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] New PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "On Wed, 2 Oct 2002, Justin Clift wrote:\n\n> Hi Oleg,\n>\n> It's supposed to show roughly where everyone is.\n>\n> Based mostly on Vince's map from the developer site, but this one is\n> really easy to update.\n>\n> If you're not located on the map correctly (probably hard to tell, but\n> if you're wrong on Vince's map then you're wrong on this one) it can be\n> updated pronto.\n\nLook for an updated map shortly. I have everyone's coordinates in and\nit looks like the tools build ok. I should have at least a day or two\nbreak from the activities in Congress (re. 
internet broadcasting), so\nI want to get the new one up asap and before things bust loose again.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 2 Oct 2002 07:15:12 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] New PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "On Wed, 2 Oct 2002, Vince Vielhaber wrote:\n\n> On Wed, 2 Oct 2002, Justin Clift wrote:\n>\n> > Hi Oleg,\n> >\n> > It's supposed to show roughly where everyone is.\n> >\n> > Based mostly on Vince's map from the developer site, but this one is\n> > really easy to update.\n> >\n> > If you're not located on the map correctly (probably hard to tell, but\n> > if you're wrong on Vince's map then you're wrong on this one) it can be\n> > updated pronto.\n>\n> Look for an updated map shortly. I have everyone's coordinates in and\n> it looks like the tools build ok. I should have at least a day or two\n> break from the activities in Congress (re. internet broadcasting), so\n> I want to get the new one up asap and before things bust loose again.\n\nCoordinates seems ok (Moscow), I asked if map should present something\nmore like old Bruce's map with photos. 
I'm using Mozilla and see just\na picture of the world :-)\n\n>\n> Vince.\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 2 Oct 2002 16:51:02 +0400 (MSD)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] New PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "Oleg Bartunov wrote:\n<snip>\n> Coordinates seems ok (Moscow), I asked if map should present something\n> more like old Bruce's map with photos. I'm using Mozilla and see just\n> a picture of the world :-)\n\nHeh Heh Heh\n\nNah, just went for the simple approach. :) Could have added them, but\ndidn't want to step on Vince's toes *too much*.\n\n:)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> > Vince.\n> >\n> \n> Regards,\n> Oleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. 
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Wed, 02 Oct 2002 22:52:52 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] New PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "On Wed, 2 Oct 2002, Oleg Bartunov wrote:\n\n> On Wed, 2 Oct 2002, Vince Vielhaber wrote:\n>\n> > On Wed, 2 Oct 2002, Justin Clift wrote:\n> >\n> > > Hi Oleg,\n> > >\n> > > It's supposed to show roughly where everyone is.\n> > >\n> > > Based mostly on Vince's map from the developer site, but this one is\n> > > really easy to update.\n> > >\n> > > If you're not located on the map correctly (probably hard to tell, but\n> > > if you're wrong on Vince's map then you're wrong on this one) it can be\n> > > updated pronto.\n> >\n> > Look for an updated map shortly. I have everyone's coordinates in and\n> > it looks like the tools build ok. I should have at least a day or two\n> > break from the activities in Congress (re. internet broadcasting), so\n> > I want to get the new one up asap and before things bust loose again.\n>\n> Coordinates seems ok (Moscow), I asked if map should present something\n> more like old Bruce's map with photos. I'm using Mozilla and see just\n> a picture of the world :-)\n\n\"old Bruce's map\" ??? 
No idea what you're referring to.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 2 Oct 2002 08:55:57 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] New PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "On Wed, 2 Oct 2002, Vince Vielhaber wrote:\n\n> On Wed, 2 Oct 2002, Oleg Bartunov wrote:\n>\n> > On Wed, 2 Oct 2002, Vince Vielhaber wrote:\n> >\n> > > On Wed, 2 Oct 2002, Justin Clift wrote:\n> > >\n> > > > Hi Oleg,\n> > > >\n> > > > It's supposed to show roughly where everyone is.\n> > > >\n> > > > Based mostly on Vince's map from the developer site, but this one is\n> > > > really easy to update.\n> > > >\n> > > > If you're not located on the map correctly (probably hard to tell, but\n> > > > if you're wrong on Vince's map then you're wrong on this one) it can be\n> > > > updated pronto.\n> > >\n> > > Look for an updated map shortly. I have everyone's coordinates in and\n> > > it looks like the tools build ok. I should have at least a day or two\n> > > break from the activities in Congress (re. internet broadcasting), so\n> > > I want to get the new one up asap and before things bust loose again.\n> >\n> > Coordinates seems ok (Moscow), I asked if map should present something\n> > more like old Bruce's map with photos. I'm using Mozilla and see just\n> > a picture of the world :-)\n>\n> \"old Bruce's map\" ??? 
No idea what you're referring to.\n>\n\nI may be wrong with author of the map, but it's there\n\nhttp://developer.postgresql.org/index.php\n\n\n> Vince.\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 2 Oct 2002 17:52:17 +0400 (MSD)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] New PostgreSQL Website : advocacy.postgresql.org" }, { "msg_contents": "On Wed, 2 Oct 2002, Oleg Bartunov wrote:\n\n> On Wed, 2 Oct 2002, Vince Vielhaber wrote:\n>\n> > On Wed, 2 Oct 2002, Oleg Bartunov wrote:\n> >\n> > > On Wed, 2 Oct 2002, Vince Vielhaber wrote:\n> > >\n> > > > On Wed, 2 Oct 2002, Justin Clift wrote:\n> > > >\n> > > > > Hi Oleg,\n> > > > >\n> > > > > It's supposed to show roughly where everyone is.\n> > > > >\n> > > > > Based mostly on Vince's map from the developer site, but this one is\n> > > > > really easy to update.\n> > > > >\n> > > > > If you're not located on the map correctly (probably hard to tell, but\n> > > > > if you're wrong on Vince's map then you're wrong on this one) it can be\n> > > > > updated pronto.\n> > > >\n> > > > Look for an updated map shortly. I have everyone's coordinates in and\n> > > > it looks like the tools build ok. I should have at least a day or two\n> > > > break from the activities in Congress (re. internet broadcasting), so\n> > > > I want to get the new one up asap and before things bust loose again.\n> > >\n> > > Coordinates seems ok (Moscow), I asked if map should present something\n> > > more like old Bruce's map with photos. I'm using Mozilla and see just\n> > > a picture of the world :-)\n> >\n> > \"old Bruce's map\" ??? 
No idea what you're referring to.\n> >\n>\n> I may be wrong with author of the map, but it's there\n>\n> http://developer.postgresql.org/index.php\n\nJan's map.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 2 Oct 2002 10:02:05 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] New PostgreSQL Website : advocacy.postgresql.org" } ]
[ { "msg_contents": "It seems queries like:\nselect ... from table where id='' (an empty string) do not work anymore, it worked up to 7.2. This will make migration to 7.3 quite difficult for some applications, especially for Oracle applications. \nWouldn't it be better to evaluate such expressions to false?\n\nRegards,\n\tMario Weilguni\n", "msg_date": "Wed, 2 Oct 2002 08:31:45 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": true, "msg_subject": "int type problem in 7.3" } ]
[ { "msg_contents": "Ok, I checked this again. Up until 7.2, it was possible to compare an empty string to a number, and it worked::\ne.g.: select * from mytable where int4id='' \nworked fine, but delivered no result. This is exactly what Oracle did here,\na comparison like this does not work:\n\nSQL> select * from re_eintraege where id='foobar';\nselect * from re_eintraege where id='foobar'\n *\nERROR at line 1:\nORA-01722: invalid number\n\nBut oracle accepts this one:\nSQL> select * from re_eintraege where id='';\n\nno rows selected\n\nbecause oracle treats the empty string as NULL and effectivly checks:\nselect * from re_eintraege where id is null;\n\nI think 7.3 is doing right here and I've to fix all queries (*sigh*), but oracle compatibilty is lost here. \n\nThe bad news for me is, rewriting the queries won't help here, because I'll use indexing when I rewrite my queries to:\nselect 1 from mytable where id::text=''\n\nRegards,\n\tMario Weilguni\n\n---------- Weitergeleitete Nachricht ----------\n\nSubject: [HACKERS] int type problem in 7.3\nDate: Wed, 2 Oct 2002 08:31:45 +0200\nFrom: Mario Weilguni <mweilguni@sime.com>\nTo: pgsql-hackers@postgresql.org\n\nIt seems queries like:\nselect ... from table where id='' (an empty string) do not work anymore, it\n worked up to 7.2. This will make migration to 7.3 quite difficult for some\n application, especially for oracle applications. Would'nt it be better to\n evaluate such expressions to false.\n\nRegards,\n\tMario Weilguni\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-------------------------------------------------------\n\n", "msg_date": "Wed, 2 Oct 2002 09:00:17 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": true, "msg_subject": "Fwd: int type problem in 7.3" }, { "msg_contents": "Mario Weilguni <mweilguni@sime.com> writes:\n> Ok, I checked this again. 
Up until 7.2, it was possible to compare an empty string to a number, and it worked::\n> e.g.: select * from mytable where int4id='' \n> worked fine, but delivered no result.\n\nNo, that was not what it did: in reality, the '' was silently taken as\nzero, and would match rows containing 0. That seems a very error-prone\nbehavior (not to say a flat-out bug) to me.\n\n> But oracle accepts this one:\n> SQL> select * from re_eintraege where id='';\n> no rows selected\n> because oracle treats the empty string as NULL\n\nOracle does that for string data, but it doesn't do it for numerics\ndoes it? In any case, that behavior is surely non-compliant with\nthe SQL spec.\n\nWe were not compatible with Oracle on this behavior before, and I'm\nnot very inclined to become so now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Oct 2002 09:13:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fwd: int type problem in 7.3 " }, { "msg_contents": "Have you looked at transform_null_equals in the postgresql.conf file to \nsee if turning that on makes this work like oracle?\n\nOn Wed, 2 Oct 2002, Mario Weilguni wrote:\n\n> Ok, I checked this again. Up until 7.2, it was possible to compare an empty string to a number, and it worked::\n> e.g.: select * from mytable where int4id='' \n> worked fine, but delivered no result. This is exactly what Oracle did here,\n> a comparison like this does not work:\n> \n> SQL> select * from re_eintraege where id='foobar';\n> select * from re_eintraege where id='foobar'\n> *\n> ERROR at line 1:\n> ORA-01722: invalid number\n> \n> But oracle accepts this one:\n> SQL> select * from re_eintraege where id='';\n> \n> no rows selected\n> \n> because oracle treats the empty string as NULL and effectivly checks:\n> select * from re_eintraege where id is null;\n> \n> I think 7.3 is doing right here and I've to fix all queries (*sigh*), but oracle compatibilty is lost here. 
\n> \n> The bad news for me is, rewriting the queries won't help here, because I'll use indexing when I rewrite my queries to:\n> select 1 from mytable where id::text=''\n> \n> Regards,\n> \tMario Weilguni\n> \n> ---------- Weitergeleitete Nachricht ----------\n> \n> Subject: [HACKERS] int type problem in 7.3\n> Date: Wed, 2 Oct 2002 08:31:45 +0200\n> From: Mario Weilguni <mweilguni@sime.com>\n> To: pgsql-hackers@postgresql.org\n> \n> It seems queries like:\n> select ... from table where id='' (an empty string) do not work anymore, it\n> worked up to 7.2. This will make migration to 7.3 quite difficult for some\n> application, especially for oracle applications. Would'nt it be better to\n> evaluate such expressions to false.\n> \n> Regards,\n> \tMario Weilguni\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> -------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n", "msg_date": "Wed, 2 Oct 2002 10:30:48 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Fwd: int type problem in 7.3" } ]
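The behavior change argued over in the thread above can be modeled outside the backend: 7.2's integer input silently took '' as 0 (so `WHERE int4id=''` matched rows containing 0), 7.3 rejects it with a syntax error, and Oracle's NUMBER type folds '' to NULL. A rough Python sketch of the three policies (illustrative only; the function name and structure are invented here, not PostgreSQL or Oracle source):

```python
def parse_int_literal(text, policy):
    """Model how an empty-string literal compared against an integer
    column is treated under the three policies discussed above.

    Returns an int, None (standing in for SQL NULL), or raises
    ValueError. Illustrative sketch only, not actual backend code.
    """
    if text == "":
        if policy == "pg72":
            return 0      # silently taken as zero, so it matches rows containing 0
        if policy == "oracle":
            return None   # '' treated as NULL, so the comparison selects no rows
        raise ValueError('invalid input syntax for integer: ""')  # pg73 rejects it
    return int(text, 10)
```

Under the 7.2 policy the query quietly matches id = 0 rows, which is the error-prone behavior (arguably a flat-out bug) being removed; under 7.3 the same query fails up front, which is why existing Oracle-style queries need rewriting.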
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Peter Eisentraut [mailto:peter_e@gmx.net] \n> Sent: 01 October 2002 21:05\n> To: Dave Page\n> Cc: pgsql-odbc@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: RE: [HACKERS] psqlODBC *nix Makefile (new 7.3 open item?)\n> \n> \n> Dave Page writes:\n> \n> > > > majority of you!) knock up a makefile so the driver will build \n> > > > standalone on *nix systems please? There should be no\n> > > dependencies on\n> > > > any of the rest of the code - certainly there isn't for \n> the Win32 \n> > > > build.\n> > >\n> > > I'm working something out. I'll send it to you tomorrow.\n> \n> Hah. I tried to put something together based on Automake and \n> Libtool, but I must conclude that Libtool is just completely \n> utterly broken. I also considered copying over \n> Makefile.shlib, but that would draw in too many auxiliary \n> files and create a different kind of mess. So what I would \n> suggest right now as the course of action is to copy your \n> local psqlodbc subtree to its old location under interfaces/ \n> and try to hook things together that way.\n> \n> Perhaps one of these days we should convert Makefile.shlib \n> into a shell script that we can deploy more easily to \n> different projects.\n\nThanks for trying Peter.\n\nAre we going to get the same problems for the other bits (libpq++?) that\nwe've ripped out? Is anyone looking at them, or have they just been\ndumped on Gborg & forgotten?\n\nRegards, Dave.\n", "msg_date": "Wed, 2 Oct 2002 09:00:06 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psqlODBC *nix Makefile (new 7.3 open item?)" } ]
[ { "msg_contents": "This is small README fix for contrib/intarray. Thank you.\n-- \nTeodor Sigaev\nteodor@stack.net", "msg_date": "Wed, 02 Oct 2002 13:57:51 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": true, "msg_subject": "Please, applay patch to current CVS" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nTeodor Sigaev wrote:\n> This is small README fix for contrib/intarray. Thank you.\n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ application/gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Oct 2002 09:04:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Please, applay patch to current CVS" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nTeodor Sigaev wrote:\n> This is small README fix for contrib/intarray. Thank you.\n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n\n[ application/gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 13:16:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Please, applay patch to current CVS" } ]
[ { "msg_contents": ">> But oracle accepts this one:\n>> SQL> select * from re_eintraege where id='';\n>> no rows selected\n>> because oracle treats the empty string as NULL\n>\n>Oracle does that for string data, but it doesn't do it for numerics\n>does it? In any case, that behavior is surely non-compliant with\n>the SQL spec.\n\nNo, oracle accepts this and works correctly with number() datatype. \nHowever I did not know that in postgres '' was treated as '0'.\n\nRegards,\n\tMario Weilguni\n\n\n", "msg_date": "Wed, 2 Oct 2002 15:45:51 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: Fwd: int type problem in 7.3 " }, { "msg_contents": "On Wed, 2 Oct 2002, Mario Weilguni wrote:\n\n> >> But oracle accepts this one:\n> >> SQL> select * from re_eintraege where id='';\n> >> no rows selected\n> >> because oracle treats the empty string as NULL\n> >\n> >Oracle does that for string data, but it doesn't do it for numerics\n> >does it? In any case, that behavior is surely non-compliant with\n> >the SQL spec.\n> \n> No, oracle accepts this and works correctly with number() datatype. \n> However I did not know that in postgres '' was treated as '0'.\n\nSo what would I be selecting in Oracle if I did:\n\nSELECT * FROM mytable WHERE myfield = ''\n\nwhere myfield is of VARCHAR type?\n\nIf you want to select on NULL, whether or not you think the database is more\nintelligent than you in determining what you really want, then write your query\nto select on NULL. The chances are your database is not actually a mind reader.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Wed, 2 Oct 2002 15:20:03 +0100 (BST)", "msg_from": "\"Nigel J. 
Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Fwd: int type problem in 7.3" }, { "msg_contents": "This document:\nhttp://developer.postgresql.org/docs/postgres/release-7-2-3.html\n\nmentions a release date of 2002-10-01 for version 7.2.3.\n\nIt isn't on the main website, tough, is it?\n\nRegards,\nMichael\n\n", "msg_date": "Wed, 2 Oct 2002 17:14:46 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Release of 7.2.3" }, { "msg_contents": "On Wed, 2 Oct 2002, Michael Paesold wrote:\n\n> This document:\n> http://developer.postgresql.org/docs/postgres/release-7-2-3.html\n>\n> mentions a release date of 2002-10-01 for version 7.2.3.\n>\n> It isn't on the main website, tough, is it?\n\nThe documentation on the developers website is not necessarily\naccurate - especially when it comes to dates. Documentation is\ntypically one of the last things finalized and is in a constant\nstate of change. That's one of the reasons why the developer\nsite is separated from the main website - people read things on\nthe developer site and think they're 100% accurate. Nothing is\nfinal until it's announced on the announce mailing list and/or\nthe main website.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 2 Oct 2002 11:34:21 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Release of 7.2.3" } ]
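Nigel's question above (what does `WHERE myfield = ''` select on an Oracle VARCHAR column?) comes down to three-valued logic: Oracle folds '' to NULL, any '=' comparison involving NULL yields UNKNOWN rather than TRUE, and WHERE keeps only TRUE rows, so nothing is selected. A small Python model of that evaluation (a hedged sketch with invented names, not Oracle's implementation):

```python
def oracle_equals(stored, literal):
    """Model the '=' predicate under Oracle's rule that the empty
    string is NULL. Returns True/False, or None for SQL UNKNOWN.
    Sketch for illustration only.
    """
    if literal == "":
        literal = None                 # Oracle: '' is NULL
    if stored is None or literal is None:
        return None                    # comparisons with NULL are UNKNOWN
    return stored == literal

# WHERE keeps only rows whose predicate is strictly True
rows = ["abc", None, "xyz"]            # None models a row stored via myfield=''
selected = [r for r in rows if oracle_equals(r, "") is True]
# selected is empty ("no rows selected"); to find those rows you must
# write the query against NULL explicitly, i.e. WHERE myfield IS NULL
null_rows = [r for r in rows if r is None]
```

This is exactly why selecting on NULL should be written as IS NULL rather than relying on the database to guess the intent.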
[ { "msg_contents": "\nJust in case anyone enjoys these sorts of things :) It deals with the\nwhole .org TLD assignment ...\n\n\thttp://forum.icann.org/org-eval/gartner-report\n\n\n\n", "msg_date": "Wed, 2 Oct 2002 11:14:55 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Oracle beats up on Open Source Database(s) ... and gets beat back ..." }, { "msg_contents": "On Wed, 2002-10-02 at 16:14, Marc G. Fournier wrote:\n> \n> Just in case anyone enjoys these sorts of things :) It deals with the\n> whole .org TLD assignment ...\n> \n> \thttp://forum.icann.org/org-eval/gartner-report\n\nI like this one:\n\n| Unlike many of the conventional commercial databases, PostgreSQL has\n| offered advanced Object Relational capabilities for years, including\n| inheritance. Ms. Gelhausen is quite correct that these are important \n| capabilities, finally available with the release of Oracle9i. We \n| applaud Oracle's continued efforts to close the gap and stay \n| ompetitive with this, and other open source database features.\n\ncheers\n-- vbi\n\n-- \nsecure email with gpg http://fortytwo.ch/gpg\n\nNOTICE: subkey signature! request key 92082481 from keyserver.kjsl.com", "msg_date": "03 Oct 2002 19:58:48 +0200", "msg_from": "Adrian 'Dagurashibanipal' von Bidder <avbidder@fortytwo.ch>", "msg_from_op": false, "msg_subject": "Re: Oracle beats up on Open Source Database(s) ... and" }, { "msg_contents": "Adrian 'Dagurashibanipal' von Bidder wrote:\n-- Start of PGP signed section.\n> On Wed, 2002-10-02 at 16:14, Marc G. Fournier wrote:\n> > \n> > Just in case anyone enjoys these sorts of things :) It deals with the\n> > whole .org TLD assignment ...\n> > \n> > \thttp://forum.icann.org/org-eval/gartner-report\n> \n> I like this one:\n> \n> | Unlike many of the conventional commercial databases, PostgreSQL has\n> | offered advanced Object Relational capabilities for years, including\n> | inheritance. Ms. 
Gelhausen is quite correct that these are important \n> | capabilities, finally available with the release of Oracle9i. We \n> | applaud Oracle's continued efforts to close the gap and stay \n> | competitive with this, and other open source database features.\n\nYes, I found the thread assuming. Here are the choice parts from the\nOracle posting:\n\n> PostgreSQL, like many other open source database products, has been in\n> the market for many years with very little adoption. Unlike the\n\nOh, someone should tell our huge user base.\n\n> open-source operating system market, the open-source database market has\n\n\"We support Linux, so I can't bad mouth open-source OS's.\"\n\n> been unsuccessful due to the complexity of customer requirements and\n> sophistication of the technology needed. PostgreSQL is used primarily\n\nFear-uncertainty-doubt. \"Express it as fact and people will belive it.\"\n\n> in the embedded system market because it lacks the transactional\n> features, high availability, security and manageability of any\n> commercial enterprise database.\n\nHe is confusing us with MySQL. \"Oh, they are all the same; doesn't\nmatter.\"\n\n[ Quotes of lots of stuff PostgreSQL has had for years that Oracle just\nadded recently. Is he trying to make Oracle look good? ]\n\n> While there is a place in the industry for open source software. It\n> will be many years, if ever, that an open source database matches\n> Oracle's database technology for the availability, standards support,\n> performance, manageability, security, application support, and stability\n> that most real-world business applications require.\n\nFear-uncertainty-doubt (FUD).\n\n> thank you.\n> Jenny Gelhausen\n> Oracle Marketing\n> \n> ^^^^^^^^^^^^^^^^\n\nOh, that explains it all.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 15:07:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oracle beats up on Open Source Database(s) ... and" } ]
[ { "msg_contents": "It's just a cosmetic change, fixes the help screen. Should be applied in /contrib/vacuumlo\n\nRegards,\n \tMario Weilguni", "msg_date": "Wed, 2 Oct 2002 18:05:17 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": true, "msg_subject": "small patch for vacuumlo" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nMario Weilguni wrote:\n> It's just a cosmetic change, fixes the help screen. Should be applied in /contrib/vacuumlo\n> \n> Regards,\n> \tMario Weilguni\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Oct 2002 12:06:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: small patch for vacuumlo" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n\nMario Weilguni wrote:\n> It's just a cosmetic change, fixes the help screen. Should be applied in /contrib/vacuumlo\n> \n> Regards,\n> \tMario Weilguni\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 13:20:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: small patch for vacuumlo" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nMario Weilguni wrote:\n> It's just a cosmetic change, fixes the help screen. Should be applied in /contrib/vacuumlo\n> \n> Regards,\n> \tMario Weilguni\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 13:21:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: small patch for vacuumlo" } ]
[ { "msg_contents": "> > > Attached is a patch to fix the mb linking problems on AIX. As a nice side effect\n> > > it reduces the duplicate symbol warnings to linking libpq.so and libecpg.so\n> > > (all shlibs that are not postmaster loadable modules).\n> > \n> > Can you explain the method behind your patch? Have you tried -bnogc?\n> \n> -bnogc would (probably) have been the correct switch reading \n> the man page,\n> but the method was previously not good since it involved the \n> following:\n> \n> 1. create a static postgres executable from the SUBSYS.o's\n> 2. create an exports file from above\n> 3. recreate a shared postgres executable\n> \n> This naturally had a cyclic dependency, that could not properly be \n> reflected in the Makefile (thus a second make sometimes left you with \n> a static postgres unless you manually removed postgres.imp).\n> \n> Now it does:\n> postgres.imp: $(OBJS)\n> create a temporary SUBSYS.o from all $(OBJS)\n> create a postgres.imp from SUBSYS.o\n> rm temporary SUBSYS.o\n> \n> postgres: postgres.imp\n> link a shared postgres\n> \n> A second change was to move the import and export files to \n> the end of the link line,\n> then the linker knows not to throw a duplicate symbol \n> warning, and keeps all symbols\n> that are mentioned in the exports file (== -bnogc restricted \n> to $(OBJS) symbols).\n> \n> Thus now only libpq.so and libecpg.so still show the \n> duplicate symbol warnings since their\n> link line should actually not include postgres.imp . I did \n> not see how to make a difference \n> between loadable modules (need postgres.imp) and interface \n> libraries (do not need postgres.imp),\n> but since the resulting libs are ok, I left it at that.\n\nNote that this behavior did thus not change. \n\n> \n> I tested both gcc and xlc including regression tests.\n\nWhat happens with this now ?\n\nThanx\nAndreas", "msg_date": "Wed, 2 Oct 2002 18:42:51 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: AIX compilation problems (was Re: [HACKERS] Proposal ...)" } ]
[ { "msg_contents": "This small patch adds a Makefile for /contrib/reindexdb/ and renames the README to README.reindexdb. \n\nRegards,\n\tMario Weilguni", "msg_date": "Wed, 2 Oct 2002 19:12:45 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": true, "msg_subject": "Diff for reindexdb" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nMario Weilguni wrote:\n> This small patch adds a Makefile for /contrib/reindexdb/ and renames the README to README.reindexdb. \n> \n> Regards,\n> \tMario Weilguni\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Oct 2002 17:18:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Diff for reindexdb" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nMario Weilguni wrote:\n> This small patch adds a Makefile for /contrib/reindexdb/ and renames the README to README.reindexdb. \n> \n> Regards,\n> \tMario Weilguni\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 13:22:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Diff for reindexdb" } ]
[ { "msg_contents": "Any news about new DBD::Pg ?\nIt's a stopper for many projects based on perl interface\nto use 7.3.\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 2 Oct 2002 21:17:21 +0400 (MSD)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "DBD::PG - any works to be compatile with 7.3 ?" }, { "msg_contents": "Oleg Bartunov wrote:\n> Any news about new DBD::Pg ?\n> It's a stopper for many projects based on perl interface\n> to use 7.3.\n\nI have created a gborg project for DBD::Pg and four people have\nsubscribed. I will work with them to improve the driver.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 19 Oct 2002 22:42:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DBD::PG - any works to be compatile with 7.3 ?" }, { "msg_contents": "On Sat, 19 Oct 2002, Bruce Momjian wrote:\n\n> Oleg Bartunov wrote:\n> > Any news about new DBD::Pg ?\n> > It's a stopper for many projects based on perl interface\n> > to use 7.3.\n>\n> I have created a gborg project for DBD::Pg and four people have\n> subscribed. 
I will work with them to improve the driver.\n\nDoes this means you've taken a full responsibillity for DBD::Pg ?\nWhat's about CPAN where people used to find DBD::Pg ?\n\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 20 Oct 2002 08:40:02 +0400 (MSD)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: DBD::PG - any works to be compatile with 7.3 ?" }, { "msg_contents": "Oleg Bartunov wrote:\n> On Sat, 19 Oct 2002, Bruce Momjian wrote:\n> \n> > Oleg Bartunov wrote:\n> > > Any news about new DBD::Pg ?\n> > > It's a stopper for many projects based on perl interface\n> > > to use 7.3.\n> >\n> > I have created a gborg project for DBD::Pg and four people have\n> > subscribed. I will work with them to improve the driver.\n> \n> Does this means you've taken a full responsibillity for DBD::Pg ?\n> What's about CPAN where people used to find DBD::Pg ?\n\nCPAN will be updated by David Wheeler, already a member of the gborg\nDBD:pg group. He has already been assigned CPAN ownership from the\nprevious maintainer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 20 Oct 2002 00:41:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: DBD::PG - any works to be compatile with 7.3 ?" 
}, { "msg_contents": "On Sun, 20 Oct 2002, Bruce Momjian wrote:\n\n> Oleg Bartunov wrote:\n> > On Sat, 19 Oct 2002, Bruce Momjian wrote:\n> >\n> > > Oleg Bartunov wrote:\n> > > > Any news about new DBD::Pg ?\n> > > > It's a stopper for many projects based on perl interface\n> > > > to use 7.3.\n> > >\n> > > I have created a gborg project for DBD::Pg and four people have\n> > > subscribed. I will work with them to improve the driver.\n> >\n> > Does this means you've taken a full responsibillity for DBD::Pg ?\n> > What's about CPAN where people used to find DBD::Pg ?\n>\n> CPAN will be updated by David Wheeler, already a member of the gborg\n> DBD:pg group. He has already been assigned CPAN ownership from the\n> previous maintainer.\n\nGood. I was afraid about diverging of development. Now, I have an issue for\nthe driver - support of user defined types. We already have looked into sources\nand think it's doable.\n\n>\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 20 Oct 2002 15:39:24 +0400 (MSD)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "Re: DBD::PG - any works to be compatile with 7.3 ?" 
}, { "msg_contents": "Hi Oleg (and pgsql hackers!),\n\nRecently I encountered a problem attempting to use the integer array\nfunction for pushing an integer onto an integer array field.\n\nCan you write an example of a sql statement for pushing a single value onto\nan integer array and for popping a specific value off of an integer array?\nI see the function in the documentation, but the actual statement syntax to\nuse is not clear to me.\n\nThanks for any help you can provide!\n\nRyan Mahoney\n\n\n\n", "msg_date": "Mon, 21 Oct 2002 16:19:32 -0400", "msg_from": "\"Ryan Mahoney\" <ryan@paymentalliance.net>", "msg_from_op": false, "msg_subject": "integer array, push and pop" }, { "msg_contents": "regression=# select '{124,567,66}'::int[] + 345;\n ?column?\n------------------\n {124,567,66,345}\n(1 row)\nregression=# select '{124,567,66}'::int[] + '{345,1}'::int[];\n ?column?\n--------------------\n {124,567,66,345,1}\n(1 row)\nselect '{124,567,66}'::int[] - 567;\n ?column?\n----------\n {124,66}\n(1 row)\nregression=# select '{124,567,66}'::int[] - '{567,66}';\n ?column?\n----------\n {124}\n(1 row)\n\n\nRyan Mahoney wrote:\n> Hi Oleg (and pgsql hackers!),\n> \n> Recently I encountered a problem attempting to use the integer array\n> function for pushing an integer onto an integer array field.\n> \n> Can you write an example of a sql statement for pushing a single value onto\n> an integer array and for popping a specific value off of an integer array?\n> I see the function in the documentation, but the actual statement syntax to\n> use is not clear to me.\n> \n> Thanks for any help you can provide!\n> \n> Ryan Mahoney\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n", "msg_date": 
"Tue, 22 Oct 2002 12:07:23 +0400", "msg_from": "Teodor Sigaev <teodor@stack.net>", "msg_from_op": false, "msg_subject": "Re: integer array, push and pop" }, { "msg_contents": "This is excellent! Thanks so much!\n\n-r\n\nOn Tue, 2002-10-22 at 04:07, Teodor Sigaev wrote:\n> regression=# select '{124,567,66}'::int[] + 345;\n> ?column?\n> ------------------\n> {124,567,66,345}\n> (1 row)\n> regression=# select '{124,567,66}'::int[] + '{345,1}'::int[];\n> ?column?\n> --------------------\n> {124,567,66,345,1}\n> (1 row)\n> select '{124,567,66}'::int[] - 567;\n> ?column?\n> ----------\n> {124,66}\n> (1 row)\n> regression=# select '{124,567,66}'::int[] - '{567,66}';\n> ?column?\n> ----------\n> {124}\n> (1 row)\n> \n> \n> Ryan Mahoney wrote:\n> > Hi Oleg (and pgsql hackers!),\n> > \n> > Recently I encountered a problem attempting to use the integer array\n> > function for pushing an integer onto an integer array field.\n> > \n> > Can you write an example of a sql statement for pushing a single value onto\n> > an integer array and for popping a specific value off of an integer array?\n> > I see the function in the documentation, but the actual statement syntax to\n> > use is not clear to me.\n> > \n> > Thanks for any help you can provide!\n> > \n> > Ryan Mahoney\n> > \n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> > \n> \n> -- \n> Teodor Sigaev\n> teodor@stack.net\n> \n> \n> \n\n", "msg_date": "22 Oct 2002 10:41:06 -0400", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": false, "msg_subject": "Re: integer array, push and pop" }, { "msg_contents": "What version of postgres are you using? 
I am using PostgreSQL 7.3b1 on\ni686-pc-linux-gnu, compiled by GCC 2.96 and when I execute the following\nstatement:\n\nselect '{124,567,66}'::int[] + 345;\n\nI get the error:\n\nERROR: cache lookup failed for type 0\n\nAny ideas?\n\nThanks for your help!\n\n-r\n\n", "msg_date": "31 Oct 2002 14:07:50 -0500", "msg_from": "Ryan Mahoney <ryan@paymentalliance.net>", "msg_from_op": false, "msg_subject": "Re: integer array, push and pop" } ]
[ { "msg_contents": "You all know this FAQ: \"Why does Postgres not use my index?\" Half of\nthe time this problem can easily be solved by casting a literal to the\ntype of the respective column; this is not my topic here.\n\nIn many other cases it turns out that the planner over-estimates the\ncost of an index scan. Sometimes this can be worked around by\nlowering random_page_cost. \"Of course, that's a hack that is quite\nunrelated to the real problem.\" I strongly agree ;-)\n\nAFAICS (part of) the real problem is in costsize.c:cost_index() where\nIO_cost is calculated from min_IO_cost, pages_fetched,\nrandom_page_cost, and indexCorrelation. The current implementation\nuses indexCorrelation^2 to interpolate between min_IO_cost and\nmax_IO_cost, which IMHO gives results that are too close to\nmax_IO_cost. This conjecture is supported by the fact, that often\nactual run times are much lower than estimated, when seqscans are\ndisabled.\n\nSo we have to find a cost function, so that\n\n . min_IO_cost <= cost <= max_IO_cost\n for -1 <= indexCorrelation <= 1\n . cost --> min_IO_cost for indexCorrelation --> +/- 1\n . cost --> max_IO_cost for indexCorrelation --> 0\n . cost tends more towards min_IO_cost than current implementation\n\nAfter playing around a bit I propose three functions satisfying above\nconditions. 
All proposals use absC = abs(indexCorrelation).\n\nProposal 1: Use absC for interpolation.\n\n\tIO_cost = absC * min_IO_cost + (1 - absC) * max_IO_cost;\n\n\nProposal 2: First calculate estimates for numbers of pages and cost\nper page, then multiply the results.\n\n\testPages = absC * minPages + (1 - absC) * maxPages;\n\testPCost = absC * 1 + (1 - absC) * random_page_cost;\n\t /* ^\n\t sequential_page_cost */\n\tIO_cost = estPages * estPCost;\n\n\nProposal 3: Interpolate \"geometrically\", using absC.\n\n\tIO-cost = exp( absC * ln(min_IO_Cost) +\n\t (1 - absC) * ln(max_IO_cost));\n\n\nHere are some numbers for\n seq_page_cost = 1 (constant)\n random_page_cost = 4 (GUC)\n minPages = 61\n maxPages = 1440\n\n corr current p1 p2 p3\n 0 5760.00 5760.00 5760.00 5760.00\n 0.1 5703.01 5190.10 4817.77 3655.22\n 0.2 5532.04 4620.20 3958.28 2319.55\n 0.3 5247.09 4050.30 3181.53 1471.96\n 0.4 4848.16 3480.40 2487.52 934.08\n 0.5 4335.25 2910.50 1876.25 592.76\n 0.6 3708.36 2340.60 1347.72 376.16\n 0.7 2967.49 1770.70 901.93 238.70\n 0.8 2112.64 1200.80 538.88 151.48\n 0.9 1143.81 630.90 258.57 96.13\n 0.95 616.65 345.95 149.44 76.57\n 0.99 174.41 117.99 77.03 63.84\n 0.995 117.85 89.50 68.91 62.40\n 0.999 72.39 66.70 62.57 61.28\n 1 61.00 61.00 61.00 61.00\n\nAnother example for\n seq_page_cost = 1 (constant)\n random_page_cost = 10 (GUC)\n minPages = 20\n maxPages = 938.58\n\n corr current p1 p2 p3\n 0 9385.79 9385.79 9385.79 9385.79\n 0.1 9292.14 8449.21 7705.17 5073.72\n 0.2 9011.16 7512.64 6189.88 2742.73\n 0.3 8542.87 6576.06 4839.94 1482.65\n 0.4 7887.27 5639.48 3655.34 801.48\n 0.5 7044.35 4702.90 2636.09 433.26\n 0.6 6014.11 3766.32 1782.19 234.21\n 0.7 4796.56 2829.74 1093.62 126.61\n 0.8 3391.69 1893.16 570.40 68.44\n 0.9 1799.50 956.58 212.53 37.00\n 0.95 933.16 488.29 95.60 27.20\n 0.99 206.38 113.66 31.81 21.27\n 0.995 113.42 66.83 25.70 20.62\n 0.999 38.72 29.37 21.11 20.12\n 1 20.00 20.00 20.00 20.00\n\n(If you want to play around with your own numbers, I can 
send my OOo\nspreadsheet privately or to the list.)\n\nThe second example shows that especially with proposal 3 we could\nafford to set random_page_cost to a *higher* value, which in contrast\nto previous recommendations seems to be appropriate, IIRC that\nbenchmark results posted here showed values of up to 60.\n\nAs nobody knows how each of these proposals performs in real life\nunder different conditions, I suggest to leave the current\nimplementation in, add all three algorithms, and supply a GUC variable\nto select a cost function.\n\nComments? Ideas? Objections?\n\nServus\n Manfred\n", "msg_date": "Wed, 02 Oct 2002 21:52:46 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Correlation in cost_index()" }, { "msg_contents": "On Wed, 2 Oct 2002, Manfred Koizar wrote:\n\n> As nobody knows how each of these proposals performs in real life\n> under different conditions, I suggest to leave the current\n> implementation in, add all three algorithms, and supply a GUC variable\n> to select a cost function.\n\nI'd certainly be willing to do some testing on my own data with them. \nGotta patch? I've found that when the planner misses, sometimes it misses \nby HUGE amounts on large tables, and I have been running random page cost \nat 1 lately, as well as running cpu_index_cost at 1/10th the default \nsetting to get good results.\n\n", "msg_date": "Wed, 2 Oct 2002 14:07:19 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> AFAICS (part of) the real problem is in costsize.c:cost_index() where\n> IO_cost is calculated from min_IO_cost, pages_fetched,\n> random_page_cost, and indexCorrelation. 
The current implementation\n> uses indexCorrelation^2 to interpolate between min_IO_cost and\n> max_IO_cost, which IMHO gives results that are too close to\n> max_IO_cost.\n\nThe indexCorrelation^2 algorithm was only a quick hack with no theory\nbehind it :-(. I've wanted to find some better method to put in there,\nbut have not had any time to research the problem.\n\n> As nobody knows how each of these proposals performs in real life\n> under different conditions, I suggest to leave the current\n> implementation in, add all three algorithms, and supply a GUC variable\n> to select a cost function.\n\nI don't think it's really a good idea to expect users to pick among\nmultiple cost functions that *all* have no guiding theory behind them.\nI'd prefer to see us find a better cost function and use it. Has anyone\ntrawled the database literature on the subject?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Oct 2002 18:48:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index() " }, { "msg_contents": "On Wed, 02 Oct 2002 18:48:49 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>I don't think it's really a good idea to expect users to pick among\n>multiple cost functions\n\nThe idea is that PG is shipped with a default representing the best of\nour knowledge and users are not encouraged to change it. 
When a user\nsends a \"PG does not use my index\" or \"Why doesn't it scan\nsequentially?\" message to one of the support lists, we advise her/him\nto set index_cost_algorithm to 3 (or whatever we feel appropriate) and\nwatch the feedback we get.\n\nWe don't risk anything, if the default is the current behaviour.\n\nServus\n Manfred\n", "msg_date": "Thu, 03 Oct 2002 09:09:49 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index() " }, { "msg_contents": "On Wed, 2 Oct 2002 14:07:19 -0600 (MDT), \"scott.marlowe\"\n<scott.marlowe@ihs.com> wrote:\n>I'd certainly be willing to do some testing on my own data with them. \n\nGreat!\n\n>Gotta patch?\n\nNot yet.\n\n> I've found that when the planner misses, sometimes it misses \n>by HUGE amounts on large tables, and I have been running random page cost \n>at 1 lately, as well as running cpu_index_cost at 1/10th the default \n>setting to get good results.\n\nMay I ask for more information? What are your settings for\neffective_cache_size and shared_buffers? And which version are you\nrunning?\n\nServus\n Manfred\n", "msg_date": "Thu, 03 Oct 2002 09:28:41 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "On Wed, 2 Oct 2002 14:07:19 -0600 (MDT), \"scott.marlowe\"\n<scott.marlowe@ihs.com> wrote:\n>I'd certainly be willing to do some testing on my own data with them. \n>Gotta patch?\n\nYes, see below. Disclaimer: Apart from \"make; make check\" this is\ncompletely untested. Use at your own risk. 
Have fun!\n\nServus\n Manfred\n\ndiff -ruN ../base/src/backend/optimizer/path/costsize.c src/backend/optimizer/path/costsize.c\n--- ../base/src/backend/optimizer/path/costsize.c\t2002-07-04 18:04:57.000000000 +0200\n+++ src/backend/optimizer/path/costsize.c\t2002-10-03 09:53:06.000000000 +0200\n@@ -72,6 +72,7 @@\n double\t\tcpu_tuple_cost = DEFAULT_CPU_TUPLE_COST;\n double\t\tcpu_index_tuple_cost = DEFAULT_CPU_INDEX_TUPLE_COST;\n double\t\tcpu_operator_cost = DEFAULT_CPU_OPERATOR_COST;\n+int\t\t\tindex_cost_algorithm = DEFAULT_INDEX_COST_ALGORITHM;\n \n Cost\t\tdisable_cost = 100000000.0;\n \n@@ -213,8 +214,8 @@\n \tCost\t\tindexStartupCost;\n \tCost\t\tindexTotalCost;\n \tSelectivity indexSelectivity;\n-\tdouble\t\tindexCorrelation,\n-\t\t\t\tcsquared;\n+\tdouble\t\tindexCorrelation;\n+\tCost\t\tIO_cost;\n \tCost\t\tmin_IO_cost,\n \t\t\t\tmax_IO_cost;\n \tCost\t\tcpu_per_tuple;\n@@ -329,13 +330,62 @@\n \tmin_IO_cost = ceil(indexSelectivity * T);\n \tmax_IO_cost = pages_fetched * random_page_cost;\n \n-\t/*\n-\t * Now interpolate based on estimated index order correlation to get\n-\t * total disk I/O cost for main table accesses.\n-\t */\n-\tcsquared = indexCorrelation * indexCorrelation;\n+\tswitch (index_cost_algorithm) {\n+\tcase 1: {\n+\t\t/*\n+\t\t** Use abs(correlation) for linear interpolation\n+\t\t*/\n+\t\tdouble absC = fabs(indexCorrelation);\n+\n+\t\tIO_cost = absC * min_IO_cost + (1 - absC) * max_IO_cost;\n+\t}\n+\n+\tcase 2: {\n+\t\t/*\n+\t\t** First estimate number of pages and cost per page,\n+\t\t** then multiply the results. 
min_IO_cost is used for\n+\t\t** min_pages, because seq_page_cost = 1.\n+\t\t*/\n+\t\tdouble absC = fabs(indexCorrelation);\n+\n+\t\tdouble estPages = absC * min_IO_cost + (1 - absC) * pages_fetched;\n+\t\tdouble estPCost = absC * 1 + (1 - absC) * random_page_cost;\n+\t\tIO_cost = estPages * estPCost;\n+\t}\n+\n+\tcase 3: {\n+\t\t/*\n+\t\t** Interpolate based on independence squared, thus forcing the\n+\t\t** result to be closer to min_IO_cost\n+\t\t*/\n+\t\tdouble independence = 1 - fabs(indexCorrelation);\n+\t\tdouble isquared = independence * independence;\n+\n+\t\tIO_cost = (1 - isquared) * min_IO_cost + isquared * max_IO_cost;\n+\t}\n+\n+\tcase 4: {\n+\t\t/*\n+\t\t** Interpolate geometrically\n+\t\t*/\n+\t\tdouble absC = fabs(indexCorrelation);\n+\n+\t\tIO_cost = exp(absC * log(min_IO_cost) +\n+\t\t (1 - absC) * log(max_IO_cost));\n+\t}\n+\n+\tdefault: {\n+\t\t/*\n+\t\t * Interpolate based on estimated index order correlation\n+\t\t * to get total disk I/O cost for main table accesses.\n+\t\t */\n+\t\tdouble csquared = indexCorrelation * indexCorrelation;\n+\n+\t\tIO_cost = max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n+\t}\n+\t}\n \n-\trun_cost += max_IO_cost + csquared * (min_IO_cost - max_IO_cost);\n+\trun_cost += IO_cost;\n \n \t/*\n \t * Estimate CPU costs per tuple.\ndiff -ruN ../base/src/backend/utils/misc/guc.c src/backend/utils/misc/guc.c\n--- ../base/src/backend/utils/misc/guc.c\t2002-07-20 17:27:19.000000000 +0200\n+++ src/backend/utils/misc/guc.c\t2002-10-03 10:03:37.000000000 +0200\n@@ -644,6 +644,11 @@\n \t},\n \n \t{\n+\t\t{ \"index_cost_algorithm\", PGC_USERSET }, &index_cost_algorithm,\n+\t\tDEFAULT_INDEX_COST_ALGORITHM, 0, INT_MAX, NULL, NULL\n+\t},\n+\n+\t{\n \t\t{ NULL, 0 }, NULL, 0, 0, 0, NULL, NULL\n \t}\n };\ndiff -ruN ../base/src/include/optimizer/cost.h src/include/optimizer/cost.h\n--- ../base/src/include/optimizer/cost.h\t2002-06-21 02:12:29.000000000 +0200\n+++ src/include/optimizer/cost.h\t2002-10-03 09:56:28.000000000 
+0200\n@@ -24,6 +24,7 @@\n #define DEFAULT_CPU_TUPLE_COST\t0.01\n #define DEFAULT_CPU_INDEX_TUPLE_COST 0.001\n #define DEFAULT_CPU_OPERATOR_COST 0.0025\n+#define DEFAULT_INDEX_COST_ALGORITHM 3\n \n /* defaults for function attributes used for expensive function calculations */\n #define BYTE_PCT 100\n@@ -43,6 +44,7 @@\n extern double cpu_tuple_cost;\n extern double cpu_index_tuple_cost;\n extern double cpu_operator_cost;\n+extern int index_cost_algorithm;\n extern Cost disable_cost;\n extern bool enable_seqscan;\n extern bool enable_indexscan;", "msg_date": "Thu, 03 Oct 2002 12:40:20 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "On Thu, 03 Oct 2002 12:40:20 +0200, I wrote:\n>>Gotta patch?\n>\n>Yes, see below.\n\nOh, did I mention that inserting some break statements after the\nswitch cases helps a lot? :-(\n\nCavus venter non laborat libenter ...\n\nServus\n Manfred\n", "msg_date": "Thu, 03 Oct 2002 17:58:06 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "On Wed, 2 Oct 2002 14:07:19 -0600 (MDT), \"scott.marlowe\"\n<scott.marlowe@ihs.com> wrote:\n>I've found that when the planner misses, sometimes it misses \n>by HUGE amounts on large tables,\n\nScott,\n\nyet another question: are multicolunm indices involved in your\nestimator problems?\n\nServus\n Manfred\n", "msg_date": "Thu, 03 Oct 2002 18:27:12 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "On Thu, 3 Oct 2002, Manfred Koizar wrote:\n\n> On Wed, 2 Oct 2002 14:07:19 -0600 (MDT), \"scott.marlowe\"\n> <scott.marlowe@ihs.com> wrote:\n> >I'd certainly be willing to do some testing on my own data with them. 
\n> \n> Great!\n> \n> >Gotta patch?\n> \n> Not yet.\n> \n> > I've found that when the planner misses, sometimes it misses \n> >by HUGE amounts on large tables, and I have been running random page cost \n> >at 1 lately, as well as running cpu_index_cost at 1/10th the default \n> >setting to get good results.\n> \n> May I ask for more information? What are your settings for\n> effective_cache_size and shared_buffers? And which version are you\n> running?\n\nI'm running 7.2.2 in production and 7.3b2 in testing.\n effective cache size is the default (i.e. commented out)\nshared buffers are at 4000.\n\nI've found that increasing shared buffers past 4000 (32 megs) to 16384 \n(128 Megs) has no great effect on my machine's performance, but I've never \nreally played with effective cache size.\n\nI've got a couple of queries that join a 1M+row table to itself and to a \n50k row table, and the result sets are usually <100 rows at a time. \n\nPlus some other smaller datasets that return larger amounts (i.e. \nsometimes all rows) of data generally.\n\n", "msg_date": "Thu, 3 Oct 2002 10:45:08 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "On Thu, 3 Oct 2002, Manfred Koizar wrote:\n\n> On Wed, 2 Oct 2002 14:07:19 -0600 (MDT), \"scott.marlowe\"\n> <scott.marlowe@ihs.com> wrote:\n> >I've found that when the planner misses, sometimes it misses \n> >by HUGE amounts on large tables,\n> \n> Scott,\n> \n> yet another question: are multicolunm indices involved in your\n> estimator problems?\n\nNo. Although I use them a fair bit, none of the problems I've encountered \nso far have involved them. 
But I'd be willing to setup some test indexes \nand get some numbers on them.\n\n", "msg_date": "Thu, 3 Oct 2002 10:59:54 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "On Thu, 3 Oct 2002 10:59:54 -0600 (MDT), \"scott.marlowe\"\n<scott.marlowe@ihs.com> wrote:\n>>are multicolunm indices involved in your estimator problems?\n>\n>No. Although I use them a fair bit, none of the problems I've encountered \n>so far have involved them. But I'd be willing to setup some test indexes \n>and get some numbers on them.\n\nNever mind! I just stumbled over those lines in selfuncs.c where\nindexCorrelation is calculated by dividing the correlation of the\nfirst index column by the number of index columns.\n\nI have a use case here where this clearly is not the right choice and\nwas hoping to find some examples that help me investigate whether my\ncase is somewhat uncommon ...\n\nServus\n Manfred\n", "msg_date": "Thu, 03 Oct 2002 19:39:44 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> Never mind! I just stumbled over those lines in selfuncs.c where\n> indexCorrelation is calculated by dividing the correlation of the\n> first index column by the number of index columns.\n\nYeah, I concluded later that that was bogus. I've been thinking of\njust using the correlation of the first index column and ignoring\nthe rest; that would not be great, but it's probably better than what's\nthere. 
Have you got a better formula?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 14:50:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index() " }, { "msg_contents": "On Thu, 03 Oct 2002 14:50:00 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>> indexCorrelation is calculated by dividing the correlation of the\n>> first index column by the number of index columns.\n>\n>Yeah, I concluded later that that was bogus. I've been thinking of\n>just using the correlation of the first index column and ignoring\n>the rest; that would not be great, but it's probably better than what's\n>there. Have you got a better formula?\n\nUnfortunately not. I think such a formula does not exist for the\ninformation we have. What we'd need is a notion of correlation of the\nnth (n > 1) index column for constant values of the first n-1 index\ncolumns; or a combined correlation for the first n index columns (1 <\nn <= number of index columns).\n\nI try to understand the problem with the help of use cases.\n[ Jump to the end, if this looks too long-winded. ]\n\n1) Have a look at invoice items with an index on (fyear, invno,\nitemno). Invoice numbers start at 1 for each financial year, item\nnumbers start at 1 for each invoice. In a typical scenario\ncorrelations for fyear, (fyear, invno), and (fyear, invno, itemno) are\nclose to 1; invno correlation is expected to be low; and itemno looks\ntotally chaotic to the analyzer.\n\nWhen we \n\tSELECT * FROM item WHERE fyear = 2001 AND invno < 1000\ndividing the correlation of the first column by the number of columns\ngives 1/3 which has nothing to do with what we want. (And then the\ncurrent implementation of cost_index() squares this and gets 1/9 which\nis even farther away from the truth.) Just using the correlation of\nthe first index column seems right here.\n\n2) OTOH consider bookkeeping with entries identified by (fyear,\naccount, entryno).
Again fyear has a correlation near 1. For account\nwe can expect something near 0, and entryno has a distribution\ncomparable to invno in the first use case, i.e. starting at 1 for each\nyear.\n\n\tSELECT * from entry WHERE fyear = 2001 AND account = whatever\nTaking the correlation of fyear would imply that the tuples we are\nlooking for are close to each other, which again can turn out to be\nwrong.\n\nSo what do we know now? Even less than before :-(\n\nI have just one idea that might help a bit: If correlation of the\nfirst index column is near +/-1, cost_index() should not use\nbaserel->pages, but baserel->pages * selectivity of the first column.\n\nServus\n Manfred\n", "msg_date": "Fri, 04 Oct 2002 17:13:32 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index() " }, { "msg_contents": "On Thu, 3 Oct 2002 10:45:08 -0600 (MDT), \"scott.marlowe\"\n<scott.marlowe@ihs.com> wrote:\n> effective cache size is the default (i.e. commented out)\n\nThe default is 1000, meaning ca. 8 MB, which seems to be way too low.\nIf your server is (almost) exclusively used by Postgres, try setting\nit to represent most of your OS cache (as reported by free on Linux).\nOtherwise you have to estimate the fraction of the OS cache that gets\nused by Postgres.\n\nI'm still trying to get a feeling for how these settings play\ntogether, so I'd be grateful if you report back the effects this has\non your cost estimates.\n\nCaveat: effective_cache_size is supposed to be the number of cache\npages available to one query (or is it one scan?). So if you have\nseveral concurrent queries (or complex queries with several scans),\nyou should choose a lower value. OTOH if most of your queries operate\non the same data, one query could benefit from pages cached by other\nqueries ...
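As a rough starting point, the "fraction of the OS cache" estimate above can be sketched like this. This is an illustration only, not anything in the backend: it assumes Linux's /proc/meminfo format, the default 8 kB block size, and a made-up 50% fraction for a not-quite-dedicated box:

```python
# Sketch: derive a starting value for effective_cache_size (in 8 kB pages)
# from the OS page cache as reported in /proc/meminfo on Linux.
# The fraction of the cache attributed to Postgres is an assumption.
def suggested_effective_cache_size(meminfo_text, fraction=0.5, block_kb=8):
    for line in meminfo_text.splitlines():
        if line.startswith("Cached:"):
            cached_kb = int(line.split()[1])        # e.g. "Cached: 524288 kB"
            return int(cached_kb * fraction) // block_kb
    return None                                     # field not found

sample = "MemTotal: 1048576 kB\nCached: 524288 kB\n"
print(suggested_effective_cache_size(sample))       # 32768 pages, i.e. 256 MB
```

As the caveat says, this only gives a first guess; concurrent queries and multiple scans argue for a lower value.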
You have to experiment a little.\n\nServus\n Manfred\n", "msg_date": "Fri, 04 Oct 2002 19:57:47 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "> > AFAICS (part of) the real problem is in costsize.c:cost_index() where\n> > IO_cost is calculated from min_IO_cost, pages_fetched,\n> > random_page_cost, and indexCorrelation. The current implementation\n> > uses indexCorrelation^2 to interpolate between min_IO_cost and\n> > max_IO_cost, which IMHO gives results that are too close to\n> > max_IO_cost.\n> \n> The indexCorrelation^2 algorithm was only a quick hack with no theory\n> behind it :-(. I've wanted to find some better method to put in there,\n> but have not had any time to research the problem.\n\nCould we \"quick hack\" it to a geometric mean instead since a mean\nseemed to yield better results than indexCorrelation^2?\n\n> > As nobody knows how each of these proposals performs in real life\n> > under different conditions, I suggest to leave the current\n> > implementation in, add all three algorithms, and supply a GUC variable\n> > to select a cost function.\n> \n> I don't think it's really a good idea to expect users to pick among\n> multiple cost functions that *all* have no guiding theory behind them.\n> I'd prefer to see us find a better cost function and use it. Has anyone\n> trawled the database literature on the subject?\n\nHrm, after an hour of searching and reading, I think one of the better\npapers on the subject can be found here:\n\nhttp://www.cs.ust.hk/faculty/dimitris/PAPERS/TKDE-NNmodels.pdf\n\nPage 13, figure 3-12 is the ticket you were looking for Tom. It's an\ninteresting read with a pretty good analysis and conclusion. 
The\nauthor notes that his formula begins to fall apart when the number of\ndimensions reaches 10 and suggests the use of a high dimension\nrandom cost estimate algo, but that the use of those comes at great\nexpense to the CPU (which is inline with a few other papers that I\nread). The idea of precomputing values piqued my interest and I\nthought was very clever, albeit space intensive. *shrug*\n\n-sc\n\n\n-- \nSean Chittenden\n", "msg_date": "Thu, 7 Aug 2003 13:44:19 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> Hrm, after an hour of searching and reading, I think one of the better\n> papers on the subject can be found here:\n> http://www.cs.ust.hk/faculty/dimitris/PAPERS/TKDE-NNmodels.pdf\n\nInteresting paper, but I don't see the connection to index order\ncorrelation?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 07 Aug 2003 18:51:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index() " }, { "msg_contents": "On Thu, 7 Aug 2003 13:44:19 -0700, Sean Chittenden\n<sean@chittenden.org> wrote:\n>> The indexCorrelation^2 algorithm was only a quick hack with no theory\n>> behind it :-(. I've wanted to find some better method to put in there,\n>> but have not had any time to research the problem.\n>\n>Could we \"quick hack\" it to a geometric mean instead since a mean\n>seemed to yield better results than indexCorrelation^2?\n\nLinear interpolation on (1-indexCorrelation)^2 (algorithm 3 in\nhttp://members.aon.at/pivot/pg/16-correlation-732.diff) is almost as\ngood as geometric interpolation (algorithm 4 in the patch, proposal 3\nin this thread), and its computation is much cheaper because it does\nnot call exp() and log(). 
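To make the comparison concrete, the candidates can be put side by side in a toy model. Only the first function mirrors what cost_index() computes today (max_IO_cost + c^2 * (min_IO_cost - max_IO_cost)); the other two are readings of "algorithm 3" and "algorithm 4" from the patch, so treat them as sketches rather than the patched code:

```python
import math

def interp_current(min_io, max_io, c):
    # current cost_index(): linear interpolation on c^2
    return max_io + c * c * (min_io - max_io)

def interp_linear(min_io, max_io, c):
    # "algorithm 3": linear interpolation on (1 - |c|)^2; no exp()/log()
    return min_io + (1.0 - abs(c)) ** 2 * (max_io - min_io)

def interp_geometric(min_io, max_io, c):
    # "algorithm 4" (as read here): geometric interpolation,
    # min_io^|c| * max_io^(1-|c|), hence the exp()/log() calls
    return math.exp(abs(c) * math.log(min_io)
                    + (1.0 - abs(c)) * math.log(max_io))

# with min_IO_cost=100, max_IO_cost=10000 and correlation 0.9 the three
# give roughly 1981, 199 and 158:
for f in (interp_current, interp_linear, interp_geometric):
    print(round(f(100.0, 10000.0, 0.9), 1))
```

The last two stay close to each other and favour the index scan far more than the current c^2 rule, which matches the observation that algorithm 3 is almost as good as algorithm 4 while being cheaper to compute.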
Download\nhttp://members.aon.at/pivot/pg/cost_index.sxc and play around with\nyour own numbers to get a feeling.\n\n(1-indexCorrelation)^2 suffers from the same lack of theory behind it\nas indexCorrelation^2. But the results look much more plausible.\nWell, at least to me ;-)\n\nServus\n Manfred\n", "msg_date": "Fri, 08 Aug 2003 18:31:19 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "> > Hrm, after an hour of searching and reading, I think one of the\n> > better papers on the subject can be found here:\n> > http://www.cs.ust.hk/faculty/dimitris/PAPERS/TKDE-NNmodels.pdf\n> \n> Interesting paper, but I don't see the connection to index order\n> correlation?\n\nNothing that I found was nearly that specific, as close as I could\nfind was the paper above on calculating the cost of fetching data from\na disk, which I thought was the bigger problem at hand, but I\ndigress...\n\nIn one paper about large dimension index searches, they did suggest\nthat cost was cumulative for the number of disk reads or nodes in the\ntree that weren't held in cache, which was the biggest hint that I had\nfound on this specific topic. With that as a guiding light (or\nsomething faintly resembling it), it'd seem as though an avg depth of\nnodes in index * tuples_fetched * (random_io_cost * indexCorrelation)\nwould be closer than where we are now... 
but now also think I/we're\nbarking up the right tree with this thread.\n\nIt's very possible that cost_index() is wrong, but it seems as though\nafter some testing as if PostgreSQL _overly_ _favors_ the use of\nindexes:\n\n# SET enable_seqscan = true; SET enable_indexscan = true;\nSET\nSET\n# EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE utc_date > '2002-10-01'::TIMESTAMP WITH TIME ZONE;\nINFO: cost_seqscan: run_cost: 21472.687500\n startup_cost: 0.000000\n\nINFO: cost_index: run_cost: 21154.308116\n startup_cost: 0.000000\n indexCorrelation: 0.999729\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using report_user_cat_count_utc_date_id_idx on report_user_cat_count rucc (cost=0.00..21154.31 rows=705954 width=64) (actual time=91.36..6625.79 rows=704840 loops=1)\n Index Cond: (utc_date > '2002-10-01 00:00:00-07'::timestamp with time zone)\n Total runtime: 11292.68 msec\n(3 rows)\n\n# SET enable_seqscan = true; SET enable_indexscan = false;\nSET\nSET\n# EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE utc_date > '2002-10-01'::TIMESTAMP WITH TIME ZONE;\nINFO: cost_seqscan: run_cost: 21472.687500\n startup_cost: 0.000000\n\nINFO: cost_index: run_cost: 21154.308116\n startup_cost: 100000000.000000\n indexCorrelation: 0.999729\n QUERY PLAN\n---------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on report_user_cat_count rucc (cost=0.00..21472.69 rows=705954 width=64) (actual time=1091.45..7441.19 rows=704840 loops=1)\n Filter: (utc_date > '2002-10-01 00:00:00-07'::timestamp with time zone)\n Total runtime: 10506.44 msec\n(3 rows)\n\n\nWhich I find surprising and humorous given the popular belief is, mine\nincluded, contrary to those results. 
I can say with pretty high\nconfidence that the patch to use a geometric mean isn't correct after\nhaving done real world testing as its break even point is vastly\nincorrect and only uses an index when there are less than 9,000 rows\nto fetch, a far cry from the 490K break even I found while testing.\nWhat I did find interesting, however, was that it does work better at\ndetermining the use of multi-column indexes, but I think that's\nbecause the geometric mean pessimizes the value of indexCorrelation,\nwhich gets pretty skewed when using a multi-column index.\n\n# CREATE INDEX report_user_cat_count_utc_date_user_id_idx ON report_user_cat_count (user_id,utc_date);\n# CLUSTER report_user_cat_count_utc_date_user_id_idx ON report_user_cat_count;\n# ANALYZE report_user_cat_count;\n# SET enable_seqscan = true; SET enable_indexscan = true;\nSET\nSET\n# EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE user_id < 1000 AND utc_date > '2002-01-01'::TIMESTAMP WITH TIME ZONE;\nINFO: cost_seqscan: run_cost: 23685.025000\n startup_cost: 0.000000\n\nINFO: cost_index: run_cost: 366295.018684\n startup_cost: 0.000000\n indexCorrelation: 0.500000\n QUERY PLAN\n------------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on report_user_cat_count rucc (cost=0.00..23685.03 rows=133918 width=64) (actual time=0.28..6100.85 rows=129941 loops=1)\n Filter: ((user_id < 1000) AND (utc_date > '2002-01-01 00:00:00-08'::timestamp with time zone))\n Total runtime: 6649.21 msec\n(3 rows)\n\n# SET enable_seqscan = false; SET enable_indexscan = true;\nSET\nSET\n# EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE user_id < 1000 AND utc_date > '2002-01-01'::TIMESTAMP WITH TIME ZONE;\nINFO: cost_seqscan: run_cost: 23685.025000\n startup_cost: 100000000.000000\n\nINFO: cost_index: run_cost: 366295.018684\n startup_cost: 0.000000\n indexCorrelation: 0.500000\n QUERY 
PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using report_user_cat_count_utc_date_user_id_idx on report_user_cat_count rucc (cost=0.00..366295.02 rows=133918 width=64) (actual time=53.91..3110.42 rows=129941 loops=1)\n Index Cond: ((user_id < 1000) AND (utc_date > '2002-01-01 00:00:00-08'::timestamp with time zone))\n Total runtime: 3667.47 msec\n(3 rows)\n\n\nIf I manually set the indexCorrelation to 1.0, however, the planner\nchooses the right plan on its own, which is in effect what setting a\nhigher random_page_cost had been compensating for, a poorly determined\nindexCorrelation.\n\n# SET enable_seqscan = true; SET enable_indexscan = true;\nSET\nSET\n# EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE user_id < 1000 AND utc_date > '2002-01-01'::TIMESTAMP WITH TIME ZONE;\nINFO: cost_seqscan: run_cost: 23685.025000\n startup_cost: 0.000000\n\nINFO: cost_index: run_cost: 4161.684248\n startup_cost: 0.000000\n indexCorrelation: 1.000000\n QUERY PLAN\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using report_user_cat_count_utc_date_user_id_idx on report_user_cat_count rucc (cost=0.00..4161.68 rows=133918 width=64) (actual time=0.67..1176.63 rows=129941 loops=1)\n Index Cond: ((user_id < 1000) AND (utc_date > '2002-01-01 00:00:00-08'::timestamp with time zone))\n Total runtime: 1705.40 msec\n(3 rows)\n\n\nWhich suggests to me that line 3964 in\n./src/backend/utils/adt/selfuncs.c isn't right for multi-column\nindexes, esp for indexes that are clustered. I don't know how to\naddress this though... Tom, any hints?\n\nFWIW, this is an old data/schema from a defunct project that I can\ngive people access to if they'd like. 
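The calculation criticized above — the first column's correlation diluted by the key count — amounts to the following. This is an illustration of the behaviour described in the thread, not the selfuncs.c source:

```python
def index_correlation(first_column_corr, n_keys):
    # divide the first index column's correlation by the number of
    # index columns, as the estimator discussed here does
    return first_column_corr / n_keys

# a perfectly clustered two-column index (user_id, utc_date) comes out
# as 0.5, so the cost model treats half the heap fetches as random even
# though the scan is in fact fully sequential
print(index_correlation(1.0, 2))   # 0.5
```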
-sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 8 Aug 2003 11:06:56 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> Which suggests to me that line 3964 in\n> ./src/backend/utils/adt/selfuncs.c isn't right for multi-column\n> indexes, esp for indexes that are clustered. I don't know how to\n> address this though... Tom, any hints?\n\nYes, we knew that already. Oliver had suggested simply dropping the\ndivision by nKeys, thus pretending that the first-column correlation\nis close enough. That seems to me to be going too far in the other\ndirection, but clearly dividing by nKeys is far too pessimistic.\nI'd change this in a moment if someone could point me to a formula\nwith any basis at all ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Aug 2003 15:32:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index() " }, { "msg_contents": "On Fri, 8 Aug 2003 11:06:56 -0700, Sean Chittenden\n<sean@chittenden.org> wrote:\n>[...] it'd seem as though an avg depth of\n>nodes in index * tuples_fetched * (random_io_cost * indexCorrelation)\n>would be closer than where we are now...\n\nIndex depth does not belong here because we walk down the index only\nonce per index scan not once per tuple. It might be part of the\nstartup cost.\n\nThe rest of your formula doesn't seem right, too, because you get\nhigher costs for better correlation. Did you mean\n\trandom_io_cost * (1 - abs(indexCorrelation))?\n\nFWIW, for small effective_cache_size max_IO_cost is almost equal to\ntuples_fetched * random_page_cost. 
So your formula (with the\ncorrections presumed above) boils down to ignoring\neffective_cache_size and linear interpolation between 0 and\nmax_IO_cost.\n\n>It's very possible that cost_index() is wrong, but it seems as though\n>after some testing as if PostgreSQL _overly_ _favors_ the use of\n>indexes:\n\nWas this an unpatched backend? What were the values of\neffective_cache_size and random_page_cost?\n\n># SET enable_seqscan = true; SET enable_indexscan = true;\n>SET\n>SET\n># EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE utc_date > '2002-10-01'::TIMESTAMP WITH TIME ZONE;\n>INFO: cost_seqscan: run_cost: 21472.687500\n> startup_cost: 0.000000\n>\n>INFO: cost_index: run_cost: 21154.308116\n> startup_cost: 0.000000\n> indexCorrelation: 0.999729\n> QUERY PLAN\n>-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using report_user_cat_count_utc_date_id_idx on report_user_cat_count rucc (cost=0.00..21154.31 rows=705954 width=64) (actual time=91.36..6625.79 rows=704840 loops=1)\n> Index Cond: (utc_date > '2002-10-01 00:00:00-07'::timestamp with time zone)\n> Total runtime: 11292.68 msec\n>(3 rows)\n\n\"actual time=91.36..6625.79\" but \"Total runtime: 11292.68 msec\"!\nWhere did those 4.7 seconds go?\n\n># SET enable_seqscan = true; SET enable_indexscan = false;\n>SET\n>SET\n># EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE utc_date > '2002-10-01'::TIMESTAMP WITH TIME ZONE;\n>INFO: cost_seqscan: run_cost: 21472.687500\n> startup_cost: 0.000000\n>\n>INFO: cost_index: run_cost: 21154.308116\n> startup_cost: 100000000.000000\n> indexCorrelation: 0.999729\n> QUERY PLAN\n>---------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on report_user_cat_count rucc (cost=0.00..21472.69 rows=705954 width=64) (actual 
time=1091.45..7441.19 rows=704840 loops=1)\n> Filter: (utc_date > '2002-10-01 00:00:00-07'::timestamp with time zone)\n> Total runtime: 10506.44 msec\n>(3 rows)\n\nSame here: \"actual time=1091.45..7441.19\" but \"Total runtime: 10506.44\nmsec\" - more than 3 seconds lost.\n\nWhen we ignore total runtime and look at actual time we get\n\n seq idx\nestimated 21473 21154\nactual 7441 6626\n\nThis doesn't look too bad, IMHO.\n\nBTW, I believe that with your example (single-column index, almost\nperfect correlation, index cond selects almost all tuples) all\ninterpolation methods give an index cost estimation that is very close\nto seq scan cost, and the actual runtimes show that this is correct.\n\n>Which I find surprising and humorous given the popular belief is, mine\n>included, contrary to those results.\n\nHow many tuples are in report_user_cat_count? What are the stats for\nreport_user_cat_count.utc_date?\n\n> I can say with pretty high\n>confidence that the patch to use a geometric mean isn't correct after\n>having done real world testing as its break even point is vastly\n>incorrect and only uses an index when there are less than 9,000 rows\n>to fetch, a far cry from the 490K break even I found while testing.\n\nCould you elaborate, please. The intention of my patch was to favour\nindex scans more than the current implementation. If it does not, you\nhave found a bug in my patch. 
Did you test the other interpolation\nmethods?\n\n>What I did find interesting, however, was that it does work better at\n>determining the use of multi-column indexes,\n\nYes, because it computes the correlation for a two-column-index as\n\tcorrelation_of_first_index_column * 0.95\ninstead of\n\tcorrelation_of_first_index_column / 2\n\n> but I think that's\n>because the geometric mean pessimizes the value of indexCorrelation,\n>which gets pretty skewed when using a multi-column index.\n\nI don't understand this.\n\n># CREATE INDEX report_user_cat_count_utc_date_user_id_idx ON report_user_cat_count (user_id,utc_date);\n># CLUSTER report_user_cat_count_utc_date_user_id_idx ON report_user_cat_count;\n># ANALYZE report_user_cat_count;\n># SET enable_seqscan = true; SET enable_indexscan = true;\n>SET\n>SET\n># EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE user_id < 1000 AND utc_date > '2002-01-01'::TIMESTAMP WITH TIME ZONE;\n>INFO: cost_seqscan: run_cost: 23685.025000\n> startup_cost: 0.000000\n>\n>INFO: cost_index: run_cost: 366295.018684\n> startup_cost: 0.000000\n> indexCorrelation: 0.500000\n ^^^\nThis is certainly not with my patch. The current implementation gives\nca. 366000 for pages_fetched = 122000 and random_page_cost = 4, which\nlooks plausible for 133000 tuples and (too?) 
small\neffective_cache_size.\n\n> QUERY PLAN\n>------------------------------------------------------------------------------------------------------------------------------------\n> Seq Scan on report_user_cat_count rucc (cost=0.00..23685.03 rows=133918 width=64) (actual time=0.28..6100.85 rows=129941 loops=1)\n> Filter: ((user_id < 1000) AND (utc_date > '2002-01-01 00:00:00-08'::timestamp with time zone))\n> Total runtime: 6649.21 msec\n>(3 rows)\n>\n># SET enable_seqscan = false; SET enable_indexscan = true;\n>SET\n>SET\n># EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE user_id < 1000 AND utc_date > '2002-01-01'::TIMESTAMP WITH TIME ZONE;\n>INFO: cost_seqscan: run_cost: 23685.025000\n> startup_cost: 100000000.000000\n>\n>INFO: cost_index: run_cost: 366295.018684\n> startup_cost: 0.000000\n> indexCorrelation: 0.500000\n> QUERY PLAN\n>-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> Index Scan using report_user_cat_count_utc_date_user_id_idx on report_user_cat_count rucc (cost=0.00..366295.02 rows=133918 width=64) (actual time=53.91..3110.42 rows=129941 loops=1)\n> Index Cond: ((user_id < 1000) AND (utc_date > '2002-01-01 00:00:00-08'::timestamp with time zone))\n> Total runtime: 3667.47 msec\n>(3 rows)\n\nWhich shows that Postgres does not \"_overly_ _favor_ the use of\nindexes\".\n\n>If I manually set the indexCorrelation to 1.0, however, the planner\n>chooses the right plan on its own\n\nOk, with indexCorrelation == 1.0 we dont have to discuss interpolation\nmethods, because they all return min_IO_cost.\n\n>Which suggests to me that line 3964 in\n>./src/backend/utils/adt/selfuncs.c isn't right for multi-column\n>indexes, esp for indexes that are clustered.\n\nAgreed.\n\n> I don't know how to\n>address this though...\n\nI guess there is no chance without index statistics.\n\n>FWIW, this is an old 
data/schema from a defunct project that I can\ngive people access to if they'd like. \n\nIs there a dump available for download?\n\nServus\n Manfred\n", "msg_date": "Fri, 08 Aug 2003 23:23:04 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "> > Which suggests to me that line 3964 in\n> > ./src/backend/utils/adt/selfuncs.c isn't right for multi-column\n> > indexes, esp for indexes that are clustered. I don't know how to\n> > address this though... Tom, any hints?\n> \n> Yes, we knew that already. Oliver had suggested simply dropping the\n> division by nKeys, thus pretending that the first-column correlation\n> is close enough. That seems to me to be going too far in the other\n> direction,\n\nBut is it really?\n\n> but clearly dividing by nKeys is far too pessimistic. I'd change\n> this in a moment if someone could point me to a formula with any\n> basis at all ...\n\nGot it, alright. I'd never paid attention to prior discussions as the\nplanner had generally done the right thing (with a lowered\nrandom_page_cost ::grin::). In terms of statistics and setting\nindexCorrelation correctly, something like Spearman's rho calculation\ncomes to mind, though I don't know how applicable that is to database\ntheory.\n\nindexCorrelation is 1.0 for the 1st key in a multi-column index. The\nonly thing different about a multi-column index and a single column\nindex is the multi-column index takes up more space per key, resulting\nin fewer index entries per page and more pages being fetched than\nwould be in a single column index, but the current cost_index()\nfunction takes increased number of page fetches into account when\ncalculating cost. As things stand, however, if a multi-column key is\nused, the indexCorrelation is penalized by the number of\nkeys found in the multi-column index.
As things stand the qual\nuser_id = 42, on a CLUSTER'ed multi-column index (user_id,utc_date)\nhas an indexCorrelation of 0.5, when in fact the correlation is 1.0.\nindexCorrelation == number of random page fetches, which could be next\nto free on a solid state drive, in this case, the page fetches aren't\nrandom, they're perfectly sequential. If it were 'user_id = 42 AND\nutc_date = NOW()', the correlation of a lookup of the user_id would\nstill be 1.0 and the utc_date would be 1.0 because both values are\nlooked up in the index key. A lookup of just the utc_date can never\nuse the index and the planner correctly uses a sequential scan. Cost\n!= Correlation. They're proportional, but not the same and\nindexCorrelation is the wrong place to handle cost as that's done by\nthe Mackert and Lohman formula. Under what circumstances would it be\ncorrect to pessimize the use of indexCorrelation? An indexCorrelation\nof 0.0 doesn't mean that the index is useless either, just that we\ntake the full hit of a completely random page read as opposed to some\nfraction of a random page cost.\n\nI tossed a different index on my test table to see how well things\nfare with a low correlation, and this was a bit disturbing:\n\n# EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE rucc.html_bytes > 8000000::BIGINT;\nINFO: cost_seqscan: run_cost: 21472.687500\n startup_cost: 0.000000\n\nINFO: cost_index: run_cost: 112165.065458\n startup_cost: 0.000000\n indexCorrelation: 0.183380\n QUERY PLAN\n----------------------------------------------------------------------------------------------------------------------------------\n Seq Scan on report_user_cat_count rucc (cost=0.00..21472.69 rows=31893 width=64) (actual time=444.25..2489.27 rows=514 loops=1)\n Filter: (html_bytes > 8000000::bigint)\n Total runtime: 2492.36 msec\n(3 rows)\n\n# SET enable_seqscan = false;\nSET\n# EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE rucc.html_bytes > 
8000000::BIGINT;\nINFO: cost_seqscan: run_cost: 21472.687500\n startup_cost: 100000000.000000\n\nINFO: cost_index: run_cost: 112165.065458\n startup_cost: 0.000000\n indexCorrelation: 0.183380\n QUERY PLAN\n-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n Index Scan using report_user_cat_count_html_bytes_idx on report_user_cat_count rucc (cost=0.00..112165.07 rows=31893 width=64) (actual time=68.64..85.75 rows=514 loops=1)\n Index Cond: (html_bytes > 8000000::bigint)\n Total runtime: 97.75 msec\n(3 rows)\n\n\n*shrug* A low indexCorrelation overly pessimizes the cost of an index,\nbut I'm not sure where to attribute this too. :-/\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 8 Aug 2003 15:10:06 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> indexCorrelation is 1.0 for the 1st key in a multi-column index.\n\n... only if it's perfectly correlated.\n\n> As things stand, however, if a multi-column key is\n> used, the indexCorrelation is penalized by the size of the number of\n> keys found in the multi-column index. As things stand the qual\n> user_id = 42, on a CLUSTER'ed multi-column index (user_id,utc_date)\n> has an indexCorrelation of 0.5, when in fact the correlation is 1.0.\n\nRight, in the perfectly-correlated case this calculation is clearly\nwrong. However, what of cases where the first column shows good\ncorrelation with the physical ordering, but the second does not?\n\nThe nasty part of this is that the correlation stat that ANALYZE\ncomputed for the second column is of no value to us. 
Two examples:\n\n\tX\tY\t\t\t\tX\tY\n\n\tA\tA\t\t\t\tA\tB\n\tA\tB\t\t\t\tA\tC\n\tA\tC\t\t\t\tA\tA\n\tB\tA\t\t\t\tB\tA\n\tB\tB\t\t\t\tB\tC\n\tB\tC\t\t\t\tB\tB\n\tC\tA\t\t\t\tC\tC\n\tC\tB\t\t\t\tC\tA\n\tC\tC\t\t\t\tC\tB\n\nIn both cases ANALYZE will calculate correlation 1.0 for column X,\nand something near zero for column Y. We would like to come out with\nindex correlation 1.0 for the left-hand case and something much less\n(but, perhaps, not zero) for the right-hand case. I don't really see\na way to do this without actually examining the multi-column ordering\nrelationship during ANALYZE.\n\n> I tossed a different index on my test table to see how well things\n> fare with a low correlation, and this was a bit disturbing:\n\nSeems like most of the error in that estimate has to do with the poor\nrowcount estimation. There's very little percentage in trying to\nanalyze the effect of index correlation in examples where we don't have\nthe first-order stats correct ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 08 Aug 2003 18:25:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index() " }, { "msg_contents": "On Fri, 8 Aug 2003 15:10:06 -0700, Sean Chittenden\n<sean@chittenden.org> wrote:\n>> Yes, we knew that already. Oliver had suggested simply dropping the\n>> division by nKeys, thus pretending that the first-column correlation\n>> is close enough. That seems to me to be going too far in the other\n>> direction,\n>\n>But is it really?\n\nYes. I once posted two use cases showing that one side is as wrong as\nthe other. Wait a momemt ... Here it is:\nhttp://archives.postgresql.org/pgsql-hackers/2002-10/msg00229.php.\n\n>indexCorrelation is 1.0 for the 1st key in a multi-column index. 
[...]\n\nUnfortunately there are cases, where the correlation of the first\ncolumn doesn't tell us anything about the real index correlation.\n\n> An indexCorrelation\n>of 0.0 doesn't mean that the index is useless either, just that we\n>take the full hit of a completely random page read as opposed to some\n>fraction of a random page cost.\n\n... and that we cannot expect the tuples we are looking for to be on\nthe same page or on pages near to each other. So we have to read a\nhigher number of pages.\n\n> indexCorrelation: 0.183380\n> Seq Scan on report_user_cat_count rucc (cost=0.00..21472.69 rows=31893 width=64) (actual time=444.25..2489.27 rows=514 loops=1)\n>\n> Index Scan using report_user_cat_count_html_bytes_idx on report_user_cat_count rucc (cost=0.00..112165.07 rows=31893 width=64) (actual time=68.64..85.75 rows=514 loops=1)\n\nThe main problem here is the bad estimation for number of rows: 31893\nvs. 514. The seq scan cost depends on the size of the heap relation\nand is always the same (~ 21000) for different search conditions.\nWith low correlation, index scan cost is roughly proportional to the\nnumber of tuples fetched (ignoring the effects of effective_cache_size\nfor now). So for a correct guess of number of rows we'd get\n\n seq idx\nestimated 21000 1800\nactual 2500 86\nratio 8.4 20.4\n\nAs estimated and actual are not given in the same units, we look at\nthe estimated/actual ratio and see that index scan cost is\nover-estimated by a factor 2.5. index_cost_algorithm 4 (geometric\ninterpolation) should get this right. Can you ANALYSE with a higher\nstatistics_target and then test with different index_cost_algorithms? 
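The ratio row in the table above follows from a quick division (numbers as given there; small differences are rounding):

```python
# estimated and actual figures from the table above
est = {"seq": 21000, "idx": 1800}
act = {"seq": 2500, "idx": 86}

ratio = {k: est[k] / act[k] for k in est}   # seq: 8.4, idx: about 21
factor = ratio["idx"] / ratio["seq"]        # index cost over-estimation
print(round(factor, 1))   # 2.5
```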
\n\nServus\n Manfred\n", "msg_date": "Sat, 09 Aug 2003 01:18:55 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "On Fri, 08 Aug 2003 18:25:41 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n> Two examples: [...]\n\nOne more example:\n\tX\tY\n\n\tA\tA\n\ta\tB\n\tA\tC\n\tb\tA\n\tB\tB\n\tb\tC\n\tC\tA\n\tc\tB\n\tC\tC\n\nCorrelation for column X is something less than 1.0, OTOH correlation\nfor an index on upper(X) is 1.0.\n\n>I don't really see\n>a way to do this without actually examining the multi-column ordering\n>relationship during ANALYZE.\n\nSo did we reach consensus to add a TODO item?\n\n\t* Compute index correlation on CREATE INDEX and ANALYZE,\n\t use it for index scan cost estimation\n\nServus\n Manfred\n", "msg_date": "Sat, 09 Aug 2003 01:48:52 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index() " }, { "msg_contents": "> >[...] it'd seem as though an avg depth of\n> >nodes in index * tuples_fetched * (random_io_cost * indexCorrelation)\n> >would be closer than where we are now...\n> \n> Index depth does not belong here because we walk down the index only\n> once per index scan not once per tuple. It might be part of the\n> startup cost.\n> \n> The rest of your formula doesn't seem right, too, because you get\n> higher costs for better correlation. Did you mean\n> \trandom_io_cost * (1 - abs(indexCorrelation))?\n\nYeah... this was just some off the cuff dribble, don't pay it much\nattention.\n\n> FWIW, for small effective_cache_size max_IO_cost is almost equal to\n> tuples_fetched * random_page_cost. So your formula (with the\n> corrections presumed above) boils down to ignoring\n> effective_cache_size and linear interpolation between 0 and\n> max_IO_cost.\n\nInteresting... 
\n\n> >It's very possible that cost_index() is wrong, but it seems as though\n> >after some testing as if PostgreSQL _overly_ _favors_ the use of\n> >indexes:\n> \n> Was this an unpatched backend? What were the values of\n> effective_cache_size and random_page_cost?\n\nFor all intents and purposes, yes.\n\n# SHOW effective_cache_size ;\n effective_cache_size\n----------------------\n 4456\n(1 row)\n\n# SHOW random_page_cost ;\n random_page_cost\n------------------\n 4\n(1 row)\n\nAs opposed to setting random_page_cost and avoiding these\ndiscussions/touring through the code, I'm trying to play by the\naccepted rules...\n\n> ># SET enable_seqscan = true; SET enable_indexscan = true;\n> >SET\n> >SET\n> ># EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE utc_date > '2002-10-01'::TIMESTAMP WITH TIME ZONE;\n> >INFO: cost_seqscan: run_cost: 21472.687500\n> > startup_cost: 0.000000\n> >\n> >INFO: cost_index: run_cost: 21154.308116\n> > startup_cost: 0.000000\n> > indexCorrelation: 0.999729\n> > QUERY PLAN\n> >-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n> > Index Scan using report_user_cat_count_utc_date_id_idx on report_user_cat_count rucc (cost=0.00..21154.31 rows=705954 width=64) (actual time=91.36..6625.79 rows=704840 loops=1)\n> > Index Cond: (utc_date > '2002-10-01 00:00:00-07'::timestamp with time zone)\n> > Total runtime: 11292.68 msec\n> >(3 rows)\n> \n> \"actual time=91.36..6625.79\" but \"Total runtime: 11292.68 msec\"!\n> Where did those 4.7 seconds go?\n\n*shrug* Don't ask me.\n\n> ># SET enable_seqscan = true; SET enable_indexscan = false;\n> >SET\n> >SET\n> ># EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE utc_date > '2002-10-01'::TIMESTAMP WITH TIME ZONE;\n> >INFO: cost_seqscan: run_cost: 21472.687500\n> > startup_cost: 0.000000\n> >\n> >INFO: cost_index: run_cost: 21154.308116\n> > 
startup_cost: 100000000.000000\n> > indexCorrelation: 0.999729\n> > QUERY PLAN\n> >---------------------------------------------------------------------------------------------------------------------------------------\n> > Seq Scan on report_user_cat_count rucc (cost=0.00..21472.69 rows=705954 width=64) (actual time=1091.45..7441.19 rows=704840 loops=1)\n> > Filter: (utc_date > '2002-10-01 00:00:00-07'::timestamp with time zone)\n> > Total runtime: 10506.44 msec\n> >(3 rows)\n> \n> Same here: \"actual time=1091.45..7441.19\" but \"Total runtime:\n> 10506.44 msec\" - more than 3 seconds lost.\n> \n> When we ignore total runtime and look at actual time we get\n> \n> seq idx\n> estimated 21473 21154\n> actual 7441 6626\n> \n> This doesn't look too bad, IMHO.\n> \n> BTW, I believe that with your example (single-column index, almost\n> perfect correlation, index cond selects almost all tuples) all\n> interpolation methods give an index cost estimation that is very close\n> to seq scan cost, and the actual runtimes show that this is correct.\n> \n> >Which I find surprising and humorous given the popular belief is, mine\n> >included, contrary to those results.\n> \n> How many tuples are in report_user_cat_count? What are the stats for\n> report_user_cat_count.utc_date?\n\n# SELECT COUNT(*) FROM report_user_cat_count ;\n\n count\n--------\n 884906\n(1 row)\n\nThe stats are attached && bzip2 compressed.\n\n> >I can say with pretty high confidence that the patch to use a\n> >geometric mean isn't correct after having done real world testing\n> >as its break even point is vastly incorrect and only uses an index\n> >when there are less than 9,000 rows to fetch, a far cry from the\n> >490K break even I found while testing.\n> \n> Could you elaborate, please. The intention of my patch was to\n> favour index scans more than the current implementation. If it does\n> not, you have found a bug in my patch. 
Did you test the other\n> interpolation methods?\n\nI didn't, only the geometric mean... the problem with your patch was\nthat it picked an index less often than the current code when there\nwas low correlation. I think you're onto something, I'm just not\nconvinced it's right. I actually think that costs should be converted\ninto estimated msec's of execution and there should be hardware GUCs\nfor CPU speed, RAM access, and basic HDD stats, but that's something\ndifferent.\n\n> >What I did find interesting, however, was that it does work better at\n> >determining the use of multi-column indexes,\n> \n> Yes, because it computes the correlation for a two-column-index as\n> \tcorrelation_of_first_index_column * 0.95\n> instead of\n> \tcorrelation_of_first_index_column / 2\n\n*nods*\n\n> > but I think that's because the geometric mean pessimizes the value\n> >of indexCorrelation, which gets pretty skewed when using a\n> >multi-column index.\n> \n> I don't understand this.\n> \n> ># CREATE INDEX report_user_cat_count_utc_date_user_id_idx ON report_user_cat_count (user_id,utc_date);\n> ># CLUSTER report_user_cat_count_utc_date_user_id_idx ON report_user_cat_count;\n> ># ANALYZE report_user_cat_count;\n> ># SET enable_seqscan = true; SET enable_indexscan = true;\n> >SET\n> >SET\n> ># EXPLAIN ANALYZE SELECT * FROM report_user_cat_count AS rucc WHERE user_id < 1000 AND utc_date > '2002-01-01'::TIMESTAMP WITH TIME ZONE;\n> >INFO: cost_seqscan: run_cost: 23685.025000\n> > startup_cost: 0.000000\n> >\n> >INFO: cost_index: run_cost: 366295.018684\n> > startup_cost: 0.000000\n> > indexCorrelation: 0.500000\n> ^^^\n> This is certainly not with my patch. The current implementation gives\n> ca. 366000 for pages_fetched = 122000 and random_page_cost = 4, which\n> looks plausible for 133000 tuples and (too?) 
small\n> effective_cache_size.\n\n*nods* I manually applied bits of it and didn't change the correlation\ncode in adt/selfuncs.c because it wasn't any more founded than what's\nthere.... though I do think it'd yield better results in my case, it\nisn't very adaptive given it uses arbitrary magic numbers.\n\n> >If I manually set the indexCorrelation to 1.0, however, the planner\n> >chooses the right plan on its own\n> \n> Ok, with indexCorrelation == 1.0 we dont have to discuss interpolation\n> methods, because they all return min_IO_cost.\n> \n> >Which suggests to me that line 3964 in\n> >./src/backend/utils/adt/selfuncs.c isn't right for multi-column\n> >indexes, esp for indexes that are clustered.\n> \n> Agreed.\n> \n> > I don't know how to\n> >address this though...\n> \n> I guess there is no chance without index statistics.\n> \n> >FWIW, this is an old data/schema from a defunct project that I can\n> >give people access to if they'd like. \n> \n> Is there a dump available for download?\n\nSure, let me post it.\n\nhttp://people.FreeBSD.org/~seanc/pgsql/rucc.sql.bz2\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 8 Aug 2003 16:53:48 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "On Fri, 8 Aug 2003 16:53:48 -0700, Sean Chittenden\n<sean@chittenden.org> wrote:\n># SHOW effective_cache_size ;\n> effective_cache_size\n>----------------------\n> 4456\n>(1 row)\n\nOnly 35 MB? Are you testing on such a small machine?\n\n>The stats are attached && bzip2 compressed.\n\nNothing was attached. Did you upload it to your web site?\n\n>> >I can say with pretty high confidence that the patch to use a\n>> >geometric mean isn't correct\n\n>... the problem with your patch was\n>that it picked an index less often than the current code when there\n>was low correlation.\n\nIn cost_index.sxc I get lower estimates for *all* proposed new\ninterpolation methods. 
Either my C code doesn't implement the same\ncalculations as the spreadsheet, or ... \n\n>I manually applied bits of it [...]\n\n... could this explain the unexpected behaviour?\n\nI'm currently downloading your dump. Can you post the query you\nmentioned above?\n\nServus\n Manfred\n", "msg_date": "Sat, 09 Aug 2003 02:27:45 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "> ># SHOW effective_cache_size ;\n> > effective_cache_size\n> >----------------------\n> > 4456\n> >(1 row)\n> \n> Only 35 MB? Are you testing on such a small machine?\n\nTesting on my laptop right now... can't hack on my production DBs the\nsame way I can my laptop.\n\n> >The stats are attached && bzip2 compressed.\n> \n> Nothing was attached. Did you upload it to your web site?\n\nGah, not yet, forgot to send it.\n\nhttp://people.FreeBSD.org/~seanc/pg_statistic.txt.bz2\n\n> >> >I can say with pretty high confidence that the patch to use a\n> >> >geometric mean isn't correct\n> \n> >... the problem with your patch was that it picked an index less\n> >often than the current code when there was low correlation.\n> \n> In cost_index.sxc I get lower estimates for *all* proposed new\n> interpolation methods. Either my C code doesn't implement the same\n> calculations as the spreadsheet, or ...\n> \n> >I manually applied bits of it [...]\n> \n> ... could this explain the unexpected behaviour?\n\nDon't think so... the run_cost was correct, I didn't modify the\nindexCorrelation behavior beyond forcing it to 1.0.\n\n> I'm currently downloading your dump. 
Can you post the query you\n> mentioned above?\n\nSELECT * FROM report_user_cat_count AS rucc WHERE rucc.html_bytes > 20000000::BIGINT;\nSELECT * FROM report_user_cat_count AS rucc WHERE user_id = 42 AND utc_date = NOW();\nSELECT * FROM report_user_cat_count AS rucc WHERE user_id = 42;\nSELECT * FROM report_user_cat_count AS rucc WHERE user_id < 1000 AND utc_date > '2003-01-01'::TIMESTAMP WITH TIME ZONE;\n\nAnd various timestamps back to 2002-09-19 and user_id's IN(1,42).\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Fri, 8 Aug 2003 22:06:36 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: Correlation in cost_index()" }, { "msg_contents": "On Fri, 8 Aug 2003 16:53:48 -0700, Sean Chittenden\n<sean@chittenden.org> wrote:\n>the problem with your patch was\n>that it picked an index less often than the current code when there\n>was low correlation.\n\nMaybe bit rot? What version did you apply the patch against? Here is\na new version for Postgres 7.3.4:\nhttp://www.pivot.at/pg/16d-correlation_734.diff\n\nThe only difference to the previous version is that\n\n\tfor (nKeys = 1; index->indexkeys[nKeys] != 0; nKeys++)\n\nis now replaced with\n\n\tfor (nKeys = 1; nKeys < index->ncolumns; nKeys++)\n\nDon't know whether the former just worked by chance when I tested the\n7.3.2 version :-(. Tests with 7.4Beta1 showed that index correlation\ncomes out too low with the old loop termination condition. Anyway,\nthe latter version seems more robust.\n\nIn my tests the new index_cost_algorithms (1, 2, 3, 4) gave\nconsistently lower cost estimates than the old method (set\nindex_cost_algorithm = 0), except of course for correlations of 1.0 or\n0.0, because in these border cases you get always min_IO_cost or\nmax_IO_cost, respectively.\n\nCare to re-evaluate? 
BTW, there's a version of the patch for 7.4Beta1\n(http://www.pivot.at/pg/16d-correlation_74b1.diff) which also applies\ncleanly against cvs snapshot from 2003-08-17.\n\nServus\n Manfred\n", "msg_date": "Wed, 20 Aug 2003 19:57:12 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": true, "msg_subject": "Re: Correlation in cost_index()" } ]
[ { "msg_contents": "All,\n\nI'd like to help work on some 7.4 features, however, since you've not seen\nmy name before, I'm obviously new to the list and the org.\n\nI really like working on speed optimizations and rewrites. I have 15 years\nexperience with C++-based systems and databases, and have worked on\ncommercial database engines (i.e. indexing and query execution systems), sql\nexecution and optimization, various lex and yacc based compilers and\nparsers. I've generally been able to get code to perform as well or better\nthan competitive systems with similar functionality, and usually have been\nable to beat other code by 3 to 10 X. My unix experience is reasonable but\nI'm not an expert.\n\nAny suggestions for where to start? I don't mind digging into very hairy\ncode or large problems. I'm willing to run the risk of a patch not being\naccepted (for large changes) since I'll make sure whatever I do is well\nknown to those who will do the accept/deny and the approach approved of\nahead of time.\n\nSince I'm new here, I'm thinking a problem that would not otherwise get\nhandled by the experienced group would be the best place to start. Where is\nthe system especially slow?\n\nI've read the TODO's, and the last five months of the archives for this\nlist, so I have some general ideas.\n\nI've also had a lot experience marketing to I.T. organizations so I'd be\nhappy to help out on the Product Marketing for PostgreSQL advocacy, i.e.\ndeveloping a marketing strategy, press releases, etc.\n\n- Curtis\n\nCurtis Faith\nPrincipal\nGalt Capital, LLP\n\n------------------------------------------------------------------\nGalt Capital http://www.galtcapital.com\n12 Wimmelskafts Gade\nPost Office Box 7549 voice: 340.776.0144\nCharlotte Amalie, St. 
Thomas fax: 340.776.0244\nUnited States Virgin Islands 00801 cell: 340.643.5368\n\n", "msg_date": "Wed, 2 Oct 2002 16:13:25 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Advice: Where could I be of help?" }, { "msg_contents": "I'm not a developer, but I know this item on the todo list has been a\nmagor pain in my side for quite a while:\n\n# Make IN/NOT IN have similar performance to EXISTS/NOT EXISTS\n[http://momjian.postgresql.org/cgi-bin/pgtodo?exists]\n\nAny time I've attempted to use this feature, the query cost is in the\nmillions according to \"explain\", which of course makes it useless to\neven execute. :(\n\nI have managed to work around this performance problem, but it sure\nwould be nice if PGSQL handled such cases better.\n\nThere are probably thousands of other todo items you could spend your\ntime on that would be more useful to more people, but this is just one\nsuggestion. :)\n\n\nOn Wed, 2002-10-02 at 13:13, Curtis Faith wrote:\n> All,\n> \n> I'd like to help work on some 7.4 features, however, since you've not seen\n> my name before, I'm obviously new to the list and the org.\n> \n> I really like working on speed optimizations and rewrites. I have 15 years\n> experience with C++-based systems and databases, and have worked on\n> commercial database engines (i.e. indexing and query execution systems), sql\n> execution and optimization, various lex and yacc based compilers and\n> parsers. I've generally been able to get code to perform as well or better\n> than competitive systems with similar functionality, and usually have been\n> able to beat other code by 3 to 10 X. My unix experience is reasonable but\n> I'm not an expert.\n> \n> Any suggestions for where to start? I don't mind digging into very hairy\n> code or large problems. 
I'm willing to run the risk of a patch not being\n> accepted (for large changes) since I'll make sure whatever I do is well\n> known to those who will do the accept/deny and the approach approved of\n> ahead of time.\n> \n> Since I'm new here, I'm thinking a problem that would not otherwise get\n> handled by the experienced group would be the best place to start. Where is\n> the system especially slow?\n> \n> I've read the TODO's, and the last five months of the archives for this\n> list, so I have some general ideas.\n> \n> I've also had a lot experience marketing to I.T. organizations so I'd be\n> happy to help out on the Product Marketing for PostgreSQL advocacy, i.e.\n> developing a marketing strategy, press releases, etc.\n> \n> - Curtis\n> \n> Curtis Faith\n> Principal\n> Galt Capital, LLP\n> \n> ------------------------------------------------------------------\n> Galt Capital http://www.galtcapital.com\n> 12 Wimmelskafts Gade\n> Post Office Box 7549 voice: 340.776.0144\n> Charlotte Amalie, St. Thomas fax: 340.776.0244\n> United States Virgin Islands 00801 cell: 340.643.5368\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n", "msg_date": "02 Oct 2002 14:24:29 -0700", "msg_from": "Mike Benoit <mikeb@netnation.com>", "msg_from_op": false, "msg_subject": "Re: Advice: Where could I be of help?" }, { "msg_contents": "\nI would read the developers corner stuff, the developers FAQ, pick a\nTODO item, and try a patch. It's that simple. Feel free to contact me\nfor specific advice. 
I am on chat at:\n\t\n\tAIM\tbmomjian\n\tICQ\t151255111\n\tYahoo\tbmomjian\n\tMSN\troot@candle.pha.pa.us\n\tIRC\t#postgresql vis efnet\n\n---------------------------------------------------------------------------\n\nCurtis Faith wrote:\n> All,\n> \n> I'd like to help work on some 7.4 features, however, since you've not seen\n> my name before, I'm obviously new to the list and the org.\n> \n> I really like working on speed optimizations and rewrites. I have 15 years\n> experience with C++-based systems and databases, and have worked on\n> commercial database engines (i.e. indexing and query execution systems), sql\n> execution and optimization, various lex and yacc based compilers and\n> parsers. I've generally been able to get code to perform as well or better\n> than competitive systems with similar functionality, and usually have been\n> able to beat other code by 3 to 10 X. My unix experience is reasonable but\n> I'm not an expert.\n> \n> Any suggestions for where to start? I don't mind digging into very hairy\n> code or large problems. I'm willing to run the risk of a patch not being\n> accepted (for large changes) since I'll make sure whatever I do is well\n> known to those who will do the accept/deny and the approach approved of\n> ahead of time.\n> \n> Since I'm new here, I'm thinking a problem that would not otherwise get\n> handled by the experienced group would be the best place to start. Where is\n> the system especially slow?\n> \n> I've read the TODO's, and the last five months of the archives for this\n> list, so I have some general ideas.\n> \n> I've also had a lot experience marketing to I.T. 
organizations so I'd be\n> happy to help out on the Product Marketing for PostgreSQL advocacy, i.e.\n> developing a marketing strategy, press releases, etc.\n> \n> - Curtis\n> \n> Curtis Faith\n> Principal\n> Galt Capital, LLP\n> \n> ------------------------------------------------------------------\n> Galt Capital http://www.galtcapital.com\n> 12 Wimmelskafts Gade\n> Post Office Box 7549 voice: 340.776.0144\n> Charlotte Amalie, St. Thomas fax: 340.776.0244\n> United States Virgin Islands 00801 cell: 340.643.5368\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Oct 2002 17:27:55 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Advice: Where could I be of help?" }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> I'd like to help work on some 7.4 features, however, since you've\n> not seen my name before, I'm obviously new to the list and the org.\n\n[...]\n\n> Any suggestions for where to start?\n\nWell, I'd suggest working on what you find interesting -- there is\nroom for improvement in just about every area of the system, so don't\nlet us dictate how you spend your free time.\n\nThat said, the code for hash indexes requires some *major* changes,\nand AFAIK none of the core developers are planning on working on it\nany time soon (and since the hash index could is somewhat isolated, it\nmight be a good place to start). There's also plenty of work remaining\nto be done for replication -- the pgreplication project could use some\nhelp. 
You could also improve PL/pgSQL -- there are a bunch of\nrelatively minor improvements that could be made. You could also try\nimplementing bitmap indexes, or improving GEQO (the genetic-algorithm\nbased query optimizer).\n\nHTH,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "02 Oct 2002 18:33:19 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Advice: Where could I be of help?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I would read the developers corner stuff, the developers FAQ, pick a\n> TODO item, and try a patch. It's that simple.\n\nYup. I'd also suggest starting with something relatively small and\nlocalized (the nearby suggestion to fix IN/EXISTS, for example, is\nprobably not a good first project --- and anyway I was going to work\non that myself this month ;-)).\n\nNeil Conway's thought of working on plpgsql seems a good one to me;\nand as he says, there's lots of other possibilities. What do you\nfind interesting in the TODO list?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Oct 2002 18:54:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Advice: Where could I be of help? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I would read the developers corner stuff, the developers FAQ, pick a\n> > TODO item, and try a patch. It's that simple.\n> \n> Yup. I'd also suggest starting with something relatively small and\n> localized (the nearby suggestion to fix IN/EXISTS, for example, is\n> probably not a good first project --- and anyway I was going to work\n> on that myself this month ;-)).\n\nThat's good news. I am getting a little embarassed because I had to\nexplain the work arounds to someone this week, twice.\n\nAs it stands now, when is EXISTS quicker than IN. 
It isn't always,\nright?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Oct 2002 22:29:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Advice: Where could I be of help?" }, { "msg_contents": ">\n> I'd like to help work on some 7.4 features, however, since you've not seen\n> my name before, I'm obviously new to the list and the org.\n>\n> I really like working on speed optimizations and rewrites. I have 15 years\n> experience with C++-based systems and databases, and have worked on\n> commercial database engines (i.e. indexing and query execution systems),\n> sql execution and optimization, various lex and yacc based compilers and\n> parsers. I've generally been able to get code to perform as well or better\n> than competitive systems with similar functionality, and usually have been\n> able to beat other code by 3 to 10 X. My unix experience is reasonable but\n> I'm not an expert.\n\nHi,\n\njust an idea, but if you're still searching something to work on, you might want to take\na look on the deadlock problem with foreign keys. It seems there's a new kind of lock needed here,\nbecause it's possible to deadlock backends where no real deadlock situation occurs.\n\nIMO this is one of the biggest problems in postgres now, because for foreign keys are widely used and \n- even if not deadlocking - performance is limited because of the many \"select ... for update\" the fk system\nuses limit concurrency to one at a time in many situations. \n\n\nRegards,\n\tMario Weilguni\n\n\n", "msg_date": "Sat, 5 Oct 2002 08:31:05 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Advice: Where could I be of help?" 
}, { "msg_contents": "> just an idea, but if you're still searching something to work on, you might want to take\n> a look on the deadlock problem with foreign keys. It seems there's a new kind of lock needed here,\n> because it's possible to deadlock backends where no real deadlock situation occurs.\n> \n> IMO this is one of the biggest problems in postgres now, because for foreign keys are widely used and \n> - even if not deadlocking - performance is limited because of the many \"select ... for update\" the fk system\n> uses limit concurrency to one at a time in many situations. \n\nThat gets my vote too for what its worth... I had to remove most of the\nFK references from my tables and just replaced them with triggers as the\namount of deadlocks I was getting in stress tests was killing me.\n\nTom.\n-- \nThomas O'Dowd. - Nooping - http://nooper.com\ntom@nooper.com - Testing - http://nooper.co.jp/labs\n\n", "msg_date": "05 Oct 2002 15:53:47 +0900", "msg_from": "Thomas O'Dowd <tom@nooper.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Advice: Where could I be of help?" }, { "msg_contents": "On Sat, Oct 05, 2002 at 03:53:47PM +0900, Thomas O'Dowd wrote:\n> > just an idea, but if you're still searching something to work on, you might want to take\n> > a look on the deadlock problem with foreign keys. It seems there's a new kind of lock needed here,\n> > because it's possible to deadlock backends where no real deadlock situation occurs.\n> > \n> > IMO this is one of the biggest problems in postgres now, because for foreign keys are widely used and \n> > - even if not deadlocking - performance is limited because of the many \"select ... for update\" the fk system\n> > uses limit concurrency to one at a time in many situations. \n> \n> That gets my vote too for what its worth... 
I had to remove most of the\n> FK references from my tables and just replaced them with triggers as the\n> amount of deadlocks I was getting in stress tests was killing me.\n\n <jog-to-developers>\n ... maybe try use latest MySQL with InnoDB tables :-)\n </jog-to-developers>\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Mon, 7 Oct 2002 09:24:17 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Advice: Where could I be of help?" } ]
[ { "msg_contents": "\nI received this via personal email. I assume the author wants it\nshared. It shows CURRENT_TIMESTAMP changing within a function!\n\n---------------------------------------------------------------------------\n\nSteve Hulcher wrote:\n> Oracle 9i.\n> \n> Hope this is helpful\n> \n> \n> --SQL RUN----------------------------------------------------\n> /*\n> CREATE TABLE foo (a DATE);\n> CREATE OR REPLACE PROCEDURE test\n> AS\n> BEGIN\n> INSERT INTO foo SELECT CURRENT_TIMESTAMP FROM dual;\n> dbms_lock.sleep(5);\n> INSERT INTO foo SELECT CURRENT_TIMESTAMP FROM dual;\n> END;\n> /\n> show errors;\n> */\n> \n> DELETE FROM foo;\n> EXECUTE test;\n> \n> SELECT TO_CHAR(a, 'YYYY-MM-DD HH24:MI:SS') FROM foo;\n> \n> --RESULTS----------------------------------------------------\n> 0 rows deleted.\n> \n> \n> PL/SQL procedure successfully completed.\n> \n> \n> TO_CHAR(A,'YYYY-MM-\n> -------------------\n> 2002-10-02 11:33:12\n> 2002-10-02 11:33:17\n> \n> \n> \n> -----Original Message-----\n> From: Mike Mascari [mailto:mascarm@mascari.com]\n> Sent: Wednesday, October 02, 2002 11:20 AM\n> To: Bruce Momjian\n> Cc: Yury Bokhoncovich; Dan Langille; Roland Roberts;\n> PostgreSQL-development\n> Subject: Re: [HACKERS] (Fwd) Re: Any Oracle 9 users? A test please...\n> \n> \n> Bruce Momjian wrote:\n> > \n> > OK, two requests. First, would you create a _named_ PL/SQL function\n> > with those contents and try it again. 
Also, would you test\n> > CURRENT_TIMESTAMP too?\n> > \n> \n> SQL> CREATE TABLE foo(a date);\n> \n> Table created.\n> \n> As a PROCEDURE:\n> \n> SQL> CREATE PROCEDURE test\n> 2 AS\n> 3 BEGIN\n> 4 INSERT INTO foo SELECT SYSDATE FROM dual;\n> 5 dbms_lock.sleep(5);\n> 6 INSERT INTO foo SELECT SYSDATE FROM dual;\n> 7 END;\n> 8 /\n> \n> Procedure created.\n> \n> SQL> execute test;\n> \n> PL/SQL procedure successfully completed.\n> \n> SQL> select to_char(a, 'HH24:MI:SS') from foo;\n> \n> TO_CHAR(\n> --------\n> 12:01:07\n> 12:01:12\n> \n> As a FUNCTION:\n> \n> SQL> CREATE FUNCTION mydiff\n> 2 RETURN NUMBER\n> 3 IS\n> 4 time1 DATE;\n> 5 time2 DATE;\n> 6 c NUMBER;\n> 7 BEGIN\n> 8 SELECT SYSDATE\n> 9 INTO time1\n> 10 FROM DUAL;\n> 11 SELECT COUNT(*)\n> 12 INTO c\n> 13 FROM bar, bar, bar, bar, bar, bar, bar, bar;\n> 14 SELECT SYSDATE\n> 15 INTO time2\n> 16 FROM DUAL;\n> 17 RETURN (time2 - time1);\n> 18 END;\n> 19 /\n> \n> Function created.\n> \n> SQL> select mydiff FROM dual;\n> \n> MYDIFF\n> ----------\n> .000034722\n> \n> I can't test the use of CURRENT_TIMESTAMP because I have Oracle \n> 8, not 9.\n> \n> Mike Mascari\n> mascarm@mascari.com\n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 2 Oct 2002 17:11:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." } ]
[ { "msg_contents": "Good day,\n\nI just stumbled across this peculiarity in PL/Perl today writing a method\ntoinvoke Perl Regexes from a function: if a run-time error is raised in an\notherwise good function, the function will not run correctly again until\nthe connection to the database is reset. I poked around in the code and it\nappears that it's because when elog() raises the ERROR, it doesn't first\ntake action to erase the system error message ($@) and consequently every\nsubsequent run has an error raised, even if it runs succesfully.\n\nFor example:\n\n-- This comparison works fine.\n\ntemplate1=# SELECT perl_re_match('test', 'test');\n perl_re_match\n---------------\n t\n(1 row)\n\n-- This one dies, for obvious reasons.\n\ntemplate1=# SELECT perl_re_match('test', 't{1}+?');\nERROR: plperl: error from function: (in cleanup) Nested quantifiers\nbefore HERE mark in regex m/t{1}+ << HERE ?/ at (eval 2) line 4.\n\n-- This should work fine again, but we still have this error raised...!\n\ntemplate1=# SELECT perl_re_match('test', 'test');\nERROR: plperl: error from function: (in cleanup) Nested quantifiers\nbefore HERE mark in regex m/t{1}+ << HERE ?/ at (eval 2) line 4.\n\nI don't know if the following is the best way to solve it, but I got\naround it by modifying the error report in this part of PL/Perl to be a\nNOTICE, cleared the $@ variable, and then raised the fatal ERROR. 
A simple\nthree line patch to plperl.c follows, and is attached.\n\nplperl.c:\n443c443,445\n< elog(ERROR, \"plperl: error from function: %s\", SvPV(ERRSV, PL_na));\n---\n> elog(NOTICE, \"plperl: error from function: %s\", SvPV(ERRSV, PL_na));\n> sv_setpv(perl_get_sv(\"@\",FALSE),\"\");\n> elog(ERROR, \"plperl: error was fatal.\");\n\n\nBest Regards,\nJw.\n-- \nJohn Worsley - lx@openvein.com", "msg_date": "Wed, 2 Oct 2002 16:08:36 -0700 (PDT)", "msg_from": "John Worsley <lx@openvein.com>", "msg_from_op": true, "msg_subject": "[PATCH] PL/Perl (Mis-)Behavior with Runtime Error Reporting" }, { "msg_contents": "\nJohn, any resolution on this patch?\n\n---------------------------------------------------------------------------\n\nJohn Worsley wrote:\n> Good day,\n> \n> I just stumbled across this peculiarity in PL/Perl today writing a method\n> toinvoke Perl Regexes from a function: if a run-time error is raised in an\n> otherwise good function, the function will not run correctly again until\n> the connection to the database is reset. 
I poked around in the code and it\n> appears that it's because when elog() raises the ERROR, it doesn't first\n> take action to erase the system error message ($@) and consequently every\n> subsequent run has an error raised, even if it runs succesfully.\n> \n> For example:\n> \n> -- This comparison works fine.\n> \n> template1=# SELECT perl_re_match('test', 'test');\n> perl_re_match\n> ---------------\n> t\n> (1 row)\n> \n> -- This one dies, for obvious reasons.\n> \n> template1=# SELECT perl_re_match('test', 't{1}+?');\n> ERROR: plperl: error from function: (in cleanup) Nested quantifiers\n> before HERE mark in regex m/t{1}+ << HERE ?/ at (eval 2) line 4.\n> \n> -- This should work fine again, but we still have this error raised...!\n> \n> template1=# SELECT perl_re_match('test', 'test');\n> ERROR: plperl: error from function: (in cleanup) Nested quantifiers\n> before HERE mark in regex m/t{1}+ << HERE ?/ at (eval 2) line 4.\n> \n> I don't know if the following is the best way to solve it, but I got\n> around it by modifying the error report in this part of PL/Perl to be a\n> NOTICE, cleared the $@ variable, and then raised the fatal ERROR. A simple\n> three line patch to plperl.c follows, and is attached.\n> \n> plperl.c:\n> 443c443,445\n> < elog(ERROR, \"plperl: error from function: %s\", SvPV(ERRSV, PL_na));\n> ---\n> > elog(NOTICE, \"plperl: error from function: %s\", SvPV(ERRSV, PL_na));\n> > sv_setpv(perl_get_sv(\"@\",FALSE),\"\");\n> > elog(ERROR, \"plperl: error was fatal.\");\n> \n> \n> Best Regards,\n> Jw.\n> -- \n> John Worsley - lx@openvein.com\n> \n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 19 Oct 2002 23:01:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] [PATCH] PL/Perl (Mis-)Behavior with Runtime Error\n\tReporting" } ]
[ { "msg_contents": "\nLooks good from my end, Peter, I pulled the same docs that I pulled for\nv7.2.2, which I hope is okay?\n\n\n\n", "msg_date": "Wed, 2 Oct 2002 21:45:05 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "v7.2.3 - tag'd, packaged ... need it checked ..." }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Looks good from my end, Peter, I pulled the same docs that I pulled for\n> v7.2.2, which I hope is okay?\n\nSources look okay from here. Didn't look at the built-docs files.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 02 Oct 2002 23:52:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 - tag'd, packaged ... need it checked ... " }, { "msg_contents": "On Wednesday 02 October 2002 11:52 pm, Tom Lane wrote:\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > Looks good from my end, Peter, I pulled the same docs that I pulled for\n> > v7.2.2, which I hope is okay?\n\n> Sources look okay from here. Didn't look at the built-docs files.\n\nBuilds fine here for RPM usage. Got an odd diff in the triggers regression \ntest: did we drop a NOTICE? If so, the regression output should probably \nhave been changed too. 
The diff:\n*** ./expected/triggers.out Sat Jan 15 14:18:23 2000\n--- ./results/triggers.out Thu Oct 3 00:16:09 2002\n***************\n*** 75,91 ****\n insert into fkeys values (60, '6', 4);\n ERROR: check_fkeys_pkey2_exist: tuple references non-existing key in fkeys2\n delete from pkeys where pkey1 = 30 and pkey2 = '3';\n- NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n delete from pkeys where pkey1 = 40 and pkey2 = '4';\n- NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n- NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 50 and pkey2 = '5';\n- NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 10 and pkey2 = '1';\n- NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n- NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n DROP TABLE pkeys;\n DROP TABLE fkeys;\n DROP TABLE fkeys2;\n--- 75,85 ----\n\nTom, the timestamp and horology passes on RH 7.3 here. Which is nice. Will \ntry 8.0 tomorrow at work.\n\nRPMs will be uploaded either tonight or tomorrow morning after I get to work; \nit will depend on how much upload bandwidth I can get out of this dialup. It \nappears to be running OK, so I may let it run.....\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 3 Oct 2002 00:29:18 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 - tag'd, packaged ... need it checked ..." }, { "msg_contents": "On Thursday 03 October 2002 12:29 am, Lamar Owen wrote:\n> RPMs will be uploaded either tonight or tomorrow morning after I get to\n> work; it will depend on how much upload bandwidth I can get out of this\n> dialup. 
It appears to be running OK, so I may let it run.....\n\nAfter I get to work. Too many disconnects; too low a throughput.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 3 Oct 2002 01:34:07 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 - tag'd, packaged ... need it checked ..." }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Builds fine here for RPM usage. Got an odd diff in the triggers regression \n> test: did we drop a NOTICE? If so, the regression output should probably \n> have been changed too.\n\nNo, we didn't change anything, and the 7.2 regression tests passed for\nme on Tuesday. Please investigate more closely.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 09:35:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 - tag'd, packaged ... need it checked ... " }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Builds fine here for RPM usage. Got an odd diff in the triggers regression \n> test: did we drop a NOTICE? If so, the regression output should probably \n> have been changed too. 
The diff:\n> *** ./expected/triggers.out Sat Jan 15 14:18:23 2000\n> --- ./results/triggers.out Thu Oct 3 00:16:09 2002\n> ***************\n> *** 75,91 ****\n> insert into fkeys values (60, '6', 4);\n> ERROR: check_fkeys_pkey2_exist: tuple references non-existing key in fkeys2\n> delete from pkeys where pkey1 = 30 and pkey2 = '3';\n> - NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n> delete from pkeys where pkey1 = 40 and pkey2 = '4';\n> - NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> - NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n> update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 50 and pkey2 = '5';\n> - NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> ERROR: check_fkeys2_fkey_restrict: tuple referenced in fkeys\n> update pkeys set pkey1 = 7, pkey2 = '70' where pkey1 = 10 and pkey2 = '1';\n> - NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys are deleted\n> - NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n> DROP TABLE pkeys;\n> DROP TABLE fkeys;\n> DROP TABLE fkeys2;\n> --- 75,85 ----\n\nAfter looking into this I have a theory about the cause: you must have\nbuilt the contrib/spi/refint.c module without -DREFINT_VERBOSE. That\nflag is required to pass the regression tests, because it controls\noutput of this debug notice. The normal build procedure for the\nregression tests does cause this to happen, but if you'd previously\nbuilt the contrib subdirectory with default switches, I think the\nregress tests would use the existing refint.o and get a failure.\n\nThis seems a tad undesirable now that I look at it. 
I don't want to\nmess with 7.2.3, but for 7.3 I think we should try to make the\nregression test work correctly with a default build of the contrib\nmodule.\n\nAs of CVS tip, the notice isn't appearing in the regression test output\nat all, because the elog was changed to DEBUG3 which is below the\ndefault message threshold. This is certainly not desirable since it\nreduces the specificity of the test.\n\nI am inclined to have the refint.c code emit the notice unconditionally\nat DEBUG1 level, and then add a \"SET client_min_messages = DEBUG1\" in\nthe triggers regression test to ensure the notice will appear.\n\nAny objections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 12:46:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Trigger regression test output" }, { "msg_contents": "I said:\n> I am inclined to have the refint.c code emit the notice unconditionally\n> at DEBUG1 level, and then add a \"SET client_min_messages = DEBUG1\" in\n> the triggers regression test to ensure the notice will appear.\n\nHmm, that doesn't look that good after all: the SET causes the\nregression output to be cluttered with a whole *lot* of chatter,\nwhich will doubtless change constantly and break the test regularly.\n\nPlan B is to make the refint.c code emit the message at NOTICE level,\nbut to change the contrib makefile so that REFINT_VERBOSE is defined\nby default (ie, you gotta edit the makefile if you don't want it).\nThis will work nicely for the regression tests' purposes. 
If there is\nanyone out there actually using refint.c in production, they might be\nannoyed by the NOTICE chatter, but quite honestly I doubt anyone is ---\nthis contrib module has long since been superseded by standard\nforeign-key support.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 12:57:43 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Trigger regression test output " }, { "msg_contents": "Tom Lane wrote:\n> I said:\n> > I am inclined to have the refint.c code emit the notice unconditionally\n> > at DEBUG1 level, and then add a \"SET client_min_messages = DEBUG1\" in\n> > the triggers regression test to ensure the notice will appear.\n> \n> Hmm, that doesn't look that good after all: the SET causes the\n> regression output to be cluttered with a whole *lot* of chatter,\n> which will doubtless change constantly and break the test regularly.\n> \n> Plan B is to make the refint.c code emit the message at NOTICE level,\n> but to change the contrib makefile so that REFINT_VERBOSE is defined\n> by default (ie, you gotta edit the makefile if you don't want it).\n> This will work nicely for the regression tests' purposes. If there is\n> anyone out there actually using refint.c in production, they might be\n> annoyed by the NOTICE chatter, but quite honestly I doubt anyone is ---\n> this contrib module has long since been superseded by standard\n> foreign-key support.\n\nYes, but if few people are using it, should we question whether it\nbelongs in the standard regression tests at all?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 13:06:37 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Trigger regression test output" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> This will work nicely for the regression tests' purposes. If there is\n>> anyone out there actually using refint.c in production, they might be\n>> annoyed by the NOTICE chatter, but quite honestly I doubt anyone is ---\n>> this contrib module has long since been superseded by standard\n>> foreign-key support.\n\n> Yes, but if few people are using it, should we question whether it\n> belongs in the standard regression tests at all?\n\nWell, it's not there to test itself, it's there to test trigger\nfunctionality. And, not so incidentally, to test that\ndynamically-loaded C functions work. I don't want to take it out.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 13:17:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Trigger regression test output " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> This will work nicely for the regression tests' purposes. If there is\n> >> anyone out there actually using refint.c in production, they might be\n> >> annoyed by the NOTICE chatter, but quite honestly I doubt anyone is ---\n> >> this contrib module has long since been superseded by standard\n> >> foreign-key support.\n> \n> > Yes, but if few people are using it, should we question whether it\n> > belongs in the standard regression tests at all?\n> \n> Well, it's not there to test itself, it's there to test trigger\n> functionality. And, not so incidentally, to test that\n> dynamically-loaded C functions work. I don't want to take it out.\n\nOh, interestings. 
Makes sense.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 13:19:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Trigger regression test output" }, { "msg_contents": "On Thursday 03 October 2002 12:46 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Builds fine here for RPM usage. Got an odd diff in the triggers\n> > regression test: did we drop a NOTICE? If so, the regression output\n> > should probably have been changed too. The diff:\n> > *** ./expected/triggers.out Sat Jan 15 14:18:23 2000\n> > --- ./results/triggers.out Thu Oct 3 00:16:09 2002\n> > - NOTICE: check_pkeys_fkey_cascade: 1 tuple(s) of fkeys2 are deleted\n\n> After looking into this I have a theory about the cause: you must have\n> built the contrib/spi/refint.c module without -DREFINT_VERBOSE. That\n> flag is required to pass the regression tests, because it controls\n> output of this debug notice. The normal build procedure for the\n> regression tests does cause this to happen, but if you'd previously\n> built the contrib subdirectory with default switches, I think the\n> regress tests would use the existing refint.o and get a failure.\n\nSo the regression tests weren't really testing the actually built module, so \nto speak. 
Is there a good reason to leave the NOTICE's in the expected \nregression output?\n\nAs to the way it's built, the regression tests are built in the RPMset to \nallow post-install (that is, post _RPM_ install) regression testing on \nmachines without make or compilers.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 3 Oct 2002 13:46:12 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Trigger regression test output" }, { "msg_contents": "On Thursday 03 October 2002 12:29 am, Lamar Owen wrote:\n> RPMs will be uploaded either tonight or tomorrow morning after I get to\n> work; it will depend on how much upload bandwidth I can get out of this\n> dialup. It appears to be running OK, so I may let it run.....\n\nRPMS uploaded into the usual place, so the announcement can take that into \naccount, Marc.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 3 Oct 2002 13:50:06 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 - tag'd, packaged ... need it checked ..." }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> So the regression tests weren't really testing the actually built module, so \n> to speak. Is there a good reason to leave the NOTICE's in the expected \n> regression output?\n\nYes: without them the test is less useful, because you're less certain\nthat what happened was what was supposed to happen.\n\n> As to the way it's built, the regression tests are built in the RPMset to \n> allow post-install (that is, post _RPM_ install) regression testing on \n> machines without make or compilers.\n\nWell, I'm about to commit a change that makes the default build of\ncontrib/spi have the correct NOTICE output, as of 7.3. You could make\nthe 7.2 RPMset do likewise if you wish.\n\nOne thing that confuses me though is that the build options have been\nlike this for a long time (at least since 7.1). 
Why haven't you seen\nthis problem before? Did you recently change the way the RPMs build\ncontrib?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 14:31:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Trigger regression test output " }, { "msg_contents": "On Thursday 03 October 2002 02:31 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> One thing that confuses me though is that the build options have been\n> like this for a long time (at least since 7.1). Why haven't you seen\n> this problem before? Did you recently change the way the RPMs build\n> contrib?\n\nYes, I recently changed that to use the default make instead of the horribly \ncobbled thing I was using. But it broke regression, which I didn't check \nwhen I built the 7.2.2-1PGDG set (I had done a regression test with \n7.2.2-0.1PGDG, which had the old kludge for contrib).\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Thu, 3 Oct 2002 15:16:21 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Trigger regression test output" }, { "msg_contents": "Marc G. Fournier writes:\n\n> Looks good from my end, Peter, I pulled the same docs that I pulled for\n> v7.2.2, which I hope is okay?\n\nProbably not, because the version number needs to be changed and they need\nto be rebuilt for each release.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 7 Oct 2002 21:23:09 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 - tag'd, packaged ... need it checked ..." }, { "msg_contents": "On Mon, 7 Oct 2002, Peter Eisentraut wrote:\n\n> Marc G. 
Fournier writes:\n>\n> > Looks good from my end, Peter, I pulled the same docs that I pulled for\n> > v7.2.2, which I hope is okay?\n>\n> Probably not, because the version number needs to be changed and they need\n> to be rebuilt for each release.\n\nshould I run the same 'gmake docs' then, as I've been doing for the\nsnapshot(s)?\n\n\n", "msg_date": "Tue, 8 Oct 2002 08:54:58 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: v7.2.3 - tag'd, packaged ... need it checked ..." }, { "msg_contents": "Marc G. Fournier writes:\n\n> On Mon, 7 Oct 2002, Peter Eisentraut wrote:\n>\n> > Marc G. Fournier writes:\n> >\n> > > Looks good from my end, Peter, I pulled the same docs that I pulled for\n> > > v7.2.2, which I hope is okay?\n> >\n> > Probably not, because the version number needs to be changed and they need\n> > to be rebuilt for each release.\n>\n> should I run the same 'gmake docs' then, as I've been doing for the\n> snapshot(s)?\n\nsrc/tools/RELEASE_CHANGES contains all the places where the version number\nneeds to be changed. (Actually, I should eliminate some of these places,\nbut that won't help now.) After that you can build the docs using\n\ndoc/src$ gmake postgres.tar.gz\ndoc/src$ mv postgres.tar.gz ..\n\nand copy man.tar.gz from the ftp site (since it doesn't change) to doc/.\nAfter that, 'make dist'.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 8 Oct 2002 23:42:18 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: v7.2.3 - tag'd, packaged ... need it checked ..." } ]
[ { "msg_contents": "Forgot to cc' the list.\n\n-----Original Message-----\nFrom: Curtis Faith [mailto:curtis@galtair.com]\nSent: Wednesday, October 02, 2002 10:59 PM\nTo: Tom Lane\nSubject: RE: [HACKERS] Advice: Where could I be of help?\n\n\nTom,\n\nHere are the things that I think look interesting:\n\n1) Eliminate unchanged column indices:\n\tPrevent index uniqueness checks when UPDATE does not modifying column\n\nSmall little task that will make a noticeable improvement. I've done this\nbefore in a b* tree system, it had a huge impact. Should be pretty isolated.\n\n2) Use indexes for min() and max() or convert to SELECT col FROM tab ORDER\nBY col DESC LIMIT 1 if appropriate index exists and WHERE clause\nacceptable - This will probably be a little more involved but I've done this\nexact optimization in a SQL system 6 or 7 years ago.\n\n3) General cache and i/o optimization:\n\n\tUse bitmaps to fetch heap pages in sequential order\n\nBased on my reading of the emails in [performance] it appears to me that\nthere might be huge potential in the caching system. I've worked on these\ncaches and there are some very non-intuitive interactions between database\ntype access and file systems that I believe offer good potential for\nimprovement. I'm basing this assessment on the assumption that the sorts of\nimprovements discussed in the [performance] emails have not been added in\nsubsequent releases.\n\nWhere does the current code stand? 
How are we currently doing cache flushing\nin general and for indices in particular?\n\n4) General index improvements including:\n\tOrder duplicate index entries by tid for faster heap lookups\n\tAdd FILLFACTOR to btree index creation\n\nI've done the first one before and fill factor is pretty easy, as well.\n\n5) Bitmaps:\n\tImplement a bitmap index:\n\tUse bitmaps to combine existing indexes\nI've done something similar, it looks pretty interesting.\n\n6) Improve concurrency of hash indexes (Neil Conway)- Probably more\nexploration than implementation and fairly isolated problem.\n\nBased on past experience, from a bang-for-buck perspective, I'd probably do\nthis in the numerical order. What do you think? I know what I like and can\ndo but I don't really know enough about PostgreSQL's performance weaknesses\nyet.\n\nWhat are we getting killed on?\n\n- Curtis\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\nSent: Wednesday, October 02, 2002 6:55 PM\nTo: Curtis Faith\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Advice: Where could I be of help?\n\n\nBruce Momjian <pgman@candle.pha.pa.us> writes:\n> I would read the developers corner stuff, the developers FAQ, pick a\n> TODO item, and try a patch. It's that simple.\n\nYup. I'd also suggest starting with something relatively small and\nlocalized (the nearby suggestion to fix IN/EXISTS, for example, is\nprobably not a good first project --- and anyway I was going to work\non that myself this month ;-)).\n\nNeil Conway's thought of working on plpgsql seems a good one to me;\nand as he says, there's lots of other possibilities. 
What do you\nfind interesting in the TODO list?\n\n\t\t\tregards, tom lane\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n", "msg_date": "Wed, 2 Oct 2002 23:02:05 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "FW: Advice: Where could I be of help?" }, { "msg_contents": "> Based on past experience, from a bang-for-buck perspective, I'd probably do\n> this in the numerical order. What do you think? I know what I like and can\n> do but I don't really know enough about PostgreSQL's performance weaknesses\n> yet.\n>\n> What are we getting killed on?\n>\n\nI'm not a developer, but one thing I see come up occasionally around here are \nplanner issues. Sometimes people get really hammered by the planner choices, \nand aren't provided a very good way to tune it. If you were able to eliminate \nsome worst-case-scenario type situations, that would make the few people who \nare having problems very happy (I remember one thread in particular seemed \nnasty). If I remember correctly, some developers don't much like the idea of \nquery hints, and I don't blame them, so you might want to run your ideas by \nthem first.\n\nAlso, this kind of modification might require significant additions to the \nstatistics system. The planner might be smart, but if it doesn't have any \nmore information you might not be able to get any more out of it. Autovacuum \nmight help with that as well (i.e. the info will be more up to date).\n\nRegards,\n\tJeff Davis\n\n\n\n\n\n\n", "msg_date": "Wed, 2 Oct 2002 20:24:13 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>", "msg_from_op": false, "msg_subject": "Re: FW: Advice: Where could I be of help?" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Peter Eisentraut [mailto:peter_e@gmx.net] \n> Sent: 01 October 2002 21:05\n> To: Dave Page\n> Cc: pgsql-odbc@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: RE: [HACKERS] psqlODBC *nix Makefile (new 7.3 open item?)\n> \n> \n> Dave Page writes:\n> \n> > > > majority of you!) knock up a makefile so the driver will build \n> > > > standalone on *nix systems please? There should be no\n> > > dependencies on\n> > > > any of the rest of the code - certainly there isn't for \n> the Win32 \n> > > > build.\n> > >\n> > > I'm working something out. I'll send it to you tomorrow.\n> \n> Hah. I tried to put something together based on Automake and \n> Libtool, but I must conclude that Libtool is just completely \n> utterly broken. I also considered copying over \n> Makefile.shlib, but that would draw in too many auxiliary \n> files and create a different kind of mess. So what I would \n> suggest right now as the course of action is to copy your \n> local psqlodbc subtree to its old location under interfaces/ \n> and try to hook things together that way.\n> \n> Perhaps one of these days we should convert Makefile.shlib \n> into a shell script that we can deploy more easily to \n> different projects.\n\nI have added README.unix to the psqlODBC CVS containing the following\ntext:\n\nI assume the odbc options haven't been removed from the autoconf stuff?\n\nRegards, Dave.\n\npsqlODBC README for *nix Systems\n================================\n\nSince psqlODBC has be moved from the main PostgreSQL source tree, we\nhave yet\nto create a new build system for the driver. Currently, in order to\nbuild under\n*nix systems, it is recommended that you copy the files in this\ndirectory to\nsrc/interfaces/odbc in your PostgreSQL source tree, then re-run\nconfigure with\nthe required options from the top of the tree. 
The driver can then be\nbuilt\nusing make as per normal.\n", "msg_date": "Thu, 3 Oct 2002 08:47:07 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] psqlODBC *nix Makefile (new 7.3 open item?)" } ]
[ { "msg_contents": "Tom Lane wrote:\n> \n>\n>Has anyone done the corresponding experiments on the other DBMSes to\n>identify exactly when they allow CURRENT_TIMESTAMP to advance ?\n>\n\nThis applies up to Oracle 8.1.6, maybe it helps:\nAccording to a co-worker, Oracle advances the time in transactions:\nselect to_char(sysdate, 'dd.mm.yyyy hh24:mi:ss') from dual;\n\nTO_CHAR(SYSDATE,'DD\n-------------------\n03.10.2002 10:16:28\n\n(wait ...)\n\nSQL> r\n 1* select to_char(sysdate, 'dd.mm.yyyy hh24:mi:ss') from dual\n\nTO_CHAR(SYSDATE,'DD\n-------------------\n03.10.2002 10:17:41\n\n\nIt even advances within procedures/functions, example:\n\n create or replace procedure foobar is \n s1 varchar(2000);\n s2 varchar(2000);\n begin\n select to_char(sysdate, 'dd.mm.yyyy hh24:mi:ss') into s1 from dual;\n (... put long running query here ...)\n select to_char(sysdate, 'dd.mm.yyyy hh24:mi:ss') into s2 from dual;\n dbms_output.put_line(s1);\n dbms_output.put_line(s2);\n end; \n/\n\nset serverout on\nexecute foobar;\n\n\nHope it helps.\n\nRegards,\n\tMario Weilguni\n", "msg_date": "Thu, 3 Oct 2002 10:56:10 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: (Fwd) Re: Any Oracle 9 users? A test please..." } ]
[ { "msg_contents": "\n\nI've been waiting to see how a patched file differs from my version.\n\nThe patch was added to the to apply list last week I think (it wasn't mine btw)\nand I've been doing cvs diff to view the differences so I can tell when the\npatch has been applied. Additional information given by this is the revision\nnumber the comparison is against of course. This has stayed at 1.61 all the\ntime I've been doing this cvs diff operation. Looking at the web interface to\ncvs I see the file has a revision number of 1.64. I use the anoncvs server for\nmy operations. Am I being daft or is there a problem with the anoncvs archive?\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Thu, 3 Oct 2002 10:49:10 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": true, "msg_subject": "anoncvs and diff" }, { "msg_contents": "Nigel J. Andrews wrote:\n> \n> \n> I've been waiting to see how a patched file differs from my version.\n> \n> The patch was added to the to apply list last week I think (it wasn't mine btw)\n> and I've been doing cvs diff to view the differences so I can tell when the\n> patch has been applied. Additional information given by this is the revision\n> number the comparison is against of course. This has stayed at 1.61 all the\n> time I've been doing this cvs diff operation. Looking at the web interface to\n> cvs I see the file has a revision number of 1.64. I use the anoncvs server for\n> my operations. Am I being daft or is there a problem with the anoncvs archive?\n\nThat is strange. anoncvs and the web interface should have the same\nversion number. What file are you looking at? \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 11:48:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anoncvs and diff" }, { "msg_contents": "On Thu, 3 Oct 2002, Bruce Momjian wrote:\n\n> Nigel J. Andrews wrote:\n> > \n> > \n> > I've been waiting to see how a patched file differs from my version.\n> > \n> > The patch was added to the to apply list last week I think (it wasn't mine btw)\n> > and I've been doing cvs diff to view the differences so I can tell when the\n> > patch has been applied. Additional information given by this is the revision\n> > number the comparison is against of course. This has stayed at 1.61 all the\n> > time I've been doing this cvs diff operation. Looking at the web interface to\n> > cvs I see the file has a revision number of 1.64. I use the anoncvs server for\n> > my operations. Am I being daft or is there a problem with the anoncvs archive?\n> \n> That is strange. anoncvs and the web interface should have the same\n> version number. What file are you looking at? \n\nsrc/pl/tcl/pltcl.c\n\nHowever, since writing that I've tried some other things.\n\ncvs diff -r HEAD pltcl.c\n\ngave me differences against revision 1.64\n\nand cvs update pltcl.c\n\nsaid it was merging changes between 1.64 and 1.61\n\nand a plain cvs diff now shows me differences against 1.64\n\nI think this is probably just a short fall in my fairly basic knowledge of how\ncvs works.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Thu, 3 Oct 2002 17:02:07 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": true, "msg_subject": "Re: anoncvs and diff" }, { "msg_contents": "Nigel J. 
Andrews wrote:\n> cvs diff -r HEAD pltcl.c\n> \n> gave me differences against revision 1.64\n> \n> and cvs update pltcl.c\n> \n> said it was merging changes between 1.64 and 1.61\n> \n> and a plain cvs diff now shows me differences against 1.64\n> \n> I think this is probably just a short fall in my fairly basic knowledge of how\n> cvs works.\n\nWhat does 'cvs log' say about the file, especially the top stuff?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 12:05:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anoncvs and diff" }, { "msg_contents": "On Thu, 3 Oct 2002, Bruce Momjian wrote:\n\n> Nigel J. Andrews wrote:\n> > cvs diff -r HEAD pltcl.c\n> > \n> > gave me differences against revision 1.64\n> > \n> > and cvs update pltcl.c\n> > \n> > said it was merging changes between 1.64 and 1.61\n> > \n> > and a plain cvs diff now shows me differences against 1.64\n> > \n> > I think this is probably just a short fall in my fairly basic knowledge of how\n> > cvs works.\n> \n> What does 'cvs log' say about the file, especially the top stuff?\n\nIt gave me the log all the way up to the 1.64 revision with the REL7_3_STABLE\nlabel assigned to revision 1.64.0.2\n\nRevision 1.64 apparently backing out my patch which made 1.63.\n\nI had a brain wave and did the cvs log command which was what lead me to try\nspecifying revisions. As I say it looks like a lack of knowledge about how cvs\nworks for these things. I always thought it worked like RCS and gave a diff\nagainst the latest checked in but obviously not.\n\nBTW, I've found Neil Conway's patch for this file, email dated 25th Sept., I\ncan forward it or apply it and include the changes along with whatever I do for\nmy next submission, which ever you'd prefer. 
I'd suggest it's easy to let me\napply and submit it due to overlaps.\n\n\n-- \nNigel J. Andrews\n\n\n\n", "msg_date": "Thu, 3 Oct 2002 17:17:27 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": true, "msg_subject": "Re: anoncvs and diff" }, { "msg_contents": "Nigel J. Andrews wrote:\n> It gave me the log all the way up to the 1.64 revision with the REL7_3_STABLE\n> label assigned to revision 1.64.0.2\n> \n> Revision 1.64 apparently backing out my patch which made 1.63.\n> \n> I had a brain wave and did the cvs log command which was what lead me to try\n> specifying revisions. As I say it looks like a lack of knowledge about how cvs\n> works for these things. I always thought it worked like RCS and gave a diff\n> against the latest checked in but obviously not.\n> \n> BTW, I've found Neil Conway's patch for this file, email dated 25th Sept., I\n> can forward it or apply it and include the changes along with whatever I do for\n> my next submission, which ever you'd prefer. I'd suggest it's easy to let me\n> apply and submit it due to overlaps.\n> \n\nSure, sounds good.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 3 Oct 2002 12:36:01 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: anoncvs and diff" }, { "msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> I had a brain wave and did the cvs log command which was what lead me to try\n> specifying revisions. As I say it looks like a lack of knowledge about how cvs\n> works for these things. I always thought it worked like RCS and gave a diff\n> against the latest checked in but obviously not.\n\nI think \"cvs diff foo.c\" without any switches gives you the diff between\nyour local copy of foo.c and the last version of foo.c *that you checked\nout* --- ie, it shows you the uncommitted editing that you've done.\n\nIf you hadn't done \"cvs update\" since rev 1.61 then this would explain\nthe behavior you saw.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 13:07:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: anoncvs and diff " } ]
[ { "msg_contents": "Hi everyone,\n\nHave been thinking for a while now about viable ways to Open Source the\nFlash based training material that has been in development from last\nyear.\n\nAfter discussing this with a number of people for suggestions, feedback,\nadvise, etc, these are looking to be the general concepts that, as a\nwhole, would likely work to greatest effect:\n\n***********\n\n - Create a new Open Source license specifically for this. The er...\nDDPL (?). Digital Distribution Public License.\n\n - Release the source code to all the Flashes developed thus far,\nthrough this license.\n\nThe DDPL would go something like this:\n\n - People can use the source Flash files to create training content for\nany kind of software they so choose to. We of course heartily recommend\nOpen Source Software. Everything released must also be under the DDPL.\n\n - All content must be released unrestricted, etc, and the Flash source\nfiles must be available for all. It's allowed to be included with paid\nfor products. No restrictions, etc.\n\n - All content and Flash source files under the DDPL must also be\nsubmitted to us, so we can decide whether or not to include them the\ndigitaldistribution.com site or not, etc. We can distribute through\nresellers, etc, and no royalties are applicable.\n\nAdditionally:\n\n - We make a few points really clear and simple on the website. The\nprimary reason for existence for the digitaldistribution.com site is to\neducate end users, and additionally to create a revenue stream that can\nbe used to hire further developers for Open Source projects and be used\nto the benefit of the Open Source Community as need be (i.e. 
hire\nlawyers to fight against inappropriate patents, pay for advertisements\nand research studies, etc).\n\n - Open up the translation interface and mechanisms on the\ndigitaldistribution.com site so that people can come along and do\ntranslations for their language as they feel like it.\n\n - Have a support mechanism (in a way that's fair) so that resellers of\nthe tutorials are well funded to provide support for the communities of\ntheir native languages, etc.\n\n - Will probably work in something about \"Membership fees for the\ndigitaldistribution.com site will be based upon the GDP for a nation\",\nso that for example, a person coming from Thailand isn't charged\nanywhere near as much as a person in the US. Not sure how to make it\nworkable, but it's the start for addressing an important issue.\n\n***********\n\nPlease remember this is just a start and might totally change or be\ndropped entirely, depending on whether it looks to be workable and\nbeneficial, etc. Now looking for thoughts and feedback on this from a\nwider audience, so hoping people have good ideas, beneficial directions,\netc.\n\nIf everything is looking good, then we'll look to ensuring this is a\nworkable Open Source license, etc. (www.opensource.org)\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 03 Oct 2002 22:26:16 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": true, "msg_subject": "OT: Looking to Open Source the Flash training material" }, { "msg_contents": "On Thu, Oct 03, 2002 at 10:26:16PM +1000, Justin Clift wrote:\n\n> Have been thinking for a while now about viable ways to Open Source the\n> Flash based training material that has been in development from last\n> year.\n> \n> After discussing this with a number of people for suggestions, feedback,\n> advise, etc, these are looking to be the general concepts that, as a\n> whole, would likely work to greatest effect:\n\nIs there some reason not to use FSF's FDL?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"When the proper man does nothing (wu-wei),\nhis thought is felt ten thousand miles.\" (Lao Tse)\n", "msg_date": "Fri, 4 Oct 2002 02:18:35 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: OT: Looking to Open Source the Flash training material" } ]
[ { "msg_contents": "Hi,\n\nToday we concluded test for database performance. Attached are results and the \nschema, for those who have missed earlier discussion on this.\n\nWe have (almost) decided that we will partition the data across machines. The \ntheme is, after every some short interval a burst of data will be entered in \nnew table in database, indexed and vacuume. The table(s) will be inherited so \nthat query on base table will fetch results from all the children. The \napplication has to consolidate all the data per node basis. If the database is \nnot postgresql, app. has to consolidate data across partitions as well.\n\nNow we need to investigate whether selecting on base table to include children \nwould use indexes created on children table.\n\nIt's estimated that when entire data is gathered, total number of children \ntables would be around 1K-1.1K across all machines. \n\nThis is in point of average rate of data insertion i.e. 5K records/sec and \ntotal data size, estimated to be 9 billion rows max i.e. estimated database \nsize is 900GB. Obviously it's impossible to keep insertion rate on an indexed \ntable high as data grows. So partitioning/inheritance looks better approach. \n\nPostgresql is not the final winner as yet. Mysql is in close range. 
I will keep \nyou guys posted about the result.\n\nLet me know about any comments..\n\nBye\n Shridhar\n\n--\nPrice's Advice:\tIt's all a game -- play it to have fun.\n\n\n\nMachine \t\t\t\t\t\t\t\t\nCompaq Proliant Server ML 530\t\t\t\t\t\t\t\t\n\"Intel Xeon 2.4 Ghz Processor x 4, \"\t\t\t\t\t\t\t\t\n\"4 GB RAM, 5 x 72.8 GB SCSI HDD \"\t\t\t\t\t\t\t\t\n\"RAID 0 (Striping) Hardware Setup, Mandrake Linux 9.0\"\t\t\t\t\t\t\t\t\n\"Cost - $13,500 ($1,350 for each additional 72GB HDD)\"\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\nPerformance Parameter\t\t\t\tMySQL 3.23.52 \t\tMySQL 3.23.52 \t\tPostgreSQL 7.2.2 \t\t\n\t\t\t\t\t\tWITHOUT InnoDB \t\tWITH InnoDB for \twith built-in support \t\t\n\t\t\t\t\t\tfor transactional \ttransactional support\tfor transactions\n\t\t\t\t\t\tsupport\t\t\t\t\t\t\t\t\nComplete Data\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\t\t\nInserts + building a composite index\t\t\t\t\t\t\t\t\n\"40 GB data, 432,000,000 tuples\"\t\t3738 secs\t\t18720 secs\t\t20628 secs\t\t\n\"about 100 bytes each, schema on \n'schema' sheet\"\t\t\t\t\t\t\t\t\n\"composite index on 3 fields \n(esn, min, datetime)\"\t\t\t\t\t\t\t\t\n\t\t\t\t\t\t\nLoad Speed\t\t\t\t\t115570 tuples/second\t23076 tuples/second\t20942 tuples/second\n\t\t\t\t\t\t\nDatabase Size on Disk\t\t\t\t48 GB\t\t\t87 GB\t\t\t111 GB\n\t\t\t\t\t\t\nAverage per partition\t\t\t\t\t\t\n\t\t\t\t\t\t\nInserts + building a composite index\t\t\t\t\t\t\n\"300MB data, 3,000,000 tuples,\"\t\t\t28 secs\t\t\t130 secs\t\t150 secs\n\"about 100 bytes each, schema on \n'schema' sheet\"\t\t\t\t\t\t\n\"composite index on 3 fields \n(esn, min, datetime)\"\t\t\t\t\t\t\n\t\t\t\t\t\t\nSelect Query \t\t\t\t\t7 secs\t\t\t7 secs\t\t\t6 secs\nbased on equality match of 2 fields\t\t\t\t\t\t\n(esn and min) - 4 concurrent queries \nrunning\n\t\t\t\t\t\t\nDatabase Size on Disk\t\t\t\t341 MB\t\t\t619 MB\t\t\t788 MB\n\nField Name\tField Type\tNullable\tIndexed\ntype\t\tint\t\tno\t\tno\nesn\t\tchar (10)\tno\t\tyes\nmin\t\tchar 
(10)\tno\t\tyes\ndatetime\ttimestamp\tno\t\tyes\nopc0\t\tchar (3)\tno\t\tno\nopc1\t\tchar (3)\tno\t\tno\nopc2\t\tchar (3)\tno\t\tno\ndpc0\t\tchar (3)\tno\t\tno\ndpc1\t\tchar (3)\tno\t\tno\ndpc2\t\tchar (3)\tno\t\tno\nnpa\t\tchar (3)\tno\t\tno\nnxx\t\tchar (3)\tno\t\tno\nrest\t\tchar (4)\tno\t\tno\nfield0\t\tint\t\tyes\t\tno\nfield1\t\tchar (4)\tyes\t\tno\nfield2\t\tint\t\tyes\t\tno\nfield3\t\tchar (4)\tyes\t\tno\nfield4\t\tint\t\tyes\t\tno\nfield5\t\tchar (4)\tyes\t\tno\nfield6\t\tint\t\tyes\t\tno\nfield7\t\tchar (4)\tyes\t\tno\nfield8\t\tint\t\tyes\t\tno\nfield9\t\tchar (4)\tyes\t\tno", "msg_date": "Thu, 03 Oct 2002 18:06:10 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Large databases, performance" }, { "msg_contents": "Can you comment on the tools you are using to do the insertions (Perl, \nJava?) and the distribution of data (all random, all static), and the \ntransaction scope (all inserts in one transaction, each insert as a \nsingle transaction, some group of inserts as a transaction).\n\nI'd be curious what happens when you submit more queries than you have \nprocessors (you had four concurrent queries and four CPUs), if you care \nto run any additional tests. Also, I'd report the query time in \nabsolute (like you did) and also in 'Time/number of concurrent queries\". \n This will give you a sense of how the system is scaling as the workload \nincreases. Personally I am more concerned about this aspect than the \nload time, since I am going to guess that this is where all the time is \nspent. \n\nWas the original posting on GENERAL or HACKERS. Is this moving the \nPERFORMANCE for follow-up? I'd like to follow this discussion and want \nto know if I should join another group?\n\nThanks,\n\nCharlie\n\nP.S. Anyone want to comment on their expectation for 'commercial' \ndatabases handling this load? 
I know that we cannot speak about \nspecific performance metrics on some products (licensing restrictions) \nbut I'd be curious if folks have seen some of the databases out there \nhandle these dataset sizes and respond resonably.\n\n\nShridhar Daithankar wrote:\n\n>Hi,\n>\n>Today we concluded test for database performance. Attached are results and the \n>schema, for those who have missed earlier discussion on this.\n>\n>We have (almost) decided that we will partition the data across machines. The \n>theme is, after every some short interval a burst of data will be entered in \n>new table in database, indexed and vacuume. The table(s) will be inherited so \n>that query on base table will fetch results from all the children. The \n>application has to consolidate all the data per node basis. If the database is \n>not postgresql, app. has to consolidate data across partitions as well.\n>\n>Now we need to investigate whether selecting on base table to include children \n>would use indexes created on children table.\n>\n>It's estimated that when entire data is gathered, total number of children \n>tables would be around 1K-1.1K across all machines. \n>\n>This is in point of average rate of data insertion i.e. 5K records/sec and \n>total data size, estimated to be 9 billion rows max i.e. estimated database \n>size is 900GB. Obviously it's impossible to keep insertion rate on an indexed \n>table high as data grows. So partitioning/inheritance looks better approach. \n>\n>Postgresql is not the final winner as yet. Mysql is in close range. 
I will keep \n>you guys posted about the result.\n>\n>Let me know about any comments..\n>\n>Bye\n> Shridhar\n>\n>--\n>Price's Advice:\tIt's all a game -- play it to have fun.\n>\n>\n> \n>\n>------------------------------------------------------------------------\n>\n>Machine \t\t\t\t\t\t\t\t\n>Compaq Proliant Server ML 530\t\t\t\t\t\t\t\t\n>\"Intel Xeon 2.4 Ghz Processor x 4, \"\t\t\t\t\t\t\t\t\n>\"4 GB RAM, 5 x 72.8 GB SCSI HDD \"\t\t\t\t\t\t\t\t\n>\"RAID 0 (Striping) Hardware Setup, Mandrake Linux 9.0\"\t\t\t\t\t\t\t\t\n>\"Cost - $13,500 ($1,350 for each additional 72GB HDD)\"\t\t\t\t\t\t\t\t\n>\t\t\t\t\t\t\t\t\n>Performance Parameter\t\t\t\tMySQL 3.23.52 \t\tMySQL 3.23.52 \t\tPostgreSQL 7.2.2 \t\t\n>\t\t\t\t\t\tWITHOUT InnoDB \t\tWITH InnoDB for \twith built-in support \t\t\n>\t\t\t\t\t\tfor transactional \ttransactional support\tfor transactions\n>\t\t\t\t\t\tsupport\t\t\t\t\t\t\t\t\n>Complete Data\t\t\t\t\t\t\t\t\n>\t\t\t\t\t\t\t\t\n>Inserts + building a composite index\t\t\t\t\t\t\t\t\n>\"40 GB data, 432,000,000 tuples\"\t\t3738 secs\t\t18720 secs\t\t20628 secs\t\t\n>\"about 100 bytes each, schema on \n>'schema' sheet\"\t\t\t\t\t\t\t\t\n>\"composite index on 3 fields \n>(esn, min, datetime)\"\t\t\t\t\t\t\t\t\n>\t\t\t\t\t\t\n>Load Speed\t\t\t\t\t115570 tuples/second\t23076 tuples/second\t20942 tuples/second\n>\t\t\t\t\t\t\n>Database Size on Disk\t\t\t\t48 GB\t\t\t87 GB\t\t\t111 GB\n>\t\t\t\t\t\t\n>Average per partition\t\t\t\t\t\t\n>\t\t\t\t\t\t\n>Inserts + building a composite index\t\t\t\t\t\t\n>\"300MB data, 3,000,000 tuples,\"\t\t\t28 secs\t\t\t130 secs\t\t150 secs\n>\"about 100 bytes each, schema on \n>'schema' sheet\"\t\t\t\t\t\t\n>\"composite index on 3 fields \n>(esn, min, datetime)\"\t\t\t\t\t\t\n>\t\t\t\t\t\t\n>Select Query \t\t\t\t\t7 secs\t\t\t7 secs\t\t\t6 secs\n>based on equality match of 2 fields\t\t\t\t\t\t\n>(esn and min) - 4 concurrent queries \n>running\n>\t\t\t\t\t\t\n>Database Size on Disk\t\t\t\t341 MB\t\t\t619 MB\t\t\t788 MB\n> 
\n>\n>------------------------------------------------------------------------\n>\n>Field Name\tField Type\tNullable\tIndexed\n>type\t\tint\t\tno\t\tno\n>esn\t\tchar (10)\tno\t\tyes\n>min\t\tchar (10)\tno\t\tyes\n>datetime\ttimestamp\tno\t\tyes\n>opc0\t\tchar (3)\tno\t\tno\n>opc1\t\tchar (3)\tno\t\tno\n>opc2\t\tchar (3)\tno\t\tno\n>dpc0\t\tchar (3)\tno\t\tno\n>dpc1\t\tchar (3)\tno\t\tno\n>dpc2\t\tchar (3)\tno\t\tno\n>npa\t\tchar (3)\tno\t\tno\n>nxx\t\tchar (3)\tno\t\tno\n>rest\t\tchar (4)\tno\t\tno\n>field0\t\tint\t\tyes\t\tno\n>field1\t\tchar (4)\tyes\t\tno\n>field2\t\tint\t\tyes\t\tno\n>field3\t\tchar (4)\tyes\t\tno\n>field4\t\tint\t\tyes\t\tno\n>field5\t\tchar (4)\tyes\t\tno\n>field6\t\tint\t\tyes\t\tno\n>field7\t\tchar (4)\tyes\t\tno\n>field8\t\tint\t\tyes\t\tno\n>field9\t\tchar (4)\tyes\t\tno\n>\n> \n>\n>------------------------------------------------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n", "msg_date": "Thu, 03 Oct 2002 08:54:29 -0400", "msg_from": "\"Charles H. Woloszynski\" <chw@clearmetrix.com>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "\nShridhar,\n\nIt's one hell of a DB you're building. I'm sure I'm not the only one interested\nso to satisfy those of us who are nosey: can you say what the application is?\n\nI'm sure we'll all understand if it's not possible for you mention such\ninformation.\n\n\n--\nNigel J. Andrews\n\n\nOn Thu, 3 Oct 2002, Shridhar Daithankar wrote:\n\n> Hi,\n> \n> Today we concluded test for database performance. 
Attached are results and the \n> schema, for those who have missed earlier discussion on this.\n> \n> We have (almost) decided that we will partition the data across machines. The \n> theme is, after every some short interval a burst of data will be entered in \n> new table in database, indexed and vacuume. The table(s) will be inherited so \n> that query on base table will fetch results from all the children. The \n> application has to consolidate all the data per node basis. If the database is \n> not postgresql, app. has to consolidate data across partitions as well.\n> \n> Now we need to investigate whether selecting on base table to include children \n> would use indexes created on children table.\n> \n> It's estimated that when entire data is gathered, total number of children \n> tables would be around 1K-1.1K across all machines. \n> \n> This is in point of average rate of data insertion i.e. 5K records/sec and \n> total data size, estimated to be 9 billion rows max i.e. estimated database \n> size is 900GB. Obviously it's impossible to keep insertion rate on an indexed \n> table high as data grows. So partitioning/inheritance looks better approach. \n> \n> Postgresql is not the final winner as yet. Mysql is in close range. I will keep \n> you guys posted about the result.\n> \n> Let me know about any comments..\n> \n> Bye\n> Shridhar\n\n", "msg_date": "Thu, 3 Oct 2002 13:56:03 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 13:56, Nigel J. Andrews wrote:\n> It's one hell of a DB you're building. 
I'm sure I'm not the only one interested\n> so to satisfy those of us who are nosey: can you say what the application is?\n> \n> I'm sure we'll all understand if it's not possible for you mention such\n> information.\n\nWell, I can't tell everything but somethings I can..\n\n1) This is a system that does not have online capability yet. This is an \nattempt to provide one.\n\n2) The goal is to avoid costs like licensing oracle. I am sure this would make \na great example for OSDB advocacy, which ever database wins..\n\n3) The database size estimates, I put earlier i.e. 9 billion tuples/900GB data \nsize, are in a fixed window. The data is generated from some real time systems. \nYou can imagine the rate.\n\n4) Further more there are timing restrictions attached to it. 5K inserts/sec. \n4800 queries per hour with response time of 10 sec. each. It's this aspect that \nhas forced us for partitioning..\n\nAnd contrary to my earlier information, this is going to be a live system \nrather than a back up one.. A better win to postgresql.. I hope it makes it.\n\nAnd BTW, all these results were on reiserfs. We didn't found much of difference \nin write performance between them. So we stick to reiserfs. And of course we \ngot the latest hot shot Mandrake9 with 2.4.19-16 which really made difference \nover RHL7.2..\n\nBye\n Shridhar\n\n--\nQOTD:\t\"Do you smell something burning or is it me?\"\t\t-- Joan of Arc\n\n", "msg_date": "Thu, 03 Oct 2002 19:33:30 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "Forgive my ignorance, but what about 2.4.19-16 is that much faster? Are \nwe talking about 2x improvement for your tests? We are currently on \n2.4.9 and looking at the performance and wondering... so any comments \nare appreciated.\n\nCharlie\n\n\nShridhar Daithankar wrote:\n\n>And BTW, all these results were on reiserfs. 
We didn't found much of difference \n>in write performance between them. So we stick to reiserfs. And of course we \n>got the latest hot shot Mandrake9 with 2.4.19-16 which really made difference \n>over RHL7.2..\n>\n>Bye\n> Shridhar\n>\n>--\n>QOTD:\t\"Do you smell something burning or is it me?\"\t\t-- Joan of Arc\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n> \n>\n\n-- \n\n\nCharles H. Woloszynski\n\nClearMetrix, Inc.\n115 Research Drive\nBethlehem, PA 18015\n\ntel: 610-419-2210 x400\nfax: 240-371-3256\nweb: www.clearmetrix.com\n\n\n\n\n", "msg_date": "Thu, 03 Oct 2002 10:26:59 -0400", "msg_from": "\"Charles H. Woloszynski\" <chw@clearmetrix.com>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 10:26, Charles H. Woloszynski wrote:\n\n> Forgive my ignorance, but what about 2.4.19-16 is that much faster? Are \n> we talking about 2x improvement for your tests? We are currently on \n> 2.4.9 and looking at the performance and wondering... so any comments \n> are appreciated.\n\nWell, for one thing, 2.4.19 contains backported O(1) scheduler patch which \nimproves SMP performance by heaps as task queue is per cpu rather than one per \nsystem. I don't think any system routinely runs thousands of processes unless \nit's a web/ftp/mail server. In that case improved scheduling wuld help as \nwell..\n\nBesides there were major VM rewrites/changes after 2.4.10 which corrected \nalmost all the major VM fiaskos on linux. 
For anything VM intensive it's \nrecommended that you run 2.4.17 at least.\n\nI would say it's worth going for it.\n\nBye\n Shridhar\n\n--\nSturgeon's Law:\t90% of everything is crud.\n\n", "msg_date": "Thu, 03 Oct 2002 21:20:16 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 19:33, Shridhar Daithankar wrote:\n\n> On 3 Oct 2002 at 13:56, Nigel J. Andrews wrote:\n> > It's one hell of a DB you're building. I'm sure I'm not the only one interested\n> > so to satisfy those of us who are nosey: can you say what the application is?\n> > \n> > I'm sure we'll all understand if it's not possible for you mention such\n> > information.\n> \n> Well, I can't tell everything but somethings I can..\n> \n> 1) This is a system that does not have online capability yet. This is an \n> attempt to provide one.\n> \n> 2) The goal is to avoid costs like licensing oracle. I am sure this would make \n> a great example for OSDB advocacy, which ever database wins..\n> \n> 3) The database size estimates, I put earlier i.e. 9 billion tuples/900GB data \n> size, are in a fixed window. The data is generated from some real time systems. \n> You can imagine the rate.\n\nRead that fixed time window..\n\n> \n> 4) Further more there are timing restrictions attached to it. 5K inserts/sec. \n> 4800 queries per hour with response time of 10 sec. each. It's this aspect that \n> has forced us for partitioning..\n> \n> And contrary to my earlier information, this is going to be a live system \n> rather than a back up one.. A better win to postgresql.. I hope it makes it.\n> \n> And BTW, all these results were on reiserfs. We didn't found much of difference \n> in write performance between them. So we stick to reiserfs. 
And of course we \n> got the latest hot shot Mandrake9 with 2.4.19-16 which really made difference \n> over RHL7.2..\n\nWell, we were comparing ext3 v/s reiserfs. I don't remember the journalling \nmode of ext3 but we did a 10 GB write test. Besides converting the RAID to RAID-\n0 from RAID-5 might have something to do about it.\n\nThere was a discussion on hackers some time back as in which file system is \nbetter. I hope this might have an addition over it..\n\n\nBye\n Shridhar\n\n--\n\t\"What terrible way to die.\"\t\"There are no good ways.\"\t\t-- Sulu and Kirk, \"That \nWhich Survives\", stardate unknown\n\n", "msg_date": "Thu, 03 Oct 2002 21:26:43 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "NOTE: Setting follow up to the performance list\n\nFunny that the status quo seems to be if you need fast selects on data\nthat has few inserts to pick mysql, otherwise if you have a lot of\ninserts and don't need super fast selects go with PostgreSQL; yet your\ndata seems to cut directly against this. \n\nI'm curious, did you happen to run the select tests while also running\nthe insert tests? IIRC the older mysql versions have to lock the table\nwhen doing the insert, so select performance goes in the dumper in that\nscenario, perhaps that's not an issue with 3.23.52? \n\nIt also seems like the vacuum after each insert is unnecessary, unless\nyour also deleting/updating data behind it. 
Perhaps just running an\nANALYZE on the table would suffice while reducing overhead.\n\nRobert Treat\n\nOn Thu, 2002-10-03 at 08:36, Shridhar Daithankar wrote:\n> Machine \t\t\t\t\t\t\t\t\n> Compaq Proliant Server ML 530\t\t\t\t\t\t\t\t\n> \"Intel Xeon 2.4 Ghz Processor x 4, \"\t\t\t\t\t\t\t\t\n> \"4 GB RAM, 5 x 72.8 GB SCSI HDD \"\t\t\t\t\t\t\t\t\n> \"RAID 0 (Striping) Hardware Setup, Mandrake Linux 9.0\"\t\t\t\t\t\t\t\t\n> \"Cost - $13,500 ($1,350 for each additional 72GB HDD)\"\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\t\t\t\n> Performance Parameter\t\t\t\tMySQL 3.23.52 \t\tMySQL 3.23.52 \t\tPostgreSQL 7.2.2 \t\t\n> \t\t\t\t\t\tWITHOUT InnoDB \t\tWITH InnoDB for \twith built-in support \t\t\n> \t\t\t\t\t\tfor transactional \ttransactional support\tfor transactions\n> \t\t\t\t\t\tsupport\t\t\t\t\t\t\t\t\n> Complete Data\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\t\t\t\n> Inserts + building a composite index\t\t\t\t\t\t\t\t\n> \"40 GB data, 432,000,000 tuples\"\t\t3738 secs\t\t18720 secs\t\t20628 secs\t\t\n> \"about 100 bytes each, schema on \n> 'schema' sheet\"\t\t\t\t\t\t\t\t\n> \"composite index on 3 fields \n> (esn, min, datetime)\"\t\t\t\t\t\t\t\t\n> \t\t\t\t\t\t\n> Load Speed\t\t\t\t\t115570 tuples/second\t23076 tuples/second\t20942 tuples/second\n> \t\t\t\t\t\t\n> Database Size on Disk\t\t\t\t48 GB\t\t\t87 GB\t\t\t111 GB\n> \t\t\t\t\t\t\n> Average per partition\t\t\t\t\t\t\n> \t\t\t\t\t\t\n> Inserts + building a composite index\t\t\t\t\t\t\n> \"300MB data, 3,000,000 tuples,\"\t\t\t28 secs\t\t\t130 secs\t\t150 secs\n> \"about 100 bytes each, schema on \n> 'schema' sheet\"\t\t\t\t\t\t\n> \"composite index on 3 fields \n> (esn, min, datetime)\"\t\t\t\t\t\t\n> \t\t\t\t\t\t\n> Select Query \t\t\t\t\t7 secs\t\t\t7 secs\t\t\t6 secs\n> based on equality match of 2 fields\t\t\t\t\t\t\n> (esn and min) - 4 concurrent queries \n> running\n> \t\t\t\t\t\t\n> Database Size on Disk\t\t\t\t341 MB\t\t\t619 MB\t\t\t788 MB\n> ----\n\n\n", "msg_date": "03 Oct 2002 11:57:29 -0400", "msg_from": "Robert 
Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 8:54, Charles H. Woloszynski wrote:\n\n> Can you comment on the tools you are using to do the insertions (Perl, \n> Java?) and the distribution of data (all random, all static), and the \n> transaction scope (all inserts in one transaction, each insert as a \n> single transaction, some group of inserts as a transaction).\n\nMost proably it's all inserts in one transaction spread almost uniformly over \naround 15-20 tables. Of course there will be bunch of transactions..\n\n> I'd be curious what happens when you submit more queries than you have \n> processors (you had four concurrent queries and four CPUs), if you care \n> to run any additional tests. Also, I'd report the query time in \n> absolute (like you did) and also in 'Time/number of concurrent queries\". \n> This will give you a sense of how the system is scaling as the workload \n> increases. Personally I am more concerned about this aspect than the \n> load time, since I am going to guess that this is where all the time is \n> spent. \n\nI don't think so. Because we plan to put enough shared buffers that would \nalmost contain the indexes in RAM if not data. Besides number of tuples \nexpected per query are not many. So more concurrent queries are not going to \nhog anything other than CPU power at most.\n\nOur major concern remains load time as data is generated in real time and is \nexpecetd in database with in specified time period. We need indexes for query \nand inserting into indexed table is on hell of a job. We did attempt inserting \n8GB of data in indexed table. It took almost 20 hours at 1K tuples per second \non average.. Though impressive it's not acceptable for that load..\n> \n> Was the original posting on GENERAL or HACKERS. Is this moving the \n> PERFORMANCE for follow-up? 
I'd like to follow this discussion and want \n> to know if I should join another group?\n\nShall I subscribe to performance? What's the exat list name? Benchmarks? I \ndon't see anything as performance mailing list on this page..\nhttp://developer.postgresql.org/mailsub.php?devlp\n\n> P.S. Anyone want to comment on their expectation for 'commercial' \n> databases handling this load? I know that we cannot speak about \n> specific performance metrics on some products (licensing restrictions) \n> but I'd be curious if folks have seen some of the databases out there \n> handle these dataset sizes and respond resonably.\n\nWell, if something handles such kind of data with single machine and costs \nunder USD20K for entire setup, I would be willing to recommend that to client..\n\nBTW we are trying same test on HP-UX. I hope we get some better figures on 64 \nbit machines..\n\nBye\n Shridhar\n\n--\nClarke's Conclusion:\tNever let your sense of morals interfere with doing the \nright thing.\n\n", "msg_date": "Thu, 03 Oct 2002 21:37:55 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "Shridhar Daithankar wrote:\n<snip>\n> > Was the original posting on GENERAL or HACKERS. Is this moving the\n> > PERFORMANCE for follow-up? I'd like to follow this discussion and want\n> > to know if I should join another group?\n> \n> Shall I subscribe to performance? What's the exat list name? Benchmarks? I\n> don't see anything as performance mailing list on this page..\n> http://developer.postgresql.org/mailsub.php?devlp\n\nIt's a fairly new mailing list. :)\n\npgsql-performance@postgresql.org\n\nEasiest way to subscribe is by emailing majordomo@postgresql.org with:\n\nsubscribe pgsql-performance\n\nas the message body.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n<snip> \n> Bye\n> Shridhar\n> \n> --\n> Clarke's Conclusion: Never let your sense of morals interfere with doing the\n> right thing.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Fri, 04 Oct 2002 02:16:06 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 11:57, Robert Treat wrote:\n\n> NOTE: Setting follow up to the performance list\n> \n> Funny that the status quo seems to be if you need fast selects on data\n> that has few inserts to pick mysql, otherwise if you have a lot of\n> inserts and don't need super fast selects go with PostgreSQL; yet your\n> data seems to cut directly against this. \n\nWell, couple of things..\n\nThe number of inserts aren't few. it's 5000/sec.required in the field Secondly \nI don't know really but postgresql seems doing pretty fine in parallel selects. \nIf we use mysql with transaction support then numbers are really close..\n\nMay be it's time to rewrite famous myth that postgresql is slow. When properly \ntuned or given enough head room, it's almost as fast as mysql..\n\n> I'm curious, did you happen to run the select tests while also running\n> the insert tests? 
IIRC the older mysql versions have to lock the table\n> when doing the insert, so select performance goes in the dumper in that\n> scenario, perhaps that's not an issue with 3.23.52? \n\nIMO even if it locks tables that shouldn't affect select performance. It would \nbe fun to watch when we insert multiple chunks of data and fire queries \nconcurrently. I would be surprised if mysql starts slowing down..\n\n> It also seems like the vacuum after each insert is unnecessary, unless\n> your also deleting/updating data behind it. Perhaps just running an\n> ANALYZE on the table would suffice while reducing overhead.\n\nI believe that was vacuum analyze only. But still it takes lot of time. Good \nthing is it's not blocking..\n\nAnyway I don't think such frequent vacuums are going to convince planner to \nchoose index scan over sequential scan. I am sure it's already convinced..\n\nRegards,\n Shridhar\n\n-----------------------------------------------------------\nShridhar Daithankar\nLIMS CPE Team Member, PSPL.\nmailto:shridhar_daithankar@persistent.co.in\nPhone:- +91-20-5678900 Extn.270\nFax :- +91-20-5678901 \n-----------------------------------------------------------\n\n", "msg_date": "Thu, 03 Oct 2002 21:47:03 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On Thu, 2002-10-03 at 10:56, Shridhar Daithankar wrote:\n> Well, we were comparing ext3 v/s reiserfs. I don't remember the journalling \n> mode of ext3 but we did a 10 GB write test. Besides converting the RAID to RAID-\n> 0 from RAID-5 might have something to do about it.\n> \n> There was a discussion on hackers some time back as in which file system is \n> better. I hope this might have an addition over it..\n\n\nHmm. Reiserfs' claim to fame is it's low latency with many, many small\nfiles and that it's journaled. 
I've never seem anyone comment about it\nbeing considered an extremely fast file system in an general computing\ncontext nor have I seen any even hint at it as a file system for use in\nheavy I/O databases. This is why Reiserfs is popular with news and\nsquid cache servers as it's almost an ideal fit. That is, tons of small\nfiles or directories contained within a single directory. As such, I'm\nvery surprised that reiserfs is even in the running for your comparison.\n\nMight I point you toward XFS, JFS, or ext3, ? As I understand it, XFS\nand JFS are going to be your preferred file systems for for this type of\napplication with XFS in the lead as it's tool suite is very rich and\nrobust. I'm actually lacking JFS experience but from what I've read,\nit's a notch or two back from XFS in robustness (assuming we are talking\nLinux here). Feel free to read and play to find out for your self. I'd\nrecommend that you start playing with XFS to see how the others\ncompare. After all, XFS' specific claim to fame is high throughput w/\nlow latency on large and very large files. Furthermore, they even have\na real time mechanism that you can further play with to see how it\neffects your throughput and/or latencies.\n\nGreg", "msg_date": "03 Oct 2002 11:23:28 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Thu, 2002-10-03 at 12:17, Shridhar Daithankar wrote:\n> On 3 Oct 2002 at 11:57, Robert Treat wrote:\n> May be it's time to rewrite famous myth that postgresql is slow. \n\nThat myth has been dis-proven long ago, it just takes awhile for\neveryone to catch on ;-)\n\nWhen properly \n> tuned or given enough head room, it's almost as fast as mysql..\n> \n> > I'm curious, did you happen to run the select tests while also running\n> > the insert tests? 
IIRC the older mysql versions have to lock the table\n> > when doing the insert, so select performance goes in the dumper in that\n> > scenario, perhaps that's not an issue with 3.23.52? \n> \n> IMO even if it locks tables that shouldn't affect select performance. It would \n> be fun to watch when we insert multiple chunks of data and fire queries \n> concurrently. I would be surprised if mysql starts slowing down..\n> \n\nHmm... been awhile since I dug into mysql internals, but IIRC once the\ntable was locked, you had to wait for the insert to complete so the\ntable would be unlocked and the select could go through. (maybe this is\na myth that I need to get clued in on)\n\n> > It also seems like the vacuum after each insert is unnecessary, unless\n> > your also deleting/updating data behind it. Perhaps just running an\n> > ANALYZE on the table would suffice while reducing overhead.\n> \n> I believe that was vacuum analyze only. But still it takes lot of time. Good \n> thing is it's not blocking..\n> \n> Anyway I don't think such frequent vacuums are going to convince planner to \n> choose index scan over sequential scan. I am sure it's already convinced..\n> \n\nMy thinking was that if your just doing inserts, you need to update the\nstatistics but don't need to check on unused tuples. \n\nRobert Treat\n\n", "msg_date": "03 Oct 2002 12:26:34 -0400", "msg_from": "Robert Treat <xzilla@users.sourceforge.net>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 11:23, Greg Copeland wrote:\n\n> On Thu, 2002-10-03 at 10:56, Shridhar Daithankar wrote:\n> > Well, we were comparing ext3 v/s reiserfs. I don't remember the journalling \n> > mode of ext3 but we did a 10 GB write test. Besides converting the RAID to RAID-\n> > 0 from RAID-5 might have something to do about it.\n> > \n> > There was a discussion on hackers some time back as in which file system is \n> > better. 
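Stepping back to Robert's statistics-only suggestion: for an insert-only table the refresh he describes can be sketched as below (table name hypothetical). ANALYZE alone refreshes planner statistics without the dead-tuple scan that VACUUM performs:

```sql
-- Insert-only workload: no UPDATEs/DELETEs means no dead tuples to
-- reclaim, so the expensive half of VACUUM ANALYZE buys nothing here.
ANALYZE log_data;

-- Once rows do get updated or deleted, the full form becomes useful again:
-- VACUUM ANALYZE log_data;
```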
I hope this might have an addition over it..\n> \n> \n> Hmm. Reiserfs' claim to fame is it's low latency with many, many small\n> files and that it's journaled. I've never seem anyone comment about it\n> being considered an extremely fast file system in an general computing\n> context nor have I seen any even hint at it as a file system for use in\n> heavy I/O databases. This is why Reiserfs is popular with news and\n> squid cache servers as it's almost an ideal fit. That is, tons of small\n> files or directories contained within a single directory. As such, I'm\n> very surprised that reiserfs is even in the running for your comparison.\n> \n> Might I point you toward XFS, JFS, or ext3, ? As I understand it, XFS\n> and JFS are going to be your preferred file systems for for this type of\n> application with XFS in the lead as it's tool suite is very rich and\n> robust. I'm actually lacking JFS experience but from what I've read,\n> it's a notch or two back from XFS in robustness (assuming we are talking\n> Linux here). Feel free to read and play to find out for your self. I'd\n> recommend that you start playing with XFS to see how the others\n> compare. After all, XFS' specific claim to fame is high throughput w/\n> low latency on large and very large files. Furthermore, they even have\n> a real time mechanism that you can further play with to see how it\n> effects your throughput and/or latencies.\n\nI would try that. Once we are thr. 
with tests at our hands..\n\nBye\n Shridhar\n\n--\n\t\"The combination of a number of things to make existence worthwhile.\"\t\"Yes, \nthe philosophy of 'none,' meaning 'all.'\"\t\t-- Spock and Lincoln, \"The Savage \nCurtain\", stardate 5906.4\n\n", "msg_date": "Thu, 03 Oct 2002 22:00:18 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On 3 Oct 2002 at 12:26, Robert Treat wrote:\n\n> On Thu, 2002-10-03 at 12:17, Shridhar Daithankar wrote:\n> > On 3 Oct 2002 at 11:57, Robert Treat wrote:\n> > May be it's time to rewrite famous myth that postgresql is slow. \n> \n> That myth has been dis-proven long ago, it just takes awhile for\n> everyone to catch on ;-)\n\n:-)\n\n> Hmm... been awhile since I dug into mysql internals, but IIRC once the\n> table was locked, you had to wait for the insert to complete so the\n> table would be unlocked and the select could go through. (maybe this is\n> a myth that I need to get clued in on)\n\nIf that turns out to be true, I guess mysql will nose dive out of window.. May \nbe time to run a test that's nearer to real world expectation, especially in \nterms on concurrency..\n\nI don't think tat will be an issue with mysql with transaction support. The \nvanilla one might suffer.. Not the other one.. At least theoretically..\n\n> My thinking was that if your just doing inserts, you need to update the\n> statistics but don't need to check on unused tuples. \n\nAny other way of doing that other than vacuum analyze? I thought that was the \nonly way..\n\nBye\n Shridhar\n\n--\n\"Even more amazing was the realization that God has Internet access. 
Iwonder \nif He has a full newsfeed?\"(By Matt Welsh)\n\n", "msg_date": "Thu, 03 Oct 2002 22:05:24 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On Thu, 03 Oct 2002 18:06:10 +0530, \"Shridhar Daithankar\"\n<shridhar_daithankar@persistent.co.in> wrote:\n>Machine \t\t\t\t\t\t\t\t\n>Compaq Proliant Server ML 530\t\t\t\t\t\t\t\t\n>\"Intel Xeon 2.4 Ghz Processor x 4, \"\t\t\t\t\t\t\t\t\n>\"4 GB RAM, 5 x 72.8 GB SCSI HDD \"\t\t\t\t\t\t\t\t\n>\"RAID 0 (Striping) Hardware Setup, Mandrake Linux 9.0\"\n\nShridhar,\n\nforgive me if I ask what has been said before: Did you run at 100%\nCPU or was IO bandwidth your limit? And is the answer the same for\nall three configurations?\n\nServus\n Manfred\n", "msg_date": "Thu, 03 Oct 2002 18:44:09 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "Shridhar Daithankar wrote:\n\n>On 3 Oct 2002 at 11:57, Robert Treat wrote:\n>\n> \n>\n>>NOTE: Setting follow up to the performance list\n>>\n>>Funny that the status quo seems to be if you need fast selects on data\n>>that has few inserts to pick mysql, otherwise if you have a lot of\n>>inserts and don't need super fast selects go with PostgreSQL; yet your\n>>data seems to cut directly against this. \n>> \n>>\n>\n>Well, couple of things..\n>\n>The number of inserts aren't few. it's 5000/sec.required in the field Secondly \n>I don't know really but postgresql seems doing pretty fine in parallel selects. \n>If we use mysql with transaction support then numbers are really close..\n>\n>May be it's time to rewrite famous myth that postgresql is slow. When properly \n>tuned or given enough head room, it's almost as fast as mysql..\n> \n>\n\nIn the case of concurrent transactions MySQL does not do as well due to \nvery bad locking behavious. 
PostgreSQL is far better because it does row \nlevel locking instead of table locking.\nIf you have many concurrent transactions MySQL performs some sort of \n\"self-denial-of-service\". I'd choose PostgreSQL in order to make sure \nthat the database does not block.\n\n\n>>I'm curious, did you happen to run the select tests while also running\n>>the insert tests? IIRC the older mysql versions have to lock the table\n>>when doing the insert, so select performance goes in the dumper in that\n>>scenario, perhaps that's not an issue with 3.23.52? \n>> \n>>\n>\n>IMO even if it locks tables that shouldn't affect select performance. It would \n>be fun to watch when we insert multiple chunks of data and fire queries \n>concurrently. I would be surprised if mysql starts slowing down..\n> \n>\n\nIn the case of concurrent SELECTs and INSERT/UPDATE/DELETE operations \nMySQL will slow down for sure. The more concurrent transactions you have \nthe worse MySQL will be.\n\n>>It also seems like the vacuum after each insert is unnecessary, unless\n>>your also deleting/updating data behind it. Perhaps just running an\n>>ANALYZE on the table would suffice while reducing overhead.\n>> \n>>\n>\n>I believe that was vacuum analyze only. But still it takes lot of time. Good \n>thing is it's not blocking..\n>\n>Anyway I don't think such frequent vacuums are going to convince planner to \n>choose index scan over sequential scan. I am sure it's already convinced..\n> \n>\n\nPostgreSQL allows you to improve execution plans by giving the planner a \nhint.\nIn addition to that: if you need REAL performance and if you are running \nsimilar queries consider using SPI.\n\nAlso: 7.3 will support PREPARE/EXECUTE.\n\nIf you are running MySQL you will not be able to add features to the \ndatabase easily.\nIn the case of PostgreSQL you have a broad range of simple interfaces \nwhich make many things pretty simple (eg. 
optimized data types in < 50 \nlines of C code).\n\nPostgreSQL is the database of the future and you can perform a lot of \ntuning.\nMySQL is a simple frontend to a filesystem and it is fast as long as you \nare doing SELECT 1+1 operations.\n\nAlso: Keep in mind that PostgreSQL has a wonderful core team. MySQL is \nbuilt on Monty Widenius and the core team = Monty.\nAlso: PostgreSQL = ANSI compilant, MySQL = Monty compliant\n\nIn the past few years I have seen that there is no database system which \ncan beat PostgreSQL's flexibility and stability.\nI am familiar with various database systems but believe: PostgreSQL is \nthe best choice.\n\n Hans\n\n\n>Regards,\n> Shridhar\n>\n>-----------------------------------------------------------\n>Shridhar Daithankar\n>LIMS CPE Team Member, PSPL.\n>mailto:shridhar_daithankar@persistent.co.in\n>Phone:- +91-20-5678900 Extn.270\n>Fax :- +91-20-5678901 \n>-----------------------------------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n> \n>\n\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Thu, 03 Oct 2002 18:51:05 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <postgres@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Thu, 03 Oct 2002 21:47:03 +0530, \"Shridhar Daithankar\"\n<shridhar_daithankar@persistent.co.in> wrote:\n>I believe that was vacuum analyze only.\n\nWell there is\n\n\tVACUUM [tablename]; \n\nand there is\n\n\tANALYZE [tablename];\n\nAnd\n\n\tVACUUM ANALYZE [tablename];\n\nis VACUUM followed by 
ANALYZE.\n\nServus\n Manfred\n", "msg_date": "Thu, 03 Oct 2002 18:53:32 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Thu, 2002-10-03 at 11:17, Shridhar Daithankar wrote:\n> On 3 Oct 2002 at 11:57, Robert Treat wrote:\n> \n[snip]\n> > I'm curious, did you happen to run the select tests while also running\n> > the insert tests? IIRC the older mysql versions have to lock the table\n> > when doing the insert, so select performance goes in the dumper in that\n> > scenario, perhaps that's not an issue with 3.23.52? \n> \n> IMO even if it locks tables that shouldn't affect select performance. It would \n> be fun to watch when we insert multiple chunks of data and fire queries \n> concurrently. I would be surprised if mysql starts slowing down..\n\nWhat kind of lock? Shared lock or exclusive lock? If SELECT\nperformance tanked when doing simultaneous INSERTs, then maybe there\nwere exclusive table locks.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"What other evidence do you have that they are terrorists, |\n| other than that they trained in these camps?\" |\n| 17-Sep-2002 Katie Couric to an FBI agent regarding the 5 |\n| men arrested near Buffalo NY |\n+------------------------------------------------------------+\n\n", "msg_date": "03 Oct 2002 12:38:49 -0500", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On Thu, 2002-10-03 at 11:51, Hans-Jürgen Schönig wrote:\n> Shridhar Daithankar wrote:\n> \n> >On 3 Oct 2002 at 11:57, Robert Treat wrote:\n[snip]\n> PostgreSQL allows you to improve execution plans by giving the planner a \n> hint.\n> In addition to that: if you need REAL performance and if you are running \n> similar queries consider using SPI.\n\nWhat is SPI?\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"What other evidence do you have that they are terrorists, |\n| other than that they trained in these camps?\" |\n| 17-Sep-2002 Katie Couric to an FBI agent regarding the 5 |\n| men arrested near Buffalo NY |\n+------------------------------------------------------------+\n\n", "msg_date": "03 Oct 2002 15:55:35 -0500", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Thu, Oct 03, 2002 at 06:51:05PM +0200, Hans-J?rgen Sch?nig wrote:\n\n> In the case of concurrent transactions MySQL does not do as well due to \n> very bad locking behavious. PostgreSQL is far better because it does row \n> level locking instead of table locking.\n\nIt is my understanding that MySQL no longer does this on InnoDB\ntables. 
Whether various bag-on-the-side table types are a good thing\nI will leave to others; but there's no reason to go 'round making\nclaims about old versions of MySQL any more than there is a reason to\ncontinue to talk about PostgreSQL not being crash safe. MySQL has\nmoved along nearly as quickly as PostgreSQL. \n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 3 Oct 2002 17:09:20 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "May I suggest that instead of [pgsql-performance] that [PERF] be used to\nsave some of the subject line.\n\nRon Johnson wrote:\n> \n> On Thu, 2002-10-03 at 11:51, Hans-Jürgen Schönig wrote:\n> > Shridhar Daithankar wrote:\n> >\n> > >On 3 Oct 2002 at 11:57, Robert Treat wrote:\n> [snip]\n> > PostgreSQL allows you to improve execution plans by giving the planner a\n> > hint.\n> > In addition to that: if you need REAL performance and if you are running\n> > similar queries consider using SPI.\n> \n> What is SPI?\n> \n> --\n> +------------------------------------------------------------+\n> | Ron Johnson, Jr. 
mailto:ron.l.johnson@cox.net |\n> | Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n> | |\n> | \"What other evidence do you have that they are terrorists, |\n> | other than that they trained in these camps?\" |\n> | 17-Sep-2002 Katie Couric to an FBI agent regarding the 5 |\n> | men arrested near Buffalo NY |\n> +------------------------------------------------------------+\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n", "msg_date": "Thu, 03 Oct 2002 17:12:02 -0400", "msg_from": "Jean-Luc Lachance <jllachan@nsd.ca>", "msg_from_op": false, "msg_subject": "use [PERF] instead of " }, { "msg_contents": "On 3 Oct 2002 at 18:53, Manfred Koizar wrote:\n\n> On Thu, 03 Oct 2002 21:47:03 +0530, \"Shridhar Daithankar\"\n> <shridhar_daithankar@persistent.co.in> wrote:\n> >I believe that was vacuum analyze only.\n> \n> Well there is\n> \n> \tVACUUM [tablename]; \n> \n> and there is\n> \n> \tANALYZE [tablename];\n> \n> And\n> \n> \tVACUUM ANALYZE [tablename];\n> \n> is VACUUM followed by ANALYZE.\n\nI was using vacuum analyze. \n\nGood that you pointed that out. Now I will modify the postgresql auto vacuum daemon \nthat I wrote to analyze only in the case of excessive inserts. I hope that's lighter \non performance compared to vacuum analyze..\n\nBye\n Shridhar\n\n--\nMix's Law:\tThere is nothing more permanent than a temporary building.\tThere is \nnothing more permanent than a temporary tax.\n\n", "msg_date": "Fri, 04 Oct 2002 13:30:54 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "On Thu, 3 Oct 2002, Hans-Jürgen Schönig wrote:\n\n> In the case of concurrent transactions MySQL does not do as well due to \n> very bad locking behavious. 
PostgreSQL is far better because it does row \n> level locking instead of table locking.\n> If you have many concurrent transactions MySQL performs some sort of \n> \"self-denial-of-service\". I'd choose PostgreSQL in order to make sure \n> that the database does not block.\n\nWhile I'm no big fan of MySQL, I must point out that with innodb tables, \nthe locking is row level, and the ability to handle parallel read / write \nis much improved.\n\nAlso, Postgresql does NOT use row level locking, it uses MVCC, which is \n\"better than row level locking\" as Tom puts it.\n\nOf course, hot backup is only 2,000 Euros for an innodb table mysql, while \nhot backup for postgresql is free. :-)\n\nThat said, MySQL still doesn't handle parallel load nearly as well as \npostgresql, it's just better than it once was.\n\n> Also: Keep in mind that PostgreSQL has a wonderful core team. MySQL is \n> built on Monty Widenius and the core team = Monty.\n> Also: PostgreSQL = ANSI compilant, MySQL = Monty compliant\n\nThis is a very valid point. The \"committee\" that creates and steers \nPostgresql is very much a meritocracy. The \"committee\" that steers MySQL \nis Monty. \n\nI'm much happier knowing that every time something important needs to be \ndone we have a whole cupboard full of curmudgeons arguing the fine points \nso that the \"right thing\" gets done.\n\n\n", "msg_date": "Fri, 4 Oct 2002 10:05:10 -0600 (MDT)", "msg_from": "\"scott.marlowe\" <scott.marlowe@ihs.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "MVCC = great ...\nI know that is not row level locking but that's the way things can be \nexplained more easily. Many people are asking my how things work and \nthis way it is easier to understand. 
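The readers-don't-block-writers behaviour being described can be sketched with two concurrent psql sessions (table and values hypothetical):

```sql
-- Session A: a long-running bulk insert, left uncommitted
BEGIN;
INSERT INTO log_data VALUES (42, 'bulk row');
-- ... many more inserts, no COMMIT yet ...

-- Session B, meanwhile: under MVCC this SELECT neither waits for A nor
-- sees A's uncommitted rows; it simply reads its own snapshot.  With
-- table-level locking (old MySQL/MyISAM) it would block until the
-- writer finished.
SELECT count(*) FROM log_data;

-- Session A:
COMMIT;  -- rows become visible to snapshots taken after this point
```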
Never tell a trainee about deadlock \ndetection and co *g*.\n\nI am happy that the PostgreSQL core team + all developers are not like \nMonty ...\nI am happy to PostgreSQL has developers such as Bruce, Tom, Jan, Marc, \nVadim, Joe, Neil, Christopher, etc. (just to name a few) ...\n\nYes, it is said to be better than it was but that's not the point:\nMySQL = Monty SQL <> ANSI SQL ...\n\nBelieve me, the table will turn and finally the better system will succeed.\nOne we have clustering, PITR, etc. running people will see how real \ndatabases work :).\n\n Hans\n\n\n\nscott.marlowe wrote:\n\n>On Thu, 3 Oct 2002, Hans-J�rgen Sch�nig wrote:\n>\n> \n>\n>>In the case of concurrent transactions MySQL does not do as well due to \n>>very bad locking behavious. PostgreSQL is far better because it does row \n>>level locking instead of table locking.\n>>If you have many concurrent transactions MySQL performs some sort of \n>>\"self-denial-of-service\". I'd choose PostgreSQL in order to make sure \n>>that the database does not block.\n>> \n>>\n>\n>While I'm no big fan of MySQL, I must point out that with innodb tables, \n>the locking is row level, and the ability to handle parallel read / write \n>is much improved.\n>\n>Also, Postgresql does NOT use row level locking, it uses MVCC, which is \n>\"better than row level locking\" as Tom puts it.\n>\n>Of course, hot backup is only 2,000 Euros for an innodb table mysql, while \n>hot backup for postgresql is free. :-)\n>\n>That said, MySQL still doesn't handle parallel load nearly as well as \n>postgresql, it's just better than it once was.\n>\n> \n>\n>>Also: Keep in mind that PostgreSQL has a wonderful core team. MySQL is \n>>built on Monty Widenius and the core team = Monty.\n>>Also: PostgreSQL = ANSI compilant, MySQL = Monty compliant\n>> \n>>\n>\n>This is a very valid point. The \"committee\" that creates and steers \n>Postgresql is very much a meritocracy. The \"committee\" that steers MySQL \n>is Monty. 
\n>\n>I'm much happier knowing that every time something important needs to be \n>done we have a whole cupboard full of curmudgeons arguing the fine points \n>so that the \"right thing\" gets done.\n> \n>\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n", "msg_date": "Fri, 04 Oct 2002 18:30:47 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <hs@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Large databases, performance" }, { "msg_contents": "Andrew Sullivan <andrew@libertyrms.info> wrote:\n\n> On Thu, Oct 03, 2002 at 06:51:05PM +0200, Hans-J?rgen Sch?nig wrote:\n>\n> > In the case of concurrent transactions MySQL does not do as well due to\n> > very bad locking behavious. PostgreSQL is far better because it does row\n> > level locking instead of table locking.\n>\n> It is my understanding that MySQL no longer does this on InnoDB\n> tables. Whether various bag-on-the-side table types are a good thing\n> I will leave to others; but there's no reason to go 'round making\n> claims about old versions of MySQL any more than there is a reason to\n> continue to talk about PostgreSQL not being crash safe. MySQL has\n> moved along nearly as quickly as PostgreSQL.\n\nLocking and transactions is not fine in MySQL (with InnoDB) though. I tried\nto do selects on a table I was concurrently inserting to. In a single thread\nI was constantly inserting 1000 rows per transaction. While inserting I did\nsome random selects on the same table. It often happend that the insert\ntransactions were aborted due to dead lock problems. 
There I see the problem\nwith locking reads.\nI like PostgreSQL's MVCC!\n\nRegards,\nMichael Paesold\n\n", "msg_date": "Fri, 4 Oct 2002 18:38:21 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "\nIn Oracle you can Pin large objects into memory to prevent frequent\nreloads. Is there anyway to do this with Postgres? It appears that some\nof our tables that get hit a lot may get kicked out of memory when we\naccess some of our huge tables. Then they have to wait for I/O to get\nloaded back in. \n\n\nDavid Blood\nMatraex, Inc\n\n\n\n", "msg_date": "Fri, 4 Oct 2002 10:46:57 -0600", "msg_from": "\"David Blood\" <david@matraex.com>", "msg_from_op": false, "msg_subject": "Pinning a table into memory" }, { "msg_contents": "\"David Blood\" <david@matraex.com> writes:\n> In Oracle you can Pin large objects into memory to prevent frequent\n> reloads. Is there anyway to do this with Postgres?\n\nI can never understand why people think this would be a good idea.\nIf you're hitting a table frequently, it will stay in memory anyway\n(either in Postgres shared buffers or kernel disk cache). If you're\nnot hitting it frequently enough to keep it swapped in, then whatever\nis getting swapped in instead is probably a better candidate to be\noccupying the space. ISTM that a manual \"pin this table\" knob would\nmostly have the effect of making performance worse, whenever the\nsystem activity is slightly different from the situation you had in\nmind when you installed the pin.\n\nHaving said that, I'll freely concede that our cache management\nalgorithms could use improvement (and there are people looking at\nthat right now). 
But a manual pin doesn't seem like a better answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 14:47:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Pinning a table into memory " }, { "msg_contents": "On Thu, 3 Oct 2002, Shridhar Daithankar wrote:\n\n> Well, we were comparing ext3 v/s reiserfs. I don't remember the journalling\n> mode of ext3 but we did a 10 GB write test. Besides converting the RAID to RAID-\n> 0 from RAID-5 might have something to do about it.\n\nThat will have a massive, massive effect on performance. Depending on\nyour RAID subsystem, you can except RAID-0 to be between two and twenty\ntimes as fast for writes as RAID-5.\n\nIf you compared one filesystem on RAID-5 and another on RAID-0,\nyour results are likely not at all indicative of file system\nperformance.\n\nNote that I've redirected followups to the pgsql-performance list.\nAvoiding cross-posting would be nice, since I am getting lots of\nduplicate messages these days.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 7 Oct 2002 11:27:04 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On Thu, 3 Oct 2002, Shridhar Daithankar wrote:\n\n> Our major concern remains load time as data is generated in real time and is\n> expecetd in database with in specified time period.\n\nIf your time period is long enough, you can do what I do, which is\nto use partial indexes so that the portion of the data being loaded\nis not indexed. That will speed your loads quite a lot. 
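Curt's partial-index trick might look like this in SQL. The names and the cutoff date are purely illustrative, and note his caveat below: the planner won't combine the two partial indexes, so queries may need to target one predicate or the other explicitly:

```sql
-- Index only the already-loaded range; rows arriving after the cutoff
-- are not indexed, so the bulk load pays no index-maintenance cost.
CREATE INDEX log_data_old_idx ON log_data (ts) WHERE ts < '2002-10-01';

-- After the load, either index the new range separately ...
CREATE INDEX log_data_new_idx ON log_data (ts) WHERE ts >= '2002-10-01';

-- ... or build one full index and retire the partial one.
CREATE INDEX log_data_ts_idx ON log_data (ts);
DROP INDEX log_data_old_idx;
```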
Afterwards\nyou can either generate another partial index for the range you\nloaded, or generate a new index over both old and new data, and\nthen drop the old index.\n\nThe one trick is that the optimizer is not very smart about combining\nmultiple indexes, so you often need to split your queries across\nthe two \"partitions\" of the table that have separate indexes.\n\n> Shall I subscribe to performance?\n\nYes, you really ought to. The list is pgsql-performance@postgresql.org.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net>   +81 90 7737 2974   http://www.netbsd.org\n    Don't you know, in this new Dark Age, we're all light.  --XTC\n\n", "msg_date": "Mon, 7 Oct 2002 11:30:57 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> ... Avoiding cross-posting would be nice, since I am getting lots of\n> duplicate messages these days.\n\nCross-posting is a fact of life, and in fact encouraged, on the pg\nlists.  I suggest adapting.  Try sending\n\tset all unique your-email-address\nto the PG majordomo server; this sets you up to get only one copy\nof each cross-posted message.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Oct 2002 23:20:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "cross-posts (was Re: Large databases, performance)" }, { "msg_contents": "On 3 Oct 2002 at 8:54, Charles H. Woloszynski wrote:\n\n> I'd be curious what happens when you submit more queries than you have \n> processors (you had four concurrent queries and four CPUs), if you care \n> to run any additional tests.  Also, I'd report the query time in \n> absolute (like you did) and also in 'Time/number of concurrent queries\". \n> This will give you a sense of how the system is scaling as the workload \n> increases.
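The partial-index loading trick Curt describes can be sketched as follows; the table, index, and column names and the cutoff date are invented for illustration (the real cutoff would be wherever the current load batch starts):

```sql
-- Index only the rows loaded so far (hypothetical schema).
CREATE INDEX calls_dt_old ON calls (datetime)
    WHERE datetime < '2002-10-01';

-- Bulk-load rows with datetime >= '2002-10-01'; they fall outside the
-- partial index, so COPY pays no index-maintenance cost for them.

-- Afterwards, either index just the newly loaded range ...
CREATE INDEX calls_dt_new ON calls (datetime)
    WHERE datetime >= '2002-10-01';

-- ... or build one index over old and new data and drop the old one:
-- CREATE INDEX calls_dt_all ON calls (datetime);
-- DROP INDEX calls_dt_old;
```

As Curt points out, queries then usually need an explicit `WHERE datetime < '2002-10-01'` (or `>=`) clause so the planner can match one of the partial indexes.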
Personally I am more concerned about this aspect than the \n> load time, since I am going to guess that this is where all the time is \n> spent. \n\nOK. I am back from my cave after some more tests are done. Here are the \nresults. I am not repeating large part of it but answering your questions..\n\nDon't ask me how these numbers changed. I am not the person who conducts the \ntest neither I have access to the system. Rest (or most) of the things remains \nsame..\n\nMySQL 3.23.52 with innodb transaction support: \n\n4 concurrent queries \t:- 257.36 ms\n40 concurrent queries\t:- 35.12 ms\n\nPostgresql 7.2.2 \n\n4 concurrent queries \t\t:- 257.43 ms\n40 concurrent \tqueries\t\t:- 41.16 ms\n\nThough I can not report oracle numbers, suffice to say that they fall in \nbetween these two numbers.\n\nOracle seems to be hell lot faster than mysql/postgresql to load raw data even \nwhen it's installed on reiserfs. We plan to run XFS tests later in hope that \nthat would improve mysql/postgresql load times. \n\nIn this run postgresql has better load time than mysql/innodb ( 18270 sec v/s \n17031 sec.) Index creation times are faster as well (100 sec v/s 130 sec). \nDon't know what parameters are changed.\n\nOnly worry is database size. Postgresql is 111GB v/s 87 GB for mysql. All \nnumbers include indexes. This is really going to be a problem when things are \ndeployed. Any idea how can it be taken down? \n\nWAL is out, it's not counted.\n\nSchema optimisation is later issue. Right now all three databases are using \nsame schema..\n\nWill it help in this situation if I recompile postgresql with block size say 32K \nrather than 8K default?
Will it save some overhead and offer better performance \nin data load etc?\n\nWill keep you guys updated..\n\nRegards,\n Shridhar\n\n-----------------------------------------------------------\nShridhar Daithankar\nLIMS CPE Team Member, PSPL.\nmailto:shridhar_daithankar@persistent.co.in\nPhone:- +91-20-5678900 Extn.270\nFax :- +91-20-5678901 \n-----------------------------------------------------------\n\n", "msg_date": "Mon, 07 Oct 2002 15:07:29 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "I wonder if the following changes make a difference:\n\n- compile PostgreSQL with CFLAGS=' -O3 '\n- redefine commit delays\n\nalso: keep in mind that you might gain a lot of performance by using the \nSPI if you are running many similar queries\n\ntry 7.3 - as far as I remember there is a mechanism which caches recent \nexecution plans.\nalso: some overhead was reduced (tuples, backend startup).\n\n    Hans\n\n\n>Ok. I am back from my cave after some more tests are done. Here are the \n>results. I am not repeating large part of it but answering your questions..\n>\n>Don't ask me how these numbers changed. I am not the person who conducts the \n>test neither I have access to the system. Rest(or most ) of the things remains \n>same..\n>\n>MySQL 3.23.52 with innodb transaction support: \n>\n>4 concurrent queries \t:- 257.36 ms\n>40 concurrent queries\t:- 35.12 ms\n>\n>Postgresql 7.2.2 \n>\n>4 concurrent queries \t\t:- 257.43 ms\n>40 concurrent \tqueries\t\t:- 41.16 ms\n>\n>Though I can not report oracle numbers, suffice to say that they fall in \n>between these two numbers.\n>\n>Oracle seems to be hell lot faster than mysql/postgresql to load raw data even \n>when it's installed on reiserfs. We plan to run XFS tests later in hope that \n>that would improve mysql/postgresql load times.
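To make Hans's plan-caching point concrete: 7.3 adds PREPARE/EXECUTE, which plans a repeated statement once per session; the table and parameter below are invented for illustration:

```sql
-- Plan once per session ...
PREPARE esn_lookup (char(10)) AS
    SELECT * FROM calls WHERE esn = $1;

-- ... then execute repeatedly with different parameters.
EXECUTE esn_lookup('1234567890');
EXECUTE esn_lookup('0987654321');
```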
\n>\n>In this run postgresql has better load time than mysql/innodb ( 18270 sec v/s \n>17031 sec.) Index creation times are faster as well (100 sec v/s 130 sec). \n>Don't know what parameters are changed.\n>\n>Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql. All \n>numbers include indexes. This is really going to be a problem when things are \n>deployed. Any idea how can it be taken down? \n>\n>WAL is out, it's not counted.\n>\n>Schema optimisation is later issue. Right now all three databases are using \n>same schema..\n>\n>Will it help in this situation if I recompile posgresql with block size say 32K \n>rather than 8K default? Will it saev some overhead and offer better performance \n>in data load etc?\n>\n>Will keep you guys updated..\n>\n>Regards,\n> Shridhar\n>\n>-----------------------------------------------------------\n>Shridhar Daithankar\n>LIMS CPE Team Member, PSPL.\n>mailto:shridhar_daithankar@persistent.co.in\n>Phone:- +91-20-5678900 Extn.270\n>Fax :- +91-20-5678901 \n>-----------------------------------------------------------\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n>\n\n\n-- \n*Cybertec Geschwinde u Schoenig*\nLudo-Hartmannplatz 1/14, A-1160 Vienna, Austria\nTel: +43/1/913 68 09; +43/664/233 90 75\nwww.postgresql.at <http://www.postgresql.at>, cluster.postgresql.at \n<http://cluster.postgresql.at>, www.cybertec.at \n<http://www.cybertec.at>, kernel.cybertec.at <http://kernel.cybertec.at>\n\n\n", "msg_date": "Mon, 07 Oct 2002 12:01:32 +0200", "msg_from": "=?ISO-8859-1?Q?Hans-J=FCrgen_Sch=F6nig?= <hs@cybertec.at>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance" }, { "msg_contents": "On Sun, 2002-10-06 at 22:20, Tom Lane wrote:\n> Curt Sampson <cjs@cynic.net> writes:\n> > ... 
Avoiding cross-posting would be nice, since I am getting lots of\n> > duplicate messages these days.\n> \n> Cross-posting is a fact of life, and in fact encouraged, on the pg\n> lists. I suggest adapting. Try sending\n> \tset all unique your-email-address\n> to the PG majordomo server; this sets you up to get only one copy\n> of each cross-posted message.\nThat doesn't seem to work any more:\n\n>>>> set all unique ler@lerctr.org\n**** The \"all\" mailing list is not supported at\n**** PostgreSQL User Support Lists.\n\nWhat do I need to send now? \n\nMarc? \n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "07 Oct 2002 06:50:59 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cross-posts (was Re: Large databases," }, { "msg_contents": "> On Sun, 2002-10-06 at 22:20, Tom Lane wrote:\n> > Curt Sampson <cjs@cynic.net> writes:\n> > > ... Avoiding cross-posting would be nice, since I am getting lots of\n> > > duplicate messages these days.\n> >\n> > Cross-posting is a fact of life, and in fact encouraged, on the pg\n> > lists. I suggest adapting. 
Try sending\n> > set all unique your-email-address\n> > to the PG majordomo server; this sets you up to get only one copy\n> > of each cross-posted message.\n> That doesn't seem to work any more:\n>\n> >>>> set all unique ler@lerctr.org\n> **** The \"all\" mailing list is not supported at\n> **** PostgreSQL User Support Lists.\n>\n> What do I need to send now?\n>\n> Marc?\n\nit is:\nset ALL unique your-email\n\nif you also don't want to get emails that have already been cc'd to you, you\ncan use:\n\nset ALL eliminatecc your-email\n\nfor a full list of set options send:\n\nhelp set\n\nto majordomo.\n\nRegards,\nMichael Paesold\n\n\n", "msg_date": "Mon, 7 Oct 2002 14:01:25 +0200", "msg_from": "\"Michael Paesold\" <mpaesold@gmx.at>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cross-posts (was Re: Large databases," }, { "msg_contents": "On Mon, 2002-10-07 at 07:01, Michael Paesold wrote:\n> > On Sun, 2002-10-06 at 22:20, Tom Lane wrote:\n> > > Curt Sampson <cjs@cynic.net> writes:\n> > > > ... Avoiding cross-posting would be nice, since I am getting lots of\n> > > > duplicate messages these days.\n> > >\n> > > Cross-posting is a fact of life, and in fact encouraged, on the pg\n> > > lists. I suggest adapting. Try sending\n> > > set all unique your-email-address\n> > > to the PG majordomo server; this sets you up to get only one copy\n> > > of each cross-posted message.\n> > That doesn't seem to work any more:\n> >\n> > >>>> set all unique ler@lerctr.org\n> > **** The \"all\" mailing list is not supported at\n> > **** PostgreSQL User Support Lists.\n> >\n> > What do I need to send now?\n> >\n> > Marc?\n> \n> it is:\n> set ALL unique your-email\n> \n> if you also don't want to get emails that have already been cc'd to you, you\n> can use:\n> \n> set ALL eliminatecc your-email\n> \n> for a full list of set options send:\n> \n> help set\n> \n> to majordomo.\nThanks. That worked great. (I use Mailman, and didn't realize the ALL\nneeded to be capitalized. 
\n\nLER\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "07 Oct 2002 07:04:33 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] cross-posts (was Re: Large databases," }, { "msg_contents": "On Mon, 07 Oct 2002 15:07:29 +0530, \"Shridhar Daithankar\"\n<shridhar_daithankar@persistent.co.in> wrote:\n>Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql. All \n>numbers include indexes. This is really going to be a problem when things are \n>deployed. Any idea how can it be taken down? \n\nShridhar,\n\nif i'm not mistaken, a char(n)/varchar(n) column is stored as a 32-bit\ninteger specifying the length followed by as many characters as the\nlength tells. On 32-bit Intel hardware this structure is aligned on a\n4-byte boundary.\n\nFor your row layout this gives the following sizes (look at the \"phys\nsize\" column):\n\n| Field Field Null Indexed phys mini\n| Name Type size \n|--------------------------------------------\n| type int no no 4 4\n| esn char (10) no yes 16 11\n| min char (10) no yes 16 11\n| datetime timestamp no yes 8 8\n| opc0 char (3) no no 8 4\n| opc1 char (3) no no 8 4\n| opc2 char (3) no no 8 4\n| dpc0 char (3) no no 8 4\n| dpc1 char (3) no no 8 4\n| dpc2 char (3) no no 8 4\n| npa char (3) no no 8 4\n| nxx char (3) no no 8 4\n| rest char (4) no no 8 5\n| field0 int yes no 4 4\n| field1 char (4) yes no 8 5\n| field2 int yes no 4 4\n| field3 char (4) yes no 8 5\n| field4 int yes no 4 4\n| field5 char (4) yes no 8 5\n| field6 int yes no 4 4\n| field7 char (4) yes no 8 5\n| field8 int yes no 4 4\n| field9 char (4) yes no 8 5\n| ----- -----\n| 176 116\n\nIgnoring nulls for now, you have to add 32 bytes for a v7.2 heap tuple\nheader and 4 bytes for ItemIdData per tuple, ending up with 212 bytes\nper tuple or ca. 85 GB heap space for 432000000 tuples. 
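Manfred's size estimate, spelled out as arithmetic (the figures are the ones derived above; any SQL interpreter will do the math):

```sql
SELECT 176 + 32 + 4 AS bytes_per_tuple,        -- data + v7.2 header + ItemIdData
       (176 + 32 + 4)::numeric * 432000000
         / (1024 * 1024 * 1024) AS heap_gb;    -- 212 bytes/tuple, ~85.3 GB heap
```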
Depending on\nfill factor similar calculations give some 30 GB for your index.\n\nNow if we had a datatype with only one byte for the string length,\nchar columns could be byte aligned and we'd have column sizes given\nunder \"mini\" in the table above.  The columns would have to be\nrearranged according to alignment requirements.\n\nThus 60 bytes per heap tuple and 8 bytes per index tuple could be\nsaved, resulting in a database size of ~ 85 GB (index included).  And\nI bet this would be significantly faster, too.\n\nHackers, do you think it's possible to hack together a quick and dirty\npatch, so that string length is represented by one byte?  IOW can a\ndatabase be built that doesn't contain any char/varchar/text value\nlonger than 255 characters in the catalog?\n\nIf I'm not told that this is impossible, I'd give it a try.  Shridhar,\nif such a patch can be made available, would you be willing to test\nit?\n\nWhat can you do right now?  Try using v7.3 beta and creating your\ntable WITHOUT OIDS.  This saves 8 bytes per tuple; not much, but\nbetter save 4% than nothing.\n\nServus\n Manfred\n", "msg_date": "Mon, 07 Oct 2002 16:10:26 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 7 Oct 2002 at 16:10, Manfred Koizar wrote:\n> if i'm not mistaken, a char(n)/varchar(n) column is stored as a 32-bit\n> integer specifying the length followed by as many characters as the\n> length tells. On 32-bit Intel hardware this structure is aligned on a\n> 4-byte boundary.\n\nThat shouldn't be necessary for a char field as space is always pre-allocated. \nSounds like a possible area of improvement to me, if that's the case..\n\n> Hackers, do you think it's possible to hack together a quick and dirty\n> patch, so that string length is represented by one byte?
IOW can a\n> database be built that doesn't contain any char/varchar/text value\n> longer than 255 characters in the catalog?\n\nI say if it's a char field, there should be no indicator of length as it's not \nrequired. Just store those many characters straight ahead..\n\n> \n> If I'm not told that this is impossibly, I'd give it a try. Shridhar,\n> if such a patch can be made available, would you be willing to test\n> it?\n\nSure. But the server machine is not available this week. Some other project is \nusing it. So the results won't be out unless at least a week from now.\n\n\n> What can you do right now? Try using v7.3 beta and creating your\n> table WITHOUT OIDS. This saves 8 bytes per tuple; not much, but\n> better save 4% than nothing.\n\nIIRC there was some header optimisation which saved 4 bytes. So without OIDs \nthat should save 8. Would do that as first next thing.\n\nI talked to my friend regarding postgresql surpassing mysql substantially in \nthis test. He told me that the last test where postgresql took 23000+/150 sec \nfor load/index and mysql took 18,000+/130 index, postgresql was running in \ndefault configuration. He forgot to copy postgresql.conf to data directory \nafter he modified it.\n\nThis time results are correct. Postgresql loads data faster, indexes it faster \nand queries in almost same time.. 
Way to go..\n\nRegards,\n Shridhar\n\n-----------------------------------------------------------\nShridhar Daithankar\nLIMS CPE Team Member, PSPL.\nmailto:shridhar_daithankar@persistent.co.in\nPhone:- +91-20-5678900 Extn.270\nFax :- +91-20-5678901 \n-----------------------------------------------------------\n\n", "msg_date": "Mon, 07 Oct 2002 19:48:31 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> MySQL 3.23.52 with innodb transaction support: \n\n> 4 concurrent queries \t:- 257.36 ms\n> 40 concurrent queries\t:- 35.12 ms\n\n> Postgresql 7.2.2 \n\n> 4 concurrent queries \t\t:- 257.43 ms\n> 40 concurrent \tqueries\t\t:- 41.16 ms\n\nI find this pretty fishy. The extreme similarity of the 4-client\nnumbers seems improbable, from what I know of the two databases.\nI suspect your numbers are mostly measuring some non-database-related\noverhead --- communications overhead, maybe?\n\n> Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql. All \n> numbers include indexes. This is really going to be a problem when things are\n> deployed. Any idea how can it be taken down? 
\n\n7.3 should be a little bit better because of Manfred's work on reducing\ntuple header size --- if you create your tables WITHOUT OIDS, you should\nsave 8 bytes per row compared to earlier releases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 10:30:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "On 7 Oct 2002 at 10:30, Tom Lane wrote:\n\n> \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> > MySQL 3.23.52 with innodb transaction support: \n> \n> > 4 concurrent queries \t:- 257.36 ms\n> > 40 concurrent queries\t:- 35.12 ms\n> \n> > Postgresql 7.2.2 \n> \n> > 4 concurrent queries \t\t:- 257.43 ms\n> > 40 concurrent \tqueries\t\t:- 41.16 ms\n> \n> I find this pretty fishy.  The extreme similarity of the 4-client\n> numbers seems improbable, from what I know of the two databases.\n> I suspect your numbers are mostly measuring some non-database-related\n> overhead --- communications overhead, maybe?\n\nI don't know but three numbers, postgresql/mysql/oracle all are 25x.xx ms. The \nclients were on same machine as the server. So no real area to point at..\n> \n> > Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql. All \n> > numbers include indexes. This is really going to be a problem when things are\n> > deployed. Any idea how can it be taken down?
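Tom's WITHOUT OIDS suggestion translates to something like this under 7.3 (table name invented, column list abbreviated from the row layout given earlier):

```sql
CREATE TABLE calls (
    type     integer   NOT NULL,
    esn      char(10)  NOT NULL,
    min      char(10)  NOT NULL,
    datetime timestamp NOT NULL
    -- ... remaining opc/dpc/npa/nxx/field* columns as above ...
) WITHOUT OIDS;  -- skips the per-row OID, saving 8 bytes/row vs. 7.2
```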
\n> \n> 7.3 should be a little bit better because of Manfred's work on reducing\n> tuple header size --- if you create your tables WITHOUT OIDS, you should\n> save 8 bytes per row compared to earlier releases.\n\nGot it..\n\nBye\n Shridhar\n\n--\nSweater, n.:\tA garment worn by a child when its mother feels chilly.\n\n", "msg_date": "Mon, 07 Oct 2002 20:09:55 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> I say if it's a char field, there should be no indicator of length as\n> it's not required. Just store those many characters straight ahead..\n\nYour assumption fails when considering UNICODE or other multibyte\ncharacter encodings.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Oct 2002 11:21:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "On Mon, 07 Oct 2002 19:48:31 +0530, \"Shridhar Daithankar\"\n<shridhar_daithankar@persistent.co.in> wrote:\n>I say if it's a char field, there should be no indicator of length as it's not \n>required. Just store those many characters straight ahead..\n\nThis is out of reach for a quick hack ...\n\n>Sure. But the server machine is not available this week. Some other project is \n>using it. So the results won't be out unless at least a week from now.\n\n :-)\n\n>This time results are correct. Postgresql loads data faster, indexes it faster \n>and queries in almost same time.. Way to go..\n\nGreat! 
And now let's work on making selects faster, too.\n\nServus\n Manfred\n", "msg_date": "Mon, 07 Oct 2002 17:22:41 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 7 Oct 2002 at 11:21, Tom Lane wrote:\n\n> \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> > I say if it's a char field, there should be no indicator of length as\n> > it's not required. Just store those many characters straight ahead..\n> \n> Your assumption fails when considering UNICODE or other multibyte\n> character encodings.\n\nCorrect but is it possible to have real char string when database is not \nunicode or when locale defines size of char, to be exact?\n\nIn my case varchar does not make sense as all strings are guaranteed to be of \ndefined length. While the argument you have put is correct, it's causing a disk \nspace leak, to say so.\n\nBye\n Shridhar\n\n--\nBoucher's Observation:\tHe who blows his own horn always plays the music\tseveral \noctaves higher than originally written.\n\n", "msg_date": "Tue, 08 Oct 2002 11:14:11 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "On Tue, Oct 08, 2002 at 11:14:11AM +0530, Shridhar Daithankar wrote:\n> On 7 Oct 2002 at 11:21, Tom Lane wrote:\n> \n> > \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> > > I say if it's a char field, there should be no indicator of length as\n> > > it's not required. 
Just store those many characters straight ahead..\n> > \n> > Your assumption fails when considering UNICODE or other multibyte\n> > character encodings.\n> \n> Correct but is it possible to have real char string when database is not \n> unicode or when locale defines size of char, to be exact?\n> \n> In my case varchar does not make sense as all strings are guaranteed to be of \n> defined length. While the argument you have put is correct, it's causing a disk \n> space leak, to say so.\n\nWell, maybe. But since 7.1 or so char() and varchar() simply became text\nwith some length restrictions. This was one of the reasons. It also\nsimplified a lot of code.\n-- \nMartijn van Oosterhout <kleptog@svana.org>   http://svana.org/kleptog/\n> There are 10 kinds of people in the world, those that can do binary\n> arithmetic and those that can't.\n", "msg_date": "Tue, 8 Oct 2002 17:20:47 +1000", "msg_from": "Martijn van Oosterhout <kleptog@svana.org>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance" }, { "msg_contents": "Tom Lane wrote:\n> \n> \"David Blood\" <david@matraex.com> writes:\n> > In Oracle you can Pin large objects into memory to prevent frequent\n> > reloads. Is there anyway to do this with Postgres?\n> \n> I can never understand why people think this would be a good idea.\n> If you're hitting a table frequently, it will stay in memory anyway\n> (either in Postgres shared buffers or kernel disk cache). If you're\n> not hitting it frequently enough to keep it swapped in, then whatever\n> is getting swapped in instead is probably a better candidate to be\n> occupying the space. \n\nAs I understand it, he's looking for a mechanism to prevent a single\nsequential scan on a table larger than the buffer cache from kicking out\neverything else at once.
But I agree with you that pinning other objects\nis just mucking with the symptoms instead of curing the disease.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me.                                  #\n#================================================== JanWieck@Yahoo.com #\n", "msg_date": "Tue, 08 Oct 2002 09:32:50 -0400", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Pinning a table into memory" }, { "msg_contents": "On Tue, 2002-10-08 at 02:20, Martijn van Oosterhout wrote:\n> On Tue, Oct 08, 2002 at 11:14:11AM +0530, Shridhar Daithankar wrote:\n> > On 7 Oct 2002 at 11:21, Tom Lane wrote:\n> > \n> > > \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> > > > I say if it's a char field, there should be no indicator of length as\n> > > > it's not required. Just store those many characters straight ahead..\n> > > \n> > > Your assumption fails when considering UNICODE or other multibyte\n> > > character encodings.\n> > \n> > Correct but is it possible to have real char string when database is not \n> > unicode or when locale defines size of char, to be exact?\n> > \n> > In my case varchar does not make sense as all strings are guaranteed to be of \n> > defined length.
While the argument you have put is correct, it's causing a disk \n> > space leak, to say so.\n\nNot only that, but you get INSERT, UPDATE, DELETE and SELECT performance\ngains with fixed length records, since you don't get fragmentation.\n\nFor example:\nTABLE T\nF1 INTEGER;\nF2 VARCHAR(200)\n\nINSERT INTO T VALUES (1, 'FOO BAR');\nINSERT INTO T VALUES (2, 'SNAFU');\n\nNext,\nUPDATE T SET F2 = 'WIGGLE WAGGLE WUMPERSTUMPER' WHERE F1 = 1;\n\nUnless there is a big gap on disk between the 2 inserted records, \npostgresql must then look somewhere else for space to put the new\nversion of T WHERE F1 = 1.\n\nWith fixed-length records, you know exactly where you can put the\nnew value of F2, thus minimizing IO.\n\n> Well, maybe. But since 7.1 or so char() and varchar() simply became text\n> with some length restrictions. This was one of the reasons. It also\n> simplified a lot of code.\n\nHow much simpler can you get than fixed-length records? \n\nOf course, then there are 2 code paths, 1 for fixed length, and\n1 for variable length.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "08 Oct 2002 08:50:52 -0500", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Large databases, performance" }, { "msg_contents": "Ron Johnson <ron.l.johnson@cox.net> writes:\n> Not only that, but you get INSERT, UPDATE, DELETE and SELECT performance\n> gains with fixed length records, since you don't get fragmentation.\n\nThat argument loses a lot of its force when you consider that Postgres\nuses non-overwriting storage management. 
We never do an UPDATE in-place\nanyway, and so it matters little whether the updated record is the same\nsize as the original.\n\n>> Well, maybe. But since 7.1 or so char() and varchar() simply became text\n>> with some length restrictions. This was one of the reasons. It also\n>> simplified a lot of code.\n\n> How much simpler can you get than fixed-length records? \n\nIt's not simpler: it's more complicated, because you need an additional\ninput item to figure out the size of any given column in a record.\nMaking sure that that info is available every place it's needed is one\nof the costs of supporting a feature like this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 10:38:02 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Large databases, performance " }, { "msg_contents": "On 8 Oct 2002 at 10:38, Tom Lane wrote:\n\n> Ron Johnson <ron.l.johnson@cox.net> writes:\n> It's not simpler: it's more complicated, because you need an additional\n> input item to figure out the size of any given column in a record.\n> Making sure that that info is available every place it's needed is one\n> of the costs of supporting a feature like this.\n\nI understand. Can we put this in say page header instead of tuple header. While \nall the arguments you have put are really good, the stellar redundancy \ncertainly can do with a mid-way solution.\n\nJust a thought..\n\nBye\n Shridhar\n\n--\nbit, n:\tA unit of measure applied to color. 
Twenty-four-bit color\trefers to \nexpensive $3 color as opposed to the cheaper 25\tcent, or two-bit, color that \nuse to be available a few years ago.\n\n", "msg_date": "Tue, 08 Oct 2002 20:11:47 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Large databases, performance " }, { "msg_contents": "On Tue, 2002-10-08 at 09:38, Tom Lane wrote:\n> Ron Johnson <ron.l.johnson@cox.net> writes:\n> > Not only that, but you get INSERT, UPDATE, DELETE and SELECT performance\n> > gains with fixed length records, since you don't get fragmentation.\n> \n> That argument loses a lot of its force when you consider that Postgres\n> uses non-overwriting storage management. We never do an UPDATE in-place\n> anyway, and so it matters little whether the updated record is the same\n> size as the original.\n\nMust you update any relative indexes, in order to point to the\nnew location of the record?\n\n> >> Well, maybe. But since 7.1 or so char() and varchar() simply became text\n> >> with some length restrictions. This was one of the reasons. It also\n> >> simplified a lot of code.\n> \n> > How much simpler can you get than fixed-length records? \n> \n> It's not simpler: it's more complicated, because you need an additional\n> input item to figure out the size of any given column in a record.\n\nWith fixed-length, why? From the metadata, you can compute the intra-\nrecord offsets. That's how it works with the commercial RDBMS that\nI use at work.\n\nOn that system, even variable-length records don't need record-size\nfields. Any repeating text (more that ~4 chars) is replaced with\nrun-length encoding. This includes the phantom spaces at the end\nof the field.\n\n> Making sure that that info is available every place it's needed is one\n> of the costs of supporting a feature like this.\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. 
mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "08 Oct 2002 10:16:55 -0500", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Large databases, performance" }, { "msg_contents": "Ron, Shridhar,\n\nMaybe I missed something on this thread, but can either of you give me\nan example of a real database where the PostgreSQL approach of \"all\nstrings are TEXT\" versus the more traditional CHAR implementation have\nresulted in measurable performance loss?\n\nOtherwise, this discussion is rather academic ...\n\n-Josh Berkus\n", "msg_date": "Tue, 08 Oct 2002 08:33:53 -0700", "msg_from": "\"Josh Berkus\" <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: CHAR, VARCHAR, TEXT (Was Large Databases)" }, { "msg_contents": "Ron Johnson <ron.l.johnson@cox.net> writes:\n> On Tue, 2002-10-08 at 09:38, Tom Lane wrote:\n>> That argument loses a lot of its force when you consider that Postgres\n>> uses non-overwriting storage management. We never do an UPDATE in-place\n>> anyway, and so it matters little whether the updated record is the same\n>> size as the original.\n\n> Must you update any relative indexes, in order to point to the\n> new location of the record?\n\nWe make new index entries for the new record, yes. Both the old and new\nrecords must be indexed (until one or the other is garbage-collected by\nVACUUM) so that transactions can find whichever version they are\nsupposed to be able to see according to the tuple visibility rules.\n\n>> It's not simpler: it's more complicated, because you need an additional\n>> input item to figure out the size of any given column in a record.\n\n> With fixed-length, why? 
From the metadata, you can compute the intra-\n> record offsets.\n\nSure, but you need an additional item of metadata than you otherwise\nwould (this is atttypmod, in Postgres terms). I'm not certain that the\ntypmod is available everyplace that would need to be able to figure out\nthe physical width of a column.\n\n> On that system, even variable-length records don't need record-size\n> fields. Any repeating text (more that ~4 chars) is replaced with\n> run-length encoding. This includes the phantom spaces at the end\n> of the field.\n\nInteresting that you should bring that up in the context of an argument\nfor supporting fixed-width fields ;-). Doesn't any form of data\ncompression bring you right back into variable-width land?\n\nPostgres' approach to data compression is that it's done per-field,\nand only on variable-width fields. We steal a couple of bits from the\nlength word to allow flagging of compressed and out-of-line values.\nIf we were to make CHAR(n) fixed-width then it would lose the ability\nto participate in either compression or out-of-line storage.\n\nBetween that and the multibyte-encoding issue, I think it's very\ndifficult to make a case that the general-purpose CHAR(n) type should\nbe implemented as fixed-width. If someone has a specialized application\nwhere they need a restricted fixed-width string type, it's not that\nhard to make a user-defined type that supports only a single column\nwidth (and thereby gets around the typmod issue). 
So I'm satisfied with\nsaying \"define your own type if you want this\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 11:51:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Large databases, performance " }, { "msg_contents": "On Tue, 2002-10-08 at 10:33, Josh Berkus wrote:\n> Ron, Shridhar,\n> \n> Maybe I missed something on this thread, but can either of you give me\n> an example of a real database where the PostgreSQL approach of \"all\n> strings are TEXT\" versus the more traditional CHAR implementation have\n> resulted in measurable performance loss?\n\n??????\n\n> Otherwise, this discussion is rather academic ...\n\n-- \n+------------------------------------------------------------+\n| Ron Johnson, Jr. mailto:ron.l.johnson@cox.net |\n| Jefferson, LA USA http://members.cox.net/ron.l.johnson |\n| |\n| \"they love our milk and honey, but preach about another |\n| way of living\" |\n| Merle Haggard, \"The Fighting Side Of Me\" |\n+------------------------------------------------------------+\n\n", "msg_date": "08 Oct 2002 12:42:20 -0500", "msg_from": "Ron Johnson <ron.l.johnson@cox.net>", "msg_from_op": false, "msg_subject": "Re: CHAR, VARCHAR, TEXT (Was Large Databases)" }, { "msg_contents": "\nRon,\n\n> > Maybe I missed something on this thread, but can either of you give me\n> > an example of a real database where the PostgreSQL approach of \"all\n> > strings are TEXT\" versus the more traditional CHAR implementation have\n> > resulted in measurable performance loss?\n>\n> ??????\n\nIn other words, if it ain't broke, don't fix it.\n\n-- \nJosh Berkus\njosh@agliodbs.com\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 8 Oct 2002 15:44:36 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: CHAR, VARCHAR, TEXT (Was Large Databases)" }, { "msg_contents": "\nRon,\n\n> > > > Maybe I missed something on this thread, but can either of you 
give\n> > > > me an example of a real database where the PostgreSQL approach of\n> > > > \"all strings are TEXT\" versus the more traditional CHAR\n> > > > implementation have resulted in measurable performance loss?\n> > >\n> > > ??????\n> >\n> > In other words, if it ain't broke, don't fix it.\n>\n> Well, does Really Slow Performance qualify as \"broke\"?\n\nThat's what I was asking. Can you explain where your slow performance is \nattibutable to the CHAR implementation issues? I missed that, if it was \nexplained earlier in the thread.\n\n-- \nJosh Berkus\njosh@agliodbs.com\nAglio Database Solutions\nSan Francisco\n", "msg_date": "Tue, 8 Oct 2002 16:36:40 -0700", "msg_from": "Josh Berkus <josh@agliodbs.com>", "msg_from_op": false, "msg_subject": "Re: CHAR, VARCHAR, TEXT (Was Large Databases)" }, { "msg_contents": "On Mon, 07 Oct 2002 15:07:29 +0530, \"Shridhar Daithankar\"\n<shridhar_daithankar@persistent.co.in> wrote:\n>Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql.\n\nShridhar,\n\nhere is an implementation of a set of user types: char3, char4,\nchar10. Put the attached files into a new directory contrib/fixchar,\nmake, make install, and run fixchar.sql through psql. Then create\nyour table as\n\tCREATE TABLE tbl (\n\ttype\t\tint,\n\tesn\t\tchar10,\n\tmin\t\tchar10,\n\tdatetime\ttimestamp,\n\topc0\t\tchar3,\n\t...\n\trest\t\tchar4,\n\tfield0\t\tint,\n\tfield1\t\tchar4,\n\t...\n\t)\n\nThis should save 76 bytes per heap tuple and 12 bytes per index tuple,\ngiving a database size of ~ 76 GB. I'd be very interested how this\naffects performance.\n\nCode has been tested for v7.2, it crashes on v7.3 beta 1. 
If this is\na problem, let me know.\n\nServus\n Manfred", "msg_date": "Wed, 09 Oct 2002 10:00:03 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 9 Oct 2002 at 10:00, Manfred Koizar wrote:\n\n> On Mon, 07 Oct 2002 15:07:29 +0530, \"Shridhar Daithankar\"\n> <shridhar_daithankar@persistent.co.in> wrote:\n> >Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql.\n> \n> Shridhar,\n> \n> here is an implementation of a set of user types: char3, char4,\n> char10. Put the attached files into a new directory contrib/fixchar,\n> make, make install, and run fixchar.sql through psql. Then create\n> your table as\n> \tCREATE TABLE tbl (\n> \ttype\t\tint,\n> \tesn\t\tchar10,\n> \tmin\t\tchar10,\n> \tdatetime\ttimestamp,\n> \topc0\t\tchar3,\n> \t...\n> \trest\t\tchar4,\n> \tfield0\t\tint,\n> \tfield1\t\tchar4,\n> \t...\n> \t)\n> \n> This should save 76 bytes per heap tuple and 12 bytes per index tuple,\n> giving a database size of ~ 76 GB. I'd be very interested how this\n> affects performance.\n> \n> Code has been tested for v7.2, it crashes on v7.3 beta 1. If this is\n> a problem, let me know.\n\nThank you very much for this. I would certainly give it a try. Please be \npatient as next test is scheuled on monday.\n\nBye\n Shridhar\n\n--\nlove, n.:\tWhen it's growing, you don't mind watering it with a few tears.\n\n", "msg_date": "Wed, 09 Oct 2002 13:37:13 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "On 9 Oct 2002 at 10:00, Manfred Koizar wrote:\n\n> On Mon, 07 Oct 2002 15:07:29 +0530, \"Shridhar Daithankar\"\n> <shridhar_daithankar@persistent.co.in> wrote:\n> >Only worry is database size. Postgresql is 111GB v/s 87 GB for mysql.\n> \n> Shridhar,\n> \n> here is an implementation of a set of user types: char3, char4,\n> char10. 
Put the attached files into a new directory contrib/fixchar,\n> make, make install, and run fixchar.sql through psql. Then create\n> your table as\n\nI had a quick look in things. I think it's a great learning material for pg \ninternals..;-)\n\nI have a suggestion. In README, it should be worth mentioning that, new types \ncan be added just by changin Makefile. e.g. Changing line\n\nOBJS = char3.o char4.o char10.o\n\nto\n\nOBJS = char3.o char4.o char5.o char10.o \n\nwould add the datatype char5 as well. \n\nObviously this is for those who might not take efforts to read the source. ( \nPersonally I wouldn't have, had it been part of entire postgres source dump. \nJust would have done ./configure;make;make install)\n\nThanks for the solution. It wouldn't have occurred to me in ages to create a \ntype for this. I guess that's partly because never used postgresql beyond \nselect/insert/update/delete. Anyway should have been awake..\n\nThanks once again\n\n\nBye\n Shridhar\n\n--\nBut it's real. And if it's real it can be affected ... we may not be ableto \nbreak it, but, I'll bet you credits to Navy Beans we can put a dent in it.\t\t-- \ndeSalle, \"Catspaw\", stardate 3018.2\n\n", "msg_date": "Wed, 09 Oct 2002 13:55:28 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: Large databases, performance" }, { "msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> here is an implementation of a set of user types: char3, char4,\n> char10.\n\nCoupla quick comments on these:\n\n> CREATE FUNCTION charNN_lt(charNN, charNN)\n> RETURNS boolean\n> AS '$libdir/fixchar'\n> LANGUAGE 'c';\n\n> bool\n> charNN_lt(char *a, char *b)\n> {\n> \treturn (strncmp(a, b, NN) < 0);\n> }/*charNN_lt*/\n\nThese functions are dangerous as written, because they will crash on\nnull inputs. I'd suggest marking them strict in the function\ndeclarations. 
Some attention to volatility declarations (isCachable\nor isImmutable) would be a good idea too.\n\nAlso, it'd be faster and more portable to write the functions with\nversion-1 calling conventions.\n\nUsing the Makefile to auto-create the differently sized versions is\na slick trick...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Oct 2002 09:32:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "On 9 Oct 2002 at 9:32, Tom Lane wrote:\n\n> Manfred Koizar <mkoi-pg@aon.at> writes:\n> > here is an implementation of a set of user types: char3, char4,\n> > char10.\n> \n> Coupla quick comments on these:\n> \n> > CREATE FUNCTION charNN_lt(charNN, charNN)\n> > RETURNS boolean\n> > AS '$libdir/fixchar'\n> > LANGUAGE 'c';\n> \n> > bool\n> > charNN_lt(char *a, char *b)\n> > {\n> > \treturn (strncmp(a, b, NN) < 0);\n> > }/*charNN_lt*/\n> \n> These functions are dangerous as written, because they will crash on\n> null inputs. I'd suggest marking them strict in the function\n> declarations. Some attention to volatility declarations (isCachable\n> or isImmutable) would be a good idea too.\n\nLet me add something. Using char* is bad idea. I had faced a situation recently \non HP-UX 11 that with a libc patch, isspace collapsed for char>127. Fix was to \nuse unsigned char. There are other places also where the input character is \nused as index to an array internally and can cause weird behaviour for values \n>127\n\nI will apply both the correction here. 
Will post the final stuff soon.\n\nBye\n Shridhar\n\n--\nHacker's Quicky #313:\tSour Cream -n- Onion Potato Chips\tMicrowave Egg Roll\t\nChocolate Milk\n\n", "msg_date": "Wed, 09 Oct 2002 19:11:09 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "I have a problem with the index of 1 table.\n\nI hava a table created :\n\tCREATE TABLE \"acucliart\" (\n \"cod_pto\" numeric(8,0) NOT NULL,\n \"cod_cli\" varchar(9) NOT NULL,\n \"mes\" numeric(2,0) NOT NULL,\n \"ano\" numeric(4,0) NOT NULL,\n \"int_art\" numeric(5,0) NOT NULL,\n \"cantidad\" numeric(12,2),\n \"ven_siv_to\" numeric(14,2),\n \"ven_civ_to\" numeric(14,2),\n \"tic_siv_to\" numeric(14,2),\n \"tic_civ_to\" numeric(14,2),\n \"visitas\" numeric(2,0),\n \"ult_vis\" date,\n \"ven_cos\" numeric(12,2),\n \"ven_ofe\" numeric(12,2),\n \"cos_ofe\" numeric(12,2),\n CONSTRAINT \"acucliart_pkey\"\n PRIMARY KEY (\"cod_cli\")\n);\n\nif i do this select:\n\texplain select * from acucliart where cod_cli=10000;\n\t\tpostgres use the index\n\t\tNOTICE: QUERY PLAN:\n\t\tIndex Scan using cod_cli_ukey on acucliart (cost=0.00..4.82 rows=1\nwidth=478)\n\nand this select\n\t\texplain select * from acucliart where cod_cli>10000;\n\t\tPostgres don't use the index:\n\t\tNOTICE: QUERY PLAN:\n\t\tSeq Scan on acucliart (cost=0.00..22.50 rows=333 width=478)\n\nwhy?\n\n\ntk\n\n", "msg_date": "Wed, 9 Oct 2002 18:56:41 +0200", "msg_from": "\"Jose Antonio Leo\" <jaleo8@storelandia.com>", "msg_from_op": false, "msg_subject": "problem with the Index" }, { "msg_contents": "On Wed, 9 Oct 2002, Jose Antonio Leo wrote:\n\n> I have a problem with the index of 1 table.\n>\n> I hava a table created :\n> \tCREATE TABLE \"acucliart\" (\n> \"cod_pto\" numeric(8,0) NOT NULL,\n> \"cod_cli\" varchar(9) NOT NULL,\n> \"mes\" numeric(2,0) NOT NULL,\n> \"ano\" numeric(4,0) NOT NULL,\n> \"int_art\" numeric(5,0) NOT 
NULL,\n> \"cantidad\" numeric(12,2),\n> \"ven_siv_to\" numeric(14,2),\n> \"ven_civ_to\" numeric(14,2),\n> \"tic_siv_to\" numeric(14,2),\n> \"tic_civ_to\" numeric(14,2),\n> \"visitas\" numeric(2,0),\n> \"ult_vis\" date,\n> \"ven_cos\" numeric(12,2),\n> \"ven_ofe\" numeric(12,2),\n> \"cos_ofe\" numeric(12,2),\n> CONSTRAINT \"acucliart_pkey\"\n> PRIMARY KEY (\"cod_cli\")\n> );\n>\n> if i do this select:\n> \texplain select * from acucliart where cod_cli=10000;\n> \t\tpostgres use the index\n> \t\tNOTICE: QUERY PLAN:\n> \t\tIndex Scan using cod_cli_ukey on acucliart (cost=0.00..4.82 rows=1\n> width=478)\n>\n> and this select\n> \t\texplain select * from acucliart where cod_cli>10000;\n> \t\tPostgres don't use the index:\n> \t\tNOTICE: QUERY PLAN:\n> \t\tSeq Scan on acucliart (cost=0.00..22.50 rows=333 width=478)\n>\n> why?\n\nWell, how many rows are in the table? In the first case it estimates 1\nrow will be returned, in the second 333. Index scans are not always faster\nthan sequential scans as the percentage of the table to scan becomes\nlarger. If you haven't analyzed recently, you probably should do so and\nif you want to compare, set enable_seqscan=off and try an explain there\nand see what it gives you.\n\nAlso, why are you comparing a varchar(9) column with an integer?\n\n", "msg_date": "Wed, 9 Oct 2002 10:31:12 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: [SQL] problem with the Index" }, { "msg_contents": "On Wed, 09 Oct 2002 09:32:50 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Coupla quick comments on these:\n\nMy first attempt on user types; thanks for the tips.\n\n>These functions are dangerous as written, because they will crash on\n>null inputs. 
I'd suggest marking them strict in the function\n>declarations.\n\nI was not aware of this, just wondered why bpchar routines didn't\ncrash :-) Fixed.\n\n>Some attention to volatility declarations (isCachable\n>or isImmutable) would be a good idea too.\n>Also, it'd be faster and more portable to write the functions with\n>version-1 calling conventions.\n\nDone, too. In the meantime I've found out why it crashed with 7.3:\nINSERT INTO pg_opclass is now obsolete, have to use CREATE OPERATOR\nCLASS ...\n\nServus\n Manfred\n", "msg_date": "Wed, 09 Oct 2002 20:09:03 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: [pgsql-performance] Large databases, performance " }, { "msg_contents": "On Wed, 09 Oct 2002 10:00:03 +0200, I wrote:\n>here is an implementation of a set of user types: char3, char4,\n>char10.\n\nNew version available. As I don't want to spam the list with various\nversions until I get it right eventually, you can get it from\nhttp://members.aon.at/pivot/pg/fixchar20021010.tgz if you are\ninterested.\n\nWhat's new:\n\n. README updated (per Shridhar's suggestion)\n. doesn't crash on NULL (p. Tom)\n. version-1 calling conventions (p. Tom)\n. isCachable (p. Tom)\n. works for 7.2 (as delivered) and for 7.3 (make for73)\n\nShridhar, you were concerned about signed/unsigned chars; looking at\nthe code I can not see how this is a problem. So no change in this\nregard.\n\nThanks for your comments. Have fun!\n\nServus\n Manfred\n", "msg_date": "Thu, 10 Oct 2002 15:30:31 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "contrib/fixchar (Was: Large databases, performance)" }, { "msg_contents": "On 10 Oct 2002 at 15:30, Manfred Koizar wrote:\n\n> On Wed, 09 Oct 2002 10:00:03 +0200, I wrote:\n> >here is an implementation of a set of user types: char3, char4,\n> >char10.\n> \n> New version available. 
As I don't want to spam the list with various\n> versions until I get it right eventually, you can get it from\n> http://members.aon.at/pivot/pg/fixchar20021010.tgz if you are\n> interested.\n> \n> What's new:\n> \n> . README updated (per Shridhar's suggestion)\n> . doesn't crash on NULL (p. Tom)\n> . version-1 calling conventions (p. Tom)\n> . isCachable (p. Tom)\n> . works for 7.2 (as delivered) and for 7.3 (make for73)\n> \n> Shridhar, you were concerned about signed/unsigned chars; looking at\n> the code I can not see how this is a problem. So no change in this\n> regard.\n\nWell, this is not related to postgresql exactly but to summerise the problem, \nwith libc patch PHCO_19090 or compatible upwards, on HP-UX11, isspace does not \nwork correctly if input value is >127. Can cause lot of problem for an external \napp. It works fine with unsigned char\n\nDoes not make a difference from postgrersql point of view but would break non-\nenglish locale if they want to use this fix under some situation.\n\nBut I agree, unless somebody reports it, no point fixing it and we know the fix \nanyway..\n\n\nBye\n Shridhar\n\n--\nLive long and prosper.\t\t-- Spock, \"Amok Time\", stardate 3372.7\n\n", "msg_date": "Thu, 10 Oct 2002 19:19:11 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: contrib/fixchar (Was: Large databases, performance)" }, { "msg_contents": "\n> Well, this is not related to postgresql exactly but to summerise the\n> problem, with libc patch PHCO_19090 or compatible upwards, on\n> HP-UX11, isspace does not work correctly if input value is >127.\n\no isspace() and such are defined in the standards to operate on characters\no for historic C reasons, 'char' is widened to 'int' in function calls\no it is platform dependent whether 'char' is a signed or unsigned type\n\nIf your platform has signed 'char' (as HP-UX does on PA-RISC) and you\npass a value that is negative it will be sign 
extended when converted\nto 'int', and may be outside the range of values for which isspace()\nis defined.\n\nPortable code uses 'unsigned char' when using ctype.h features, even\nthough for many platforms where 'char' is an unsigned type it's not\nnecessary for correct functioning.\n\nI don't see any isspace() or similar in the code though, so I'm not\nsure why this issue is being raised?\n\nRegards,\n\nGiles\n", "msg_date": "Sat, 12 Oct 2002 08:54:48 +1000", "msg_from": "Giles Lean <giles@nemeton.com.au>", "msg_from_op": false, "msg_subject": "Re: contrib/fixchar (Was: Large databases, performance) " }, { "msg_contents": "Giles Lean <giles@nemeton.com.au> writes:\n> Portable code uses 'unsigned char' when using ctype.h features, even\n> though for many platforms where 'char' is an unsigned type it's not\n> necessary for correct functioning.\n\nYup. Awhile back I went through the PG sources and made sure we\nexplicitly casted the arguments of ctype.h functions to \"unsigned char\"\nif they weren't already. If anyone sees a place I missed (or that\nsnuck in later) please speak up!\n\n> I don't see any isspace() or similar in the code though, so I'm not\n> sure why this issue is being raised?\n\nDitto, I saw no ctype.h usage in Manfred's code. 
It matters not whether\nyou label strcmp's argument as unsigned...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Oct 2002 00:20:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib/fixchar (Was: Large databases, performance) " }, { "msg_contents": "On 12 Oct 2002 at 8:54, Giles Lean wrote:\n\n> Portable code uses 'unsigned char' when using ctype.h features, even\n> though for many platforms where 'char' is an unsigned type it's not\n> necessary for correct functioning.\n> \n> I don't see any isspace() or similar in the code though, so I'm not\n> sure why this issue is being raised?\n\nWell, I commented on the fixchar contrib module that it should use unsigned char \nrather than just char, to be on the safer side on all platforms. Nothing much..\n\nBye\n Shridhar\n\n--\nbrokee, n:\tSomeone who buys stocks on the advice of a broker.\n\n", "msg_date": "Sat, 12 Oct 2002 12:11:02 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": true, "msg_subject": "Re: contrib/fixchar (Was: Large databases, performance) " } ]
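The strictness point raised in this thread can be illustrated outside the backend. Below is a minimal standalone sketch of the fixed-width comparison; the function names and the NULL guard are illustrative only — the real fixchar module goes through PostgreSQL's version-1 fmgr conventions (PG_FUNCTION_INFO_V1 and friends), which are omitted here so the code compiles on its own.

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define NN 10                   /* fixed width, as for the char10 type */

/* Raw comparator, as in the module: it dereferences both inputs, so it
 * crashes on NULL -- hence the advice to declare the SQL-level functions
 * STRICT, so the backend never calls them with NULL arguments. */
static bool char10_lt(const char *a, const char *b)
{
    return strncmp(a, b, NN) < 0;
}

/* What STRICT buys you, spelled out: a NULL input means "result is NULL",
 * and the comparator body is never reached. */
static bool char10_lt_checked(const char *a, const char *b, bool *isnull)
{
    if (a == NULL || b == NULL)
    {
        *isnull = true;
        return false;           /* return value is meaningless when *isnull */
    }
    *isnull = false;
    return char10_lt(a, b);
}
```

Declaring the functions STRICT moves the NULL check into the function manager, so the C body stays as simple as the raw comparator above.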
[ { "msg_contents": ">My limited reading of off_t stuff now suggests that it would be brave to \n>assume it is even a simple 64 bit number (or even 3 32 bit numbers). One \n>alternative, which I am not terribly fond of, is to have pg_dump write \n>multiple files - when we get to 1 or 2GB, we just open another file, and \n>record our file positions as a (file number, file position) pair. Low tech, \n>but at least we know it would work.\n>\n>Unless anyone knows of a documented way to get 64 bit uint/int file \n>offsets, I don't see we have much choice.\n\nHow common is fgetpos64? Linux supports it, but I don't know about other\nsystems.\n\nhttp://hpc.uky.edu/cgi-bin/man.cgi?section=all&topic=fgetpos64\n\nRegards,\n\tMario Weilguni\n", "msg_date": "Thu, 3 Oct 2002 15:18:44 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump and large files - is this a problem? " } ]
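fgetpos64 belongs to the transitional large-file ("LFS") interface and is not universally available; the more portable POSIX route is fseeko()/ftello() with a 64-bit off_t. A minimal standalone sketch (the helper names are illustrative, not pg_dump code, and it assumes a platform where _FILE_OFFSET_BITS=64 yields a 64-bit off_t):

```c
/* On 32-bit platforms, define _FILE_OFFSET_BITS=64 before any include to
 * get a 64-bit off_t; most 64-bit systems provide one unconditionally. */
#define _FILE_OFFSET_BITS 64
#define _POSIX_C_SOURCE 200809L /* make fseeko/ftello visible in strict modes */

#include <assert.h>
#include <stdio.h>
#include <sys/types.h>

/* Save and restore a file position as plain 64-bit arithmetic -- no
 * opaque fpos_t juggling, unlike the fgetpos64 route. */
static off_t remember_pos(FILE *fp)
{
    return ftello(fp);
}

static int restore_pos(FILE *fp, off_t pos)
{
    return fseeko(fp, pos, SEEK_SET);
}
```

Because off_t here is an ordinary integer type, positions can be compared and stored directly, which is what pg_dump needs for its table-of-contents offsets.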
[ { "msg_contents": "Good day,\n\nI just stumbled across this peculiarity in PL/Perl today writing a method\nto invoke Perl Regexes from a function: if a run-time error is raised in\nan otherwise good function, the function will never run correctly again\nuntil the connection to the database is reset. I poked around in the code\nand it appears that it's because when elog() raises the ERROR, it doesn't\nfirst take action to erase the system error message ($@) and consequently\nevery subsequent run has an error raised, even if it runs successfully.\n\nFor example:\n\n-- This comparison works fine.\n\ntemplate1=# SELECT perl_re_match('test', 'test');\n perl_re_match\n---------------\n t\n(1 row)\n\n-- This one dies, for obvious reasons.\n\ntemplate1=# SELECT perl_re_match('test', 't{1}+?');\nERROR: plperl: error from function: (in cleanup) Nested quantifiers\nbefore HERE mark in regex m/t{1}+ << HERE ?/ at (eval 2) line 4.\n\n-- This should work fine again, but we still have this error raised...!\n\ntemplate1=# SELECT perl_re_match('test', 'test');\nERROR: plperl: error from function: (in cleanup) Nested quantifiers\nbefore HERE mark in regex m/t{1}+ << HERE ?/ at (eval 2) line 4.\n\nI don't know if the following is the best way to solve it, but I got\naround it by modifying the error report in this part of PL/Perl to be a\nNOTICE, cleared the $@ variable, and then raised the fatal ERROR. 
A simple\nthree line patch to plperl.c follows, and is attached.\n\nsrc/pl/plperl/plperl.c:\n443c443,445\n< elog(ERROR, \"plperl: error from function: %s\", SvPV(ERRSV, PL_na));\n---\n> elog(NOTICE, \"plperl: error from function: %s\", SvPV(ERRSV, PL_na));\n> sv_setpv(perl_get_sv(\"@\",FALSE),\"\");\n> elog(ERROR, \"plperl: error was fatal.\");\n\nBest Regards,\nJw.\n-- \nJohn Worsley - lx@openvein.com\nhttp://www.openvein.com/", "msg_date": "Thu, 3 Oct 2002 14:47:35 -0700 (PDT)", "msg_from": "John Worsley <lx@openvein.com>", "msg_from_op": true, "msg_subject": "[GENERAL] Small patch for PL/Perl Misbehavior with Runtime Error\n\tReporting" }, { "msg_contents": "John Worsley <lx@openvein.com> writes:\n> I just stumbled across this peculiarity in PL/Perl today writing a method\n> to invoke Perl Regexes from a function: if a run-time error is raised in\n> an otherwise good function, the function will never run correctly again\n> until the connection to the database is reset. I poked around in the code\n> and it appears that it's because when elog() raises the ERROR, it doesn't\n> first take action to erase the system error message ($@) and consequently\n> every subsequent run has an error raised, even if it runs successfully.\n\nThat seems a little weird. Does Perl really expect people to do that\n(ie, is it a documented part of some API)? 
I wonder whether there is\nsome other action that we're supposed to take instead, but are\nmissing...\n\n> src/pl/plperl/plperl.c:\n> 443c443,445\n> < elog(ERROR, \"plperl: error from function: %s\", SvPV(ERRSV, PL_na));\n> ---\n>> elog(NOTICE, \"plperl: error from function: %s\", SvPV(ERRSV, PL_na));\n>> sv_setpv(perl_get_sv(\"@\",FALSE),\"\");\n>> elog(ERROR, \"plperl: error was fatal.\");\n\nIf this is what we'd have to do, I think a better way would be\n\n\tperlerrmsg = pstrdup(SvPV(ERRSV, PL_na));\n\tsv_setpv(perl_get_sv(\"@\",FALSE),\"\");\n\telog(ERROR, \"plperl: error from function: %s\", perlerrmsg);\n\nSplitting the ERROR into a NOTICE with the useful info and an ERROR\nwithout any isn't real good, because the NOTICE could get dropped on the\nfloor (either because of min_message_level or a client that just plain\nloses notices).\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 19:53:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Small patch for PL/Perl Misbehavior with Runtime Error\n\tReporting" }, { "msg_contents": "On Thu, 3 Oct 2002, Tom Lane wrote:\n>That seems a little weird. Does Perl really expect people to do that\n>(ie, is it a documented part of some API)? I wonder whether there is\n>some other action that we're supposed to take instead, but are\n>missing...\n\nNot that I know of: clearing out the $@ variable manually was just my way\nof getting around the problem in practice. 
I think the underlying issue\nmay be tied to the fact that it's running a function generated within a\nSafe Module, but I'm not enough of a Perl Guru to say anything more\ndecisive than that.\n\n>If this is what we'd have to do, I think a better way would be\n>\n>\tperlerrmsg = pstrdup(SvPV(ERRSV, PL_na));\n>\tsv_setpv(perl_get_sv(\"@\",FALSE),\"\");\n>\telog(ERROR, \"plperl: error from function: %s\", perlerrmsg);\n>Splitting the ERROR into a NOTICE with the useful info and an ERROR\n>without any isn't real good, because the NOTICE could get dropped on the\n>floor (either because of min_message_level or a client that just plain\n>loses notices).\n\nYeah, that's a cleaner solution. I take it anything pstrdup'd by\nPostgreSQL gets freed automatically by the backend? (I wasn't familiar\nenough with the backend to know how to ask for memory confident in the\nunderstanding that it would at some point be freed. ;)\n\nJw.\n-- \nJohn Worsley - lx@openvein.com\nhttp://www.openvein.com/\n\n", "msg_date": "Fri, 4 Oct 2002 20:41:45 -0700 (PDT)", "msg_from": "John Worsley <lx@openvein.com>", "msg_from_op": true, "msg_subject": "Re: [GENERAL] Small patch for PL/Perl Misbehavior with" }, { "msg_contents": "John Worsley <lx@openvein.com> writes:\n> Yeah, that's a cleaner solution. I take it anything pstrdup'd by\n> PostgreSQL gets freed automatically by the backend?\n\nPretty much. The only situation where it wouldn't be is if\nCurrentMemoryContext is pointing at TopMemoryContext or another\nlong-lived context --- but we are *very* chary about how much code we\nallow to run with such a setting. 
User-definable functions can safely\nassume that palloc'd space will live only long enough for them to return\nsomething to their caller in it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 23:51:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Small patch for PL/Perl Misbehavior with Runtime Error\n\tReporting" }, { "msg_contents": "\nDid we ever address this plperl issue? I see the code unchanged in CVS.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> John Worsley <lx@openvein.com> writes:\n> > I just stumbled across this peculiarity in PL/Perl today writing a method\n> > to invoke Perl Regexes from a function: if a run-time error is raised in\n> > an otherwise good function, the function will never run correctly again\n> > until the connection to the database is reset. I poked around in the code\n> > and it appears that it's because when elog() raises the ERROR, it doesn't\n> > first take action to erase the system error message ($@) and consequently\n> > every subsequent run has an error raised, even if it runs successfully.\n> \n> That seems a little weird. Does Perl really expect people to do that\n> (ie, is it a documented part of some API)? 
I wonder whether there is\n> some other action that we're supposed to take instead, but are\n> missing...\n> \n> > src/pl/plperl/plperl.c:\n> > 443c443,445\n> > < elog(ERROR, \"plperl: error from function: %s\", SvPV(ERRSV, PL_na));\n> > ---\n> >> elog(NOTICE, \"plperl: error from function: %s\", SvPV(ERRSV, PL_na));\n> >> sv_setpv(perl_get_sv(\"@\",FALSE),\"\");\n> >> elog(ERROR, \"plperl: error was fatal.\");\n> \n> If this is what we'd have to do, I think a better way would be\n> \n> \tperlerrmsg = pstrdup(SvPV(ERRSV, PL_na));\n> \tsv_setpv(perl_get_sv(\"@\",FALSE),\"\");\n> \telog(ERROR, \"plperl: error from function: %s\", perlerrmsg);\n> \n> Splitting the ERROR into a NOTICE with the useful info and an ERROR\n> without any isn't real good, because the NOTICE could get dropped on the\n> floor (either because of min_message_level or a client that just plain\n> loses notices).\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 24 May 2003 00:03:57 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Small patch for PL/Perl Misbehavior with Runtime" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Did we ever address this plperl issue? 
> I see the code unchanged in CVS.\n\nIt's fixed, although not in the way proposed in that patch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 May 2003 00:09:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] Small patch for PL/Perl Misbehavior with Runtime Error\n\tReporting" } ]
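The failure mode in this thread — a global error slot that outlives the call that set it — has a familiar C analogue. The standalone sketch below (all names are hypothetical, and this is an analogy, not the PL/Perl code) contrasts a caller that trusts the stale slot with the save-a-copy-then-clear ordering Tom suggested:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Stand-in for Perl's $@: a global error slot that the failing operation
 * sets but that nothing clears automatically. */
static char last_error[256];

static int risky_op(int should_fail)
{
    if (should_fail)
    {
        snprintf(last_error, sizeof last_error, "Nested quantifiers in regex");
        return -1;
    }
    return 0;                   /* success does NOT clear last_error */
}

/* Buggy caller: "slot is non-empty" is read as "this call failed", so one
 * failure poisons every later, perfectly good call. */
static int call_buggy(int should_fail)
{
    risky_op(should_fail);
    return last_error[0] != '\0' ? -1 : 0;
}

/* Fixed caller: copy the message out, clear the slot, then report --
 * the same order as the pstrdup / sv_setpv / elog sequence above. */
static int call_fixed(int should_fail, char *msg, size_t msglen)
{
    if (risky_op(should_fail) != 0)
    {
        snprintf(msg, msglen, "%s", last_error);
        last_error[0] = '\0';   /* later calls start clean */
        return -1;
    }
    return 0;
}
```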
[ { "msg_contents": "tom lane wrote:\n> But more globally, I think that our worst problems these days have to do\n> with planner misestimations leading to bad plans. The planner is\n> usually *capable* of generating a good plan, but all too often it picks\n> the wrong one. We need work on improving the cost modeling equations\n> to be closer to reality. If that's at all close to your sphere of\n> interest then I think it should be #1 priority --- it's both localized,\n> which I think is important for a first project, and potentially a\n> considerable win.\n\nThis seems like a very interesting problem. One of the ways that I thought\nwould be interesting and would solve the problem of trying to figure out the\nright numbers is to have certain guesses for the actual values based on\nstatistics gathered during vacuum and general running and then have the\nplanner run the \"best\" plan.\n\nThen during execution if the planner turned out to be VERY wrong about\ncertain assumptions the execution system could update the stats that led to\nthose wrong assumptions. That way the system would seek the correct values\nautomatically. We could also gather the stats that the system produces for\ncertain actual databases and then use those to make smarter initial guesses.\n\nI've found that I can never predict costs. I always end up testing\nempirically and find myself surprised at the results.\n\nWe should be able to make the executor smart enough to keep count of actual\ncosts (or a statistical approximation) without introducing any significant\noverhead.\n\ntom lane also wrote:\n> There is no \"cache flushing\". We have a shared buffer cache management\n> algorithm that's straight LRU across all buffers. There's been some\n> interest in smarter cache-replacement code; I believe Neil Conway is\n> messing around with an LRU-2 implementation right now. 
If you've got\n> better ideas we're all ears.\n\nHmmm, this is the area that I think could lead to huge performance gains.\n\nConsider a simple system with a table tbl_master that gets read by each\nprocess many times but with very infrequent inserts and that contains about\n3,000 rows. The single but heavily used index for this table is contained in\na btree with a depth of three with 20 - 8K pages in the first two levels of\nthe btree.\n\nAnother table tbl_detail with 10 indices that gets very frequent inserts.\nThere are over 300,000 rows. Some queries result in index scans over the\napproximately 5,000 8K pages in the index.\n\nThere is a 40M shared cache for this system.\n\nEvery time a query which requires the index scan runs it will blow out the\nentire cache since the scan will load more blocks than the cache holds. Only\nblocks that are accessed while the scan is going will survive. LRU is bad,\nbad, bad!\n\nLRU-2 might be better but it seems like it still won't give enough priority\nto the most frequently used blocks. I don't see how it would do better for\nthe above case.\n\nI once implemented a modified cache algorithm that was based on the clock\nalgorithm for VM page caches. VM paging is similar to databases in that\nthere is definite locality of reference and certain pages are MUCH more\nlikely to be requested.\n\nThe basic idea was to have a flag in each block that represented the access\ntime in clock intervals. Imagine a clock hand sweeping across a clock; every\naccess is like a tiny movement in the clock hand. Blocks that are not\naccessed during a sweep are candidates for removal.\n\nMy modification was to use access counts to increase the durability of the\nmore accessed blocks.
Each time a block is accessed its flag is shifted\nleft (up to a maximum number of shifts - ShiftN ) and 1 is added to it.\nEvery so many cache accesses (and synchronously when the cache is full) a\npass is made over each block, right shifting the flags (a clock sweep). This\ncan also be done one block at a time each access so the clock is directly\nlinked to the cache access rate. Any blocks with 0 are placed into a doubly\nlinked list of candidates for removal. New cache blocks are allocated from\nthe list of candidates. Accesses of blocks in the candidate list just\nremove them from the list.\n\nAn index root node page would likely be accessed frequently enough so that\nall its bits would be set, so it would take ShiftN clock sweeps to become\nan eviction candidate.\n\nThis algorithm increased the cache hit ratio from 40% to about 90% for the\ncases I tested when compared to a simple LRU mechanism. The paging ratio is\ngreatly dependent on the ratio of the actual database size to the cache\nsize.\n\nThe bottom line is that it is very important to keep blocks that are frequently\naccessed in the cache. The top levels of large btrees are accessed many\nhundreds (actually a power of the number of keys in each page) of times more\nfrequently than the leaf pages. LRU can be the worst possible algorithm for\nsomething like an index or table scan of large tables since it flushes a\nlarge number of potentially frequently accessed blocks in favor of ones that\nare very unlikely to be retrieved again.\n\ntom lane also wrote:\n> This is an interesting area. Keep in mind though that Postgres is a\n> portable DB that tries to be agnostic about what kernel and filesystem\n> it's sitting on top of --- and in any case it does not run as root, so\n> has very limited ability to affect what the kernel/filesystem do.\n> I'm not sure how much can be done without losing those portability\n> advantages.\n\nThe kinds of things I was thinking about should be very portable.
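A toy simulation of the shift-register clock sweep described above (the class name, `SHIFT_N` cap, and eviction loop are invented here for illustration; this is not PostgreSQL's buffer manager):

```python
# Toy model of the clock-sweep variant described above: each buffer keeps
# a small counter that is left-shifted and incremented on access (saturating
# at SHIFT_N bits) and right-shifted by each sweep; buffers whose counter
# reaches zero become eviction candidates.  Assumes capacity >= 1.

class ClockSweepCache:
    SHIFT_N = 4  # counter saturates at (1 << SHIFT_N) - 1

    def __init__(self, capacity):
        self.capacity = capacity
        self.flags = {}  # block id -> access counter

    def access(self, block):
        """Touch a block; returns True on a cache hit."""
        if block in self.flags:
            f = self.flags[block]
            self.flags[block] = min((f << 1) | 1, (1 << self.SHIFT_N) - 1)
            return True
        if len(self.flags) >= self.capacity:
            self._evict_one()
        self.flags[block] = 1
        return False

    def _sweep(self):
        for block in self.flags:
            self.flags[block] >>= 1  # the clock hand passes every buffer

    def _evict_one(self):
        # Sweep until at least one counter drops to zero, then evict one
        # such candidate; a hot buffer at full count survives SHIFT_N sweeps.
        while True:
            victims = [b for b, f in self.flags.items() if f == 0]
            if victims:
                del self.flags[victims[0]]
                return
            self._sweep()
```

A block touched on every request keeps a saturated counter and survives a big scan, while the scan's single-touch blocks drop to zero after one sweep, which is the behavior argued for above.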
I found\nthat simply writing the cache in order of the file system offset results in\nvery greatly improved performance since it lets the head seek in smaller\nincrements and much more smoothly, especially with modern disks. Most of the\ntime the file system will create files are large sequential bytes on the\nphysical disks in order. It might be in a few chunks but those chunks will\nbe sequential and fairly large.\n\ntom lane also wrote:\n> Well, not really all that isolated. The bottom-level index code doesn't\n> know whether you're doing INSERT or UPDATE, and would have no easy\n> access to the original tuple if it did know. The original theory about\n> this was that the planner could detect the situation where the index(es)\n> don't overlap the set of columns being changed by the UPDATE, which\n> would be nice since there'd be zero runtime overhead. Unfortunately\n> that breaks down if any BEFORE UPDATE triggers are fired that modify the\n> tuple being stored. So all in all it turns out to be a tad messy to fit\n> this in :-(. I am unconvinced that the impact would be huge anyway,\n> especially as of 7.3 which has a shortcut path for dead index entries.\n\nWell, this probably is not the right place to start then.\n\n- Curtis\n\n", "msg_date": "Thu, 3 Oct 2002 18:17:55 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Advice: Where could I be of help? " }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> Then during execution if the planner turned out to be VERY wrong about\n> certain assumptions the execution system could update the stats that led to\n> those wrong assumptions. That way the system would seek the correct values\n> automatically.\n\nThat has been suggested before, but I'm unsure how to make it work.\nThere are a lot of parameters involved in any planning decision and it's\nnot obvious which ones to tweak, or in which direction, if the plan\nturns out to be bad. 
But if you can come up with some ideas, go to\nit!\n\n> Everytime a query which requires the index scan runs it will blow out the\n> entire cache since the scan will load more blocks than the cache\n> holds.\n\nRight, that's the scenario that kills simple LRU ...\n\n> LRU-2 might be better but it seems like it still won't give enough priority\n> to the most frequently used blocks.\n\nBlocks touched more than once per query (like the upper-level index\nblocks) will survive under LRU-2. Blocks touched once per query won't.\nSeems to me that it should be a win.\n\n> My modification was to use access counts to increase the durability of the\n> more accessed blocks.\n\nYou could do it that way too, but I'm unsure whether the extra\ncomplexity will buy anything. Ultimately, I think an LRU-anything\nalgorithm is equivalent to a clock sweep for those pages that only get\ntouched once per some-long-interval: the single-touch guys get recycled\nin order of last use, which seems just like a clock sweep around the\ncache. The guys with some amount of preference get excluded from the\nonce-around sweep. To determine whether LRU-2 is better or worse than\nsome other preference algorithm requires a finer grain of analysis than\nthis. I'm not a fan of \"more complex must be better\", so I'd want to see\nwhy it's better before buying into it ...\n\n> The kinds of things I was thinking about should be very portable. I found\n> that simply writing the cache in order of the file system offset results in\n> very greatly improved performance since it lets the head seek in smaller\n> increments and much more smoothly, especially with modern disks.\n\nShouldn't the OS be responsible for scheduling those writes\nappropriately? Ye good olde elevator algorithm ought to handle this;\nand it's at least one layer closer to the actual disk layout than we\nare, thus more likely to issue the writes in a good order. 
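For reference, the LRU-2 policy mentioned above (evict the buffer whose second-most-recent access is oldest, so once-touched blocks go first) can be modeled roughly as follows; this is a toy sketch with invented names, not the actual implementation being worked on:

```python
# Rough model of LRU-2 eviction: remember the last two access times per
# buffer and evict the one whose second-most-recent access is oldest.
# Buffers touched only once have no second access (recorded as 0, i.e.
# "-infinity"), so they are preferred victims over twice-touched buffers --
# which is why upper-level index pages survive a large scan.

class LRU2Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.clock = 0
        self.hist = {}  # block -> (second_last, last) access times

    def access(self, block):
        """Touch a block; returns True on a cache hit."""
        self.clock += 1
        if block in self.hist:
            _, last = self.hist[block]
            self.hist[block] = (last, self.clock)
            return True
        if len(self.hist) >= self.capacity:
            # Victim: oldest second-to-last access; never-reaccessed
            # blocks (second_last == 0) sort first.
            victim = min(self.hist, key=lambda b: self.hist[b][0])
            del self.hist[victim]
        self.hist[block] = (0, self.clock)
        return False
```

Running a scan workload through this shows the property claimed above: blocks touched more than once per query survive, blocks touched once do not.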
It's worth\nexperimenting with, perhaps, but I'm pretty dubious about it.\n\nBTW, one other thing that Vadim kept saying we should do is alter the\ncache management strategy to retain dirty blocks in memory (ie, give\nsome amount of preference to as-yet-unwritten dirty pages compared to\nclean pages). There is no reliability cost here since the WAL will let\nus reconstruct any dirty pages if we crash before they get written; and\nthe periodic checkpoints will ensure that we eventually write a dirty\nblock and thus it will become available for recycling. This seems like\na promising line of thought that's orthogonal to the basic\nLRU-vs-whatever issue. Nobody's got round to looking at it yet though.\nI've got no idea how much preference should be given to a dirty block\n--- not infinite, probably, but some.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 18:47:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Advice: Where could I be of help? " }, { "msg_contents": "I wrote:\n\n> > My modification was to use access counts to increase the\n> durability of the\n> > more accessed blocks.\n>\n\ntom lane replies:\n> You could do it that way too, but I'm unsure whether the extra\n> complexity will buy anything. Ultimately, I think an LRU-anything\n> algorithm is equivalent to a clock sweep for those pages that only get\n> touched once per some-long-interval: the single-touch guys get recycled\n> in order of last use, which seems just like a clock sweep around the\n> cache. The guys with some amount of preference get excluded from the\n> once-around sweep. To determine whether LRU-2 is better or worse than\n> some other preference algorithm requires a finer grain of analysis than\n> this. I'm not a fan of \"more complex must be better\", so I'd want to see\n> why it's better before buying into it ...\n\nI'm definitely not a fan of \"more complex must be better either\". 
In fact,\nit's surprising how often the real performance problems are easy to fix\nand simple while many person-years are spent solving the issue everyone\n\"knows\" must be causing the performance problems only to find little gain.\n\nThe key here is empirical testing. If the cache hit ratio for LRU-2 is\nmuch better then there may be no need here. OTOH, it took less than\n30 lines or so of code to do what I described, so I don't consider\nit too, too \"more complex\" :=} We should run a test which includes\nrunning indexes (or is indices the PostgreSQL convention?) that are three\nor more times the size of the cache to see how well LRU-2 works. Is there\nany cache performance reporting built into pgsql?\n\ntom lane wrote:\n> Shouldn't the OS be responsible for scheduling those writes\n> appropriately? Ye good olde elevator algorithm ought to handle this;\n> and it's at least one layer closer to the actual disk layout than we\n> are, thus more likely to issue the writes in a good order. It's worth\n> experimenting with, perhaps, but I'm pretty dubious about it.\n\nI wasn't proposing anything other than changing the order of the writes,\nnot actually ensuring that they get written that way at the level you\ndescribe above. This will help a lot on brain-dead file systems that\ncan't do this ordering and probably also in cases where the number\nof blocks in the cache is very large.\n\nOn a related note, while looking at the code, it seems to me that we\nare writing out the buffer cache synchronously, so there won't be\nany possibility of the file system reordering anyway. This appears to be\na huge performance problem. I've read claims in the archives\nthat the buffers are written asynchronously but my read of the\ncode says otherwise. Can someone point out my error?\n\nI only see calls that ultimately call FileWrite or write(2) which will\nblock without an O_NONBLOCK open.
I thought one of the main reasons\nfor having a WAL is so that you can write out the buffers asynchronously.\n\nWhat am I missing?\n\nI wrote:\n> > Then during execution if the planner turned out to be VERY wrong about\n> > certain assumptions the execution system could update the stats\n> that led to\n> > those wrong assumptions. That way the system would seek the\n> correct values\n> > automatically.\n\ntom lane replied:\n> That has been suggested before, but I'm unsure how to make it work.\n> There are a lot of parameters involved in any planning decision and it's\n> not obvious which ones to tweak, or in which direction, if the plan\n> turns out to be bad. But if you can come up with some ideas, go to\n> it!\n\nI'll have to look at the current planner before I can suggest\nanything concrete.\n\n- Curtis\n\n", "msg_date": "Fri, 4 Oct 2002 01:26:36 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Advice: Where could I be of help? " } ]
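The write-ordering idea raised in the thread above — issue dirty-buffer writes sorted by file offset so the disk head sweeps in one direction — is easy to sketch. The `pwrite`-style positional writes, block size, and function name below are illustrative, not PostgreSQL's actual storage-manager code:

```python
# Illustrative sketch of flushing a set of dirty buffers in ascending
# file-offset order, then syncing once for the whole batch.
import os
import tempfile

BLOCK_SIZE = 8192  # 8K blocks, as in the discussion above

def flush_dirty(fd, dirty):
    """dirty maps block number -> bytes payload; write in offset order."""
    for blockno in sorted(dirty):  # ascending offsets -> smoother seeks
        os.pwrite(fd, dirty[blockno], blockno * BLOCK_SIZE)
    os.fsync(fd)  # one sync after the ordered batch

if __name__ == "__main__":
    fd, path = tempfile.mkstemp()
    flush_dirty(fd, {7: b"g" * BLOCK_SIZE, 2: b"b" * BLOCK_SIZE})
    assert os.pread(fd, BLOCK_SIZE, 2 * BLOCK_SIZE) == b"b" * BLOCK_SIZE
    os.close(fd)
    os.unlink(path)
```

Whether this beats letting the kernel's elevator algorithm do the reordering is exactly the open question in the exchange above; the sketch only shows what "changing the order of the writes" means.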
[ { "msg_contents": "I've been looking at the TODO lists and caching issues and think there may\nbe a way to greatly improve the performance of the WAL.\n\nI've made the following assumptions based on my reading in the manual and\nthe WAL archives since about November 2000:\n\n1) WAL is currently fsync'd before commit succeeds. This is done to ensure\nthat the D in ACID is satisfied.\n2) The wait on fsync is the biggest time cost for inserts or updates.\n3) fsync itself probably increases contention for file i/o on the same file\nsince some OS file system cache structures must be locked as part of fsync.\nDepending on the file system this could be a significant choke on total i/o\nthroughput.\n\nThe issue is that there must be a definite record in durable storage for the\nlog before one can be certain that a transaction has succeeded.\n\nI'm not familiar with the exact WAL implementation in PostgreSQL but am\nfamiliar with others including ARIES II; however, it seems that it comes\ndown to making sure that the write to the WAL log has been positively\nwritten to disk.\n\nSo, why don't we use files opened with O_DSYNC | O_APPEND for the WAL log\nand then use aio_write for all log writes? A transaction would simply do all\nthe log writing using aio_write and block until the last log aio request\nhas completed using aio_waitcomplete. The call to aio_waitcomplete won't\nreturn until the log record has been written to the disk.
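A minimal illustration of the durable-append half of this proposal (synchronous here rather than AIO, since Python's standard library has no `aio_write` equivalent; the fallback to `O_SYNC` is an assumption for platforms that don't expose `O_DSYNC`):

```python
# Sketch of the O_DSYNC | O_APPEND idea above: O_APPEND keeps records in
# arrival order, and O_DSYNC makes each write() return only once the data
# has reached stable storage.  Illustrative only -- the real proposal
# pairs this open mode with POSIX aio_write in C.
import os
import tempfile

# O_DSYNC is not exposed everywhere; fall back to the stronger O_SYNC.
O_DURABLE = getattr(os, "O_DSYNC", getattr(os, "O_SYNC", 0))

def open_wal(path):
    return os.open(path,
                   os.O_WRONLY | os.O_CREAT | os.O_APPEND | O_DURABLE,
                   0o600)

def append_record(fd, record):
    os.write(fd, record)  # with O_DSYNC, returns only after the data is durable

if __name__ == "__main__":
    tmp_fd, path = tempfile.mkstemp()
    os.close(tmp_fd)
    fd = open_wal(path)
    append_record(fd, b"BEGIN;")
    append_record(fd, b"COMMIT;")
    os.close(fd)
    with open(path, "rb") as f:
        assert f.read() == b"BEGIN;COMMIT;"  # appends kept arrival order
    os.unlink(path)
```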
Opening with\nO_DSYNC ensures that when i/o completes the write has been written to the\ndisk, and aio_write with O_APPEND opened files ensures that writes append in\nthe order they are received, hence when the aio_write for the last log entry\nfor a transaction completes, the transaction can be sure that its log\nrecords are in durable storage (IDE problems aside).\n\nIt seems to me that this would:\n\n1) Preserve the required D semantics.\n2) Allow transactions to complete and do work while other threads are\nwaiting on the completion of the log write.\n3) Obviate the need for commit_delay, since there is no blocking and the\nfile system and the disk controller can put multiple writes to the log\ntogether as the drive is waiting for the end of the log file to come under\none of the heads.\n\nHere are the relevant TODO's:\n\n Delay fsync() when other backends are about to commit too [fsync]\n Determine optimal commit_delay value\n\n Determine optimal fdatasync/fsync, O_SYNC/O_DSYNC options\n Allow multiple blocks to be written to WAL with one write()\n\n\nAm I missing something?\n\nCurtis Faith\nPrincipal\nGalt Capital, LLP\n\n------------------------------------------------------------------\nGalt Capital http://www.galtcapital.com\n12 Wimmelskafts Gade\nPost Office Box 7549 voice: 340.776.0144\nCharlotte Amalie, St. Thomas fax: 340.776.0244\nUnited States Virgin Islands 00801 cell: 340.643.5368\n\n", "msg_date": "Thu, 3 Oct 2002 18:26:02 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Potential Large Performance Gain in WAL synching" }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> So, why don't we use files opened with O_DSYNC | O_APPEND for the WAL log\n> and then use aio_write for all log writes?\n\nWe already offer an O_DSYNC option. 
It's not obvious to me what\naio_write brings to the table (aside from loss of portability).\nYou still have to wait for the final write to complete, no?\n\n> 2) Allow transactions to complete and do work while other threads are\n> waiting on the completion of the log write.\n\nI'm missing something. There is no useful work that a transaction can\ndo between writing its commit record and reporting completion, is there?\nIt has to wait for that record to hit disk.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Oct 2002 19:17:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "\ntom lane replies:\n> \"Curtis Faith\" <curtis@galtair.com> writes:\n> > So, why don't we use files opened with O_DSYNC | O_APPEND for \n> the WAL log\n> > and then use aio_write for all log writes?\n> \n> We already offer an O_DSYNC option. It's not obvious to me what\n> aio_write brings to the table (aside from loss of portability).\n> You still have to wait for the final write to complete, no?\n\nWell, for starters by the time the write which includes the commit\nlog entry is written, much of the writing of the log for the\ntransaction will already be on disk, or in a controller on its \nway.\n\nI don't see any O_NONBLOCK or O_NDELAY references in the sources \nso it looks like the log writes are blocking. If I read correctly,\nXLogInsert calls XLogWrite which calls write which blocks. If these\nassumptions are correct, there should be some significant gain here but I\nwon't know how much until I try to change it. This issue only affects the\nspeed of a given back-ends transaction processing capability.\n\nThe REAL issue and the one that will greatly affect total system\nthroughput is that of contention on the file locks. 
Since fsync needs to\nobtain a write lock on the file descriptor, as do the write calls which\noriginate from XLogWrite as the writes are written to the disk, other\nback-ends will block while another transaction is committing if the\nlog cache fills to the point where their XLogInsert results in an\nXLogWrite call to flush the log cache. I'd guess this means that one\nwon't gain much by adding other back-end processes past three or four\nif there are a lot of inserts or updates.\n\nThe method I propose does not result in any blocking because of writes\nother than the final commit's write and it has the very significant\nadvantage of allowing other transactions (from other back-ends) to\ncontinue until they enter commit (and blocking waiting for their final\ncommit write to complete).\n\n> > 2) Allow transactions to complete and do work while other threads are\n> > waiting on the completion of the log write.\n> \n> I'm missing something. There is no useful work that a transaction can\n> do between writing its commit record and reporting completion, is there?\n> It has to wait for that record to hit disk.\n\nThe key here is that a thread that has not committed and therefore is\nnot blocking can do work while \"other threads\" (should have said back-ends \nor processes) are waiting on their commit writes.\n\n- Curtis\n\nP.S. If I am right in my assumptions about the way the current system\nworks, I'll bet the change would speed up inserts in Shridhar's huge\ndatabase test by at least a factor of two or three, perhaps even an\norder of magnitude.
:-)\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Thursday, October 03, 2002 7:17 PM\n> To: Curtis Faith\n> Cc: Pgsql-Hackers\n> Subject: Re: [HACKERS] Potential Large Performance Gain in WAL synching \n> \n> \n> \"Curtis Faith\" <curtis@galtair.com> writes:\n> > So, why don't we use files opened with O_DSYNC | O_APPEND for \n> the WAL log\n> > and then use aio_write for all log writes?\n> \n> We already offer an O_DSYNC option. It's not obvious to me what\n> aio_write brings to the table (aside from loss of portability).\n> You still have to wait for the final write to complete, no?\n> \n> > 2) Allow transactions to complete and do work while other threads are\n> > waiting on the completion of the log write.\n> \n> I'm missing something. There is no useful work that a transaction can\n> do between writing its commit record and reporting completion, is there?\n> It has to wait for that record to hit disk.\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Fri, 4 Oct 2002 00:25:43 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "Curtis Faith wrote:\n> The method I propose does not result in any blocking because of writes\n> other than the final commit's write and it has the very significant\n> advantage of allowing other transactions (from other back-ends) to\n> continue until they enter commit (and blocking waiting for their final\n> commit write to complete).\n> \n> > > 2) Allow transactions to complete and do work while other threads are\n> > > waiting on the completion of the log write.\n> > \n> > I'm missing something. 
There is no useful work that a transaction can\n> > do between writing its commit record and reporting completion, is there?\n> > It has to wait for that record to hit disk.\n> \n> The key here is that a thread that has not committed and therefore is\n> not blocking can do work while \"other threads\" (should have said back-ends \n> or processes) are waiting on their commit writes.\n\nI may be missing something here, but other backends don't block while\none writes to WAL. Remember, we are proccess based, not thread based,\nso the write() call only blocks the one session. If you had threads,\nand you did a write() call that blocked other threads, I can see where\nyour idea would be good, and where async i/o becomes an advantage.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 00:44:22 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> The REAL issue and the one that will greatly affect total system\n> throughput is that of contention on the file locks. Since fsynch needs to\n> obtain a write lock on the file descriptor, as does the write calls which\n> originate from XLogWrite as the writes are written to the disk, other\n> back-ends will block while another transaction is committing if the\n> log cache fills to the point where their XLogInsert results in a \n> XLogWrite call to flush the log cache.\n\nBut that's exactly *why* we have a log cache: to ensure we can buffer a\nreasonable amount of log data between XLogFlush calls. 
If the above\nscenario is really causing a problem, doesn't that just mean you need\nto increase wal_buffers?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 00:50:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "Bruce Momjian wrote:\n> I may be missing something here, but other backends don't block while\n> one writes to WAL.\n\nI don't think they'll block until they get to the fsync or XLogWrite\ncall while another transaction is fsync'ing.\n\nI'm no Unix filesystem expert but I don't see how the OS can\nhandle multiple writes and fsyncs to the same file descriptors without\nblocking other processes from writing at the same time. It may be that\nthere are some clever data structures they use but I've not seen huge\npraise for most of the file systems. A well written file system could\nminimize this contention but I'll bet it's there with most of the ones\nthat PostgreSQL most commonly runs on.\n\nI'll have to write a test and see if there really is a problem.\n\n- Curtis\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Friday, October 04, 2002 12:44 AM\n> To: Curtis Faith\n> Cc: Tom Lane; Pgsql-Hackers\n> Subject: Re: [HACKERS] Potential Large Performance Gain in WAL synching\n>\n>\n> Curtis Faith wrote:\n> > The method I propose does not result in any blocking because of writes\n> > other than the final commit's write and it has the very significant\n> > advantage of allowing other transactions (from other back-ends) to\n> > continue until they enter commit (and blocking waiting for their final\n> > commit write to complete).\n> >\n> > > > 2) Allow transactions to complete and do work while other\n> threads are\n> > > > waiting on the completion of the log write.\n> > >\n> > > I'm missing something. 
There is no useful work that a transaction can\n> > > do between writing its commit record and reporting\n> completion, is there?\n> > > It has to wait for that record to hit disk.\n> >\n> > The key here is that a thread that has not committed and therefore is\n> > not blocking can do work while \"other threads\" (should have\n> said back-ends\n> > or processes) are waiting on their commit writes.\n>\n> I may be missing something here, but other backends don't block while\n> one writes to WAL. Remember, we are proccess based, not thread based,\n> so the write() call only blocks the one session. If you had threads,\n> and you did a write() call that blocked other threads, I can see where\n> your idea would be good, and where async i/o becomes an advantage.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square,\n> Pennsylvania 19073\n>\n\n", "msg_date": "Fri, 4 Oct 2002 01:40:36 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "I wrote:\n> > The REAL issue and the one that will greatly affect total system\n> > throughput is that of contention on the file locks. Since\n> fsynch needs to\n> > obtain a write lock on the file descriptor, as does the write\n> calls which\n> > originate from XLogWrite as the writes are written to the disk, other\n> > back-ends will block while another transaction is committing if the\n> > log cache fills to the point where their XLogInsert results in a\n> > XLogWrite call to flush the log cache.\n\ntom lane wrote:\n> But that's exactly *why* we have a log cache: to ensure we can buffer a\n> reasonable amount of log data between XLogFlush calls. 
If the above\n> scenario is really causing a problem, doesn't that just mean you need\n> to increase wal_buffers?\n\nWell, in cases where there are a lot of small transactions the contention\nwill not be on the XLogWrite calls from caches getting full but from\nXLogWrite calls from transaction commits which will happen very frequently.\nI think this will have a detrimental effect on very high update frequency\nperformance.\n\nSo while larger WAL caches will help in the case of cache flushing because\nof its being full I don't think it will make any difference for the\npotentially\nmore common case of transaction commits.\n\n- Curtis\n\n", "msg_date": "Fri, 4 Oct 2002 01:50:20 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "Curtis Faith wrote:\n> Bruce Momjian wrote:\n> > I may be missing something here, but other backends don't block while\n> > one writes to WAL.\n> \n> I don't think they'll block until they get to the fsync or XLogWrite\n> call while another transaction is fsync'ing.\n> \n> I'm no Unix filesystem expert but I don't see how the OS can\n> handle multiple writes and fsyncs to the same file descriptors without\n> blocking other processes from writing at the same time. It may be that\n> there are some clever data structures they use but I've not seen huge\n> praise for most of the file systems. A well written file system could\n> minimize this contention but I'll bet it's there with most of the ones\n> that PostgreSQL most commonly runs on.\n> \n> I'll have to write a test and see if there really is a problem.\n\nYes, I can see some contention, but what does aio solve?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 11:48:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "\nI wrote:\n> > I'm no Unix filesystem expert but I don't see how the OS can\n> > handle multiple writes and fsyncs to the same file descriptors without\n> > blocking other processes from writing at the same time. It may be that\n> > there are some clever data structures they use but I've not seen huge\n> > praise for most of the file systems. A well written file system could\n> > minimize this contention but I'll bet it's there with most of the ones\n> > that PostgreSQL most commonly runs on.\n> > \n> > I'll have to write a test and see if there really is a problem.\n\nBruce Momjian wrote:\n\n> Yes, I can see some contention, but what does aio solve?\n> \n\nWell, theoretically, aio lets the file system handle the writes without\nrequiring any locks being held by the processes issuing those reads. \nThe disk i/o scheduler can therefore issue the writes using spinlocks or\nsomething very fast since it controls the timing of each of the actual\nwrites. 
In some systems this is handled by the kernal and can be very\nfast.\n\nI suspect that with large RAID controllers or intelligent disk systems\nlike EMC this is even more important because they should be able to\nhandle a much higher level of concurrent i/o.\n\nNow whether or not the common file systems handle this well, I can't say,\n\nTake a look at some comments on how Oracle uses asynchronous I/O\n\nhttp://www.ixora.com.au/notes/redo_write_multiplexing.htm\nhttp://www.ixora.com.au/notes/asynchronous_io.htm\nhttp://www.ixora.com.au/notes/raw_asynchronous_io.htm\n\nIt seems that OS support for this will likely increase and that this\nissue will become more and more important as uses contemplate SMP systems\nor if threading is added to certain PostgreSQL subsystems.\n\nIt might be easier for me to implement the change I propose and then\nsee what kind of difference it makes.\n\nI wanted to run the idea past this group first. We can all postulate\nwhether or not it will work but we won't know unless we try it. My real\nissue is one of what happens in the event that it does work.\n\nI've had very good luck implementing this sort of thing for other systems\nbut I don't yet know the range of i/o requests that PostgreSQL makes.\n\nAssuming we can demonstrate no detrimental effects on system reliability\nand that the change is implemented in such a way that it can be turned\non or off easily, will a 50% or better increase in speed for updates\njustify the sort or change I am proposing. 20%? 10%?\n\n- Curtis\n", "msg_date": "Fri, 4 Oct 2002 12:22:39 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "Curtis Faith wrote:\n> > Yes, I can see some contention, but what does aio solve?\n> > \n> \n> Well, theoretically, aio lets the file system handle the writes without\n> requiring any locks being held by the processes issuing those reads. 
\n> The disk i/o scheduler can therefore issue the writes using spinlocks or\n> something very fast since it controls the timing of each of the actual\n> writes. In some systems this is handled by the kernal and can be very\n> fast.\n\nI am again confused. When we do write(), we don't have to lock\nanything, do we? (Multiple processes can write() to the same file just\nfine.) We do block the current process, but we have nothing else to do\nuntil we know it is written/fsync'ed. Does aio more easily allow the\nkernel to order those write? Is that the issue? Well, certainly the\nkernel already order the writes. Just because we write() doesn't mean\nit goes to disk. Only fsync() or the kernel do that.\n\n> \n> I suspect that with large RAID controllers or intelligent disk systems\n> like EMC this is even more important because they should be able to\n> handle a much higher level of concurrent i/o.\n> \n> Now whether or not the common file systems handle this well, I can't say,\n> \n> Take a look at some comments on how Oracle uses asynchronous I/O\n> \n> http://www.ixora.com.au/notes/redo_write_multiplexing.htm\n> http://www.ixora.com.au/notes/asynchronous_io.htm\n> http://www.ixora.com.au/notes/raw_asynchronous_io.htm\n\nYes, but Oracle is threaded, right, so, yes, they clearly could win with\nit. I read the second URL and it said we could issue separate writes\nand have them be done in an optimal order. However, we use the file\nsystem, not raw devices, so don't we already have that in the kernel\nwith fsync()?\n\n> It seems that OS support for this will likely increase and that this\n> issue will become more and more important as uses contemplate SMP systems\n> or if threading is added to certain PostgreSQL subsystems.\n\nProbably. 
Having seen the Informix 5/7 debacle, I don't want to fall\ninto the trap where we add stuff that just makes things faster on\nSMP/threaded systems when it makes our code _slower_ on single CPU\nsystems, which is exactly what Informix did in Informix 7, and we know\nhow that ended (lost customers, bought by IBM). I don't think that's\ngoing to happen to us, but I thought I would mention it.\n\n> Assuming we can demonstrate no detrimental effects on system reliability\n> and that the change is implemented in such a way that it can be turned\n> on or off easily, will a 50% or better increase in speed for updates\n> justify the sort of change I am proposing? 20%? 10%?\n\nYea, let's see what boost we get, and the size of the patch, and we can\nreview it. It is certainly worth researching.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 4 Oct 2002 13:15:30 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "\nBruce Momjian wrote:\n> I am again confused. When we do write(), we don't have to lock\n> anything, do we? (Multiple processes can write() to the same file just\n> fine.) We do block the current process, but we have nothing else to do\n> until we know it is written/fsync'ed. Does aio more easily allow the\n> kernel to order those writes? Is that the issue? Well, certainly the\n> kernel already orders the writes. Just because we write() doesn't mean\n> it goes to disk. Only fsync() or the kernel do that.\n\n\"We\" don't have to lock anything, but most file systems can't process\nfsync's\nsimultaneous with other writes, so those writes block because the file\nsystem grabs its own internal locks.
The fsync call is more\ncontentious than typical writes because its duration is usually\nlonger so it holds the locks longer over more pages and structures.\nThat is the real issue: the contention caused by fsync'ing very frequently,\nwhich blocks other writers and readers.\n\nFor the buffer manager, the blocking of readers is probably even more\nproblematic when the cache is a small percentage (say < 10% to 15%) of\nthe total database size because most leaf node accesses will result in\na read. Each of these reads will have to wait on the fsync as well. Again,\na very well written file system probably can minimize this but I've not\nseen any.\n\nFurther comment on:\n> We do block the current process, but we have nothing else to do\n> until we know it is written/fsync'ed.\n\nWriting out a bunch of calls at the end, after having consumed a lot\nof CPU cycles and then waiting, is not as efficient as writing them out\nwhile those CPU cycles are being used. We are currently wasting the\ntime it takes for a given process to write.\n\nThe thinking probably has been that this is no big deal because other\nprocesses, say B, C and D, can use the CPU cycles while process A blocks.\nThis is true UNLESS the other processes are blocking on reads or\nwrites caused by process A doing the final writes and fsync.\n\n> Yes, but Oracle is threaded, right, so, yes, they clearly could win with\n> it. I read the second URL and it said we could issue separate writes\n> and have them be done in an optimal order. However, we use the file\n> system, not raw devices, so don't we already have that in the kernel\n> with fsync()?\n\nWhether by threads or multiple processes, there is the same contention on\nthe file through multiple writers. The file system can decide to reorder\nwrites before they start but not after.
If a write comes after a\nfsync starts it will have to wait on that fsync.\n\nLikewise a given process's writes can NEVER be reordered if they are\nsubmitted synchronously, as is done in the calls to flush the log as\nwell as the dirty pages in the buffer in the current code.\n\n> Probably. Having seen the Informix 5/7 debacle, I don't want to fall\n> into the trap where we add stuff that just makes things faster on\n> SMP/threaded systems when it makes our code _slower_ on single CPU\n> systems, which is exaclty what Informix did in Informix 7, and we know\n> how that ended (lost customers, bought by IBM). I don't think that's\n> going to happen to us, but I thought I would mention it.\n\nYes, I hate \"improvements\" that make things worse for most people. Any\nchanges I'd contemplate would be simply another configuration driven\noptimization that could be turned off very easily.\n\n- Curtis\n\n", "msg_date": "Fri, 4 Oct 2002 15:48:40 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> ... most file systems can't process fsync's\n> simultaneous with other writes, so those writes block because the file\n> system grabs its own internal locks.\n\nOh? That would be a serious problem, but I've never heard that asserted\nbefore. Please provide some evidence.\n\nOn a filesystem that does have that kind of problem, can't you avoid it\njust by using O_DSYNC on the WAL files? Then there's no need to call\nfsync() at all, except during checkpoints (which actually issue sync()\nnot fsync(), anyway).\n\n> Whether by threads or multiple processes, there is the same contention on\n> the file through multiple writers. The file system can decide to reorder\n> writes before they start but not after. 
If a write comes after a\n> fsync starts it will have to wait on that fsync.\n\nAFAICS we cannot allow the filesystem to reorder writes of WAL blocks,\non safety grounds (we want to be sure we have a consistent WAL up to the\nend of what we've written). Even if we can allow some reordering when a\nsingle transaction puts out a large volume of WAL data, I fail to see\nwhere any large gain is going to come from. We're going to be issuing\nthose writes sequentially and that ought to match the disk layout about\nas well as can be hoped anyway.\n\n> Likewise a given process's writes can NEVER be reordered if they are\n> submitted synchronously, as is done in the calls to flush the log as\n> well as the dirty pages in the buffer in the current code.\n\nWe do not fsync buffer pages; in fact a transaction commit doesn't write\nbuffer pages at all. I think the above is just a misunderstanding of\nwhat's really happening. We have synchronous WAL writing, agreed, but\nwe want that AFAICS. Data block writes are asynchronous (between\ncheckpoints, anyway).\n\nThere is one thing in the current WAL code that I don't like: if the WAL\nbuffers fill up then everybody who would like to make WAL entries is\nforced to wait while some space is freed, which means a write, which is\nsynchronous if you are using O_DSYNC. It would be nice to have a\nbackground process whose only task is to issue write()s as soon as WAL\npages are filled, thus reducing the probability that foreground\nprocesses have to wait for WAL writes (when they're not committing that\nis). But this could be done portably with one more postmaster child\nprocess; I see no real need to dabble in aio_write.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 16:45:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "I wrote:\n> > ... 
most file systems can't process fsync's\n> > simultaneous with other writes, so those writes block because the file\n> > system grabs its own internal locks.\n>\n\ntom lane replies:\n> Oh? That would be a serious problem, but I've never heard that asserted\n> before. Please provide some evidence.\n\nWell I'm basing this on past empirical testing and having read some man\npages that describe fsync under this exact scenario. I'll have to write\na test to prove this one way or another. I'll also try and look into\nthe linux/BSD source for the common file systems used for PostgreSQL.\n\n> On a filesystem that does have that kind of problem, can't you avoid it\n> just by using O_DSYNC on the WAL files? Then there's no need to call\n> fsync() at all, except during checkpoints (which actually issue sync()\n> not fsync(), anyway).\n>\n\nNo, they're not exactly the same thing. Consider:\n\nProcess A\t\t\t File System\n--------- -----------\nWrites index buffer .idling...\nWrites entry to log cache .\nWrites another index buffer .\nWrites another log entry .\nWrites tuple buffer .\nWrites another log entry .\nIndex scan .\nLarge table sort .\nWrites tuple buffer .\nWrites another log entry .\nWrites .\nWrites another index buffer .\nWrites another log entry .\nWrites another index buffer .\nWrites another log entry .\nIndex scan .\nLarge table sort .\nCommit .\nFile Write Log Entry .\n.idling... Write to cache\nFile Write Log Entry .idling...\n.idling... Write to cache\nFile Write Log Entry .idling...\n.idling... Write to cache\nFile Write Log Entry .idling...\n.idling... Write to cache\nWrite Commit Log Entry .idling...\n.idling... Write to cache\nCall fsync .idling...\n.idling... 
Write all buffers to device.\n.DONE.\n\nIn this case, Process A is waiting for all the buffers to write\nat the end of the transaction.\n\nWith asynchronous I/O this becomes:\n\nProcess A\t\t\t File System\n--------- -----------\nWrites index buffer .idling...\nWrites entry to log cache Queue up write - move head to cylinder\nWrites another index buffer Write log entry to media\nWrites another log entry Immediate write to cylinder since head is\nstill there.\nWrites tuple buffer .\nWrites another log entry Queue up write - move head to cylinder\nIndex scan .busy with scan...\nLarge table sort Write log entry to media\nWrites tuple buffer .\nWrites another log entry Queue up write - move head to cylinder\nWrites .\nWrites another index buffer Write log entry to media\nWrites another log entry Queue up write - move head to cylinder\nWrites another index buffer .\nWrites another log entry Write log entry to media\nIndex scan .\nLarge table sort Write log entry to media\nCommit .\nWrite Commit Log Entry Immediate write to cylinder since head is\nstill there.\n.DONE.\n\nEffectively the real work of writing the cache is done while the CPU\nfor the process is busy doing index scans, sorts, etc. With the WAL\nlog on another device and SCSI I/O the log writing should almost always be\ndone except for the final commit write.\n\n> > Whether by threads or multiple processes, there is the same\n> contention on\n> > the file through multiple writers. The file system can decide to reorder\n> > writes before they start but not after. If a write comes after a\n> > fsync starts it will have to wait on that fsync.\n>\n> AFAICS we cannot allow the filesystem to reorder writes of WAL blocks,\n> on safety grounds (we want to be sure we have a consistent WAL up to the\n> end of what we've written). Even if we can allow some reordering when a\n> single transaction puts out a large volume of WAL data, I fail to see\n> where any large gain is going to come from. 
We're going to be issuing\n> those writes sequentially and that ought to match the disk layout about\n> as well as can be hoped anyway.\n\nMy comment was applying to reads and writes of other processes, not the\nWAL log. In my original email, recall I mentioned using the O_APPEND\nopen flag which will ensure that all log entries are done sequentially.\n\n> > Likewise a given process's writes can NEVER be reordered if they are\n> > submitted synchronously, as is done in the calls to flush the log as\n> > well as the dirty pages in the buffer in the current code.\n>\n> We do not fsync buffer pages; in fact a transaction commit doesn't write\n> buffer pages at all. I think the above is just a misunderstanding of\n> what's really happening. We have synchronous WAL writing, agreed, but\n> we want that AFAICS. Data block writes are asynchronous (between\n> checkpoints, anyway).\n\nHmm, I keep hearing that buffer block writes are asynchronous but I don't\nread that in the code at all. There are simple \"write\" calls with files\nthat are not opened with O_NOBLOCK, so they'll be done synchronously. The\ncode for this is relatively straightforward (once you get past the\nstorage manager abstraction) so I don't see what I might be missing.\n\nIt's true that data blocks are not required to be written before the\ntransaction commits, so they are in some sense asynchronous to the\ntransactions. However, they still later on block the process that\nis requesting a new block when it happens to be dirty, forcing a write\nof the block in the cache.\n\nIt looks to me like BufferAlloc will simply result in a call to\nBufferReplace > smgrblindwrt > write for md storage manager objects.\n\nThis means that a process will block while the write of dirty cache\nbuffers takes place.\n\nI'm happy to be wrong on this but I don't see any hard evidence\nof asynch file calls anywhere in the code.
Unless I am missing something\nthis is a huuuuge problem.\n\n> There is one thing in the current WAL code that I don't like: if the WAL\n> buffers fill up then everybody who would like to make WAL entries is\n> forced to wait while some space is freed, which means a write, which is\n> synchronous if you are using O_DSYNC. It would be nice to have a\n> background process whose only task is to issue write()s as soon as WAL\n> pages are filled, thus reducing the probability that foreground\n> processes have to wait for WAL writes (when they're not committing that\n> is). But this could be done portably with one more postmaster child\n> process; I see no real need to dabble in aio_write.\n\nHmm, well, another process writing the log would accomplish the same thing\nbut isn't that what a file system is? ISTM that aio_write is quite a bit\neasier and higher performance? This is especially true for those OS's which\nhave KAIO support.\n\n- Curtis\n\n", "msg_date": "Fri, 4 Oct 2002 18:09:07 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> It looks to me like BufferAlloc will simply result in a call to\n> BufferReplace > smgrblindwrt > write for md storage manager objects.\n> \n> This means that a process will block while the write of dirty cache\n> buffers takes place.\n\nI think Tom was suggesting that when a buffer is written out, the\nwrite() call only pushes the data down into the filesystem's buffer --\nwhich is free to then write the actual blocks to disk whenever it\nchooses to. In other words, the write() returns, the backend process\ncan continue with what it was doing, and at some later time the blocks\nthat we flushed from the Postgres buffer will actually be written to\ndisk. 
So in some sense of the word, that I/O is asynchronous.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "04 Oct 2002 19:03:59 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "After some research I still hold that fsync blocks, at least on\nFreeBSD. Am I missing something?\n\nHere's the evidence:\n\nCode from: /usr/src/sys/syscalls/vfs_syscalls\n\nint\nfsync(p, uap)\n struct proc *p;\n struct fsync_args /* {\n syscallarg(int) fd;\n } */ *uap;\n{\n register struct vnode *vp;\n struct file *fp;\n vm_object_t obj;\n int error;\n\n if ((error = getvnode(p->p_fd, SCARG(uap, fd), &fp)) != 0)\n return (error);\n vp = (struct vnode *)fp->f_data;\n vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, p);\n if (VOP_GETVOBJECT(vp, &obj) == 0)\n vm_object_page_clean(obj, 0, 0, 0);\n if ((error = VOP_FSYNC(vp, fp->f_cred, MNT_WAIT, p)) == 0 &&\n vp->v_mount && (vp->v_mount->mnt_flag & MNT_SOFTDEP) &&\n bioops.io_fsync)\n error = (*bioops.io_fsync)(vp);\n VOP_UNLOCK(vp, 0, p);\n return (error);\n}\n\nNotice the calls to:\n\n\tvn_lock(vp, LK_EXCLUSIVE | LK_RETRY, p);\n\t..\n\tVOP_UNLOCK(vp, 0, p);\n\nsurrounding the call to VOP_FSYNC.\n\n From the man pages for VOP_UNLOCK:\n\n\nHEADER STUFF .....\n\n\n VOP_LOCK(struct vnode *vp, int flags, struct proc *p);\n\n int\n VOP_UNLOCK(struct vnode *vp, int flags, struct proc *p);\n\n int\n VOP_ISLOCKED(struct vnode *vp, struct proc *p);\n\n int\n vn_lock(struct vnode *vp, int flags, struct proc *p);\n\n\n\nDESCRIPTION\n These calls are used to serialize access to the filesystem, such as to\n prevent two writes to the same file from happening at the same time.\n\n The arguments are:\n\n vp the vnode being locked or unlocked\n\n flags One of the lock request types:\n\n\t\t LK_SHARED\t Shared lock\n\t\t LK_EXCLUSIVE\t Exclusive lock\n\t\t LK_UPGRADE\t Shared-to-exclusive upgrade\n\t\t 
LK_EXCLUPGRADE First shared-to-exclusive upgrade\n\t\t LK_DOWNGRADE\t Exclusive-to-shared downgrade\n\t\t LK_RELEASE\t Release any type of lock\n\t\t LK_DRAIN\t Wait for all lock activity to end\n\n\t The lock type may be or'ed with these lock flags:\n\n\t\t LK_NOWAIT\t Do not sleep to wait for lock\n\t\t LK_SLEEPFAIL\t Sleep, then return failure\n\t\t LK_CANRECURSE Allow recursive exclusive lock\n\t\t LK_REENABLE\t Lock is to be reenabled after drain\n\t\t LK_NOPAUSE\t No spinloop\n\n\t The lock type may be or'ed with these control flags:\n\n\t\t LK_INTERLOCK\t Specify when the caller already has a simple\n\t\t\t\t lock (VOP_LOCK will unlock the simple lock\n\t\t\t\t after getting the lock)\n\t\t LK_RETRY\t Retry until locked\n\t\t LK_NOOBJ\t Don't create object\n\n p\t process context to use for the locks\n\n Kernel code should use vn_lock() to lock a vnode rather than calling\n VOP_LOCK() directly.\n", "msg_date": "Fri, 4 Oct 2002 19:32:49 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "fsync exlusive lock evidence WAS: Potential Large Performance Gain in\n\tWAL synching" }, { "msg_contents": "On Fri, 2002-10-04 at 18:03, Neil Conway wrote:\n> \"Curtis Faith\" <curtis@galtair.com> writes:\n> > It looks to me like BufferAlloc will simply result in a call to\n> > BufferReplace > smgrblindwrt > write for md storage manager objects.\n> > \n> > This means that a process will block while the write of dirty cache\n> > buffers takes place.\n> \n> I think Tom was suggesting that when a buffer is written out, the\n> write() call only pushes the data down into the filesystem's buffer --\n> which is free to then write the actual blocks to disk whenever it\n> chooses to. In other words, the write() returns, the backend process\n> can continue with what it was doing, and at some later time the blocks\n> that we flushed from the Postgres buffer will actually be written to\n> disk. 
So in some sense of the word, that I/O is asynchronous.\n\n\nIsn't that true only as long as there is buffer space available? When\nthere isn't buffer space available, it seems the window for blocking comes\ninto play? So I guess you could say it is optimally asynchronous and\nworst case synchronous. I think the worst case situation is one which\nhe's trying to address.\n\nAt least that's how I interpret it.\n\nGreg", "msg_date": "04 Oct 2002 19:17:56 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "\nCurtis Faith writes:\n\n> I'm no Unix filesystem expert but I don't see how the OS can handle\n> multiple writes and fsyncs to the same file descriptors without\n> blocking other processes from writing at the same time.\n\nWhy not? Other than the necessary synchronisation for attributes such\nas file size and modification times, multiple processes can readily\nwrite to different areas of the same file at the \"same\" time.\n\nfsync() may not return until after the buffers it schedules are\nwritten, but it doesn't have to block subsequent writes to different\nbuffers in the file either. (Note too Tom Lane's responses about\nwhen fsync() is used and not used.)\n\n> I'll have to write a test and see if there really is a problem.\n\nPlease do. I expect you'll find things aren't as bad as you fear.\n\nIn another posting, you write:\n\n> Hmm, I keep hearing that buffer block writes are asynchronous but I don't\n> read that in the code at all. There are simple \"write\" calls with files\n> that are not opened with O_NOBLOCK, so they'll be done synchronously.
The\n> code for this is relatively straighforward (once you get past the\n> storage manager abstraction) so I don't see what I might be missing.\n\nThere is a confusion of terminology here: the write() is synchronous\nfrom the point of the application only in that the data is copied into\nkernel buffers (or pages remapped, or whatever) before the system call\nreturns. For files opened with O_DSYNC the write() would wait for the\ndata to be written to disk. Thus O_DSYNC is \"synchronous\" I/O, but\nthere is no equivalently easy name for the regular \"flush to disk\nafter write() returns\" that the Unix kernel has done ~forever.\n\nThe asynchronous I/O that you mention (\"aio\") is a third thing,\ndifferent from both regular write() and write() with O_DSYNC. I\nunderstand that with aio the data is not even transferred to the\nkernel before the aio_write() call returns, but I've never programmed\nwith aio and am not 100% sure how it works.\n\nRegards,\n\nGiles\n\n\n", "msg_date": "Sat, 05 Oct 2002 10:49:06 +1000", "msg_from": "Giles Lean <giles@nemeton.com.au>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "Neil Conway <neilc@samurai.com> writes:\n> \"Curtis Faith\" <curtis@galtair.com> writes:\n>> It looks to me like BufferAlloc will simply result in a call to\n>> BufferReplace > smgrblindwrt > write for md storage manager objects.\n>> \n>> This means that a process will block while the write of dirty cache\n>> buffers takes place.\n\n> I think Tom was suggesting that when a buffer is written out, the\n> write() call only pushes the data down into the filesystem's buffer --\n> which is free to then write the actual blocks to disk whenever it\n> chooses to.\n\nExactly --- in all Unix systems that I know of, a write() is\nasynchronous unless one takes special pains (like opening the file\nwith O_SYNC). 
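A minimal sketch of that distinction, in Python for brevity (illustrative only, not PostgreSQL code; O_DSYNC is not defined on every platform, hence the fallback to O_SYNC):

```python
import os
import tempfile

# A plain write() returns as soon as the data has been copied into
# kernel buffers; the physical disk write happens later, at the
# kernel's discretion (or at an explicit fsync()).
fd_buf, path = tempfile.mkstemp()
os.write(fd_buf, b"buffered write\n")       # returns before the data is on disk

# With O_SYNC (or O_DSYNC, where defined) the same write() call does
# not return until the data has reached stable storage.
sync_flag = getattr(os, "O_DSYNC", os.O_SYNC)   # O_DSYNC is platform-dependent
fd_sync = os.open(path, os.O_WRONLY | os.O_APPEND | sync_flag)
os.write(fd_sync, b"synchronous write\n")   # returns only after the media write

with open(path, "rb") as f:
    data = f.read()

os.close(fd_buf)
os.close(fd_sync)
os.remove(path)
```

Both calls look identical to the caller; only the open flags decide whether the write() blocks on the physical I/O.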
Pushing the data from userspace to the kernel disk\nbuffers does not count as I/O in my mind.\n\nI am quite concerned about Curtis' worries about fsync, though.\nThere's not any fundamental reason for fsync to block other operations,\nbut that doesn't mean that it's been implemented reasonably everywhere\n:-(. We need to take a look at that.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Oct 2002 23:13:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "I resent this since it didn't seem to get to the list.\n\nAfter some research I still hold that fsync blocks, at least on\nFreeBSD. Am I missing something?\n\nHere's the evidence:\n\nCode from: /usr/src/sys/syscalls/vfs_syscalls\n\nint\nfsync(p, uap)\n struct proc *p;\n struct fsync_args /* {\n syscallarg(int) fd;\n } */ *uap;\n{\n register struct vnode *vp;\n struct file *fp;\n vm_object_t obj;\n int error;\n\n if ((error = getvnode(p->p_fd, SCARG(uap, fd), &fp)) != 0)\n return (error);\n vp = (struct vnode *)fp->f_data;\n vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, p);\n if (VOP_GETVOBJECT(vp, &obj) == 0)\n vm_object_page_clean(obj, 0, 0, 0);\n if ((error = VOP_FSYNC(vp, fp->f_cred, MNT_WAIT, p)) == 0 &&\n vp->v_mount && (vp->v_mount->mnt_flag & MNT_SOFTDEP) &&\n bioops.io_fsync)\n error = (*bioops.io_fsync)(vp);\n VOP_UNLOCK(vp, 0, p);\n return (error);\n}\n\nNotice the calls to:\n\n\tvn_lock(vp, LK_EXCLUSIVE | LK_RETRY, p);\n\t..\n\tVOP_UNLOCK(vp, 0, p);\n\nsurrounding the call to VOP_FSYNC.\n\n From the man pages for VOP_UNLOCK:\n\n\nHEADER STUFF .....\n\n\n VOP_LOCK(struct vnode *vp, int flags, struct proc *p);\n\n int\n VOP_UNLOCK(struct vnode *vp, int flags, struct proc *p);\n\n int\n VOP_ISLOCKED(struct vnode *vp, struct proc *p);\n\n int\n vn_lock(struct vnode *vp, int flags, struct proc *p);\n\n\n\nDESCRIPTION\n These calls are used to serialize access to the filesystem, such as to\n prevent 
two writes to the same file from happening at the same time.\n\n The arguments are:\n\n vp the vnode being locked or unlocked\n\n flags One of the lock request types:\n\n\t\t LK_SHARED\t Shared lock\n\t\t LK_EXCLUSIVE\t Exclusive lock\n\t\t LK_UPGRADE\t Shared-to-exclusive upgrade\n\t\t LK_EXCLUPGRADE First shared-to-exclusive upgrade\n\t\t LK_DOWNGRADE\t Exclusive-to-shared downgrade\n\t\t LK_RELEASE\t Release any type of lock\n\t\t LK_DRAIN\t Wait for all lock activity to end\n\n\t The lock type may be or'ed with these lock flags:\n\n\t\t LK_NOWAIT\t Do not sleep to wait for lock\n\t\t LK_SLEEPFAIL\t Sleep, then return failure\n\t\t LK_CANRECURSE Allow recursive exclusive lock\n\t\t LK_REENABLE\t Lock is to be reenabled after drain\n\t\t LK_NOPAUSE\t No spinloop\n\n\t The lock type may be or'ed with these control flags:\n\n\t\t LK_INTERLOCK\t Specify when the caller already has a simple\n\t\t\t\t lock (VOP_LOCK will unlock the simple lock\n\t\t\t\t after getting the lock)\n\t\t LK_RETRY\t Retry until locked\n\t\t LK_NOOBJ\t Don't create object\n\n p\t process context to use for the locks\n\n Kernel code should use vn_lock() to lock a vnode rather than calling\n VOP_LOCK() directly.\n", "msg_date": "Fri, 4 Oct 2002 23:16:52 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> After some research I still hold that fsync blocks, at least on\n> FreeBSD. Am I missing something?\n\n> Here's the evidence:\n> [ much snipped ]\n> vp = (struct vnode *)fp->f_data;\n> vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, p);\n\nHm, I take it a \"vnode\" is what's usually called an inode, ie the unique\nidentification data for a specific disk file?\n\nThis is kind of ugly in general terms but I'm not sure that it really\nhurts Postgres. In our present scheme, the only files we ever fsync()\nare WAL log files, not data files. 
And in normal operation there is\nonly one WAL writer at a time, and *no* WAL readers. So an exclusive\nkernel-level lock on a WAL file while we fsync really shouldn't create\nany problem for us. (Unless this indirectly blocks other operations\nthat I'm missing?)\n\nAs I commented before, I think we could do with an extra process to\nissue WAL writes in places where they're not in the critical path for\na foreground process. But that seems to be orthogonal to this issue.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 00:07:15 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching " }, { "msg_contents": "Tom Lane wrote:\n> \"Curtis Faith\" <curtis@galtair.com> writes:\n> > After some research I still hold that fsync blocks, at least on\n> > FreeBSD. Am I missing something?\n> \n> > Here's the evidence:\n> > [ much snipped ]\n> > vp = (struct vnode *)fp->f_data;\n> > vn_lock(vp, LK_EXCLUSIVE | LK_RETRY, p);\n> \n> Hm, I take it a \"vnode\" is what's usually called an inode, ie the unique\n> identification data for a specific disk file?\n\nYes, Virtual Inode. I think it is virtual because it is used for NFS,\nwhere the handle really isn't an inode.\n\n> This is kind of ugly in general terms but I'm not sure that it really\n> hurts Postgres. In our present scheme, the only files we ever fsync()\n> are WAL log files, not data files. And in normal operation there is\n> only one WAL writer at a time, and *no* WAL readers. So an exclusive\n> kernel-level lock on a WAL file while we fsync really shouldn't create\n> any problem for us. (Unless this indirectly blocks other operations\n> that I'm missing?)\n\nI think the small issue is:\n\n\tproc1\t\tproc2\n\twrite\n\tfsync\t\twrite\n\t\t\tfsync\n\nProc2 has to wait for the fsync, but the write is so short compared to\nthe fsync, I don't see an issue.
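A side note that supports this: fsync() operates on the file, not on individual write() calls, so a single fsync flushes every dirty buffer of the file, whichever descriptor or process produced it. A quick sketch in Python (illustrative only, not PostgreSQL code; the two descriptors stand in for two backend processes):

```python
import os
import tempfile

# fsync() flushes *all* of a file's dirty buffers, no matter which
# write() call -- or which file descriptor -- dirtied them, so one
# fsync can make several earlier writes durable at once.
fd1, path = tempfile.mkstemp()                  # stands in for proc1
fd2 = os.open(path, os.O_WRONLY | os.O_APPEND)  # stands in for proc2

os.write(fd1, b"proc1 record\n")
os.write(fd2, b"proc2 record\n")
os.fsync(fd1)        # this single fsync makes both records durable

with open(path, "rb") as f:
    data = f.read()

os.close(fd1)
os.close(fd2)
os.remove(path)
```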
Now, if someone would come up with\ncode that did only one fsync for the above case, that would be a big\nwin.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 5 Oct 2002 00:17:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "It appears the fsync problem is pervasive. Here's Linux 2.4.19's\nversion from fs/buffer.c:\n\nlock-> down(&inode->i_sem);\n ret = filemap_fdatasync(inode->i_mapping);\n err = file->f_op->fsync(file, dentry, 1);\n if (err && !ret)\n ret = err;\n err = filemap_fdatawait(inode->i_mapping);\n if (err && !ret)\n ret = err;\nunlock-> up(&inode->i_sem);\n\nBut this is probably not a big factor as you outline below because\nthe WALWriteLock is causing the same kind of contention.\n\ntom lane wrote:\n> This is kind of ugly in general terms but I'm not sure that it really\n> hurts Postgres. In our present scheme, the only files we ever fsync()\n> are WAL log files, not data files. And in normal operation there is\n> only one WAL writer at a time, and *no* WAL readers. So an exclusive\n> kernel-level lock on a WAL file while we fsync really shouldn't create\n> any problem for us. (Unless this indirectly blocks other operations\n> that I'm missing?)\n\nI hope you're right but I see some very similar contention problems in\nthe case of many small transactions because of the WALWriteLock.\n\nAssume Transaction A which writes a lot of buffers and XLog entries,\nso the Commit forces a relatively lengthy fsynch.\n\nTransactions B - E block not on the kernel lock from fsync but on\nthe WALWriteLock. 
\n\nWhen A finishes the fsync and subsequently releases the WALWriteLock\nB unblocks and gets the WALWriteLock for its fsync for the flush.\n\nC blocks on the WALWriteLock waiting to write its XLOG_XACT_COMMIT.\n\nB Releases and now C writes its XLOG_XACT_COMMIT.\n\nThere now seems to be a lot of contention on the WALWriteLock. This\nis a shame for a system that has no locking at the logical level and\ntherefore seems like it could be very, very fast and offer\nincredible concurrency.\n\n> As I commented before, I think we could do with an extra process to\n> issue WAL writes in places where they're not in the critical path for\n> a foreground process. But that seems to be orthogonal from this issue.\n \nIt's only orthogonal to the fsync-specific contention issue. We now\nhave to worry about WALWriteLock semantics causes the same contention.\nYour idea of a separate LogWriter process could very nicely solve this\nproblem and accomplish a few other things at the same time if we make\na few enhancements.\n\nBack-end servers would not issue fsync calls. They would simply block\nwaiting until the LogWriter had written their record to the disk, i.e.\nuntil the sync'd block # was greater than the block that contained the\nXLOG_XACT_COMMIT record. The LogWriter could wake up committed back-\nends after its log write returns.\n\nThe log file would be opened O_DSYNC, O_APPEND every time. The LogWriter\nwould issue writes of the optimal size when enough data was present or\nof smaller chunks if enough time had elapsed since the last write.\n\nThe nice part is that the WALWriteLock semantics could be changed to\nallow the LogWriter to write to disk while WALWriteLocks are acquired\nby back-end servers. WALWriteLocks would only be held for the brief time\nneeded to copy the entries into the log buffer. The LogWriter would\nonly need to grab a lock to determine the current end of the log\nbuffer. 
Since it would be writing blocks that occur earlier in the\ncache than the XLogInsert log writers it won't need to grab a\nWALWriteLock before writing the cache buffers.\n\nMany transactions would commit on the same fsync (now really a write\nwith O_DSYNC) and we would get optimal write throughput for the log\nsystem.\n\nThis would handle all the issues I had and it doesn't sound like a\nhuge change. In fact, it ends up being almost semantically identical \nto the aio_write suggestion I made orignally, except the\nLogWriter is doing the background writing instead of the OS and we\ndon't have to worry about aio implementations and portability.\n\n- Curtis\n\n\n", "msg_date": "Sat, 5 Oct 2002 02:11:53 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Proposed LogWriter Scheme,\n\tWAS: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "tgl@sss.pgh.pa.us (Tom Lane) writes:\n\n[snip]\n> On a filesystem that does have that kind of problem, can't you avoid it\n> just by using O_DSYNC on the WAL files? Then there's no need to call\n> fsync() at all, except during checkpoints (which actually issue sync()\n> not fsync(), anyway).\n\nThis comment on using sync() instead of fsync() makes me slightly\nworried since sync() doesn't in any way guarantee that all data is\nwritten immediately. E.g. 
on *BSD with softupdates, it doesn't even\nguarantee that data is written within some deterministic time as\nfar as I know (*).\n\nWith a quick check of the code I found\n\n/*\n *\tmdsync() -- Sync storage.\n *\n */\nint\nmdsync()\n{\n\tsync();\n\tif (IsUnderPostmaster)\n\t\tsleep(2);\n\tsync();\n\treturn SM_SUCCESS;\n}\n\n\nwhich is ugly (imho) even if sync() starts an immediate and complete\nfile system flush (which I don't think it does with softupdates).\n\nIt seems to be used only by\n\n/* ------------------------------------------------\n * FlushBufferPool\n *\n * Flush all dirty blocks in buffer pool to disk\n * at the checkpoint time\n * ------------------------------------------------\n */\nvoid\nFlushBufferPool(void)\n{\n\tBufferSync();\n\tsmgrsync(); /* calls mdsync() */\n}\n\n\nso the question that remains is what kinds of guarantees\nFlushBufferPool() really expects and needs from smgrsync() ?\n\nIf smgrsync() is called to make up for lack of fsync() calls\nin BufferSync(), I'm getting really worried :-)\n\n _\nMats Lofkvist\nmal@algonet.se\n\n\n(*) See for example\n http://groups.google.com/groups?th=bfc8a0dc5373ed6e\n", "msg_date": "05 Oct 2002 10:46:03 +0200", "msg_from": "Mats Lofkvist <mal@algonet.se>", "msg_from_op": false, "msg_subject": "Use of sync() [was Re: Potential Large Performance Gain in WAL\n\tsynching]" }, { "msg_contents": "Curtis Faith wrote:\n> Back-end servers would not issue fsync calls. They would simply block\n> waiting until the LogWriter had written their record to the disk, i.e.\n> until the sync'd block # was greater than the block that contained the\n> XLOG_XACT_COMMIT record. The LogWriter could wake up committed back-\n> ends after its log write returns.\n> \n> The log file would be opened O_DSYNC, O_APPEND every time. 
The LogWriter\n> would issue writes of the optimal size when enough data was present or\n> of smaller chunks if enough time had elapsed since the last write.\n\nSo every backend is to going to wait around until its fsync gets done by\nthe backend process? How is that a win? This is just another version\nof our GUC parameters:\n\t\n\t#commit_delay = 0 # range 0-100000, in microseconds\n\t#commit_siblings = 5 # range 1-1000\n\nwhich attempt to delay fsync if other backends are nearing commit. \nPushing things out to another process isn't a win; figuring out if\nsomeone else is coming for commit is. Remember, write() is fast, fsync\nis slow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 5 Oct 2002 07:49:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large Performance" }, { "msg_contents": "Bruce Momjian wrote:\n> So every backend is to going to wait around until its fsync gets done by\n> the backend process? How is that a win? This is just another version\n> of our GUC parameters:\n> \t\n> \t#commit_delay = 0 # range 0-100000, in microseconds\n> \t#commit_siblings = 5 # range 1-1000\n> \n> which attempt to delay fsync if other backends are nearing commit. \n> Pushing things out to another process isn't a win; figuring out if\n> someone else is coming for commit is. \n\nIt's not the same at all. My proposal make two extremely important changes\nfrom a performance perspective.\n\n1) WALWriteLocks are never held by processes for lengthy transations. Only\nfor long enough to copy the log entry into the buffer. This means real\nwork can be done by other processes while a transaction is waiting for\nit's commit to finish. 
I'm sure that blocking on XLogInsert because another\ntransaction is performing an fsync is extremely common with frequent update\nscenarios.\n\n2) The log is written using optimal write sizes which is much better than\na user-defined guess of the microseconds to delay the fsync. We should be\nable to get the bottleneck to be the maximum write throughput of the disk\nwith the modifications to Tom Lane's scheme I proposed.\n\n> Remember, write() is fast, fsync is slow.\n\nOkay, it's clear I missed the point about Unix write earlier :-)\n\nHowever, it's not just saving fsyncs that we need to worry about. It's the\nunnecessary blocking of other processes that are simply trying to\nappend some log records in the course of whatever updating, inserting they\nare doing. They may be a long way from commit.\n\nfsync being slow is the whole reason for not wanting to have exclusive\nlocks held for the duration of an fsync.\n\nOn an SMP machine this change alone would probably speed things up by\nan order of magnitude (assuming there aren't any other similar locks\ncausing the same problem).\n\n- Curtis\n", "msg_date": "Sat, 5 Oct 2002 09:01:21 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Proposed LogWriter Scheme,\n\tWAS: Potential Large PerformanceGain in WAL synching" }, { "msg_contents": "Bruce Momjian kirjutas L, 05.10.2002 kell 13:49:\n> Curtis Faith wrote:\n> > Back-end servers would not issue fsync calls. They would simply block\n> > waiting until the LogWriter had written their record to the disk, i.e.\n> > until the sync'd block # was greater than the block that contained the\n> > XLOG_XACT_COMMIT record. The LogWriter could wake up committed back-\n> > ends after its log write returns.\n> > \n> > The log file would be opened O_DSYNC, O_APPEND every time. 
The LogWriter\n> > would issue writes of the optimal size when enough data was present or\n> > of smaller chunks if enough time had elapsed since the last write.\n> \n> So every backend is to going to wait around until its fsync gets done by\n> the backend process? How is that a win? This is just another version\n> of our GUC parameters:\n> \t\n> \t#commit_delay = 0 # range 0-100000, in microseconds\n> \t#commit_siblings = 5 # range 1-1000\n> \n> which attempt to delay fsync if other backends are nearing commit. \n> Pushing things out to another process isn't a win; figuring out if\n> someone else is coming for commit is. \n\nExactly. If I understand correctly what Curtis is proposing, you don't\nhave to figure it out under his scheme - you just issue a WALWait\ncommand and the WAL writing process notifies you when your transactions\nWAL is safe storage. \n\nIf the other committer was able to get his WALWait in before the actual\nwrite took place, it will notified too, if not, it will be notified\nabout 1/166th sec. later (for 10K rpm disk) when it's write is done on\nthe next rev of disk platters.\n\nThe writer process should just issue a continuous stream of\naio_write()'s while there are any waiters and keep track which waiters\nare safe to continue - thus no guessing of who's gonna commit.\n\nIf supported by platform this should use zero-copy writes - it should be\nsafe because WAL is append-only.\n\n-----------\nHannu\n\n", "msg_date": "05 Oct 2002 15:44:12 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large" }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> Assume Transaction A which writes a lot of buffers and XLog entries,\n> so the Commit forces a relatively lengthy fsynch.\n\n> Transactions B - E block not on the kernel lock from fsync but on\n> the WALWriteLock. \n\nYou are confusing WALWriteLock with WALInsertLock. 
A\ntransaction-committing flush operation only holds the former.\nXLogInsert only needs the latter --- at least as long as it\ndoesn't need to write.\n\nThus, given adequate space in the WAL buffers, transactions B-E do not\nget blocked by someone else who is writing/syncing in order to commit.\n\nNow, as the code stands at the moment there is no event other than\ncommit or full-buffers that prompts a write; that means that we are\nlikely to run into the full-buffer case more often than is good for\nperformance. But a background writer task would fix that.\n\n> Back-end servers would not issue fsync calls. They would simply block\n> waiting until the LogWriter had written their record to the disk, i.e.\n> until the sync'd block # was greater than the block that contained the\n> XLOG_XACT_COMMIT record. The LogWriter could wake up committed back-\n> ends after its log write returns.\n\nThis will pessimize performance except in the case where WAL traffic\nis very heavy, because it means you don't commit until the block\ncontaining your commit record is filled. What if you are the only\nactive backend?\n\nMy view of this is that backends would wait for the background writer\nonly when they encounter a full-buffer situation, or indirectly when\nthey are trying to do a commit write and the background guy has the\nWALWriteLock. The latter serialization is unavoidable: in that\nscenario, the background guy is writing/flushing an earlier page of\nthe WAL log, and we *must* have that down to disk before we can declare\nour transaction committed. So any scheme that tries to eliminate the\nserialization of WAL writes will fail. I do not, however, see any\nvalue in forcing all the WAL writes to be done by a single process;\nwhich is essentially what you're saying we should do. That just adds\nextra process-switch overhead that we don't really need.\n\n> The log file would be opened O_DSYNC, O_APPEND every time.\n\nKeep in mind that we support platforms without O_DSYNC. 
I am not\nsure whether there are any that don't have O_SYNC either, but I am\nfairly sure that we measured O_SYNC to be slower than fsync()s on\nsome platforms.\n\n> The nice part is that the WALWriteLock semantics could be changed to\n> allow the LogWriter to write to disk while WALWriteLocks are acquired\n> by back-end servers.\n\nAs I said, we already have that; you are confusing WALWriteLock\nwith WALInsertLock.\n\n> Many transactions would commit on the same fsync (now really a write\n> with O_DSYNC) and we would get optimal write throughput for the log\n> system.\n\nHow are you going to avoid pessimizing the few-transactions case?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 11:15:47 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme,\n\tWAS: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> \"Curtis Faith\" <curtis@galtair.com> writes:\n\n> > The log file would be opened O_DSYNC, O_APPEND every time.\n> \n> Keep in mind that we support platforms without O_DSYNC. I am not\n> sure whether there are any that don't have O_SYNC either, but I am\n> fairly sure that we measured O_SYNC to be slower than fsync()s on\n> some platforms.\n\nAnd don't we preallocate WAL files anyway? 
So O_APPEND would be\nirrelevant?\n\n-Doug\n", "msg_date": "05 Oct 2002 11:28:07 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme,\n\tWAS: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> The writer process should just issue a continuous stream of\n> aio_write()'s while there are any waiters and keep track which waiters\n> are safe to continue - thus no guessing of who's gonna commit.\n\nThis recipe sounds like \"eat I/O bandwidth whether we need it or not\".\nIt might be optimal in the case where activity is so heavy that we\ndo actually need a WAL write on every disk revolution, but in any\nscenario where we're not maxing out the WAL disk's bandwidth, it will\nhurt performance. In particular, it would seriously degrade performance\nif the WAL file isn't on its own spindle but has to share bandwidth with\ndata file access.\n\nWhat we really want, of course, is \"write on every revolution where\nthere's something worth writing\" --- either we've filled a WAL blovk\nor there is a commit pending. But that just gets us back into the\nsame swamp of how-do-you-guess-whether-more-commits-will-arrive-soon.\nI don't see how an extra process makes that problem any easier.\n\nBTW, it would seem to me that aio_write() buys nothing over plain write()\nin terms of ability to gang writes. If we issue the write at time T\nand it completes at T+X, we really know nothing about exactly when in\nthat interval the data was read out of our WAL buffers. We cannot\nassume that commit records that were stored into the WAL buffer during\nthat interval got written to disk. The only safe assumption is that\nonly records that were in the buffer at time T are down to disk; and\nthat means that late arrivals lose. 
You can't issue aio_write\nimmediately after the previous one completes and expect that this\noptimizes performance --- you have to delay it as long as you possibly\ncan in hopes that more commit records arrive. So it comes down to being\nthe same problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 11:32:42 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large Performance " }, { "msg_contents": "Mats Lofkvist <mal@algonet.se> writes:\n> [ mdsync is ugly and not completely reliable ]\n\nYup, it is. Do you have a better solution?\n\nfsync is not the answer, since the checkpoint process has no way to know\nwhat files may have been touched since the last checkpoint ... and even\nif it could find that out, a string of retail fsync calls would kill\nperformance, cf. Curtis Faith's complaint.\n\nIn practice I am not sure there is a problem. The local man page for\nsync() says\n\n The writing, although scheduled, is not necessarily complete upon\n return from sync.\n\nNow if \"scheduled\" means \"will occur before any subsequently-commanded\nwrite occurs\" then we're fine. I don't know if that's true though ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 12:07:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of sync() [was Re: Potential Large Performance Gain in WAL\n\tsynching]" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> In practice I am not sure there is a problem. The local man page for\n> sync() says\n> \n> The writing, although scheduled, is not necessarily complete upon\n> return from sync.\n> \n> Now if \"scheduled\" means \"will occur before any subsequently-commanded\n> write occurs\" then we're fine. I don't know if that's true though ...\n\nIn my understanding, it means \"all currently dirty blocks in the file\ncache are queued to the disk driver\". 
The queued writes will\neventually complete, but not necessarily before sync() returns. I\ndon't think subsequent write()s will block, unless the system is low\non buffers and has to wait until dirty blocks are freed by the driver.\n\n-Doug\n", "msg_date": "05 Oct 2002 12:29:33 -0400", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Use of sync() [was Re: Potential Large Performance Gain in WAL\n\tsynching]" }, { "msg_contents": "> You are confusing WALWriteLock with WALInsertLock. A\n> transaction-committing flush operation only holds the former.\n> XLogInsert only needs the latter --- at least as long as it\n> doesn't need to write.\n\nWell that make things better than I thought. We still end up with\na disk write for each transaction though and I don't see how this\ncan ever get better than (Disk RPM)/ 60 transactions per second,\nsince commit fsyncs are serialized. Every fsync will have to wait\nalmost a full revolution to reach the end of the log.\n\nAs a practial matter then everyone will use commit_delay to\nimprove this.\n \n> This will pessimize performance except in the case where WAL traffic\n> is very heavy, because it means you don't commit until the block\n> containing your commit record is filled. What if you are the only\n> active backend?\n\nWe could handle this using a mechanism analogous to the current\ncommit delay. If there are more than commit_siblings other processes\nrunning then do the write automatically after commit_delay seconds.\n\nThis would make things no more pessimistic than the current\nimplementation but provide the additional benefit of allowing the\nLogWriter to write in optimal sizes if there are many transactions.\n\nThe commit_delay method won't be as good in many cases. Consider\na update scenario where a larger commit delay gives better throughput.\nA given transaction will flush after commit_delay milliseconds. 
The\ndelay is very unlikely to result in a scenario where the dirty log \nbuffers are the optimal size.\n\nAs a practical matter I think this would tend to make the writes\nlarger than they would otherwise have been and this would\nunnecessarily delay the commit on the transaction.\n\n> I do not, however, see any\n> value in forcing all the WAL writes to be done by a single process;\n> which is essentially what you're saying we should do. That just adds\n> extra process-switch overhead that we don't really need.\n\nI don't think that an fsync will ever NOT cause the process to get\nswitched out so I don't see how another process doing the write would\nresult in more overhead. The fsync'ing process will block on the\nfsync, so there will always be at least one process switch (probably\nmany) while waiting for the fsync to comlete since we are talking\nmany milliseconds for the fsync in every case.\n\n> > The log file would be opened O_DSYNC, O_APPEND every time.\n> \n> Keep in mind that we support platforms without O_DSYNC. I am not\n> sure whether there are any that don't have O_SYNC either, but I am\n> fairly sure that we measured O_SYNC to be slower than fsync()s on\n> some platforms.\n\nWell there is no reason that the logwriter couldn't be doing fsyncs\ninstead of O_DSYNC writes in those cases. I'd leave this switchable\nusing the current flags. Just change the semantics a bit.\n\n- Curtis\n", "msg_date": "Sat, 5 Oct 2002 12:33:14 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Proposed LogWriter Scheme,\n\tWAS: Potential Large Performance Gain in WAL synching" }, { "msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> In practice I am not sure there is a problem. 
The local man page for\n>> sync() says\n>> \n>> The writing, although scheduled, is not necessarily complete upon\n>> return from sync.\n>> \n>> Now if \"scheduled\" means \"will occur before any subsequently-commanded\n>> write occurs\" then we're fine. I don't know if that's true though ...\n\n> In my understanding, it means \"all currently dirty blocks in the file\n> cache are queued to the disk driver\". The queued writes will\n> eventually complete, but not necessarily before sync() returns. I\n> don't think subsequent write()s will block, unless the system is low\n> on buffers and has to wait until dirty blocks are freed by the driver.\n\nWe don't need later write()s to block. We only need them to not hit\ndisk before the sync-queued writes hit disk. So I guess the question\nboils down to what \"queued to the disk driver\" means --- has the order\nof writes been determined at that point?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 12:55:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Use of sync() [was Re: Potential Large Performance Gain in WAL\n\tsynching]" }, { "msg_contents": ">In particular, it would seriously degrade performance if the WAL file\n> isn't on its own spindle but has to share bandwidth with\n> data file access.\n\nIf the OS is stupid I could see this happening. But if there are\nbuffers and some sort of elevator algorithm the I/O won't happen\nat bad times.\n\nI agree with you though that writing for every single insert probably\ndoes not make sense. There should be some blocking of writes. The\noptimal size would have to be derived empirically.\n\n> What we really want, of course, is \"write on every revolution where\n> there's something worth writing\" --- either we've filled a WAL blovk\n> or there is a commit pending. 
But that just gets us back into the\n> same swamp of how-do-you-guess-whether-more-commits-will-arrive-soon.\n> I don't see how an extra process makes that problem any easier.\n\nThe whole point of the extra process handling all the writes is so\nthat it can write on every revolution, if there is something to\nwrite. It doesn't need to care if more commits will arrive soon.\n\n> BTW, it would seem to me that aio_write() buys nothing over plain write()\n> in terms of ability to gang writes. If we issue the write at time T\n> and it completes at T+X, we really know nothing about exactly when in\n> that interval the data was read out of our WAL buffers. We cannot\n> assume that commit records that were stored into the WAL buffer during\n> that interval got written to disk.\n\nWhy would we need to make that assumption? The only thing we'd need to\nknow is that a given write succeeded meaning that commits before that\nwrite are done.\n\nThe advantage to aio_write in this scenario is when writes cross track\nboundaries or when the head is in the wrong spot. If we write\nin reasonable blocks with aio_write the write might get to the disk\nbefore the head passes the location for the write.\n\nConsider a scenario where:\n\n Head is at file offset 10,000.\n\n Log contains blocks 12,000 - 12,500\n\n ..time passes..\n\n Head is now at 12,050\n\n Commit occurs writing block 12,501\n\nIn the aio_write case the write would already have been done for blocks \n12,000 to 12,050 and would be queued up for some additional blocks up to\npotentially 12,500. So the write for the commit could occur without an\nadditional rotation delay. We are talking 85 to 200 milliseconds\ndelay for this rotation on a single disk. 
I don't know how often this\nhappens in actual practice but it might occur as often as every other\ntime.\n\n- Curtis\n", "msg_date": "Sat, 5 Oct 2002 13:22:40 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large Performance " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Doug McNaught <doug@wireboard.com> writes:\n\n> > In my understanding, it means \"all currently dirty blocks in the file\n> > cache are queued to the disk driver\". The queued writes will\n> > eventually complete, but not necessarily before sync() returns. I\n> > don't think subsequent write()s will block, unless the system is low\n> > on buffers and has to wait until dirty blocks are freed by the driver.\n> \n> We don't need later write()s to block. We only need them to not hit\n> disk before the sync-queued writes hit disk. So I guess the question\n> boils down to what \"queued to the disk driver\" means --- has the order\n> of writes been determined at that point?\n\nIt's certainy possible that new write(s) get put into the queue\nalongside old ones--I think the Linux block layer tries to do this\nwhen it can, for one. According to the manpage, Linux used to wait\nuntil everything was written to return from sync(), though I don't\n*think* it does anymore. But that's not mandated by the specs.\n\nSo I don't think we can rely on such behavior (not reordering writes\nacross a sync()), though it will probably happen in practice a lot of\nthe time. AFAIK there isn't anything better than sync() + sleep() as\nfar as the specs go. Yes, it kinda sucks. 
;)\n\n-Doug\n", "msg_date": "05 Oct 2002 13:53:49 -0400", "msg_from": "Doug McNaught <doug@mcnaught.org>", "msg_from_op": false, "msg_subject": "Re: Use of sync() [was Re: Potential Large Performance Gain in WAL\n\tsynching]" }, { "msg_contents": "Curtis Faith wrote:\n> The advantage to aio_write in this scenario is when writes cross track\n> boundaries or when the head is in the wrong spot. If we write\n> in reasonable blocks with aio_write the write might get to the disk\n> before the head passes the location for the write.\n> \n> Consider a scenario where:\n> \n> Head is at file offset 10,000.\n> \n> Log contains blocks 12,000 - 12,500\n> \n> ..time passes..\n> \n> Head is now at 12,050\n> \n> Commit occurs writing block 12,501\n> \n> In the aio_write case the write would already have been done for blocks \n> 12,000 to 12,050 and would be queued up for some additional blocks up to\n> potentially 12,500. So the write for the commit could occur without an\n> additional rotation delay. We are talking 85 to 200 milliseconds\n> delay for this rotation on a single disk. I don't know how often this\n> happens in actual practice but it might occur as often as every other\n> time.\n\nSo, you are saying that we may get back aio confirmation quicker than if\nwe issued our own write/fsync because the OS was able to slip our flush\nto disk in as part of someone else's or a general fsync?\n\nI don't buy that because it is possible our write() gets in as part of\nsomeone else's fsync and our fsync becomes a no-op, meaning there aren't\nany dirty buffers for that file. Isn't that also possible?\n\nAlso, remember the kernel doesn't know where the platter rotation is\neither. Only the SCSI drive can reorder the requests to match this. The\nOS can group based on head location, but it doesn't know much about the\nplatter location, and it doesn't even know where the head is.\n\nAlso, does aio return info when the data is in the kernel buffers or\nwhen it is actually on the disk? 
\n\nSimply, aio allows us to do the write and get notification when it is\ncomplete. I don't see how that helps us, and I don't see any other\nadvantages to aio. To use aio, we need to find something that _can't_\nbe solved with more traditional Unix API's, and I haven't seen that yet.\n\nThis aio thing is getting out of hand. It's like we have a hammer, and\neverything looks like a nail, or a use for aio.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 5 Oct 2002 14:26:43 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large Performance" }, { "msg_contents": "On Sat, 2002-10-05 at 20:32, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > The writer process should just issue a continuous stream of\n> > aio_write()'s while there are any waiters and keep track which waiters\n> > are safe to continue - thus no guessing of who's gonna commit.\n> \n> This recipe sounds like \"eat I/O bandwidth whether we need it or not\".\n> It might be optimal in the case where activity is so heavy that we\n> do actually need a WAL write on every disk revolution, but in any\n> scenario where we're not maxing out the WAL disk's bandwidth, it will\n> hurt performance. In particular, it would seriously degrade performance\n> if the WAL file isn't on its own spindle but has to share bandwidth with\n> data file access.\n> \n> What we really want, of course, is \"write on every revolution where\n> there's something worth writing\" --- either we've filled a WAL blovk\n> or there is a commit pending. 
\n\nThat's what I meant by \"while there are any waiters\".\n\n> But that just gets us back into the\n> same swamp of how-do-you-guess-whether-more-commits-will-arrive-soon.\n> I don't see how an extra process makes that problem any easier.\n\nI still think that we could get gang writes automatically, if we just\nask for aio_write at completion of each WAL file page and keep track of\nthose that are written. We could also keep track of write position\ninside the WAL page for\n\n1. end of last write() of each process\n\n2. WAL files write position at each aio_write()\n\nThen we can safely(?) assume, that each backend wants only its own\nwrite()'s be on disk before it can assume the trx has committed. If the\nfsync()-like request comes in at time when aio_write for that processes\nlast position has committed, we can let that process continue without\neven a context switch.\n\nIn the above scenario I assume that kernel can do the right thing by\ndoing multiple aio_write requests for the same page in one sweep and not\ndoing one physical write for each aio_write.\n\n> BTW, it would seem to me that aio_write() buys nothing over plain write()\n> in terms of ability to gang writes. If we issue the write at time T\n> and it completes at T+X, we really know nothing about exactly when in\n> that interval the data was read out of our WAL buffers. \n\nYes, most likely. If we do several write's of the same pages they will\nhit physical disk at the same physical write.\n\n> We cannot\n> assume that commit records that were stored into the WAL buffer during\n> that interval got written to disk. The only safe assumption is that\n> only records that were in the buffer at time T are down to disk; and\n> that means that late arrivals lose. \n\nI assume that if each commit record issues an aio_write when all of\nthose which actually reached the disk will be notified. 
\n\nIOW the first aio_write orders the write, but all the latecomers which\narrive before actual write will also get written and notified.\n\n> You can't issue aio_write\n> immediately after the previous one completes and expect that this\n> optimizes performance --- you have to delay it as long as you possibly\n> can in hopes that more commit records arrive. \n\nI guess we have quite different cases for different hardware\nconfigurations - if we have a separate disk subsystem for WAL, we may\nwant to keep the log flowing to disk as fast as it is ready, including\nthe writing of last, partial page as often as new writes to it are done\n- as we possibly can't write more than ~ 250 times/sec (with 15K drives,\nno battery RAM) we will always have at least two context switches\nbetween writes (for 500Hz ontext switch clock), and much more if\nprocesses background themselves while waiting for small transactions to\ncommit.\n\n> So it comes down to being the same problem.\n\nOr its solution ;) as instead of the predicting we just write all data\nin log that is ready to be written. If we postpone writing, there will\nbe hickups when we suddenly discover that we need to write a whole lot\nof pages (fsync()) after idling the disk for some period.\n\n---------------\nHannu\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n", "msg_date": "05 Oct 2002 23:29:45 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large" }, { "msg_contents": "> So, you are saying that we may get back aio confirmation quicker than if\n> we issued our own write/fsync because the OS was able to slip our flush\n> to disk in as part of someone else's or a general fsync?\n> \n> I don't buy that because it is possible our write() gets in as part of\n> someone else's fsync and our fsync becomes a no-op, meaning there aren't\n> any dirty buffers for that file. 
Isn't that also possible?\n\nSeparate out the two concepts:\n\n1) Writing of incomplete transactions at the block level by a\nbackground LogWriter. \n\nI think it doesn't matter whether the write is aio_write or\nwrite, writing blocks when we get them should provide the benefit\nI outlined.\n\nWaiting till fsync could miss the opportunity to write before the \nhead passes the end of the last durable write because the drive\nbuffers might empty causing up to a full rotation's delay.\n\n2) aio_write vs. normal write.\n\nSince as you and others have pointed out aio_write and write are both\nasynchronous, the issue becomes one of whether or not the copies to the\nfile system buffers happen synchronously or not.\n\nThis is not a big difference but it seems to me that the OS might be\nable to avoid some context switches by grouping copying in the case\nof aio_write. I've heard anecdotal reports that this is significantly\nfaster for some things but I don't know for certain.\n\n> \n> Also, remember the kernel doesn't know where the platter rotation is\n> either. Only the SCSI drive can reorder the requests to match this. The\n> OS can group based on head location, but it doesn't know much about the\n> platter location, and it doesn't even know where the head is.\n\nThe kernel doesn't need to know anything about platter rotation. It\njust needs to keep the disk write buffers full enough not to cause\na rotational latency.\n\nIt's not so much a matter of reordering as it is of getting the data\ninto the SCSI drive before the head passes the last write's position.\nIf the SCSI drive's buffers are kept full it can continue writing at\nits full throughput. If the writes stop and the buffers empty\nit will need to wait up to a full rotation before it gets to the end \nof the log again.\n\n> Also, does aio return info when the data is in the kernel buffers or\n> when it is actually on the disk? 
\n> \n> Simply, aio allows us to do the write and get notification when it is\n> complete.  I don't see how that helps us, and I don't see any other\n> advantages to aio.  To use aio, we need to find something that _can't_\n> be solved with more traditional Unix API's, and I haven't seen that yet.\n> \n> This aio thing is getting out of hand.  It's like we have a hammer, and\n> everything looks like a nail, or a use for aio.\n\nYes, while I think it's probably worth doing and faster, it won't help as\nmuch as just keeping the drive buffers full even if that's by using write\ncalls.\n\nI still don't understand the opposition to aio_write. Could we just have\nthe configuration setup determine whether one or the other is used? I \ndon't see why we wouldn't use the faster calls if they were present and\nreliable on a given system.\n\n- Curtis\n", "msg_date": "Sat, 5 Oct 2002 15:46:18 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large Performance" }, { "msg_contents": "Curtis Faith wrote:\n> > So, you are saying that we may get back aio confirmation quicker than if\n> > we issued our own write/fsync because the OS was able to slip our flush\n> > to disk in as part of someone else's or a general fsync?\n> > \n> > I don't buy that because it is possible our write() gets in as part of\n> > someone else's fsync and our fsync becomes a no-op, meaning there aren't\n> > any dirty buffers for that file.  Isn't that also possible?\n> \n> Separate out the two concepts:\n> \n> 1) Writing of incomplete transactions at the block level by a\n> background LogWriter. 
\n> \n> I think it doesn't matter whether the write is aio_write or\n> write, writing blocks when we get them should provide the benefit\n> I outlined.\n> \n> Waiting till fsync could miss the opportunity to write before the \n> head passes the end of the last durable write because the drive\n> buffers might empty causing up to a full rotation's delay.\n\nNo question about that! The sooner we can get stuff to the WAL buffers,\nthe more likely we will get some other transaction to do our fsync work.\nAny ideas on how we can do that?\n\n> 2) aio_write vs. normal write.\n> \n> Since as you and others have pointed out aio_write and write are both\n> asynchronous, the issue becomes one of whether or not the copies to the\n> file system buffers happen synchronously or not.\n> \n> This is not a big difference but it seems to me that the OS might be\n> able to avoid some context switches by grouping copying in the case\n> of aio_write. I've heard anecdotal reports that this is significantly\n> faster for some things but I don't know for certain.\n\nI suppose it is possible, but because we spend so much time in fsync, we\nwant to focus on that. People have recommended mmap of the WAL file,\nand that seems like a much more direct way to handle it rather than aio.\nHowever, we can't control when the stuff gets sent to disk with mmap'ed\nWAL, or should I say we can't write to it and withhold writes to the\ndisk file with mmap, so we would need some intermediate step, and then\nagain, it just becomes more steps and extra steps slow things down too.\n\n\n> > This aio thing is getting out of hand. It's like we have a hammer, and\n> > everything looks like a nail, or a use for aio.\n> \n> Yes, while I think its probably worth doing and faster, it won't help as\n> much as just keeping the drive buffers full even if that's by using write\n> calls.\n\n> I still don't understand the opposition to aio_write. 
Could we just have\n> the configuration setup determine whether one or the other is used? I \n> don't see why we wouldn't use the faster calls if they were present and\n> reliable on a given system.\n\nWe hesitate to add code relying on new features unless it is a\nsignificant win, and in the aio case, we would have different WAL disk\nwrite models for with/without aio, so it clearly could be two code\npaths, and with two code paths, we can't as easily improve or optimize. \nIf we get a 2% boost out of some feature, but it later discourages us\nfrom adding a 5% optimization, it is a loss. And, in most cases, the 2%\noptimization is for a few platforms, while the 5% optimization is for\nall. This code is +15 years old, so we are looking way down the road,\nnot just for today's hot feature.\n\nFor example, Tom just improved DISTINCT by 25% by optimizing some of the\nsorting and function call handling. If we had more complex threaded\nsort code, that may not have been possible, or it may have been possible\nfor him to optimize only one of the code paths.\n\nI can't tell you how many aio/mmap/fancy feature discussions we have\nhad, and we obviously discuss them, but in the end, they end up being of\nquestionable value for the risk/complexity; but, we keep talking,\nhoping we are wrong or some good ideas come out of it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 5 Oct 2002 16:06:29 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large Performance" }, { "msg_contents": "> No question about that! 
The sooner we can get stuff to the WAL buffers,\n> the more likely we will get some other transaction to do our fsync work.\n> Any ideas on how we can do that?\n\nMore like the sooner we get stuff out of the WAL buffers and into the\ndisk's buffers whether by write or aio_write.\n\nIt doesn't do any good to have information in the XLog unless it\ngets written to the disk buffers before they empty.\n\n> We hesitate to add code relying on new features unless it is a\n> significant win, and in the aio case, we would have different WAL disk\n> write models for with/without aio, so it clearly could be two code\n> paths, and with two code paths, we can't as easily improve or optimize. \n> If we get 2% boost out of some feature, but it later discourages us\n> from adding a 5% optimization, it is a loss. And, in most cases, the 2%\n> optimization is for a few platform, while the 5% optimization is for\n> all. This code is +15 years old, so we are looking way down the road,\n> not just for today's hot feature.\n\nI'll just have to implement it and see if it's as easy and isolated as I\nthink it might be and would allow the same algorithm for aio_write or\nwrite.\n\n> I can't tell you how many aio/mmap/fancy feature discussions we have\n> had, and we obviously discuss them, but in the end, they end up being of\n> questionable value for the risk/complexity; but, we keep talking,\n> hoping we are wrong or some good ideas come out of it.\n\nI'm all in favor of keeping clean designs. I'm very pleased with how\neasy PostreSQL is to read and understand given how much it does.\n\n- Curtis\n", "msg_date": "Sat, 5 Oct 2002 17:02:18 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large Performance" }, { "msg_contents": "Curtis Faith wrote:\n> > No question about that! 
The sooner we can get stuff to the WAL buffers,\n> > the more likely we will get some other transaction to do our fsync work.\n> > Any ideas on how we can do that?\n> \n> More like the sooner we get stuff out of the WAL buffers and into the\n> disk's buffers whether by write or aio_write.\n\nDoes aio_write just write, or write _and_ fsync()?\n\n> It doesn't do any good to have information in the XLog unless it\n> gets written to the disk buffers before they empty.\n\nJust for clarification, we have two issues in this thread:\n\n\tWAL memory buffers fill up, forcing WAL write\n\tmultiple commits at the same time force too many fsync's\n\nI just wanted to throw that out.\n\n> > I can't tell you how many aio/mmap/fancy feature discussions we have\n> > had, and we obviously discuss them, but in the end, they end up being of\n> > questionable value for the risk/complexity; but, we keep talking,\n> > hoping we are wrong or some good ideas come out of it.\n> \n> I'm all in favor of keeping clean designs. I'm very pleased with how\n> easy PostgreSQL is to read and understand given how much it does.\n\nGlad you see the situation we are in. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 5 Oct 2002 17:44:56 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large Performance" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Or its solution ;) as instead of predicting, we just write all data\n> in the log that is ready to be written. 
If we postpone writing, there will\n> be hiccups when we suddenly discover that we need to write a whole lot\n> of pages (fsync()) after idling the disk for some period.\n\nThis part is exactly the same point that I've been proposing to solve\nwith a background writer process.  We don't need aio_write for that.\nThe background writer can handle pushing completed WAL pages out to\ndisk.  The sticky part is trying to gang the writes for multiple \ntransactions whose COMMIT records would fit into the same WAL page,\nand that WAL page isn't full yet.\n\nThe rest of what you wrote seems like wishful thinking about how\naio_write might behave :-(.  I have no faith in it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Oct 2002 19:03:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large Performance " }, { "msg_contents": "It seems that the Hackers list isn't in the list to \nsubscribe/unsubscribe at http://developer.postgresql.org/mailsub.php\n\nJust an FYI.\n\n-Mitch\n\nComputers are like Air Conditioners, they don't work when you open \nWindows.\n\n", "msg_date": "Sat, 5 Oct 2002 16:31:46 -0700", "msg_from": "Mitch <mitch@doot.org>", "msg_from_op": false, "msg_subject": "Mailing list unsubscribe - hackers isn't there?" }, { "msg_contents": "On Sun, 2002-10-06 at 04:03, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > Or its solution ;) as instead of predicting, we just write all data\n> > in the log that is ready to be written. If we postpone writing, there will\n> > be hiccups when we suddenly discover that we need to write a whole lot\n> > of pages (fsync()) after idling the disk for some period.\n> \n> This part is exactly the same point that I've been proposing to solve\n> with a background writer process. We don't need aio_write for that.\n> The background writer can handle pushing completed WAL pages out to\n> disk. 
The sticky part is trying to gang the writes for multiple \n> transactions whose COMMIT records would fit into the same WAL page,\n> and that WAL page isn't full yet.\n\nI just hoped that the kernel could be used as the background writer process\nand in the process also solve the multiple commits on the same page\nproblem.\n\n> The rest of what you wrote seems like wishful thinking about how\n> aio_write might behave :-(. I have no faith in it.\n\nYeah, and the fact that there are several slightly different\nimplementations of AIO even on Linux alone does not help.\n\nI have to test the SGI KAIO implementation for conformance with my\nwishful thinking ;)\n\nPerhaps you could ask around about AIO in RedHat Advanced Server (is it\nthe same AIO as SGI, how does it behave in \"multiple writes on the same\npage\" case) as you may have better links to RedHat?\n\n--------------\nHannu\n\n\n", "msg_date": "06 Oct 2002 10:37:31 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large" }, { "msg_contents": "On Sat, 2002-10-05 at 14:46, Curtis Faith wrote:\n> \n> 2) aio_write vs. normal write.\n> \n> Since as you and others have pointed out aio_write and write are both\n> asynchronous, the issue becomes one of whether or not the copies to the\n> file system buffers happen synchronously or not.\n\nActually, I believe that write will be *mostly* asynchronous while\naio_write will always be asynchronous.  In a buffer poor environment, I\nbelieve write will degrade into a synchronous operation.  In an ideal\nsituation, I think they will prove to be on par with one another with a\nslight bias toward aio_write.  In less than ideal situations where\nbuffer space is at a premium, I think aio_write will get the leg up.\n\n> \n> The kernel doesn't need to know anything about platter rotation. 
It\n> just needs to keep the disk write buffers full enough not to cause\n> a rotational latency.\n\n\nWhich is why in a buffer poor environment, aio_write is generally\npreferred as the write is still queued even if the buffer is full. That\nmeans it will be ready to begin placing writes into the buffer, all\nwithout the process having to wait. On the other hand, when using write,\nthe process must wait.\n\nIn a worst-case scenario, it seems that aio_write does get a win.\n\nI personally would at least like to see an aio implementation and would\nbe willing to even help benchmark it to validate any returns\nin performance.  Surely if testing reflected a performance boost it\nwould be considered for baseline inclusion?\n\nGreg", "msg_date": "06 Oct 2002 11:06:38 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large" }, { "msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> I personally would at least like to see an aio implementation and would\n> be willing to even help benchmark it to validate any returns\n> in performance.  Surely if testing reflected a performance boost it\n> would be considered for baseline inclusion?\n\nIt'd be considered, but whether it'd be accepted would have to depend\non the size of the performance boost, its portability (how many\nplatforms/scenarios do you actually get a boost for), and the extent of\nbloat/uglification of the code.\n\nI can't personally get excited about something that only helps if your\nserver is starved for RAM --- who runs servers that aren't fat on RAM\nanymore?  But give it a shot if you like. 
Perhaps your analysis is\npessimistic.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Oct 2002 12:46:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large " }, { "msg_contents": "On Sun, 2002-10-06 at 11:46, Tom Lane wrote:\n> I can't personally get excited about something that only helps if your\n> server is starved for RAM --- who runs servers that aren't fat on RAM\n> anymore?  But give it a shot if you like.  Perhaps your analysis is\n> pessimistic.\n\nI do suspect my analysis is somewhat pessimistic too but to what degree,\nI have no idea.  You make a good case on your memory argument but please\nallow me to further kick it around.  I don't find it far-fetched to\nimagine situations where people may commit large amounts of memory for\nthe database yet marginally starve available memory for file system\nbuffers.  Especially so on heavily I/O bound systems or where sporadically\nother types of non-database file activity may occur.\n\nNow, while I continue to assure myself that it is not far-fetched I\nhonestly have no idea how often this type of situation will typically\noccur.  Of course, that opens the door for simply adding more memory\nand/or slightly reducing the amount of memory available to the database\n(thus making it available elsewhere).  Now, after all that's said and\ndone, having something like aio in use would seemingly allow it to be\nsomewhat more \"self-tuning\" from a potential performance perspective.\n\nGreg", "msg_date": "06 Oct 2002 15:21:05 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large" }, { "msg_contents": "\nOn 6 Oct 2002, Greg Copeland wrote:\n\n> On Sat, 2002-10-05 at 14:46, Curtis Faith wrote:\n> >\n> > 2) aio_write vs. 
normal write.\n> >\n> > Since as you and others have pointed out aio_write and write are both\n> > asynchronous, the issue becomes one of whether or not the copies to the\n> > file system buffers happen synchronously or not.\n>\n> Actually, I believe that write will be *mostly* asynchronous while\n> aio_write will always be asynchronous. In a buffer poor environment, I\n> believe write will degrade into a synchronous operation. In an ideal\n> situation, I think they will prove to be on par with one another with a\n> slight bias toward aio_write. In less than ideal situations where\n> buffer space is at a premium, I think aio_write will get the leg up.\n\nBrowsed web and came across this piece of text regarding a Linux-KAIO\npatch by Silicon Graphics...\n\n\"The asynchronous I/O (AIO) facility implements interfaces defined by the\nPOSIX standard, although it has not been through formal compliance\ncertification. This version of AIO is implemented with support from kernel\nmodifications, and hence will be called KAIO to distinguish it from AIO\nfacilities available from newer versions of glibc/librt. Because of the\nkernel support, KAIO is able to perform split-phase I/O to maximize\nconcurrency of I/O at the device. With split-phase I/O, the initiating\nrequest (such as an aio_read) truly queues the I/O at the device as the\nfirst phase of the I/O request; a second phase of the I/O request,\nperformed as part of the I/O completion, propagates results of the\nrequest. The results may include the contents of the I/O buffer on a\nread, the number of bytes read or written, and any error status.\n\nPreliminary experience with KAIO have shown over 35% improvement in\ndatabase performance tests. Unit tests (which only perform I/O) using KAIO\nand Raw I/O have been successful in achieving 93% saturation with 12 disks\nhung off 2 X 40 MB/s Ultra-Wide SCSI channels. 
We believe that these\nencouraging results are a direct result of implementing a significant\npart of KAIO in the kernel using split-phase I/O while avoiding or\nminimizing the use of any globally contented locks.\"\n\nWell...\n\n> In a worst-case scenario, it seems that aio_write does get a win.\n>\n> I personally would at least like to see an aio implementation and would\n> be willing to even help benchmark it to validate any returns\n> in performance. Surely if testing reflected a performance boost it\n> would be considered for baseline inclusion?\n\n", "msg_date": "Mon, 7 Oct 2002 18:38:47 +0300 (EEST)", "msg_from": "Antti Haapala <antti.haapala@iki.fi>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large" }, { "msg_contents": "On Mon, 2002-10-07 at 10:38, Antti Haapala wrote:\n> Browsed web and came across this piece of text regarding a Linux-KAIO\n> patch by Silicon Graphics...\n> \n\nYa, I have read this before.  The problem here is that I'm not aware of\nwhich AIO implementation on Linux is the forerunner nor do I have any\nidea how its implementation or performance details differ from those of\nother implementations on other platforms.  I know there are at least two\naio efforts underway for Linux.  There could yet be others.  Attempting\nto cite specifics that only pertain to Linux and then, only with a\nspecific implementation which may or may not be in general use is\nquestionable.  Because of this I simply left it as saying that I believe\nmy analysis is pessimistic.\n\nAnyone have any idea whether Red Hat's Advanced Server uses KAIO or what?\n\n> \n> Preliminary experience with KAIO have shown over 35% improvement in\n> database performance tests. Unit tests (which only perform I/O) using KAIO\n> and Raw I/O have been successful in achieving 93% saturation with 12 disks\n> hung off 2 X 40 MB/s Ultra-Wide SCSI channels. We believe that these\n> 
We believe that these\n> encouraging results are a direct result of implementing a significant\n> part of KAIO in the kernel using split-phase I/O while avoiding or\n> minimizing the use of any globally contented locks.\"\n\nThe problem here is, I have no idea what they are comparing to (worse\ncase read/writes which we know PostgreSQL *mostly* isn't suffering\nfrom). If we assume that PostgreSQL's read/write operations are\nsomewhat optimized (as it currently sounds like they are), I'd seriously\ndoubt we'd see that big of a difference. On the other hand, I'm hoping\nthat if an aio postgresql implementation does get done we'll see\nsomething like a 5%-10% performance boost. Even still, I have nothing\nto pin that on other than hope. If we do see a notable performance\nincrease for Linux, I have no idea what it will do for other platforms.\n\nThen, there are all of the issues that Tom brought up about\nbloat/uglification and maintainability. So, while I certainly do keep\nthose remarks in my mind, I think it's best to simply encourage the\neffort (or something like it) and help determine where we really sit by\nmeans of empirical evidence.\n\n\nGreg", "msg_date": "07 Oct 2002 11:20:21 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large" }, { "msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> Ya, I have read this before. The problem here is that I'm not aware of\n> which AIO implementation on Linux is the forerunner nor do I have any\n> idea how it's implementation or performance details defer from that of\n> other implementations on other platforms.\n\nThe implementation of AIO in 2.5 is the one by Ben LaHaise (not\nSGI). Not sure what the performance is like -- although it's been\nmerged into 2.5 already, so someone can do some benchmarking. 
Can\nanyone suggest a good test?\n\nKeep in mind that glibc has had a user-space implementation for a\nlittle while (although I'd guess the performance to be unimpressive),\nso AIO would not be *that* kernel-version specific.\n\n> Anyone have any idea whether Red Hat's Advanced Server uses KAIO or what?\n\nRH AS uses Ben LaHaise's implementation of AIO, I believe.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "07 Oct 2002 12:35:48 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large" }, { "msg_contents": "> On Sun, 2002-10-06 at 11:46, Tom Lane wrote:\n> > I can't personally get excited about something that only helps if your\n> > server is starved for RAM --- who runs servers that aren't fat on RAM\n> > anymore? But give it a shot if you like. Perhaps your analysis is\n> > pessimistic.\n>\n> <snipped> I don't find it far-fetched to\n> imagine situations where people may commit large amounts of memory for\n> the database yet marginally starve available memory for file system\n> buffers. Especially so on heavily I/O bound systems or where sporadically\n> other types of non-database file activity may occur.\n>\n> <snipped> Of course, that opens the door for simply adding more memory\n> and/or slightly reducing the amount of memory available to the database\n> (thus making it available elsewhere). Now, after all that's said and\n> done, having something like aio in use would seemingly allow it to be\n> somewhat more \"self-tuning\" from a potential performance perspective.\n\nGood points.\n\nNow for some surprising news (at least it surprised me).\n\nI researched the file system source on my system (FreeBSD 4.6) and found\nthat the behavior was optimized for non-database access to eliminate\nunnecessary writes when temp files are created and deleted rapidly. 
It was\nnot optimized to get data to the disk in the most efficient manner.\n\nThe syncer on FreeBSD appears to place dirtied filesystem buffers into\nwork queues that range from 1 to SYNCER_MAXDELAY.  Each second the syncer\nprocesses one of the queues and increments a counter syncer_delayno.\n\nOn my system the setting for SYNCER_MAXDELAY is 32.  So each second 1/32nd\nof the writes that were buffered are processed.  If the syncer gets behind\nand the writes for a given second take more than one second to process, the\nsyncer does not wait but begins processing the next queue.\n\nAFAICT this means that there is no opportunity to have writes combined by\nthe disk since they are processed in buckets based on the time the writes\ncame in.\n\nAlso, it seems very likely that many installations won't have enough\nbuffers for 30 seconds' worth of changes and that there would be some level\nof SYNCHRONOUS writing because of this delay and the syncer process getting\nbacked up.  This might happen once per second as the buffers get full and\nthe syncer has not yet started for that second interval.\n\nLinux might handle this better.  I saw some emails exchanged a year or so\nago about starting writes immediately in a low-priority way but I'm not\nsure if those patches got applied to the linux kernel or not.  The source I\nhad access to seems to do something analogous to FreeBSD but using fixed\npercentages of the dirty blocks or a minimum number of blocks.  They appear\nto be handled in LRU order however.\n\nOn-disk caches are much much larger these days so it seems that some way of\ngetting the data out sooner would result in better write performance for\nthe cache.  My newer drive is a 10K RPM IBM Ultrastar SCSI and it has a 4M\ncache. 
I don't see these caches getting smaller over time so not letting\nthe disk see writes will become more and more of a performance drain.\n\n- Curtis\n\n", "msg_date": "Mon, 7 Oct 2002 14:31:00 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Dirty Buffer Writing [was Proposed LogWriter Scheme]" }, { "msg_contents": "On Mon, 2002-10-07 at 21:35, Neil Conway wrote:\n> Greg Copeland <greg@CopelandConsulting.Net> writes:\n> > Ya, I have read this before. The problem here is that I'm not aware of\n> > which AIO implementation on Linux is the forerunner nor do I have any\n> > idea how it's implementation or performance details defer from that of\n> > other implementations on other platforms.\n> \n> The implementation of AIO in 2.5 is the one by Ben LaHaise (not\n> SGI). Not sure what the performance is like -- although it's been\n> merged into 2.5 already, so someone can do some benchmarking. Can\n> anyone suggest a good test?\n\nWhat would be really interesting is to aio_write small chunks to the\nsame 8k page by multiple threads/processes and then wait for the page\ngetting written to disk.\n\nThen check how many backends get their wait back at the same write. \n\nThe docs for POSIX aio_xxx are at:\n\nhttp://www.opengroup.org/onlinepubs/007904975/functions/aio_write.html\n\n----------------\nHannu\n\n\n", "msg_date": "08 Oct 2002 00:32:42 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Proposed LogWriter Scheme, WAS: Potential Large" }, { "msg_contents": "Curtis Faith wrote:\n> Good points.\n> \n> Now for some surprising news (at least it surprised me).\n> \n> I researched the file system source on my system (FreeBSD 4.6) and found\n> that the behavior was optimized for non-database access to eliminate\n> unnecessary writes when temp files are created and deleted rapidly. 
It was\n> not optimized to get data to the disk in the most efficient manner.\n> \n> The syncer on FreeBSD appears to place dirtied filesystem buffers into\n> work queues that range from 1 to SYNCER_MAXDELAY. Each second the syncer\n> processes one of the queues and increments a counter syncer_delayno.\n> \n> On my system the setting for SYNCER_MAXDELAY is 32. So each second 1/32nd\n> of the writes that were buffered are processed. If the syncer gets behind\n> and the writes for a given second exceed one second to process the syncer\n> does not wait but begins processing the next queue.\n> \n> AFAICT this means that there is no opportunity to have writes combined by\n> the disk since they are processed in buckets based on the time the writes\n> came in.\n\nThis is the trickle syncer. It prevents bursts of disk activity every\n30 seconds. It is for non-fsync writes, of course, and I assume if the\nkernel buffers get low, it starts to flush faster.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 7 Oct 2002 16:28:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Dirty Buffer Writing [was Proposed LogWriter Scheme]" }, { "msg_contents": "> This is the trickle syncer. It prevents bursts of disk activity every\n> 30 seconds. 
It is for non-fsync writes, of course, and I assume if the\n> kernel buffers get low, it starts to flush faster.\n\nAFAICT, the syncer only speeds up when virtual memory paging fills the\nbuffers past\na threshold and even in that event it only speeds it up by a factor of two.\n\nI can't find any provision for speeding up flushing of the dirty buffers\nwhen they fill for normal file system writes, so I don't think that\nhappens.\n\n- Curtis\n\n", "msg_date": "Mon, 7 Oct 2002 16:43:20 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Dirty Buffer Writing [was Proposed LogWriter Scheme]" }, { "msg_contents": "On Mon, 2002-10-07 at 15:28, Bruce Momjian wrote:\n> This is the trickle syncer. It prevents bursts of disk activity every\n> 30 seconds. It is for non-fsync writes, of course, and I assume if the\n> kernel buffers get low, it starts to flush faster.\n\nDoesn't this also increase the likelihood that people will be running in\na buffer-poor environment more frequently that I previously asserted,\nespecially in very heavily I/O bound systems? Unless I'm mistaken, that\nopens the door for a general case of why an aio implementation should be\nlooked into.\n\nAlso, on a side note, IIRC, linux kernel 2.5.x has a new priority\nelevator which is said to be MUCH better as saturating disks than ever\nbefore. 
Once 2.6 (or whatever its number will be) is released, it may\nnot be as much of a problem as it seems to be for FreeBSD (I think\nthat's the one you're using).\n\n\nGreg", "msg_date": "07 Oct 2002 16:28:08 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Dirty Buffer Writing [was Proposed LogWriter Scheme]" }, { "msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> Doesn't this also increase the likelihood that people will be running in\n> a buffer-poor environment more frequently than I previously asserted,\n> especially in very heavily I/O bound systems? Unless I'm mistaken, that\n> opens the door for a general case of why an aio implementation should be\n> looked into.\n\nWell, at least for *this specific situation*, it doesn't really change\nanything -- since FreeBSD doesn't implement POSIX AIO as far as I\nknow, we can't use that as an alternative.\n\nHowever, I'd suspect that the FreeBSD kernel allows for some way to\ntune the behavior of the syncer. If that's the case, we could do some\nresearch into what settings are more appropriate for FreeBSD, and\nrecommend those in the docs.
I don't run FreeBSD, however -- would\nsomeone like to volunteer to take a look at this?\n\nBTW Curtis, did you happen to check whether this behavior has been\nchanged in FreeBSD 5.0?\n\n> Also, on a side note, IIRC, linux kernel 2.5.x has a new priority\n> elevator which is said to be MUCH better at saturating disks than ever\n> before.\n\nYeah, there are lots of new & interesting features for database\nsystems in the new kernel -- I'm looking forward to when 2.6 is widely\ndeployed...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "07 Oct 2002 20:42:17 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Dirty Buffer Writing [was Proposed LogWriter Scheme]" }, { "msg_contents": "> Greg Copeland <greg@CopelandConsulting.Net> writes:\n> > Doesn't this also increase the likelihood that people will be\n> > running in a buffer-poor environment more frequently than I\n> > previously asserted, especially in very heavily I/O bound\n> > systems? Unless I'm mistaken, that opens the door for a\n> > general case of why an aio implementation should be looked into.\n\nNeil Conway replies:\n> Well, at least for *this specific situation*, it doesn't really change\n> anything -- since FreeBSD doesn't implement POSIX AIO as far as I\n> know, we can't use that as an alternative.\n\nI haven't tried it yet but there does seem to be an aio implementation that\nconforms to POSIX in FreeBSD 4.6.2. It's part of the kernel and can be\nfound in:\n/usr/src/sys/kern/vfs_aio.c\n\n> However, I'd suspect that the FreeBSD kernel allows for some way to\n> tune the behavior of the syncer. If that's the case, we could do some\n> research into what settings are more appropriate for FreeBSD, and\n> recommend those in the docs.
I don't run FreeBSD, however -- would\n> someone like to volunteer to take a look at this?\n\nI didn't see anything obvious in the docs but I still believe there's some\nway to tune it. I'll let everyone know if I find some better settings.\n\n> BTW Curtis, did you happen to check whether this behavior has been\n> changed in FreeBSD 5.0?\n\nI haven't checked but I will.\n\n", "msg_date": "Mon, 7 Oct 2002 22:45:31 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Dirty Buffer Writing [was Proposed LogWriter Scheme]" }, { "msg_contents": "Curtis Faith wrote:\n> > This is the trickle syncer. It prevents bursts of disk activity every\n> > 30 seconds. It is for non-fsync writes, of course, and I assume if the\n> > kernel buffers get low, it starts to flush faster.\n> \n> AFAICT, the syncer only speeds up when virtual memory paging fills the\n> buffers past\n> a threshold and even in that event it only speeds it up by a factor of two.\n> \n> I can't find any provision for speeding up flushing of the dirty buffers\n> when they fill for normal file system writes, so I don't think that\n> happens.\n\nSo you think if I try to write a 1 gig file, it will write enough to\nfill up the buffers, then wait while the sync'er writes out a few blocks\nevery second, free up some buffers, then write some more?\n\nTake a look at vfs_bio::getnewbuf() on *BSD and you will see that when\nit can't get a buffer, it will async write a dirty buffer to disk.\n\nAs far as this AIO conversation is concerned, I want to see someone come\nup with some performance improvement that we can only do with AIO. \nUnless I see it, I am not interested in pursuing this thread.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 7 Oct 2002 23:08:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Dirty Buffer Writing [was Proposed LogWriter Scheme]" }, { "msg_contents": "> So you think if I try to write a 1 gig file, it will write enough to\n> fill up the buffers, then wait while the sync'er writes out a few blocks\n> every second, free up some buffers, then write some more?\n>\n> Take a look at vfs_bio::getnewbuf() on *BSD and you will see that when\n> it can't get a buffer, it will async write a dirty buffer to disk.\n\nWe've addressed this scenario before; if I recall, the point Greg made\nearlier is that buffers getting full means writes become synchronous.\n\nWhat I was trying to point out was that it was very likely that the buffers\nwill fill even for large buffers and that the writes are going to be driven\nout not by efficient ganging but by something approaching LRU flushing, with\nan occasional once-a-second slightly more efficient write of 1/32nd of the\nbuffers.\n\nOnce the buffers get full, all subsequent writes turn into synchronous\nwrites, since even if the kernel writes asynchronously (meaning it can do\nother work), the writing process can't complete; it has to wait until the\nbuffer has been flushed and is free for the copy. So the relatively poor\nimplementation (for database inserts at least) of the syncer mechanism will\ncost a lot of performance if we get to this synchronous write mode due to a\nfull buffer. It appears this scenario is much more likely than I had\nthought.\n\nDo you not think this is a potential performance problem to be explored?\n\nI'm only pursuing this as hard as I am because I feel like it's deja vu all\nover again. I've done this before and found a huge improvement (12X to 20X\nfor bulk inserts).
I'm not necessarily expecting that level of improvement\nhere but my gut tells me there is more here than seems obvious on the\nsurface.\n\n> As far as this AIO conversation is concerned, I want to see someone come\n> up with some performance improvement that we can only do with AIO.\n> Unless I see it, I am not interested in pursuing this thread.\n\nIf I come up with something via aio that helps I'd be more than happy if\nsomeone else points out a non-aio way to accomplish the same thing. I'm by\nno means married to any particular solutions; I care about getting problems\nsolved. And I'll stop trying to sell anyone on aio.\n\n- Curtis\n\n", "msg_date": "Tue, 8 Oct 2002 09:53:16 -0400", "msg_from": "\"Curtis Faith\" <curtis@galtair.com>", "msg_from_op": true, "msg_subject": "Re: Dirty Buffer Writing [was Proposed LogWriter Scheme]" }, { "msg_contents": "\"Curtis Faith\" <curtis@galtair.com> writes:\n> Do you not think this is a potential performance problem to be explored?\n\nI agree that there's a problem if the kernel runs short of buffer space.\nI am not sure whether that's really an issue in practical situations,\nnor whether we can do much about it at the application level if it is\n--- but by all means look for solutions if you are concerned.\n\n(This is, BTW, one of the reasons for discouraging people from pushing\nPostgres' shared buffer cache up to a large fraction of total RAM;\nstarving the kernel of disk buffers is just plain not a good idea.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Oct 2002 10:50:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Dirty Buffer Writing [was Proposed LogWriter Scheme] " }, { "msg_contents": "Bruce,\n\nAre there remarks along these lines in the performance tuning section of\nthe docs?
Based on what's coming out of this it would seem that\nstressing the importance of leaving a notable (rule of thumb here?)\namount for general OS/kernel needs is pretty important.\n\n\nGreg\n\n\nOn Tue, 2002-10-08 at 09:50, Tom Lane wrote:\n> (This is, BTW, one of the reasons for discouraging people from pushing\n> Postgres' shared buffer cache up to a large fraction of total RAM;\n> starving the kernel of disk buffers is just plain not a good idea.)", "msg_date": "08 Oct 2002 09:55:17 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Dirty Buffer Writing [was Proposed LogWriter Scheme]" } ]