[ { "msg_contents": "Oops I screwed up again. :) \n\nI was actually right the first time my postgresql 7.0.3 was running with\nfsync off. Due to my weird results I searched more thoroughly and found my\n7.0.3's pg_options had a nofsync=1.\n\nSo 7.0.3 is twice as fast only with fsync off.\n\n7.1beta4 snapshot - fsync.\n\n./pgbench -i pgbench -s 5\n./pgbench -c 5 -t 500 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 5\nnumber of clients: 5\nnumber of transactions per client: 500\nnumber of transactions actually processed: 2500/2500\ntps = 22.435799(including connections establishing)\ntps = 22.453842(excluding connections establishing)\n\n7.0.3 no fsync\n./pgbench -i pgbench -s 5\n./pgbench -c 5 -t 500 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 5\nnumber of clients: 5\nnumber of transactions per client: 500\nnumber of transactions actually processed: 2500/2500\ntps = 52.971997(including connections establishing)\ntps = 53.044280(excluding connections establishing)\n\n7.0.3 fsync\n./pgbench -c 5 -t 500 pgbench\nstarting vacuum...end.\ntransaction type: TPC-B (sort of)\nscaling factor: 5\nnumber of clients: 5\nnumber of transactions per client: 500\nnumber of transactions actually processed: 2500/2500\ntps = 7.366986(including connections establishing)\ntps = 7.368807(excluding connections establishing)\n\nCheerio,\nLink.\n\n", "msg_date": "Mon, 26 Feb 2001 10:01:17 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": true, "msg_subject": "Re: RE: Re: [ADMIN] v7.1b4 bad performance" } ]
[ { "msg_contents": "\n\nhi i'm re-sending this mail again to seek more help since i haven't got\nany solution as yet. Additionaly , could it be an oracle taking up so much \nresource so that i postgres can't get any of them? \n\nKatsu\n\n---------- Forwarded message ----------\nDate: Sun, 18 Feb 2001 15:27:47 +1100 (EST)\nFrom: Katsuyuki Tanaka <katsut@cse.unsw.EDU.AU>\nTo: pgsql-general@postgresql.org\nSubject: IPC Shared Memory\n\n\n\tHi when i run postmaster i got the following error and\n\tpostmaser doesn't start,\n\n\tFATAL 1: InitProcGlobal: IpcSemaphoreCreate failed\n\tIpcSemaphoreCreate: semget failed (No space left on device) key=8888014,\n\tnum=16, permission=600\n\tThis type of error is usually caused by an improper\n\tshared memory or System V IPC semaphore configuration.\n\tFor more information, see the FAQ and platform-specific\n\tFAQ's in the source directory pgsql/doc or on our\n\n\t\n\ti made query to admin and the max shared mem setting is already\n\tset to the max possible\n\n\t*\n\t* IPC Shared Memory\n\t*\n\t4294967295 max shared memory segment size (SHMMAX)\n \t\t1 min shared memory segment size (SHMMIN)\n \t\t100 shared memory identifiers (SHMMNI)\n \t\t10 max attached shm segments per process (SHMSEG)\n\n\t\n\tI tried -N and -B but didn't have luck. Could anyone help me on\n\tthis? This problem appeared after admin changed their config \n\ti believe, could that be it?\n\n\tThanks\n\tKatsu\n\n\n", "msg_date": "Mon, 26 Feb 2001 16:06:56 +1100 (EST)", "msg_from": "Katsuyuki Tanaka <katsut@cse.unsw.EDU.AU>", "msg_from_op": true, "msg_subject": "IPC Shared Memory (fwd)" }, { "msg_contents": "Katsu,\n\nKatsuyuki Tanaka wrote:\n> hi i'm re-sending this mail again to seek more help since i haven't got\n> any solution as yet. 
Additionaly , could it be an oracle taking up so much\n> resource so that i postgres can't get any of them?\n\nYes, I think your oracle db doesn't probably leave enough for PG.\nI don't know how much of the shared resources are necessary for PG,\nbut I assume you have problems with the number of semaphores.\n\nI had similar problems and I didn't want to reconfigure the systems, so I \nreduced the PROCESS parameter for oracle.\n\nHave a look at the following description, on how to set and calculate the \nshared memory/semaphores for Oracle on UNIX (I assume you use Solaris):\n\nhttp://otn.oracle.com/doc/solaris/server.815/a67457/pre.htm#1000595\n\nThat's the only version I found in HTML form, so if you have a different \noracle version you might have to check your specific installation guide \nfor that, but in principle the calculations are still the same for the \ndifferent oracle versions, I think.\n\ne.g. for 8.1.7 solaris\nhttp://otn.oracle.com/docs/products/oracle8i/pdf/installguide_sun_817.pdf\n\n\n> i made query to admin and the max shared mem setting is already\n> set to the max possible\n> *\n> * IPC Shared Memory\n> *\n> 4294967295 max shared memory segment size (SHMMAX)\n> 1 min shared memory segment size (SHMMIN)\n> 100 shared memory identifiers (SHMMNI)\n> 10 max attached shm segments per process (SHMSEG)\nYes, the (SHMMAX) seems to be max, but the others could be higher. The\nsame is true with the semaphore parameters. But those parameter influence quite\na bit how many resources are reserved from the kernel (-> needs memory)\nSo, the settings should balance your needs and available memory.\n\n-- \nBest regards,\nPeter Schindler\t\n Synchronicity Inc. | pschindler@synchronicity.com\n http://www.synchronicity.com | +49 89 89 66 99 42 (Germany)\n", "msg_date": "Mon, 26 Feb 2001 09:51:02 +0100", "msg_from": "Peter Schindler <pschindler@synchronicity.com>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] IPC Shared Memory (fwd)" } ]
[ { "msg_contents": "\nI am having some problems with our proxy server (wget times out on header)\nand would like to know whether it would be possible to install http access\nto the snapshots and other download files ?\n\nThis would probably also benefit others, no ?\n\nThanks\nAndreas\n", "msg_date": "Mon, 26 Feb 2001 10:58:46 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "http access to ftp.postgresql.org files" }, { "msg_contents": "On Mon, 26 Feb 2001, Zeugswetter Andreas SB wrote:\n\n>\n> I am having some problems with our proxy server (wget times out on header)\n> and would like to know whether it would be possible to install http access\n> to the snapshots and other download files ?\n>\n> This would probably also benefit others, no ?\n\nSee if this works: http://www.postgresql.org/ftpsite/\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 26 Feb 2001 07:53:16 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: http access to ftp.postgresql.org files" }, { "msg_contents": "Just an FYI -- It works well from behind my proxy..\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Vince Vielhaber\" <vev@michvhf.com>\nTo: \"Zeugswetter Andreas SB\" <ZeugswetterA@wien.spardat.at>\nCc: \"'The Hermit Hacker'\" <scrappy@hub.org>; <pgsql-hackers@postgresql.org>\nSent: Monday, February 26, 2001 7:53 AM\nSubject: Re: http access to ftp.postgresql.org files\n\n\n> On Mon, 26 Feb 2001, Zeugswetter Andreas SB wrote:\n>\n> >\n> > I am having some problems with 
our proxy server (wget times out on\nheader)\n> > and would like to know whether it would be possible to install http\naccess\n> > to the snapshots and other download files ?\n> >\n> > This would probably also benefit others, no ?\n>\n> See if this works: http://www.postgresql.org/ftpsite/\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\n", "msg_date": "Mon, 26 Feb 2001 07:59:35 -0500", "msg_from": "\"Mitch Vincent\" <mitch@venux.net>", "msg_from_op": false, "msg_subject": "Re: http access to ftp.postgresql.org files" } ]
[ { "msg_contents": "\ntest/bench/{create|runtest}.sh uses switch '-Q' for\npostgres, but postgres gives error on it. Otherwise\nseems working, only lots of debug output is seen.\n\n-- \nmarko\n\n", "msg_date": "Mon, 26 Feb 2001 12:15:42 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "'postgres -Q' in test/bench" }, { "msg_contents": "Marko Kreen writes:\n\n> test/bench/{create|runtest}.sh uses switch '-Q' for\n> postgres, but postgres gives error on it. Otherwise\n> seems working, only lots of debug output is seen.\n\nReplace it with '-d 0'. I'm not sure these benchmark tools are maintained\nat all. contrib/pgbench seems to be preferred.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 27 Feb 2001 17:13:31 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: 'postgres -Q' in test/bench" } ]
[ { "msg_contents": "\n> Regardless of whether this particular behavior is fixable, this brings\n> up something that I think we *must* do before 7.1 release: create a\n> utility that blows away a corrupted logfile to allow the system to\n> restart with whatever is in the datafiles. Otherwise, there is no\n> recovery technique for WAL restart failures, short of initdb and\n> restore from last backup. I'd rather be able to get at data of\n> questionable up-to-dateness than not have any chance of recovery\n> at all.\n\nIt would imho be great if this utility would also have a means to \nextend a logfile, that was not extended to the full 16Mb, and\nrevert the change that writes the whole file in the init phase.\n\nImho this write at logfile init time adds a substantial amount of IO,\nthat would better be avoided. If we really need this, it would imho \nbe better to preallocate N logfiles and reuse them after checkpoint.\n\nAndreas \n", "msg_date": "Mon, 26 Feb 2001 12:23:09 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: WAL does not recover gracefully from out-of-disk-sp\n\tace" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Imho this write at logfile init time adds a substantial amount of IO,\n> that would better be avoided. If we really need this, it would imho \n> be better to preallocate N logfiles and reuse them after checkpoint.\n\nAlready done. See the WAL_FILES parameter.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Feb 2001 10:39:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: WAL does not recover gracefully from out-of-disk-sp ace " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... 
]\n> \n> > Regardless of whether this particular behavior is fixable, this brings\n> > up something that I think we *must* do before 7.1 release: create a\n> > utility that blows away a corrupted logfile to allow the system to\n> > restart with whatever is in the datafiles. Otherwise, there is no\n> > recovery technique for WAL restart failures, short of initdb and\n> > restore from last backup. I'd rather be able to get at data of\n> > questionable up-to-dateness than not have any chance of recovery\n> > at all.\n> \nUpdate OPEN ITEMS list:\n\nSource Code Changes\n-------------------\nAllow recovery from corrupted WAL file\nFinalize commit_delay value\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Feb 2001 12:22:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: WAL does not recover gracefully from out-of-disk-sp\n ace" } ]
[ { "msg_contents": "\n> > I am having some problems with our proxy server (wget times out on header)\n> > and would like to know whether it would be possible to install http access\n> > to the snapshots and other download files ?\n> >\n> > This would probably also benefit others, no ?\n> \n> See if this works: http://www.postgresql.org/ftpsite/\n\nWorks great !! Wow, what a time to market. I am impressed :-)\n\nMany thanks\nAndreas\n", "msg_date": "Mon, 26 Feb 2001 14:08:01 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: http access to ftp.postgresql.org files" } ]
[ { "msg_contents": "> So 7.0.3 is twice as fast only with fsync off.\n\nAre there FK updates/deletes in pgbench' tests?\nRemember how SELECT FOR UPDATE in FK triggers\naffects performance...\nAlso, 5 clients is small number.\n\nVadim\nP.S. Sorry for delays with my replies -\ninternet connection is pain here: it takes\n5-10 minutes to read each message -:(((\n\n-----------------------------------------------\nFREE! The World's Best Email Address @email.com\nReserve your name now at http://www.email.com\n\n\n", "msg_date": "Mon, 26 Feb 2001 10:04:53 -0500 (EST)", "msg_from": "Vadim Mikheev <vadim4o@email.com>", "msg_from_op": true, "msg_subject": "Re: RE: Re: [ADMIN] v7.1b4 bad performance" } ]
[ { "msg_contents": ">> Regardless of whether this particular behavior is fixable,\n>> this brings up something that I think we *must* do before\n>> 7.1 release: create a utility that blows away a corrupted\n>> logfile to allow the system to restart with whatever is in\n>> the datafiles. Otherwise, there is no recovery technique\n>> for WAL restart failures, short of initdb and restore from\n>> last backup. I'd rather be able to get at data of\n>> questionable up-to-dateness than not have any chance of\n>> recovery at all.\n>\n> I've asked 2 or 3 times how to recover from recovery failure\n> but got no answer. We should some recipi for the failure\n> before 7.1 release.\n\nAnd I answered 2 or 3 times with fixes for each reported\nrecovery failure -:) (And asked to help with testing...)\n\nSeems to me that \"fixing\" is the only \"answer\" you would\nget asking the same question to Oracle, Informix or any\nother system with transaction log. Does anybody know how\n\"big boys\" deal with this issue?\n\nVadim\n\n-----------------------------------------------\nFREE! The World's Best Email Address @email.com\nReserve your name now at http://www.email.com\n\n\n", "msg_date": "Mon, 26 Feb 2001 10:50:08 -0500 (EST)", "msg_from": "Vadim Mikheev <vadim4o@email.com>", "msg_from_op": true, "msg_subject": "Re: WAL does not recover gracefully from out-of-disk-space" } ]
[ { "msg_contents": "\n> > Imho this write at logfile init time adds a substantial amount of IO,\n> > that would better be avoided. If we really need this, it would imho \n> > be better to preallocate N logfiles and reuse them after checkpoint.\n> \n> Already done. See the WAL_FILES parameter.\n\nI meant something else. I did not mean to write/format the logfile during \ncheckpoint at all, but to reuse logfile number 0000000 for a later logfile.\nDoing the format during checkpoint hurts even more, since that is the\ntime when the most writes occur, no ?\n\nIn my setup initdb, or startup, or whatever would preallocate N logfiles,\nand those are reused round robin.\n\nAndreas\n", "msg_date": "Mon, 26 Feb 2001 16:51:33 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: WAL does not recover gracefully from out-of-dis\n\tk-sp ace" } ]
[ { "msg_contents": "\n> \tAre there any major outstandings that ppl have on their plates,\n> that should prevent a release?\n\nImho startup after a failing WAL recovery is a 'must do' before release,\nas Tom pointed out. Remember that you can currently run into this situation\nwith as easy a mistake as running out of diskspace.\n\nAlso a reasonable default for commit_delay and commit_siblings should be \nfound before release.\n\nAndreas\n", "msg_date": "Mon, 26 Feb 2001 17:11:39 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Release in 2 weeks ... " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> \n> > \tAre there any major outstandings that ppl have on their plates,\n> > that should prevent a release?\n> \n> Imho startup after a failing WAL recovery is a 'must do' before release,\n> as Tom pointed out. Remember that you can currently run into this situation\n> with as easy a mistake as running out of diskspace.\n> \n> Also a reasonable default for commit_delay and commit_siblings should be \n> found before release.\n\nAdded to open items:\n\nSource Code Changes\n-------------------\nAllow recovery from corrupted WAL file\nFinalize commit_delay value\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Feb 2001 12:25:16 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Release in 2 weeks ..." } ]
[ { "msg_contents": "\nOne thing that I remember from a performance test we once did is, that the results\nare a lot more realistic, better and more stable, if you try to decouple the startup of \nthe different clients a little bit, so they are not all in the same section of code at the same time.\n\nWe inserted random usleeps, I forgot what range, but 10 ms seem reasonable to me.\n\nThis was another database, but it might also apply here.\n\nAndreas\n", "msg_date": "Mon, 26 Feb 2001 18:50:09 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: CommitDelay performance improvement " } ]
[ { "msg_contents": "Can anyone tell me what is going on, when I get a stuck spinlock?\nIs there a data corruption or anything else to worry about ?\nI've found some references about spinlocks in the -hackers list,\nso is that fixed with a later version than beta4 already?\n\nActually I was running a stack of pgbench jobs with varying commit_delay\nparameter and # of clients, but it doesn't look deterministic on any of their\nvalues. \nI've got those fatal errors, with exactly the same data several times now. \nI've restarted the postmaster as well as I've dropped the bench database and \nrecreated it, but it didn't really help. That error is still coming\n*sometimes*.\nBTW, I think I didn't see this before, when I was running pgbench only\nonce from the command line, but since I use the script with the for\nloop.\n\n\nSome environment info:\n\nbench=# select version();\n version \n---------------------------------------------------------------------\n PostgreSQL 7.1beta4 on sparc-sun-solaris2.7, compiled by GCC 2.95.1\n\ncheckpoint_timeout = 1800 # range 30-1800\ncommit_delay = 0 # range 0-1000\ndebug_level = 0 # range 0-16\nfsync = false\nmax_connections = 100 # 1-1024\nshared_buffers = 4096\nsort_mem = 4096\ntcpip_socket = true\nwal_buffers = 128 # min 4\nwal_debug = 0 # range 0-16\nwal_files = 10 # range 0-64\n\n\npgbench -i -s 10 bench\n...\nPGOPTIONS=\"-c commit_delay=$del \" \\\n pgbench -c $cli -t 100 -n bench\n\nThanks,\nPeter\n\n=========\n\nFATAL: s_lock(fcc01067) at xlog.c:2088, stuck spinlock. Aborting.\n\nFATAL: s_lock(fcc01067) at xlog.c:2088, stuck spinlock. 
Aborting.\nServer process (pid 7889) exited with status 6 at Mon Feb 26 09:17:36 2001\nTerminating any active server processes...\nNOTICE: Message from PostgreSQL backend:\n The Postmaster has informed me that some other backend died abnormally and possibly corr\nupted shared memory.\n I have rolled back the current transaction and am going to terminate your database \nsystem connection and exit.\n Please reconnect to the database system and repeat your query.\nThe Data Base System is in recovery mode\nServer processes were terminated at Mon Feb 26 09:17:36 2001\nReinitializing shared memory and semaphores\nDEBUG: starting up\nDEBUG: database system was interrupted at 2001-02-26 09:17:33\nDEBUG: CheckPoint record at (0, 3648965776)\nDEBUG: Redo record at (0, 3648965776); Undo record at (0, 0); Shutdown FALSE\nDEBUG: NextTransactionId: 1362378; NextOid: 2362993\nDEBUG: database system was not properly shut down; automatic recovery in progress...\nDEBUG: redo starts at (0, 3648965840)\nDEBUG: ReadRecord: record with zero len at (0, 3663163376)\nDEBUG: Formatting logfile 0 seg 218 block 699 at offset 4080\nDEBUG: The last logId/logSeg is (0, 218)\nDEBUG: redo done at (0, 3663163336)\n\n-- \nBest regards,\nPeter Schindler\t\n Synchronicity Inc. | pschindler@synchronicity.com\n http://www.synchronicity.com | +49 89 89 66 99 42 (Germany)\n", "msg_date": "Mon, 26 Feb 2001 19:29:02 +0100", "msg_from": "Peter Schindler <pschindler@synchronicity.com>", "msg_from_op": true, "msg_subject": "stuck spinlock" }, { "msg_contents": "Peter Schindler <pschindler@synchronicity.com> writes:\n> FATAL: s_lock(fcc01067) at xlog.c:2088, stuck spinlock. Aborting.\n\nJudging from the line number, this is in CreateCheckPoint. I'm\nbetting that your platform (Solaris 2.7, you said?) 
has the same odd\nbehavior that I discovered a couple days ago on HPUX: a select with\na delay of tv_sec = 0, tv_usec = 1000000 doesn't delay 1 second like\na reasonable person would expect, but fails instantly with EINVAL.\nThis causes the spinlock timeout in CreateCheckPoint to effectively\nbe only a few microseconds rather than the intended ten minutes.\nSo, if the postmaster happens to fire off a checkpoint process while\nsome regular backend is doing something with the WAL log, kaboom.\n\nIn short: please try the latest nightly snapshot (this fix is since\nbeta5, unfortunately) and let me know if you still see a problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Feb 2001 21:31:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spinlock " }, { "msg_contents": "Tom Lane wrote:\n> Judging from the line number, this is in CreateCheckPoint. I'm\n> betting that your platform (Solaris 2.7, you said?) has the same odd\n> behavior that I discovered a couple days ago on HPUX: a select with\n> a delay of tv_sec = 0, tv_usec = 1000000 doesn't delay 1 second like\n> a reasonable person would expect, but fails instantly with EINVAL.\nAfter I finally understood what you meant, this behavior looks somehow\nreasonable to me as its a struct, but I must admit, that I don't have \nto much knowledge in this area.\n\nAnyway, after further thoughts I was curious about this odd behavior on\nthe different platforms and I used your previously posted program, extended\nit a little bit and run it on all platforms I could get a hold of.\nPlease have a look at the extracted log and comments below about the different \nplatforms. 
It seems, that this functions a \"good\" example of a really \nincompatible implementation across platforms, even within the same \nacross different versions of the OSs.\nHappy wondering ;-)\n\n\n> In short: please try the latest nightly snapshot (this fix is since\n> beta5, unfortunately) and let me know if you still see a problem.\nI did and I didn't get the error yet, but didn't run as many jobs either.\nIf I get the error again, I'll post it.\n\nThanks for your help,\nPeter\n\n=====", "msg_date": "Wed, 28 Feb 2001 12:05:04 +0100", "msg_from": "Peter Schindler <pschindler@synchronicity.com>", "msg_from_op": true, "msg_subject": "Re: stuck spinlock" }, { "msg_contents": "Interesting numbers --- thanks for sending them along.\n\nLooks like I was mistaken to think that most platforms would allow\ntv_usec >= 1 sec. Ah well, another day, another bug...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Feb 2001 10:08:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: stuck spinlock " } ]
[ { "msg_contents": "\nI just put the regress test tool online. It's just a simple form, I'll\nadd the borders and stuff later. No reporting tool yet, but I'm working\non that now.\n\nhttp://www.postgresql.org/~vev/regress/\n\nBe prepared, it's gonna ask you for your regression.out and\nregression.diff files. I just uploaded mine and it worked, let me\nknow immediately if it doesn't for you!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 26 Feb 2001 14:29:28 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "regress test reporting" } ]
[ { "msg_contents": "\nis regression.out and/or regression.diff deleted if the tests pass? I've\nnever seen all tests pass so I don't know but I just had someone tell me\nthat it does.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 26 Feb 2001 15:32:48 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "regression.out and regression.diff" }, { "msg_contents": "Vince Vielhaber writes:\n\n> is regression.out and/or regression.diff deleted if the tests pass? I've\n> never seen all tests pass so I don't know but I just had someone tell me\n> that it does.\n\nWe've come to a point where all tests should pass all the time on all\nsupported platforms. If they don't, that's a bug and should be posted to\na mailing list.\n\nThe test output should come from 'gmake check', not the test against an\nalready installed server ('installcheck').\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 27 Feb 2001 20:40:40 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: regression.out and regression.diff" }, { "msg_contents": "On Tue, 27 Feb 2001, Peter Eisentraut wrote:\n\n> Vince Vielhaber writes:\n>\n> > is regression.out and/or regression.diff deleted if the tests pass? I've\n> > never seen all tests pass so I don't know but I just had someone tell me\n> > that it does.\n>\n> We've come to a point where all tests should pass all the time on all\n> supported platforms. 
If they don't, that's a bug and should be posted to\n> a mailing list.\n>\n> The test output should come from 'gmake check', not the test against an\n> already installed server ('installcheck').\n\nIt was a snapshot - no idea what kind of shape it was in and I don't\nrecall how I did the check, but it may have been installcheck.\n\nEither way my question still needs to be answered. If there are no\nerrors, is regression.out and regression.diff deleted? I can see the\ndiff file being empty, but deleted?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 27 Feb 2001 15:08:49 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: regression.out and regression.diff" }, { "msg_contents": "On Tue, 27 Feb 2001, Vince Vielhaber wrote:\n\n> On Tue, 27 Feb 2001, Peter Eisentraut wrote:\n>\n> > Vince Vielhaber writes:\n> >\n> > > is regression.out and/or regression.diff deleted if the tests pass? I've\n> > > never seen all tests pass so I don't know but I just had someone tell me\n> > > that it does.\n> >\n> > We've come to a point where all tests should pass all the time on all\n> > supported platforms. 
If they don't, that's a bug and should be posted to\n> > a mailing list.\n> >\n> > The test output should come from 'gmake check', not the test against an\n> > already installed server ('installcheck').\n>\n> It was a snapshot - no idea what kind of shape it was in and I don't\n> recall how I did the check, but it may have been installcheck.\n\nMake that \"must have been installcheck\" 'cuze I just reran it with\ngmake check and it passed all but the ignored random test.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 27 Feb 2001 15:45:44 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: regression.out and regression.diff" }, { "msg_contents": "Vince Vielhaber writes:\n\n> It was a snapshot - no idea what kind of shape it was in and I don't\n> recall how I did the check, but it may have been installcheck.\n\nEven snapshots should work all the time. If not, it should be reported,\nnot posted to a web page.\n\n> Either way my question still needs to be answered. If there are no\n> errors, is regression.out and regression.diff deleted? I can see the\n> diff file being empty, but deleted?\n\nThe diff file always exists; if all tests passed, it's empty. The .out\nfile always contains a copy of what you see on the screen (i.e., the\nok/FAIL). 
Which one of these two you want depends on what you are trying\nto achieve, but the .out file is probably less necessary, because if there\nare failed tests, you will see in the diff file.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 27 Feb 2001 21:52:28 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: regression.out and regression.diff" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Either way my question still needs to be answered. If there are no\n>> errors, is regression.out and regression.diff deleted? I can see the\n>> diff file being empty, but deleted?\n\n> The diff file always exists; if all tests passed, it's empty.\n\nForgot your own code already, Peter?\n\nif [ -s \"$diff_file\" ]; then\n echo \"The differences that caused some tests to fail can be viewed in the\"\n echo \"file \\`$diff_file'. A copy of the test summary that you see\"\n echo \"above is saved in the file \\`$result_summary_file'.\"\n echo\nelse\n rm -f \"$diff_file\" \"$result_summary_file\"\nfi\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Feb 2001 17:14:42 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression.out and regression.diff " }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> The test output should come from 'gmake check', not the test against an\n> already installed server ('installcheck').\n>> \n>> It was a snapshot - no idea what kind of shape it was in and I don't\n>> recall how I did the check, but it may have been installcheck.\n\n> Make that \"must have been installcheck\" 'cuze I just reran it with\n> gmake check and it passed all but the ignored random test.\n\nI have no idea why Peter thinks 'make installcheck' should be less\nreliable than 'make check'. 
If installcheck fails for you, let's\nsee that too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Feb 2001 17:19:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: regression.out and regression.diff " }, { "msg_contents": "Tom Lane writes:\n\n> I have no idea why Peter thinks 'make installcheck' should be less\n> reliable than 'make check'. If installcheck fails for you, let's\n> see that too.\n\nIn the test run that Vince had posted to his web tool, the server process\napparently didn't have write permission to the source tree, so all the\ntests that did a COPY failed, plus all subsequent tests that depended on\nthose tables. Additionally, the installcheck is also prone to fail if\ntemplate1 was initialized with a different multibyte encoding, if there\nwas a different locale during initdb, or if there's something fishy in\npostgresql.conf. At least I wouldn't accept installcheck output as a\nfinal result before seeing 'check'.\n\nAdditionally, make check also tests 'make install' and 'initdb'\nrobustness, which installcheck doesn't do, so the former should be\npreferred as final test result.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 28 Feb 2001 16:57:26 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: regression.out and regression.diff " } ]
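The cleanup logic Tom quotes turns on the shell `-s` test, which is true only for a file that exists and is non-empty. Below is a standalone sketch of that same keep-or-delete decision; the file names and the `result` variable are invented for illustration, and this is not the actual regression driver:

```shell
#!/bin/sh
# Mimic the regression driver's cleanup: keep the diff file only when it
# is non-empty, i.e. at least one test produced unexpected output.
tmpdir=$(mktemp -d)
diff_file="$tmpdir/regression.diffs"
summary_file="$tmpdir/regression.out"

printf 'ok\n' > "$summary_file"   # the on-screen summary is always captured
: > "$diff_file"                  # zero-length diff == every test passed

if [ -s "$diff_file" ]; then
    result="kept"                 # failures: leave both files for inspection
else
    rm -f "$diff_file" "$summary_file"
    result="removed"
fi
echo "$result"

rm -rf "$tmpdir"
```

With an empty diff, both files are deleted and the script prints "removed"; writing anything into the diff before the test flips it to "kept".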
[ { "msg_contents": "vacuum analyze on pg_type fails if bogus entries remain in pg_operator.\nHere is a sample script to reproduce the problem.\n\ndrop table t1;\ncreate table t1(i int);\ndrop function foo(t1,t1);\ncreate function foo(t1,t1) returns bool as 'select true' language 'sql';\ncreate operator = (\n\tleftarg = t1,\n\trightarg = t1,\n\tcommutator = =,\n\tprocedure = foo\n\t);\ndrop table t1;\nvacuum analyze;\n\nTo fix the problem I propose following patches. Comments?\n--\nTatsuo Ishii\n\n*** parse_coerce.c.orig\tSat Feb 3 20:07:53 2001\n--- parse_coerce.c\tTue Feb 27 11:33:01 2001\n***************\n*** 190,195 ****\n--- 190,201 ----\n \t\tOid\t\t\tinputTypeId = input_typeids[i];\n \t\tOid\t\t\ttargetTypeId = func_typeids[i];\n \n+ \t\tif (typeidIsValid(inputTypeId) == false)\n+ \t\t return(false);\n+ \n+ \t\tif (typeidIsValid(targetTypeId) == false)\n+ \t\t return(false);\n+ \n \t\t/* no problem if same type */\n \t\tif (inputTypeId == targetTypeId)\n \t\t\tcontinue;\n", "msg_date": "Tue, 27 Feb 2001 12:41:12 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "vacuum analyze fails: ERROR: Unable to locate type oid 2230924 in\n\tcatalog" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> *** parse_coerce.c.orig\tSat Feb 3 20:07:53 2001\n> --- parse_coerce.c\tTue Feb 27 11:33:01 2001\n> ***************\n> *** 190,195 ****\n> --- 190,201 ----\n> \t\tOid\t\t\tinputTypeId = input_typeids[i];\n> \t\tOid\t\t\ttargetTypeId = func_typeids[i];\n \n> + \t\tif (typeidIsValid(inputTypeId) == false)\n> + \t\t return(false);\n> + \n> + \t\tif (typeidIsValid(targetTypeId) == false)\n> + \t\t return(false);\n> + \n> \t\t/* no problem if same type */\n> \t\tif (inputTypeId == targetTypeId)\n> \t\t\tcontinue;\n\nI'd suggest not arbitrarily erroring out when there is no need for\na conversion, and not doing the cache lookup implied by typeidIsValid\nwhen it's not necessary to touch the type at all. 
Hence, I'd recommend\nmoving this down a few lines. Also, conform to the surrounding coding\nstyle and add a comment:\n\n\t\t/* don't know what to do for the input type? then quit... */\n\t\tif (inputTypeId == InvalidOid)\n\t\t\treturn false;\n\n+\t\t/* don't choke on references to no-longer-existing types */\n+ \t\tif (!typeidIsValid(inputTypeId))\n+ \t\t return false;\n+ \n+ \t\tif (!typeidIsValid(targetTypeId))\n+ \t\t return false;\n\n\t\t/*\n\t\t * If input is an untyped string constant, assume we can convert\n\t\t * it to anything except a class type.\n\t\t */\n\n\nBTW, is this sufficient to prevent the VACUUM failure, or are there more\nproblems downstream?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Feb 2001 22:59:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze fails: ERROR: Unable to locate type oid 2230924 in\n\tcatalog" }, { "msg_contents": "This looks very safe and I believe should be applied.\n\n> vacuum analyze on pg_type fails if bogus entries remain in pg_operator.\n> Here is a sample script to reproduce the problem.\n> \n> drop table t1;\n> create table t1(i int);\n> drop function foo(t1,t1);\n> create function foo(t1,t1) returns bool as 'select true' language 'sql';\n> create operator = (\n> \tleftarg = t1,\n> \trightarg = t1,\n> \tcommutator = =,\n> \tprocedure = foo\n> \t);\n> drop table t1;\n> vacuum analyze;\n> \n> To fix the problem I propose following patches. 
Comments?\n> --\n> Tatsuo Ishii\n> \n> *** parse_coerce.c.orig\tSat Feb 3 20:07:53 2001\n> --- parse_coerce.c\tTue Feb 27 11:33:01 2001\n> ***************\n> *** 190,195 ****\n> --- 190,201 ----\n> \t\tOid\t\t\tinputTypeId = input_typeids[i];\n> \t\tOid\t\t\ttargetTypeId = func_typeids[i];\n> \n> + \t\tif (typeidIsValid(inputTypeId) == false)\n> + \t\t return(false);\n> + \n> + \t\tif (typeidIsValid(targetTypeId) == false)\n> + \t\t return(false);\n> + \n> \t\t/* no problem if same type */\n> \t\tif (inputTypeId == targetTypeId)\n> \t\t\tcontinue;\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Feb 2001 23:03:18 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze fails: ERROR: Unable to locate type oid\n\t2230924 in catalog" }, { "msg_contents": "> I'd suggest not arbitrarily erroring out when there is no need for\n> a conversion, and not doing the cache lookup implied by typeidIsValid\n> when it's not necessary to touch the type at all. Hence, I'd recommend\n> moving this down a few lines. Also, conform to the surrounding coding\n> style and add a comment:\n\nThanks for the advice.\n\n> \t\t/* don't know what to do for the input type? then quit... */\n> \t\tif (inputTypeId == InvalidOid)\n> \t\t\treturn false;\n> \n> +\t\t/* don't choke on references to no-longer-existing types */\n> + \t\tif (!typeidIsValid(inputTypeId))\n> + \t\t return false;\n> + \n> + \t\tif (!typeidIsValid(targetTypeId))\n> + \t\t return false;\n\nI thought \"typeidIsValid(targetTypeId) == false\" is better than\n\"!typeidIsValid(targetTypeId)\"?\n\n> BTW, is this sufficient to prevent the VACUUM failure, or are there more\n> problems downstream?\n\nThe patches fix the particular case. 
However I'm not sure there is no\nlurking problem.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 27 Feb 2001 13:19:13 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: vacuum analyze fails: ERROR: Unable to locate type\n\toid 2230924 in catalog" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I thought \"typeidIsValid(targetTypeId) == false\" is better than\n> \"!typeidIsValid(targetTypeId)\"?\n\nI've always thought that \"== true\" and \"== false\" on something that's\nalready a boolean are not good style. It's a matter of taste I suppose.\nBut note that the existing calls on typeidIsValid are coded that way...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Feb 2001 23:39:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: vacuum analyze fails: ERROR: Unable to locate type oid 2230924 in\n\tcatalog" }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I thought \"typeidIsValid(targetTypeId) == false\" is better than\n> > \"!typeidIsValid(targetTypeId)\"?\n> \n> I've always thought that \"== true\" and \"== false\" on something that's\n> already a boolean are not good style. It's a matter of taste I suppose.\n> But note that the existing calls on typeidIsValid are coded that way...\n> \n> \t\t\tregards, tom lane\n\nI see.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 27 Feb 2001 13:43:24 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: vacuum analyze fails: ERROR: Unable to locate type\n\toid 2230924 in catalog" }, { "msg_contents": "I have comitted the fix to parser/parse_coarse.c. 
My previous patches\nwas not correct and I changed the checking right after\ntypeInheritsFrom.\n--\nTatsuo Ishii\n\n> vacuum analyze on pg_type fails if bogus entries remain in pg_operator.\n> Here is a sample script to reproduce the problem.\n> \n> drop table t1;\n> create table t1(i int);\n> drop function foo(t1,t1);\n> create function foo(t1,t1) returns bool as 'select true' language 'sql';\n> create operator = (\n> \tleftarg = t1,\n> \trightarg = t1,\n> \tcommutator = =,\n> \tprocedure = foo\n> \t);\n> drop table t1;\n> vacuum analyze;\n", "msg_date": "Tue, 27 Feb 2001 16:10:16 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: vacuum analyze fails: ERROR: Unable to locate type\n\toid 2230924 in catalog" } ]
[ { "msg_contents": "\n> > I agree that 30k looks like the magic delay, and probably 30/5 would be a\n> > good conservative choice. But now I think about the choice of number, I\n> > think it must vary with the speed of the machine and length of the\n> > transactions; at 20tps, each TX is completing in around 50ms.\n\nI think disk speed should probably be the main factor.\nAfter the first run 30k/5 also seemed the best here, but running the test\nagain shows, that the results are only reproducible after a new initdb.\nAnybody else see reproducible results without previous initdb ?\n\nOne thing I noticed is, that WAL_FILES needs to be at least 4, because\none run fills up to 3 logfiles, and we don't want to measure WAL formating.\n\nAndreas\n", "msg_date": "Tue, 27 Feb 2001 10:56:07 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: CommitDelay performance improvement " } ]
[ { "msg_contents": "At 10:56 27/02/01 +0100, Zeugswetter Andreas SB wrote:\n>\n>> > I agree that 30k looks like the magic delay, and probably 30/5 would be a\n>> > good conservative choice. But now I think about the choice of number, I\n>> > think it must vary with the speed of the machine and length of the\n>> > transactions; at 20tps, each TX is completing in around 50ms.\n>\n>I think disk speed should probably be the main factor.\n>After the first run 30k/5 also seemed the best here, but running the test\n>again shows, that the results are only reproducible after a new initdb.\n>Anybody else see reproducible results without previous initdb ?\n\n\nI think we want something that reflects the chance of a time-saving as a\nresult of a wait, which is why I suggested having each backend monitor\ncommits/sec, then basing the delay on some % of that number. eg. if\ncommits/sec = 1, then it's either low-load, or long tx's, in either case\nCommitDelay won't help. Similarly, if we have 1000 commits/sec, then we\nhave a very fast system and/or disk, and CommitDelay of 10ms is clearly\nglacial. \n\nAFAICS, dynamically monitoring commits/sec (or a similar statistic) is\nTOWTG, but in all cases we need to set a max on CommitDelay to prevent\nindividual TXs getting too long (although I am unsure if the latter is\n*really* necessary, it is far better to be safe).\n\nNote: commits/sec need to be kept for each backend so we can remove the\ncontribution of the backend that is considering waiting.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 27 Feb 2001 21:18:09 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "Re: AW: CommitDelay performance improvement " } ]
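Philip's scheme, deriving the delay from an observed commits/sec rate and capping it, can be sketched as below. The constants are illustrative only (the divisor 5 echoes the 30k/5 runs discussed earlier in the thread); this is not the backend's actual CommitDelay code:

```python
def commit_delay_us(commits_per_sec: float, max_delay_us: int = 10_000) -> int:
    """Pick a commit delay proportional to recent commit traffic.

    At ~1 commit/sec there is nothing to group-commit with (low load or
    long transactions), so don't wait at all.  At high rates, wait a
    fraction of the mean inter-commit gap, capped so no single transaction
    is ever stretched out by more than max_delay_us.
    """
    if commits_per_sec <= 1.0:
        return 0
    mean_gap_us = 1_000_000.0 / commits_per_sec
    # wait one fifth of the expected gap to the next commit
    delay = int(mean_gap_us / 5)
    return min(delay, max_delay_us)
```

At 1000 commits/sec this yields a 200 microsecond wait, while at 5 commits/sec the raw figure (40 ms) is clamped to the 10 ms cap, matching the thread's point that the delay must never dominate transaction length.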
[ { "msg_contents": "Hi,\n\nI have an application which has an queue of data it has to insert into\na table in a local database. the insert-queries syntax is all the same,\nand the values are the only thing that differs. The insert-query looks\nlike this:\n\n INSERT INTO \"table\" VALUES(a, b, c, d, e, f, g, h)\n\n...but I cannot insert more than 200/sec, and that is much too slow for\nme. Are there ways to precompile a sqlquery or do other tricks to get the\n*fastest* insertion-rate, since the data-queue is growing faster than\n200/sec... I don't care about integrity etc!\n\nI'm using PostgreSQL 7.0.3, RH 6.2 Linux 2.2.4, and the pq library with\ngcc.\n\n\nRegards,\n\nSteffen E. Thorkildsen\n\n(PS! Please reply to my e-mail aswell.)\n\n", "msg_date": "Tue, 27 Feb 2001 13:25:07 +0100 (MET)", "msg_from": "Steffen Emil Thorkildsen <steffent@ifi.uio.no>", "msg_from_op": true, "msg_subject": "Query precompilation?" }, { "msg_contents": "On Tue, 27 Feb 2001, you wrote:\n> Hi,\n> \n> I have an application which has an queue of data it has to insert into\n> a table in a local database. the insert-queries syntax is all the same,\n> and the values are the only thing that differs. The insert-query looks\n> like this:\n> \n> INSERT INTO \"table\" VALUES(a, b, c, d, e, f, g, h)\n> \n> ...but I cannot insert more than 200/sec, and that is much too slow for\n> mme. Are there ways to precompile a sqlquery or do other tricks to get the\n> *fastest* insertion-rate, since the data-queue is growing faster than\n> 200/sec... \n\n> I don't care about integrity etc!\n\nYou should !-)\n\nYou can find some valueable tips in the documentation: \nhttp://www.de.postgresql.org/users-lounge/docs/7.0/user/c4929.htm\n\n> I'm using PostgreSQL 7.0.3, RH 6.2 Linux 2.2.4, and the pq library with\n> gcc.\n> \n> \n> Regards,\n> \n> Steffen E. Thorkildsen\n> \n> (PS! 
Please reply to my e-mail aswell.)\n", "msg_date": "Tue, 27 Feb 2001 14:11:04 +0100", "msg_from": "Robert Schrem <Robert.Schrem@WiredMinds.de>", "msg_from_op": false, "msg_subject": "Re: Query precompilation?" }, { "msg_contents": "Steffen Emil Thorkildsen <steffent@ifi.uio.no> writes:\n> I have an application which has an queue of data it has to insert into\n> a table in a local database. the insert-queries syntax is all the same,\n> and the values are the only thing that differs. The insert-query looks\n> like this:\n\n> INSERT INTO \"table\" VALUES(a, b, c, d, e, f, g, h)\n\n> ...but I cannot insert more than 200/sec, and that is much too slow for\n> me.\n\nConsider using COPY FROM STDIN instead ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Feb 2001 10:56:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Query precompilation? " }, { "msg_contents": "Steffen Emil Thorkildsen <steffent@ifi.uio.no> writes:\n\n> me. Are there ways to precompile a sqlquery or do other tricks to get the\n> *fastest* insertion-rate, since the data-queue is growing faster than\n> 200/sec... I don't care about integrity etc!\n> \n> I'm using PostgreSQL 7.0.3, RH 6.2 Linux 2.2.4, and the pq library with\n> gcc.\n> \n\nApart from the COPY mentioned by Tom Lane, you should also fo through the\nobvious checklist: use -F to disable fsync, drop indexes(if possible), use\nseveral connections(could help if you have multiprossessor system)\n", "msg_date": "27 Feb 2001 17:11:16 +0100", "msg_from": "Gunnar R|nning <gunnar@candleweb.no>", "msg_from_op": false, "msg_subject": "Re: Query precompilation?" 
}, { "msg_contents": "(...)\n> >\n> > I don't care about integrity etc!\n>\n> You should !-)\n>\n> You can find some valueable tips in the documentation:\n> http://www.de.postgresql.org/users-lounge/docs/7.0/user/c4929.htm\n>\n\nIn the docs there is this paragraph:\n>Disable Auto-commit\n>\n> Turn off auto-commit and just do one commit at the end. Otherwise Postgres \n>is doing a lot of work for each record added. In general when you are doing \n>bulk inserts, you want to turn off some of the database features to gain \n>speed. \n\nThis sounds nice, but I've read a lot of postgres documents and still do not \nknow how to disable autocommit. Is this possible? And how?\n\nMario Weilguni\n\n-- \n===================================================\n Mario Weilguni � � � � � � � � KPNQwest Austria GmbH\n�Senior Engineer Web Solutions Nikolaiplatz 4\n�tel: +43-316-813824 � � � � 8020 graz, austria\n�fax: +43-316-813824-26 � � � http://www.kpnqwest.at\n�e-mail: mario.weilguni@kpnqwest.com\n===================================================\n", "msg_date": "Tue, 27 Feb 2001 19:39:43 +0100", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: Re: Query precompilation?" }, { "msg_contents": "Mario,\n\n> This sounds nice, but I've read a lot of postgres documents and still do not\n> know how to disable autocommit. Is this possible? And how?\n\nYes, you can disable autocommit. All you have to do is wrap your SQL\nstatements within an explicit BEGIN ... COMMIT block.\n\nRegards, Joe\n\n--\nJoe Mitchell joe.mitchell@greatbridge.com\nKnowledge Engineer 757.233.5567 voice\nGreat Bridge, LLC 757.233.5555 fax\nwww.greatbridge.com\n\n\n\n\nMario,\nThis sounds nice, but I've read a lot of postgres\ndocuments and still do not\nknow how to disable autocommit. Is this possible? And how?\nYes, you can disable autocommit.  All you have to do is wrap your\nSQL statements within an explicit BEGIN ... 
COMMIT block.\nRegards, Joe\n-- \nJoe Mitchell                      joe.mitchell@greatbridge.com\nKnowledge Engineer                757.233.5567 voice\nGreat Bridge, LLC                 757.233.5555 fax\nwww.greatbridge.com", "msg_date": "Tue, 27 Feb 2001 16:29:08 -0500", "msg_from": "jmitchell@greatbridge.com", "msg_from_op": false, "msg_subject": "Re: Query precompilation?" }, { "msg_contents": "I'm sorry Joe but I must know.. What exactly does a \"Knowledge Engineer\" do?\nI've never run into a person with that title before.. Perhaps it's because I\nlive in my office but I'm still curious..\n\nThanks!\n\n-Mitch\n\n----- Original Message -----\nFrom: <jmitchell@greatbridge.com>\nTo: <mweilguni@sime.com>; <pgsql-general@postgresql.org>\nSent: Tuesday, February 27, 2001 4:29 PM\nSubject: Re: Query precompilation?\n\n\n> Mario,\n>\n> > This sounds nice, but I've read a lot of postgres documents and still do\nnot\n> > know how to disable autocommit. Is this possible? And how?\n>\n> Yes, you can disable autocommit. All you have to do is wrap your SQL\n> statements within an explicit BEGIN ... COMMIT block.\n>\n> Regards, Joe\n>\n> --\n> Joe Mitchell joe.mitchell@greatbridge.com\n> Knowledge Engineer 757.233.5567 voice\n> Great Bridge, LLC 757.233.5555 fax\n> www.greatbridge.com\n>\n>\n>\n\n", "msg_date": "Tue, 27 Feb 2001 16:36:28 -0500", "msg_from": "\"Mitch Vincent\" <mitch@venux.net>", "msg_from_op": false, "msg_subject": "Re: Query precompilation? - Off topic" }, { "msg_contents": "Hi!\n\nThanks for the answer, but that's not disabling autocommit, it committing by \nhand. What I mean ist Oracle-behaviour --> everthing is a transaction and \nmust be commited by \"COMMIT\". What I ment was something like \"SET autocommit \nto OFF\" or something like this.\n\nAnyway, thanks for your answer, now I know it's not possible.\n\nCiao,\n�������� Mario\n\nAm Dienstag, 27. 
Februar 2001 22:29 schrieben Sie:\n> Mario,\n>\n> > This sounds nice, but I've read a lot of postgres documents and still do\n> > not know how to disable autocommit. Is this possible? And how?\n>\n> Yes, you can disable autocommit. All you have to do is wrap your SQL\n> statements within an explicit BEGIN ... COMMIT block.\n>\n> Regards, Joe\n>\n> --\n> Joe Mitchell joe.mitchell@greatbridge.com\n> Knowledge Engineer 757.233.5567 voice\n> Great Bridge, LLC 757.233.5555 fax\n> www.greatbridge.com\n\n-- \n===================================================\n Mario Weilguni                 KPNQwest Austria GmbH\n Senior Engineer Web Solutions  Nikolaiplatz 4\n tel: +43-316-813824            8020 graz, austria\n fax: +43-316-813824-26         http://www.kpnqwest.at\n e-mail: mario.weilguni@kpnqwest.com\n===================================================\n", "msg_date": "Tue, 27 Feb 2001 22:37:46 +0100", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: Re: Query precompilation?" },
{ "msg_contents": "On Tue, 27 Feb 2001, Mario Weilguni wrote:\n\n> Thanks for the answer, but that's not disabling autocommit, it committing by \n> hand. What I mean ist Oracle-behaviour --> everthing is a transaction and \n> must be commited by \"COMMIT\". What I ment was something like \"SET autocommit \n> to OFF\" or something like this.\n\nEverything _is_ a transaction - the BEGIN ... COMMIT is implied, if you\ndon't wrap your SQL statements in BEGIN ... 
COMMIT.\n\nCompare:\n\ndominic=# INSERT INTO pages ( page_from, page_to, page_data ) VALUES ( 'Dominic', '555-1212', 'This is a test page');\nINSERT 945129 1\n\n[ This was one transaction ]\ndominic=# SELECT count(*) FROM pages;\n count \n-------\n 1\n(1 row)\n\n[ This was the second transaction ]\n\n... for a total of two transactions, as opposed to:\n\ndominic=# BEGIN;\nBEGIN\ndominic=# INSERT INTO pages ( page_from, page_to, page_data ) VALUES ( 'Dominic', '555-1212', 'Test page number two.' );\nINSERT 945130 1\ndominic=# SELECT count(*) FROM pages;\n count \n-------\n 2\n(1 row)\n\ndominic=# COMMIT;\nCOMMIT\n\n[ This was just _one_ transaction ]\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n\n", "msg_date": "Tue, 27 Feb 2001 15:56:47 -0600 (CST)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": false, "msg_subject": "Re: Re: Query precompilation?" }, { "msg_contents": "Hi Mitch,\n\nPMFJI ... Knowledge Engineering is where Great Bridge touches its \ncustomers- it encompasses engineering support, consulting, and \ntraining. Joe and his colleagues support our paying customers and also \ndo some development and documentation work. And of course, they're on \nthe project mailing lists like everyone else :-)\n\n<propaganda>\nWe made up the name to highlight what we think is important in an open \nsource company - real engineers, who deal in knowledge of the software. \nSo it's not some entry-level operator taking a customer's call, it's \nsomeone who's a trained user of PostgreSQL himself. Our competitive \nadvantage as a company can't lie in anything like proprietary software \nproducts- it has to be in the people we hire and retain.\n</propaganda>\n\nRegards,\nNed\n\n\nMitch Vincent wrote:\n\n> I'm sorry Joe but I must know.. 
What exactly does a \"Knowledge Engineer\" do?\n> I've never run into a person with that title before.. Perhaps it's because I\n> live in my office but I'm still curious..\n> \n> Thanks!\n> \n> -Mitch\n\n\n-- \n----------------------------------------------------\nNed Lilly e: ned@greatbridge.com\nVice President w: www.greatbridge.com\nEvangelism / Hacker Relations v: 757.233.5523\nGreat Bridge, LLC f: 757.233.5555\n\n", "msg_date": "Tue, 27 Feb 2001 17:20:10 -0500", "msg_from": "Ned Lilly <ned@greatbridge.com>", "msg_from_op": false, "msg_subject": "Re: Re: Query precompilation? - Off topic" }, { "msg_contents": "Mario Weilguni wrote:\n> \n> (...)\n> > >\n> > > I don't care about integrity etc!\n> >\n> > You should !-)\n> >\n> > You can find some valueable tips in the documentation:\n> > http://www.de.postgresql.org/users-lounge/docs/7.0/user/c4929.htm\n> >\n> \n> In the docs there is this paragraph:\n> >Disable Auto-commit\n> >\n> > Turn off auto-commit and just do one commit at the end. Otherwise Postgres\n> >is doing a lot of work for each record added. In general when you are doing\n> >bulk inserts, you want to turn off some of the database features to gain\n> >speed.\n> \n> This sounds nice, but I've read a lot of postgres documents and still do not\n> know how to disable autocommit. Is this possible? And how?\n\nAt the moment, use a BEGIN/COMMIT block around a set of insert\nstatements. Someday we'll likely have an explicit command to affect the\nbehavior.\n\n - Thomas\n", "msg_date": "Wed, 28 Feb 2001 03:18:20 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Query precompilation?" } ]
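The two concrete speedups the thread converges on, one explicit transaction around the whole batch and COPY instead of row-at-a-time INSERT, look like this in SQL (the `queue_data` table and its rows are invented for illustration):

```sql
-- One explicit transaction: a single commit for the whole batch instead
-- of one implicit commit per INSERT.
BEGIN;
INSERT INTO queue_data VALUES (1, 'a');
INSERT INTO queue_data VALUES (2, 'b');
-- ... many more rows ...
COMMIT;

-- Faster still: COPY loads rows without per-statement parse/plan overhead.
-- Data lines are tab-separated; a lone \. ends the input.
COPY queue_data FROM STDIN;
3	c
4	d
\.
```

Dropping indexes before the bulk load and recreating them afterwards, as suggested above, combines well with either approach.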
[ { "msg_contents": "Hi all,\n\nI've been trying to play with beta5 today on unixware 711. I have 2\nproblems:\n\n1) enabling --with-tcl yields to link errors on bin/pgtclsh and\ninterfaces/pl/tcl because Makefile insists on linking with libtcl7.6.0\ninstead on libtcl7.6\n\n2) enabling --with-openssl causes a compilation error on\nsrc/backend/libpq/crypt.c because of multiply defined symbol des_encrypt\n\nRegards,\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Tue, 27 Feb 2001 20:14:19 +0100 (MET)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "beta5 and unixware 711" }, { "msg_contents": "* Olivier PRENANT <ohp@pyrenet.fr> [010227 13:30]:\n> Hi all,\n> \n> I've been trying to play with beta5 today on unixware 711. I have 2\n> problems:\n> \n> 1) enabling --with-tcl yields to link errors on bin/pgtclsh and\n> interfaces/pl/tcl because Makefile insists on linking with libtcl7.6.0\n> instead on libtcl7.6\nWith the Skunkware packages installed, it's fine here.\n> \n> 2) enabling --with-openssl causes a compilation error on\n> src/backend/libpq/crypt.c because of multiply defined symbol des_encrypt\nThis is an OpenSSL error. We need to get THEIR attention....\n\nLER\n\n> \n> Regards,\n> \n> -- \n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: ohp@pyrenet.fr\n> ------------------------------------------------------------------------------\n> Make your life a dream, make your dream a reality. 
(St Exupery)\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 27 Feb 2001 14:27:31 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: beta5 and unixware 711" }, { "msg_contents": "On Tue, 27 Feb 2001, Larry Rosenman wrote:\n\n> * Olivier PRENANT <ohp@pyrenet.fr> [010227 13:30]:\n> > Hi all,\n> > \n> > I've been trying to play with beta5 today on unixware 711. I have 2\n> > problems:\n> > \n> > 1) enabling --with-tcl yields to link errors on bin/pgtclsh and\n> > interfaces/pl/tcl because Makefile insists on linking with libtcl7.6.0\n> > instead on libtcl7.6\n> With the Skunkware packages installed, it's fine here.\nYou mean V802?? I had the same problems...\n> > \n> > 2) enabling --with-openssl causes a compilation error on\n> > src/backend/libpq/crypt.c because of multiply defined symbol des_encrypt\n> This is an OpenSSL error. We need to get THEIR attention....\n> \nIn the mean time, is there something I can do (apart from disbling ssl)?\n\nRegards,\n> LER\n> \n> > \n> > Regards,\n> > \n> > -- \n> > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> > FRANCE Email: ohp@pyrenet.fr\n> > ------------------------------------------------------------------------------\n> > Make your life a dream, make your dream a reality. (St Exupery)\n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. 
(St Exupery)\n\n", "msg_date": "Tue, 27 Feb 2001 22:00:38 +0100 (MET)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: beta5 and unixware 711" }, { "msg_contents": "* Olivier PRENANT <ohp@pyrenet.fr> [010227 15:00]:\n> On Tue, 27 Feb 2001, Larry Rosenman wrote:\n> \n> > * Olivier PRENANT <ohp@pyrenet.fr> [010227 13:30]:\n> > > Hi all,\n> > > \n> > > I've been trying to play with beta5 today on unixware 711. I have 2\n> > > problems:\n> > > \n> > > 1) enabling --with-tcl yields to link errors on bin/pgtclsh and\n> > > interfaces/pl/tcl because Makefile insists on linking with libtcl7.6.0\n> > > instead on libtcl7.6\n> > With the Skunkware packages installed, it's fine here.\n> You mean V802?? I had the same problems...\nlerami% ldd ~postgres/bin/pgtclsh\n/usr/local/pgsql/bin/pgtclsh needs:\n libpgtcl.so.2 => /usr/local/pgsql/lib/libpgtcl.so.2\n libpq.so.2 => /usr/local/pgsql/lib/libpq.so.2\n libtcl8.2.so => /usr/local/lib/libtcl8.2.so\n /usr/lib/libdl.so.1\n /usr/lib/libsocket.so.2\n /usr/lib/libm.so.1\n /usr/local/lib/libz.so.1\n /usr/lib/libresolv.so.2\n /usr/lib/libnsl.so.1\n /usr/local/lib/libreadline.so.3\n /usr/lib/libc.so.1\nlerami%\n\nHere is my config input:\n\nCC=cc CXX=CC ./configure --prefix=/usr/local/pgsql --enable-syslog \\\n\t--with-CXX --with-perl --enable-multibyte --enable-cassert \\\n\t--with-includes=/usr/local/include --with-libs=/usr/local/lib \\\n\t--with-tcl --with-tclconfig=/usr/local/lib \\\n\t--with-tkconfig=/usr/local/lib --enable-locale --with-python\n\n> > > \n> > > 2) enabling --with-openssl causes a compilation error on\n> > > src/backend/libpq/crypt.c because of multiply defined symbol des_encrypt\n> > This is an OpenSSL error. 
We need to get THEIR attention....\n> > \n> In the mean time, is there something I can do (apart from disbling ssl)?\nI would remove the des_encrypt from openSSL's des.h.\n\nLet me know if that works (I haven't tried).\n\nLER\n\n> \n> Regards,\n> > LER\n> > \n> > > \n> > > Regards,\n> > > \n> > > -- \n> > > Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> > > Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> > > 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> > > FRANCE Email: ohp@pyrenet.fr\n> > > ------------------------------------------------------------------------------\n> > > Make your life a dream, make your dream a reality. (St Exupery)\n> > \n> > \n> \n> -- \n> Olivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\n> Quartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n> 31190 AUTERIVE +33-6-07-63-80-64 (GSM)\n> FRANCE Email: ohp@pyrenet.fr\n> ------------------------------------------------------------------------------\n> Make your life a dream, make your dream a reality. (St Exupery)\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 27 Feb 2001 15:05:25 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: beta5 and unixware 711" } ]
[ { "msg_contents": "I have completed a database internals presentation. The PDF is at:\n\n\thttp://candle.pha.pa.us/main/writings/internals.pdf\n\nI am interested in any comments. I need to add text to it. FYI, you\nwill find a system catalog chart in the presentation.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Feb 2001 17:04:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Database Internals Presentation" }, { "msg_contents": "On Wednesday 28 February 2001 04:04, Bruce Momjian wrote:\n> I have completed a database internals presentation. The PDF is at:\n>\n> \thttp://candle.pha.pa.us/main/writings/internals.pdf\n>\n> I am interested in any comments. I need to add text to it. FYI, you\n> will find a system catalog chart in the presentation.\n\nWow! That's cool.\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Wed, 28 Feb 2001 14:17:53 +0600", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: Database Internals Presentation" } ]
[ { "msg_contents": "\n\nAny suggestion why postmaster/postgres would think it had been compiled\nwith BLCKSZ 0? :\n\nroot@blue:/usr/local/pgsql# su postgres -c \"bin/postmaster -D\n/usr/local/pgsql/data \"\nDEBUG: Data Base System is starting up at Tue Feb 27 22:31:51 2001\nFATAL 2: database was initialized with BLCKSZ 0,\n but the backend was compiled with BLCKSZ 8192.\n looks like you need to initdb.\nFATAL 2: database was initialized with BLCKSZ 0,\n but the backend was compiled with BLCKSZ 8192.\n looks like you need to initdb.\nStartup failed - abort\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Tue, 27 Feb 2001 21:50:43 -0600 (CST)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": true, "msg_subject": "BLCKSZ 0?" }, { "msg_contents": "\"Dominic J. Eidson\" <sauron@the-infinite.org> writes:\n> Any suggestion why postmaster/postgres would think it had been compiled\n> with BLCKSZ 0? :\n\n> root@blue:/usr/local/pgsql# su postgres -c \"bin/postmaster -D\n> /usr/local/pgsql/data \"\n> DEBUG: Data Base System is starting up at Tue Feb 27 22:31:51 2001\n> FATAL 2: database was initialized with BLCKSZ 0,\n> but the backend was compiled with BLCKSZ 8192.\n> looks like you need to initdb.\n\nRead that again --- it did *not* say it was compiled with BLCKSZ 0.\nIt said (or meant, anyway) it found zero in the pg_control field that\nindicates the BLCKSZ in use in the database. Something's broken with\nyour pg_control file ... care to give more details?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Feb 2001 23:46:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BLCKSZ 0? " }, { "msg_contents": "On Tue, 27 Feb 2001, Tom Lane wrote:\n\n> \"Dominic J. 
Eidson\" <sauron@the-infinite.org> writes:\n> \n> > root@blue:/usr/local/pgsql# su postgres -c \"bin/postmaster -D\n> > /usr/local/pgsql/data \"\n> > DEBUG: Data Base System is starting up at Tue Feb 27 22:31:51 2001\n> > FATAL 2: database was initialized with BLCKSZ 0,\n> > but the backend was compiled with BLCKSZ 8192.\n> > looks like you need to initdb.\n> \n> Read that again --- it did *not* say it was compiled with BLCKSZ 0.\n> It said (or meant, anyway) it found zero in the pg_control field that\n> indicates the BLCKSZ in use in the database. Something's broken with\n> your pg_control file ... care to give more details?\n\nAdmin installed PostgreSQL 7.0(.x?) - clueless coder comes along sometime\nlater, decides to install 6.5.3, because debian apt-get doesn't have\n7.0(.x) yet. Doesn't pg_dumpall, doesn't back up anything (not even just a\ntar of /usr/local/pgsql). Believing he lost all of his DB setup, admin\ncontacts me to try to get things back up and running. He claims apt-get\ninstalled the binaries in a different place (and assumes .deb package\ninstalls the data directory elsewhere as well.) So I tried to start up the\nDB using the (supposedly) old/original binaries, and the old data\ndirectory, which is when I get the above message(s).\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Wed, 28 Feb 2001 09:28:56 -0600 (CST)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] BLCKSZ 0? " }, { "msg_contents": "\"Dominic J. 
Eidson\" <sauron@the-infinite.org> writes:\n> DEBUG: Data Base System is starting up at Tue Feb 27 22:31:51 2001\n> FATAL 2: database was initialized with BLCKSZ 0,\n> but the backend was compiled with BLCKSZ 8192.\n> looks like you need to initdb.\n\n> So I tried to start up the\n> DB using the (supposedly) old/original binaries, and the old data\n> directory, which is when I get the above message(s).\n\nWell, those are very clearly 7.0 or later binaries, because 6.5 didn't\nhave any such crosscheck. Which is probably why the field's not set,\ntoo ... so your data directory is old but your executables aren't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Feb 2001 10:39:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] BLCKSZ 0? " } ]
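Tom's diagnosis in the thread above — "BLCKSZ 0" comes from a pg_control field the old data directory never wrote, not from a mis-compiled backend — boils down to a simple startup crosscheck. A toy sketch of that idea follows; the single-field "control file" here is invented for illustration and is not the real pg_control layout:

```python
import struct

COMPILED_BLCKSZ = 8192  # value baked into the "backend" at build time

def pack_control(blcksz):
    # The real pg_control holds many fields; this toy version stores
    # just one little-endian uint32 for the cluster's block size.
    return struct.pack("<I", blcksz)

def crosscheck(control_bytes):
    # Compare the block size recorded at initdb time against the value
    # this binary was compiled with, refusing to start on a mismatch.
    (stored,) = struct.unpack("<I", control_bytes[:4])
    if stored != COMPILED_BLCKSZ:
        # A data directory written by a release that predates the field
        # yields zeros here -- hence "initialized with BLCKSZ 0".
        raise RuntimeError(
            "database was initialized with BLCKSZ %d, "
            "but the backend was compiled with BLCKSZ %d"
            % (stored, COMPILED_BLCKSZ))
    return stored
```

Running newer binaries against a data directory from a release without the field trips exactly this kind of check, which is why the remedy is a dump with the matching old binaries followed by initdb, not a recompile.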
[ { "msg_contents": "hi, \ni have a relation with 3 int's and 1 text :\n\\d newstexts\n Table \"newstexts\"\n Attribute | Type | Modifier\n-----------+---------+-------------------------------------------------\n id | integer | not null default nextval('newstexts_seq'::text)\n news_id | integer | not null default ''\n class_id | integer | not null default ''\n text | text | not null default ''\n\ni also have a rather complicated trigger which does some trick *AFTER*\ninserting data into this table.\nwhen i try to insert something i get:\nERROR: newses_seq.nextval: bad magic (00000000)\n\ni know it's my fault. but what should i look for to trace the problem, and kill\nthe bug?\ni would really appreciate quick answers or suggestions (i'm running out of time\n...)\n\ndepesz\n\n-- \nhubert depesz lubaczewski http://www.depesz.pl/\n------------------------------------------------------------------------\n the most wonderful thing modern society has given us\n is the downright uncanny ease of avoiding contact with it ...\n", "msg_date": "Wed, 28 Feb 2001 14:54:45 +0100", "msg_from": "hubert depesz lubaczewski <depesz@depesz.pl>", "msg_from_op": true, "msg_subject": "strange error" }, { "msg_contents": "hubert depesz lubaczewski <depesz@depesz.pl> writes:\n> ERROR: newses_seq.nextval: bad magic (00000000)\n\nHmm, something bad has happened to your sequence object.\n\nIt would be interesting to try to figure out what caused that, but if\nyou're in a hurry, try dropping and recreating that sequence.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Feb 2001 10:18:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] strange error " } ]
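The "bad magic (00000000)" error in the thread above is an instance of a generic corruption-detection pattern: stored state is stamped with a fixed magic word, and any reader refuses to use it when the stamp no longer matches. A minimal sketch of that pattern — the constant and layout below are invented for illustration and are not PostgreSQL's actual sequence format:

```python
import struct

SEQ_MAGIC = 0x17171717  # hypothetical magic word, not PostgreSQL's real one

def pack_sequence(last_value):
    # Stamp the stored sequence state with the magic word.
    return struct.pack("<II", SEQ_MAGIC, last_value)

def nextval(raw):
    # Validate the stamp before trusting the rest of the bytes.
    magic, last = struct.unpack("<II", raw[:8])
    if magic != SEQ_MAGIC:
        # Zeroed or overwritten storage surfaces exactly like the
        # report in the thread: "bad magic (00000000)"
        raise ValueError("nextval: bad magic (%08x)" % magic)
    return last + 1
```

Once the stamp is gone the bytes can't be trusted at all, which is why Tom's practical advice is to drop and recreate the sequence rather than try to repair it in place.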
[ { "msg_contents": "Hi,\n\nI want to import from a .txt file, I usually use:\n\nCOPY noticies FROM '/home/teixi/_6tm_/_elbulli/premsai.txt' USING\nDELIMITERS '|' \\\\g\n\na) If table 'noticies' has a date column that is null in the data file it claims:\n\tUnable to import date field ''\n\nso how can I import null date fields?\n\nb) When a field in the data file contains some 'carriage returns', each one is\nunderstood as starting a new row:\n\n|row1data1|row1data2|row1data3|^M\n|row2data1^M\nstillrow2data1|row2data2|row2data2|^M\n\nis interpreted as a 3 row data file when in reality it is a 2 row data file.\n\nhow could I handle this when importing ?\n\n\n\n\nbests from barcelona,\njaume teixi\n\n\nps: weather loops between 5\u00b0C and 30\u00b0C, crazy.\n", "msg_date": "Wed, 28 Feb 2001 16:44:54 +0100", "msg_from": "Jaume Teixi <teixi@6tems.com>", "msg_from_op": true, "msg_subject": "Still some problems importing with COPY" } ]
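Both COPY problems above are usually handled by preprocessing the text file first: COPY's text format reads \N as NULL, and a line break inside a field has to arrive as the two characters backslash-n rather than a real newline. A rough sketch of such a preprocessor follows; the expected field count, the date-column index, and the plain `a|b|c` record shape (no leading/trailing delimiters) are assumptions to adapt to the real file:

```python
EXPECTED_FIELDS = 3  # assumption: fields per record in the real file
DATE_COL = 1         # assumption: zero-based index of the date column

def preprocess(raw_lines):
    """Join records split by stray line breaks and NULL-ify empty dates."""
    out, buf = [], ""
    for line in raw_lines:
        buf += line.rstrip("\r\n")
        if buf.count("|") >= EXPECTED_FIELDS - 1:
            # Enough delimiters seen: treat this as one complete record.
            fields = buf.split("|")
            if fields[DATE_COL] == "":
                fields[DATE_COL] = r"\N"  # COPY's default NULL marker
            out.append("|".join(fields))
            buf = ""
        else:
            # Record continues on the next physical line; keep the
            # break as an escaped \n so COPY sees a single row.
            buf += r"\n"
    return out
```

Note the delimiter-counting heuristic assumes field values never contain '|' themselves; data with leading and trailing delimiters, as in the sample rows above, would need the count adjusted.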
[ { "msg_contents": "Hi,\n\nTesting beta5 on unixware7 gives an error on int8 test while beta4 (I've\njust retested it) works ok regressions.diff follows:\n\nAlso, compiling with openssl give a compile error on\nsrc/backend/libpq/crypt.c; this error CAN be avoided by commenting out the\ndefinition of des_encrypt in /ur/local/ssl/include/openssl/des.h\n\nI'm not sure this is good practice though.\n\nAnyway, even after that, there are linking errors on libecpg.so and perl\nbecause of a lack of -L/usr/local/ssl/lib\n\nEasyly avoid by setting ad hoc LD_LIBRARY_PATH.\n\n *** ./expected/int8.out\tMon Jan 29 03:53:58 2001\n--- ./results/int8.out\tWed Feb 28 16:29:40 2001\n***************\n*** 5,111 ****\n CREATE TABLE INT8_TBL(q1 int8, q2 int8);\n INSERT INTO INT8_TBL VALUES('123','456');\n INSERT INTO INT8_TBL VALUES('123','4567890123456789');\n INSERT INTO INT8_TBL VALUES('4567890123456789','123');\n INSERT INTO INT8_TBL VALUES('4567890123456789','4567890123456789');\n INSERT INTO INT8_TBL VALUES('4567890123456789','-4567890123456789');\n SELECT * FROM INT8_TBL;\n q1 | q2 \n! ------------------+-------------------\n 123 | 456\n! 123 | 4567890123456789\n! 4567890123456789 | 123\n! 4567890123456789 | 4567890123456789\n! 4567890123456789 | -4567890123456789\n! (5 rows)\n \n SELECT '' AS five, q1 AS plus, -q1 AS minus FROM INT8_TBL;\n five | plus | minus \n! ------+------------------+-------------------\n | 123 | -123\n! | 123 | -123\n! | 4567890123456789 | -4567890123456789\n! | 4567890123456789 | -4567890123456789\n! | 4567890123456789 | -4567890123456789\n! (5 rows)\n \n SELECT '' AS five, q1, q2, q1 + q2 AS plus FROM INT8_TBL;\n five | q1 | q2 | plus \n! ------+------------------+-------------------+------------------\n | 123 | 456 | 579\n! | 123 | 4567890123456789 | 4567890123456912\n! | 4567890123456789 | 123 | 4567890123456912\n! | 4567890123456789 | 4567890123456789 | 9135780246913578\n! | 4567890123456789 | -4567890123456789 | 0\n! 
(5 rows)\n \n SELECT '' AS five, q1, q2, q1 - q2 AS minus FROM INT8_TBL;\n five | q1 | q2 | minus \n! ------+------------------+-------------------+-------------------\n | 123 | 456 | -333\n! | 123 | 4567890123456789 | -4567890123456666\n! | 4567890123456789 | 123 | 4567890123456666\n! | 4567890123456789 | 4567890123456789 | 0\n! | 4567890123456789 | -4567890123456789 | 9135780246913578\n! (5 rows)\n \n SELECT '' AS three, q1, q2, q1 * q2 AS multiply FROM INT8_TBL\n WHERE q1 < 1000 or (q2 > 0 and q2 < 1000);\n three | q1 | q2 | multiply \n! -------+------------------+------------------+--------------------\n | 123 | 456 | 56088\n! | 123 | 4567890123456789 | 561850485185185047\n! | 4567890123456789 | 123 | 561850485185185047\n! (3 rows)\n \n SELECT '' AS five, q1, q2, q1 / q2 AS divide FROM INT8_TBL;\n five | q1 | q2 | divide \n! ------+------------------+-------------------+----------------\n | 123 | 456 | 0\n! | 123 | 4567890123456789 | 0\n! | 4567890123456789 | 123 | 37137318076884\n! | 4567890123456789 | 4567890123456789 | 1\n! | 4567890123456789 | -4567890123456789 | -1\n! (5 rows)\n \n SELECT '' AS five, q1, float8(q1) FROM INT8_TBL;\n five | q1 | float8 \n! ------+------------------+----------------------\n | 123 | 123\n! | 123 | 123\n! | 4567890123456789 | 4.56789012345679e+15\n! | 4567890123456789 | 4.56789012345679e+15\n! | 4567890123456789 | 4.56789012345679e+15\n! (5 rows)\n \n SELECT '' AS five, q2, float8(q2) FROM INT8_TBL;\n five | q2 | float8 \n! ------+-------------------+-----------------------\n | 456 | 456\n! | 4567890123456789 | 4.56789012345679e+15\n! | 123 | 123\n! | 4567890123456789 | 4.56789012345679e+15\n! | -4567890123456789 | -4.56789012345679e+15\n! (5 rows)\n \n SELECT '' AS five, 2 * q1 AS \"twice int4\" FROM INT8_TBL;\n five | twice int4 \n! ------+------------------\n | 246\n! | 246\n! | 9135780246913578\n! | 9135780246913578\n! | 9135780246913578\n! 
(5 rows)\n \n SELECT '' AS five, q1 * 2 AS \"twice int4\" FROM INT8_TBL;\n five | twice int4 \n! ------+------------------\n | 246\n! | 246\n! | 9135780246913578\n! | 9135780246913578\n! | 9135780246913578\n! (5 rows)\n \n -- TO_CHAR()\n --\n--- 5,77 ----\n CREATE TABLE INT8_TBL(q1 int8, q2 int8);\n INSERT INTO INT8_TBL VALUES('123','456');\n INSERT INTO INT8_TBL VALUES('123','4567890123456789');\n+ ERROR: int8 value out of range: \"4567890123456789\"\n INSERT INTO INT8_TBL VALUES('4567890123456789','123');\n+ ERROR: int8 value out of range: \"4567890123456789\"\n INSERT INTO INT8_TBL VALUES('4567890123456789','4567890123456789');\n+ ERROR: int8 value out of range: \"4567890123456789\"\n INSERT INTO INT8_TBL VALUES('4567890123456789','-4567890123456789');\n+ ERROR: int8 value out of range: \"4567890123456789\"\n SELECT * FROM INT8_TBL;\n q1 | q2 \n! -----+-----\n 123 | 456\n! (1 row)\n \n SELECT '' AS five, q1 AS plus, -q1 AS minus FROM INT8_TBL;\n five | plus | minus \n! ------+------+-------\n | 123 | -123\n! (1 row)\n \n SELECT '' AS five, q1, q2, q1 + q2 AS plus FROM INT8_TBL;\n five | q1 | q2 | plus \n! ------+-----+-----+------\n | 123 | 456 | 579\n! (1 row)\n \n SELECT '' AS five, q1, q2, q1 - q2 AS minus FROM INT8_TBL;\n five | q1 | q2 | minus \n! ------+-----+-----+-------\n | 123 | 456 | -333\n! (1 row)\n \n SELECT '' AS three, q1, q2, q1 * q2 AS multiply FROM INT8_TBL\n WHERE q1 < 1000 or (q2 > 0 and q2 < 1000);\n three | q1 | q2 | multiply \n! -------+-----+-----+----------\n | 123 | 456 | 56088\n! (1 row)\n \n SELECT '' AS five, q1, q2, q1 / q2 AS divide FROM INT8_TBL;\n five | q1 | q2 | divide \n! ------+-----+-----+--------\n | 123 | 456 | 0\n! (1 row)\n \n SELECT '' AS five, q1, float8(q1) FROM INT8_TBL;\n five | q1 | float8 \n! ------+-----+--------\n | 123 | 123\n! (1 row)\n \n SELECT '' AS five, q2, float8(q2) FROM INT8_TBL;\n five | q2 | float8 \n! ------+-----+--------\n | 456 | 456\n! 
(1 row)\n \n SELECT '' AS five, 2 * q1 AS \"twice int4\" FROM INT8_TBL;\n five | twice int4 \n! ------+------------\n | 246\n! (1 row)\n \n SELECT '' AS five, q1 * 2 AS \"twice int4\" FROM INT8_TBL;\n five | twice int4 \n! ------+------------\n | 246\n! (1 row)\n \n -- TO_CHAR()\n --\n***************\n*** 114,124 ****\n to_char_1 | to_char | to_char \n -----------+------------------------+------------------------\n | 123 | 456\n! | 123 | 4,567,890,123,456,789\n! | 4,567,890,123,456,789 | 123\n! | 4,567,890,123,456,789 | 4,567,890,123,456,789\n! | 4,567,890,123,456,789 | -4,567,890,123,456,789\n! (5 rows)\n \n SELECT '' AS to_char_2, to_char(q1, '9G999G999G999G999G999D999G999'), to_char(q2, '9,999,999,999,999,999.999,999') \n \tFROM INT8_TBL;\t\n--- 80,86 ----\n to_char_1 | to_char | to_char \n -----------+------------------------+------------------------\n | 123 | 456\n! (1 row)\n \n SELECT '' AS to_char_2, to_char(q1, '9G999G999G999G999G999D999G999'), to_char(q2, '9,999,999,999,999,999.999,999') \n \tFROM INT8_TBL;\t\n***************\n*** 125,135 ****\n to_char_2 | to_char | to_char \n -----------+--------------------------------+--------------------------------\n | 123.000,000 | 456.000,000\n! | 123.000,000 | 4,567,890,123,456,789.000,000\n! | 4,567,890,123,456,789.000,000 | 123.000,000\n! | 4,567,890,123,456,789.000,000 | 4,567,890,123,456,789.000,000\n! | 4,567,890,123,456,789.000,000 | -4,567,890,123,456,789.000,000\n! (5 rows)\n \n SELECT '' AS to_char_3, to_char( (q1 * -1), '9999999999999999PR'), to_char( (q2 * -1), '9999999999999999.999PR') \n \tFROM INT8_TBL;\n--- 87,93 ----\n to_char_2 | to_char | to_char \n -----------+--------------------------------+--------------------------------\n | 123.000,000 | 456.000,000\n! 
(1 row)\n \n SELECT '' AS to_char_3, to_char( (q1 * -1), '9999999999999999PR'), to_char( (q2 * -1), '9999999999999999.999PR') \n \tFROM INT8_TBL;\n***************\n*** 136,146 ****\n to_char_3 | to_char | to_char \n -----------+--------------------+------------------------\n | <123> | <456.000>\n! | <123> | <4567890123456789.000>\n! | <4567890123456789> | <123.000>\n! | <4567890123456789> | <4567890123456789.000>\n! | <4567890123456789> | 4567890123456789.000\n! (5 rows)\n \n SELECT '' AS to_char_4, to_char( (q1 * -1), '9999999999999999S'), to_char( (q2 * -1), 'S9999999999999999') \n \tFROM INT8_TBL;\n--- 94,100 ----\n to_char_3 | to_char | to_char \n -----------+--------------------+------------------------\n | <123> | <456.000>\n! (1 row)\n \n SELECT '' AS to_char_4, to_char( (q1 * -1), '9999999999999999S'), to_char( (q2 * -1), 'S9999999999999999') \n \tFROM INT8_TBL;\n***************\n*** 147,285 ****\n to_char_4 | to_char | to_char \n -----------+-------------------+-------------------\n | 123- | -456\n! | 123- | -4567890123456789\n! | 4567890123456789- | -123\n! | 4567890123456789- | -4567890123456789\n! | 4567890123456789- | +4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_5, to_char(q2, 'MI9999999999999999') FROM INT8_TBL;\t\n to_char_5 | to_char \n -----------+--------------------\n | 456\n! | 4567890123456789\n! | 123\n! | 4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_6, to_char(q2, 'FMS9999999999999999') FROM INT8_TBL;\n to_char_6 | to_char \n! -----------+-------------------\n | +456\n! | +4567890123456789\n! | +123\n! | +4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_7, to_char(q2, 'FM9999999999999999THPR') FROM INT8_TBL;\n to_char_7 | to_char \n! -----------+--------------------\n | 456TH\n! | 4567890123456789TH\n! | 123RD\n! | 4567890123456789TH\n! | <4567890123456789>\n! 
(5 rows)\n \n SELECT '' AS to_char_8, to_char(q2, 'SG9999999999999999th') FROM INT8_TBL;\t\n to_char_8 | to_char \n -----------+---------------------\n | + 456th\n! | +4567890123456789th\n! | + 123rd\n! | +4567890123456789th\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_9, to_char(q2, '0999999999999999') FROM INT8_TBL;\t\n to_char_9 | to_char \n -----------+-------------------\n | 0000000000000456\n! | 4567890123456789\n! | 0000000000000123\n! | 4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_10, to_char(q2, 'S0999999999999999') FROM INT8_TBL;\t\n to_char_10 | to_char \n ------------+-------------------\n | +0000000000000456\n! | +4567890123456789\n! | +0000000000000123\n! | +4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_11, to_char(q2, 'FM0999999999999999') FROM INT8_TBL;\t\n to_char_11 | to_char \n! ------------+-------------------\n | 0000000000000456\n! | 4567890123456789\n! | 0000000000000123\n! | 4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_12, to_char(q2, 'FM9999999999999999.000') FROM INT8_TBL;\n to_char_12 | to_char \n! ------------+-----------------------\n | 456.000\n! | 4567890123456789.000\n! | 123.000\n! | 4567890123456789.000\n! | -4567890123456789.000\n! (5 rows)\n \n SELECT '' AS to_char_13, to_char(q2, 'L9999999999999999.000') FROM INT8_TBL;\t\n to_char_13 | to_char \n ------------+------------------------\n | 456.000\n! | 4567890123456789.000\n! | 123.000\n! | 4567890123456789.000\n! | -4567890123456789.000\n! (5 rows)\n \n SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n to_char_14 | to_char \n! ------------+-------------------\n | 456\n! | 4567890123456789\n! | 123\n! | 4567890123456789\n! | -4567890123456789\n! (5 rows)\n \n SELECT '' AS to_char_15, to_char(q2, 'S 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 . 
9 9 9') FROM INT8_TBL;\n to_char_15 | to_char \n ------------+-------------------------------------------\n | +4 5 6 . 0 0 0 \n! | + 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 . 0 0 0\n! | +1 2 3 . 0 0 0 \n! | + 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 . 0 0 0\n! | - 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 . 0 0 0\n! (5 rows)\n \n SELECT '' AS to_char_16, to_char(q2, '99999 \"text\" 9999 \"9999\" 999 \"\\\\\"text between quote marks\\\\\"\" 9999') FROM INT8_TBL;\n to_char_16 | to_char \n ------------+-----------------------------------------------------------\n | text 9999 \"text between quote marks\" 456\n! | 45678 text 9012 9999 345 \"text between quote marks\" 6789\n! | text 9999 \"text between quote marks\" 123\n! | 45678 text 9012 9999 345 \"text between quote marks\" 6789\n! | -45678 text 9012 9999 345 \"text between quote marks\" 6789\n! (5 rows)\n \n SELECT '' AS to_char_17, to_char(q2, '999999SG9999999999') FROM INT8_TBL;\n to_char_17 | to_char \n ------------+-------------------\n | + 456\n! | 456789+0123456789\n! | + 123\n! | 456789+0123456789\n! | 456789-0123456789\n! (5 rows)\n \n--- 101,183 ----\n to_char_4 | to_char | to_char \n -----------+-------------------+-------------------\n | 123- | -456\n! (1 row)\n \n SELECT '' AS to_char_5, to_char(q2, 'MI9999999999999999') FROM INT8_TBL;\t\n to_char_5 | to_char \n -----------+--------------------\n | 456\n! (1 row)\n \n SELECT '' AS to_char_6, to_char(q2, 'FMS9999999999999999') FROM INT8_TBL;\n to_char_6 | to_char \n! -----------+---------\n | +456\n! (1 row)\n \n SELECT '' AS to_char_7, to_char(q2, 'FM9999999999999999THPR') FROM INT8_TBL;\n to_char_7 | to_char \n! -----------+---------\n | 456TH\n! (1 row)\n \n SELECT '' AS to_char_8, to_char(q2, 'SG9999999999999999th') FROM INT8_TBL;\t\n to_char_8 | to_char \n -----------+---------------------\n | + 456th\n! (1 row)\n \n SELECT '' AS to_char_9, to_char(q2, '0999999999999999') FROM INT8_TBL;\t\n to_char_9 | to_char \n -----------+-------------------\n | 0000000000000456\n! 
(1 row)\n \n SELECT '' AS to_char_10, to_char(q2, 'S0999999999999999') FROM INT8_TBL;\t\n to_char_10 | to_char \n ------------+-------------------\n | +0000000000000456\n! (1 row)\n \n SELECT '' AS to_char_11, to_char(q2, 'FM0999999999999999') FROM INT8_TBL;\t\n to_char_11 | to_char \n! ------------+------------------\n | 0000000000000456\n! (1 row)\n \n SELECT '' AS to_char_12, to_char(q2, 'FM9999999999999999.000') FROM INT8_TBL;\n to_char_12 | to_char \n! ------------+---------\n | 456.000\n! (1 row)\n \n SELECT '' AS to_char_13, to_char(q2, 'L9999999999999999.000') FROM INT8_TBL;\t\n to_char_13 | to_char \n ------------+------------------------\n | 456.000\n! (1 row)\n \n SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n to_char_14 | to_char \n! ------------+---------\n | 456\n! (1 row)\n \n SELECT '' AS to_char_15, to_char(q2, 'S 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 . 9 9 9') FROM INT8_TBL;\n to_char_15 | to_char \n ------------+-------------------------------------------\n | +4 5 6 . 0 0 0 \n! (1 row)\n \n SELECT '' AS to_char_16, to_char(q2, '99999 \"text\" 9999 \"9999\" 999 \"\\\\\"text between quote marks\\\\\"\" 9999') FROM INT8_TBL;\n to_char_16 | to_char \n ------------+-----------------------------------------------------------\n | text 9999 \"text between quote marks\" 456\n! (1 row)\n \n SELECT '' AS to_char_17, to_char(q2, '999999SG9999999999') FROM INT8_TBL;\n to_char_17 | to_char \n ------------+-------------------\n | + 456\n! (1 row)\n \n\n======================================================================\n\n*** ./expected/subselect.out\tThu Mar 23 08:42:13 2000\n--- ./results/subselect.out\tWed Feb 28 16:35:21 2001\n***************\n*** 160,167 ****\n select q1, float8(count(*)) / (select count(*) from int8_tbl)\n from int8_tbl group by q1;\n q1 | ?column? \n! ------------------+----------\n! 123 | 0.4\n! 4567890123456789 | 0.6\n! 
(2 rows)\n \n--- 160,166 ----\n select q1, float8(count(*)) / (select count(*) from int8_tbl)\n from int8_tbl group by q1;\n q1 | ?column? \n! -----+----------\n! 123 | 1\n! (1 row)\n \n\n======================================================================\n\n*** ./expected/union.out\tThu Nov 9 03:47:49 2000\n--- ./results/union.out\tWed Feb 28 16:35:22 2001\n***************\n*** 259,318 ****\n --\n SELECT q2 FROM int8_tbl INTERSECT SELECT q1 FROM int8_tbl;\n q2 \n! ------------------\n! 123\n! 4567890123456789\n! (2 rows)\n \n SELECT q2 FROM int8_tbl INTERSECT ALL SELECT q1 FROM int8_tbl;\n q2 \n! ------------------\n! 123\n! 4567890123456789\n! 4567890123456789\n! (3 rows)\n \n SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl;\n q2 \n! -------------------\n! -4567890123456789\n 456\n! (2 rows)\n \n SELECT q2 FROM int8_tbl EXCEPT ALL SELECT q1 FROM int8_tbl;\n q2 \n! -------------------\n! -4567890123456789\n 456\n! (2 rows)\n \n SELECT q2 FROM int8_tbl EXCEPT ALL SELECT DISTINCT q1 FROM int8_tbl;\n q2 \n! -------------------\n! -4567890123456789\n 456\n! 4567890123456789\n! (3 rows)\n \n SELECT q1 FROM int8_tbl EXCEPT SELECT q2 FROM int8_tbl;\n q1 \n! ----\n! (0 rows)\n \n SELECT q1 FROM int8_tbl EXCEPT ALL SELECT q2 FROM int8_tbl;\n q1 \n! ------------------\n 123\n! 4567890123456789\n! (2 rows)\n \n SELECT q1 FROM int8_tbl EXCEPT ALL SELECT DISTINCT q2 FROM int8_tbl;\n q1 \n! ------------------\n 123\n! 4567890123456789\n! 4567890123456789\n! (3 rows)\n \n --\n -- Mixed types\n--- 259,307 ----\n --\n SELECT q2 FROM int8_tbl INTERSECT SELECT q1 FROM int8_tbl;\n q2 \n! ----\n! (0 rows)\n \n SELECT q2 FROM int8_tbl INTERSECT ALL SELECT q1 FROM int8_tbl;\n q2 \n! ----\n! (0 rows)\n \n SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl;\n q2 \n! -----\n 456\n! (1 row)\n \n SELECT q2 FROM int8_tbl EXCEPT ALL SELECT q1 FROM int8_tbl;\n q2 \n! -----\n 456\n! (1 row)\n \n SELECT q2 FROM int8_tbl EXCEPT ALL SELECT DISTINCT q1 FROM int8_tbl;\n q2 \n! 
-----\n 456\n! (1 row)\n \n SELECT q1 FROM int8_tbl EXCEPT SELECT q2 FROM int8_tbl;\n q1 \n! -----\n! 123\n! (1 row)\n \n SELECT q1 FROM int8_tbl EXCEPT ALL SELECT q2 FROM int8_tbl;\n q1 \n! -----\n 123\n! (1 row)\n \n SELECT q1 FROM int8_tbl EXCEPT ALL SELECT DISTINCT q2 FROM int8_tbl;\n q1 \n! -----\n 123\n! (1 row)\n \n --\n -- Mixed types\n***************\n*** 337,396 ****\n --\n SELECT q1 FROM int8_tbl INTERSECT SELECT q2 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl;\n q1 \n! -------------------\n! 123\n! 4567890123456789\n 456\n! 4567890123456789\n! 123\n! 4567890123456789\n! -4567890123456789\n! (7 rows)\n \n SELECT q1 FROM int8_tbl INTERSECT (((SELECT q2 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl)));\n q1 \n! ------------------\n! 123\n! 4567890123456789\n! (2 rows)\n \n (((SELECT q1 FROM int8_tbl INTERSECT SELECT q2 FROM int8_tbl))) UNION ALL SELECT q2 FROM int8_tbl;\n q1 \n! -------------------\n! 123\n! 4567890123456789\n 456\n! 4567890123456789\n! 123\n! 4567890123456789\n! -4567890123456789\n! (7 rows)\n \n SELECT q1 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl;\n q1 \n! -------------------\n! -4567890123456789\n 456\n! (2 rows)\n \n SELECT q1 FROM int8_tbl UNION ALL (((SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl)));\n q1 \n! -------------------\n 123\n- 123\n- 4567890123456789\n- 4567890123456789\n- 4567890123456789\n- -4567890123456789\n 456\n! (7 rows)\n \n (((SELECT q1 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl))) EXCEPT SELECT q1 FROM int8_tbl;\n q1 \n! -------------------\n! -4567890123456789\n 456\n! (2 rows)\n \n --\n -- Subqueries with ORDER BY & LIMIT clauses\n--- 326,364 ----\n --\n SELECT q1 FROM int8_tbl INTERSECT SELECT q2 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl;\n q1 \n! -----\n 456\n! (1 row)\n \n SELECT q1 FROM int8_tbl INTERSECT (((SELECT q2 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl)));\n q1 \n! ----\n! 
(0 rows)\n \n (((SELECT q1 FROM int8_tbl INTERSECT SELECT q2 FROM int8_tbl))) UNION ALL SELECT q2 FROM int8_tbl;\n q1 \n! -----\n 456\n! (1 row)\n \n SELECT q1 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl;\n q1 \n! -----\n 456\n! (1 row)\n \n SELECT q1 FROM int8_tbl UNION ALL (((SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl)));\n q1 \n! -----\n 123\n 456\n! (2 rows)\n \n (((SELECT q1 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl))) EXCEPT SELECT q1 FROM int8_tbl;\n q1 \n! -----\n 456\n! (1 row)\n \n --\n -- Subqueries with ORDER BY & LIMIT clauses\n***************\n*** 399,408 ****\n SELECT q1,q2 FROM int8_tbl EXCEPT SELECT q2,q1 FROM int8_tbl\n ORDER BY q2,q1;\n q1 | q2 \n! ------------------+-------------------\n! 4567890123456789 | -4567890123456789\n 123 | 456\n! (2 rows)\n \n -- This should fail, because q2 isn't a name of an EXCEPT output column\n SELECT q1 FROM int8_tbl EXCEPT SELECT q2 FROM int8_tbl ORDER BY q2 LIMIT 1;\n--- 367,375 ----\n SELECT q1,q2 FROM int8_tbl EXCEPT SELECT q2,q1 FROM int8_tbl\n ORDER BY q2,q1;\n q1 | q2 \n! -----+-----\n 123 | 456\n! (1 row)\n \n -- This should fail, because q2 isn't a name of an EXCEPT output column\n SELECT q1 FROM int8_tbl EXCEPT SELECT q2 FROM int8_tbl ORDER BY q2 LIMIT 1;\n***************\n*** 410,419 ****\n -- But this should work:\n SELECT q1 FROM int8_tbl EXCEPT (((SELECT q2 FROM int8_tbl ORDER BY q2 LIMIT 1)));\n q1 \n! ------------------\n 123\n! 4567890123456789\n! (2 rows)\n \n --\n -- New syntaxes (7.1) permit new tests\n--- 377,385 ----\n -- But this should work:\n SELECT q1 FROM int8_tbl EXCEPT (((SELECT q2 FROM int8_tbl ORDER BY q2 LIMIT 1)));\n q1 \n! -----\n 123\n! (1 row)\n \n --\n -- New syntaxes (7.1) permit new tests\n***************\n*** 420,430 ****\n --\n (((((select * from int8_tbl)))));\n q1 | q2 \n! ------------------+-------------------\n 123 | 456\n! 123 | 4567890123456789\n! 4567890123456789 | 123\n! 4567890123456789 | 4567890123456789\n! 
4567890123456789 | -4567890123456789\n! (5 rows)\n \n--- 386,392 ----\n --\n (((((select * from int8_tbl)))));\n q1 | q2 \n! -----+-----\n 123 | 456\n! (1 row)\n \n\n======================================================================\n\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Wed, 28 Feb 2001 17:04:51 +0100 (MET)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "int8 beta5 broken?" }, { "msg_contents": "Sorry to follow-up on my own post; int8 test passes if open-ssl is not\nused.\n\nweird!!\n\nRegards,\nOn Wed, 28 Feb 2001, Olivier PRENANT wrote:\n\n> Hi,\n> \n> Testing beta5 on unixware7 gives an error on int8 test while beta4 (I've\n> just retested it) works ok regressions.diff follows:\n> \n> Also, compiling with openssl give a compile error on\n> src/backend/libpq/crypt.c; this error CAN be avoided by commenting out the\n> definition of des_encrypt in /ur/local/ssl/include/openssl/des.h\n> \n> I'm not sure this is good practice though.\n> \n> Anyway, even after that, there are linking errors on libecpg.so and perl\n> because of a lack of -L/usr/local/ssl/lib\n> \n> Easyly avoid by setting ad hoc LD_LIBRARY_PATH.\n> \n> *** ./expected/int8.out\tMon Jan 29 03:53:58 2001\n> --- ./results/int8.out\tWed Feb 28 16:29:40 2001\n> ***************\n> *** 5,111 ****\n> CREATE TABLE INT8_TBL(q1 int8, q2 int8);\n> INSERT INTO INT8_TBL VALUES('123','456');\n> INSERT INTO INT8_TBL VALUES('123','4567890123456789');\n> INSERT INTO INT8_TBL VALUES('4567890123456789','123');\n> INSERT INTO INT8_TBL VALUES('4567890123456789','4567890123456789');\n> INSERT INTO INT8_TBL VALUES('4567890123456789','-4567890123456789');\n> SELECT * FROM INT8_TBL;\n> 
q1 | q2 \n> ! ------------------+-------------------\n> 123 | 456\n> ! 123 | 4567890123456789\n> ! 4567890123456789 | 123\n> ! 4567890123456789 | 4567890123456789\n> ! 4567890123456789 | -4567890123456789\n> ! (5 rows)\n> \n> SELECT '' AS five, q1 AS plus, -q1 AS minus FROM INT8_TBL;\n> five | plus | minus \n> ! ------+------------------+-------------------\n> | 123 | -123\n> ! | 123 | -123\n> ! | 4567890123456789 | -4567890123456789\n> ! | 4567890123456789 | -4567890123456789\n> ! | 4567890123456789 | -4567890123456789\n> ! (5 rows)\n> \n> SELECT '' AS five, q1, q2, q1 + q2 AS plus FROM INT8_TBL;\n> five | q1 | q2 | plus \n> ! ------+------------------+-------------------+------------------\n> | 123 | 456 | 579\n> ! | 123 | 4567890123456789 | 4567890123456912\n> ! | 4567890123456789 | 123 | 4567890123456912\n> ! | 4567890123456789 | 4567890123456789 | 9135780246913578\n> ! | 4567890123456789 | -4567890123456789 | 0\n> ! (5 rows)\n> \n> SELECT '' AS five, q1, q2, q1 - q2 AS minus FROM INT8_TBL;\n> five | q1 | q2 | minus \n> ! ------+------------------+-------------------+-------------------\n> | 123 | 456 | -333\n> ! | 123 | 4567890123456789 | -4567890123456666\n> ! | 4567890123456789 | 123 | 4567890123456666\n> ! | 4567890123456789 | 4567890123456789 | 0\n> ! | 4567890123456789 | -4567890123456789 | 9135780246913578\n> ! (5 rows)\n> \n> SELECT '' AS three, q1, q2, q1 * q2 AS multiply FROM INT8_TBL\n> WHERE q1 < 1000 or (q2 > 0 and q2 < 1000);\n> three | q1 | q2 | multiply \n> ! -------+------------------+------------------+--------------------\n> | 123 | 456 | 56088\n> ! | 123 | 4567890123456789 | 561850485185185047\n> ! | 4567890123456789 | 123 | 561850485185185047\n> ! (3 rows)\n> \n> SELECT '' AS five, q1, q2, q1 / q2 AS divide FROM INT8_TBL;\n> five | q1 | q2 | divide \n> ! ------+------------------+-------------------+----------------\n> | 123 | 456 | 0\n> ! | 123 | 4567890123456789 | 0\n> ! | 4567890123456789 | 123 | 37137318076884\n> ! 
| 4567890123456789 | 4567890123456789 | 1\n> ! | 4567890123456789 | -4567890123456789 | -1\n> ! (5 rows)\n> \n> SELECT '' AS five, q1, float8(q1) FROM INT8_TBL;\n> five | q1 | float8 \n> ! ------+------------------+----------------------\n> | 123 | 123\n> ! | 123 | 123\n> ! | 4567890123456789 | 4.56789012345679e+15\n> ! | 4567890123456789 | 4.56789012345679e+15\n> ! | 4567890123456789 | 4.56789012345679e+15\n> ! (5 rows)\n> \n> SELECT '' AS five, q2, float8(q2) FROM INT8_TBL;\n> five | q2 | float8 \n> ! ------+-------------------+-----------------------\n> | 456 | 456\n> ! | 4567890123456789 | 4.56789012345679e+15\n> ! | 123 | 123\n> ! | 4567890123456789 | 4.56789012345679e+15\n> ! | -4567890123456789 | -4.56789012345679e+15\n> ! (5 rows)\n> \n> SELECT '' AS five, 2 * q1 AS \"twice int4\" FROM INT8_TBL;\n> five | twice int4 \n> ! ------+------------------\n> | 246\n> ! | 246\n> ! | 9135780246913578\n> ! | 9135780246913578\n> ! | 9135780246913578\n> ! (5 rows)\n> \n> SELECT '' AS five, q1 * 2 AS \"twice int4\" FROM INT8_TBL;\n> five | twice int4 \n> ! ------+------------------\n> | 246\n> ! | 246\n> ! | 9135780246913578\n> ! | 9135780246913578\n> ! | 9135780246913578\n> ! (5 rows)\n> \n> -- TO_CHAR()\n> --\n> --- 5,77 ----\n> CREATE TABLE INT8_TBL(q1 int8, q2 int8);\n> INSERT INTO INT8_TBL VALUES('123','456');\n> INSERT INTO INT8_TBL VALUES('123','4567890123456789');\n> + ERROR: int8 value out of range: \"4567890123456789\"\n> INSERT INTO INT8_TBL VALUES('4567890123456789','123');\n> + ERROR: int8 value out of range: \"4567890123456789\"\n> INSERT INTO INT8_TBL VALUES('4567890123456789','4567890123456789');\n> + ERROR: int8 value out of range: \"4567890123456789\"\n> INSERT INTO INT8_TBL VALUES('4567890123456789','-4567890123456789');\n> + ERROR: int8 value out of range: \"4567890123456789\"\n> SELECT * FROM INT8_TBL;\n> q1 | q2 \n> ! -----+-----\n> 123 | 456\n> ! (1 row)\n> \n> SELECT '' AS five, q1 AS plus, -q1 AS minus FROM INT8_TBL;\n> five | plus | minus \n> ! 
------+------+-------\n> | 123 | -123\n> ! (1 row)\n> \n> SELECT '' AS five, q1, q2, q1 + q2 AS plus FROM INT8_TBL;\n> five | q1 | q2 | plus \n> ! ------+-----+-----+------\n> | 123 | 456 | 579\n> ! (1 row)\n> \n> SELECT '' AS five, q1, q2, q1 - q2 AS minus FROM INT8_TBL;\n> five | q1 | q2 | minus \n> ! ------+-----+-----+-------\n> | 123 | 456 | -333\n> ! (1 row)\n> \n> SELECT '' AS three, q1, q2, q1 * q2 AS multiply FROM INT8_TBL\n> WHERE q1 < 1000 or (q2 > 0 and q2 < 1000);\n> three | q1 | q2 | multiply \n> ! -------+-----+-----+----------\n> | 123 | 456 | 56088\n> ! (1 row)\n> \n> SELECT '' AS five, q1, q2, q1 / q2 AS divide FROM INT8_TBL;\n> five | q1 | q2 | divide \n> ! ------+-----+-----+--------\n> | 123 | 456 | 0\n> ! (1 row)\n> \n> SELECT '' AS five, q1, float8(q1) FROM INT8_TBL;\n> five | q1 | float8 \n> ! ------+-----+--------\n> | 123 | 123\n> ! (1 row)\n> \n> SELECT '' AS five, q2, float8(q2) FROM INT8_TBL;\n> five | q2 | float8 \n> ! ------+-----+--------\n> | 456 | 456\n> ! (1 row)\n> \n> SELECT '' AS five, 2 * q1 AS \"twice int4\" FROM INT8_TBL;\n> five | twice int4 \n> ! ------+------------\n> | 246\n> ! (1 row)\n> \n> SELECT '' AS five, q1 * 2 AS \"twice int4\" FROM INT8_TBL;\n> five | twice int4 \n> ! ------+------------\n> | 246\n> ! (1 row)\n> \n> -- TO_CHAR()\n> --\n> ***************\n> *** 114,124 ****\n> to_char_1 | to_char | to_char \n> -----------+------------------------+------------------------\n> | 123 | 456\n> ! | 123 | 4,567,890,123,456,789\n> ! | 4,567,890,123,456,789 | 123\n> ! | 4,567,890,123,456,789 | 4,567,890,123,456,789\n> ! | 4,567,890,123,456,789 | -4,567,890,123,456,789\n> ! (5 rows)\n> \n> SELECT '' AS to_char_2, to_char(q1, '9G999G999G999G999G999D999G999'), to_char(q2, '9,999,999,999,999,999.999,999') \n> \tFROM INT8_TBL;\t\n> --- 80,86 ----\n> to_char_1 | to_char | to_char \n> -----------+------------------------+------------------------\n> | 123 | 456\n> ! 
(1 row)\n> \n> SELECT '' AS to_char_2, to_char(q1, '9G999G999G999G999G999D999G999'), to_char(q2, '9,999,999,999,999,999.999,999') \n> \tFROM INT8_TBL;\t\n> ***************\n> *** 125,135 ****\n> to_char_2 | to_char | to_char \n> -----------+--------------------------------+--------------------------------\n> | 123.000,000 | 456.000,000\n> ! | 123.000,000 | 4,567,890,123,456,789.000,000\n> ! | 4,567,890,123,456,789.000,000 | 123.000,000\n> ! | 4,567,890,123,456,789.000,000 | 4,567,890,123,456,789.000,000\n> ! | 4,567,890,123,456,789.000,000 | -4,567,890,123,456,789.000,000\n> ! (5 rows)\n> \n> SELECT '' AS to_char_3, to_char( (q1 * -1), '9999999999999999PR'), to_char( (q2 * -1), '9999999999999999.999PR') \n> \tFROM INT8_TBL;\n> --- 87,93 ----\n> to_char_2 | to_char | to_char \n> -----------+--------------------------------+--------------------------------\n> | 123.000,000 | 456.000,000\n> ! (1 row)\n> \n> SELECT '' AS to_char_3, to_char( (q1 * -1), '9999999999999999PR'), to_char( (q2 * -1), '9999999999999999.999PR') \n> \tFROM INT8_TBL;\n> ***************\n> *** 136,146 ****\n> to_char_3 | to_char | to_char \n> -----------+--------------------+------------------------\n> | <123> | <456.000>\n> ! | <123> | <4567890123456789.000>\n> ! | <4567890123456789> | <123.000>\n> ! | <4567890123456789> | <4567890123456789.000>\n> ! | <4567890123456789> | 4567890123456789.000\n> ! (5 rows)\n> \n> SELECT '' AS to_char_4, to_char( (q1 * -1), '9999999999999999S'), to_char( (q2 * -1), 'S9999999999999999') \n> \tFROM INT8_TBL;\n> --- 94,100 ----\n> to_char_3 | to_char | to_char \n> -----------+--------------------+------------------------\n> | <123> | <456.000>\n> ! (1 row)\n> \n> SELECT '' AS to_char_4, to_char( (q1 * -1), '9999999999999999S'), to_char( (q2 * -1), 'S9999999999999999') \n> \tFROM INT8_TBL;\n> ***************\n> *** 147,285 ****\n> to_char_4 | to_char | to_char \n> -----------+-------------------+-------------------\n> | 123- | -456\n> ! 
| 123- | -4567890123456789\n> ! | 4567890123456789- | -123\n> ! | 4567890123456789- | -4567890123456789\n> ! | 4567890123456789- | +4567890123456789\n> ! (5 rows)\n> \n> SELECT '' AS to_char_5, to_char(q2, 'MI9999999999999999') FROM INT8_TBL;\t\n> to_char_5 | to_char \n> -----------+--------------------\n> | 456\n> ! | 4567890123456789\n> ! | 123\n> ! | 4567890123456789\n> ! | -4567890123456789\n> ! (5 rows)\n> \n> SELECT '' AS to_char_6, to_char(q2, 'FMS9999999999999999') FROM INT8_TBL;\n> to_char_6 | to_char \n> ! -----------+-------------------\n> | +456\n> ! | +4567890123456789\n> ! | +123\n> ! | +4567890123456789\n> ! | -4567890123456789\n> ! (5 rows)\n> \n> SELECT '' AS to_char_7, to_char(q2, 'FM9999999999999999THPR') FROM INT8_TBL;\n> to_char_7 | to_char \n> ! -----------+--------------------\n> | 456TH\n> ! | 4567890123456789TH\n> ! | 123RD\n> ! | 4567890123456789TH\n> ! | <4567890123456789>\n> ! (5 rows)\n> \n> SELECT '' AS to_char_8, to_char(q2, 'SG9999999999999999th') FROM INT8_TBL;\t\n> to_char_8 | to_char \n> -----------+---------------------\n> | + 456th\n> ! | +4567890123456789th\n> ! | + 123rd\n> ! | +4567890123456789th\n> ! | -4567890123456789\n> ! (5 rows)\n> \n> SELECT '' AS to_char_9, to_char(q2, '0999999999999999') FROM INT8_TBL;\t\n> to_char_9 | to_char \n> -----------+-------------------\n> | 0000000000000456\n> ! | 4567890123456789\n> ! | 0000000000000123\n> ! | 4567890123456789\n> ! | -4567890123456789\n> ! (5 rows)\n> \n> SELECT '' AS to_char_10, to_char(q2, 'S0999999999999999') FROM INT8_TBL;\t\n> to_char_10 | to_char \n> ------------+-------------------\n> | +0000000000000456\n> ! | +4567890123456789\n> ! | +0000000000000123\n> ! | +4567890123456789\n> ! | -4567890123456789\n> ! (5 rows)\n> \n> SELECT '' AS to_char_11, to_char(q2, 'FM0999999999999999') FROM INT8_TBL;\t\n> to_char_11 | to_char \n> ! ------------+-------------------\n> | 0000000000000456\n> ! | 4567890123456789\n> ! | 0000000000000123\n> ! | 4567890123456789\n> ! 
| -4567890123456789\n> ! (5 rows)\n> \n> SELECT '' AS to_char_12, to_char(q2, 'FM9999999999999999.000') FROM INT8_TBL;\n> to_char_12 | to_char \n> ! ------------+-----------------------\n> | 456.000\n> ! | 4567890123456789.000\n> ! | 123.000\n> ! | 4567890123456789.000\n> ! | -4567890123456789.000\n> ! (5 rows)\n> \n> SELECT '' AS to_char_13, to_char(q2, 'L9999999999999999.000') FROM INT8_TBL;\t\n> to_char_13 | to_char \n> ------------+------------------------\n> | 456.000\n> ! | 4567890123456789.000\n> ! | 123.000\n> ! | 4567890123456789.000\n> ! | -4567890123456789.000\n> ! (5 rows)\n> \n> SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n> to_char_14 | to_char \n> ! ------------+-------------------\n> | 456\n> ! | 4567890123456789\n> ! | 123\n> ! | 4567890123456789\n> ! | -4567890123456789\n> ! (5 rows)\n> \n> SELECT '' AS to_char_15, to_char(q2, 'S 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 . 9 9 9') FROM INT8_TBL;\n> to_char_15 | to_char \n> ------------+-------------------------------------------\n> | +4 5 6 . 0 0 0 \n> ! | + 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 . 0 0 0\n> ! | +1 2 3 . 0 0 0 \n> ! | + 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 . 0 0 0\n> ! | - 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 . 0 0 0\n> ! (5 rows)\n> \n> SELECT '' AS to_char_16, to_char(q2, '99999 \"text\" 9999 \"9999\" 999 \"\\\\\"text between quote marks\\\\\"\" 9999') FROM INT8_TBL;\n> to_char_16 | to_char \n> ------------+-----------------------------------------------------------\n> | text 9999 \"text between quote marks\" 456\n> ! | 45678 text 9012 9999 345 \"text between quote marks\" 6789\n> ! | text 9999 \"text between quote marks\" 123\n> ! | 45678 text 9012 9999 345 \"text between quote marks\" 6789\n> ! | -45678 text 9012 9999 345 \"text between quote marks\" 6789\n> ! (5 rows)\n> \n> SELECT '' AS to_char_17, to_char(q2, '999999SG9999999999') FROM INT8_TBL;\n> to_char_17 | to_char \n> ------------+-------------------\n> | + 456\n> ! | 456789+0123456789\n> ! | + 123\n> ! 
| 456789+0123456789\n> ! | 456789-0123456789\n> ! (5 rows)\n> \n> --- 101,183 ----\n> to_char_4 | to_char | to_char \n> -----------+-------------------+-------------------\n> | 123- | -456\n> ! (1 row)\n> \n> SELECT '' AS to_char_5, to_char(q2, 'MI9999999999999999') FROM INT8_TBL;\t\n> to_char_5 | to_char \n> -----------+--------------------\n> | 456\n> ! (1 row)\n> \n> SELECT '' AS to_char_6, to_char(q2, 'FMS9999999999999999') FROM INT8_TBL;\n> to_char_6 | to_char \n> ! -----------+---------\n> | +456\n> ! (1 row)\n> \n> SELECT '' AS to_char_7, to_char(q2, 'FM9999999999999999THPR') FROM INT8_TBL;\n> to_char_7 | to_char \n> ! -----------+---------\n> | 456TH\n> ! (1 row)\n> \n> SELECT '' AS to_char_8, to_char(q2, 'SG9999999999999999th') FROM INT8_TBL;\t\n> to_char_8 | to_char \n> -----------+---------------------\n> | + 456th\n> ! (1 row)\n> \n> SELECT '' AS to_char_9, to_char(q2, '0999999999999999') FROM INT8_TBL;\t\n> to_char_9 | to_char \n> -----------+-------------------\n> | 0000000000000456\n> ! (1 row)\n> \n> SELECT '' AS to_char_10, to_char(q2, 'S0999999999999999') FROM INT8_TBL;\t\n> to_char_10 | to_char \n> ------------+-------------------\n> | +0000000000000456\n> ! (1 row)\n> \n> SELECT '' AS to_char_11, to_char(q2, 'FM0999999999999999') FROM INT8_TBL;\t\n> to_char_11 | to_char \n> ! ------------+------------------\n> | 0000000000000456\n> ! (1 row)\n> \n> SELECT '' AS to_char_12, to_char(q2, 'FM9999999999999999.000') FROM INT8_TBL;\n> to_char_12 | to_char \n> ! ------------+---------\n> | 456.000\n> ! (1 row)\n> \n> SELECT '' AS to_char_13, to_char(q2, 'L9999999999999999.000') FROM INT8_TBL;\t\n> to_char_13 | to_char \n> ------------+------------------------\n> | 456.000\n> ! (1 row)\n> \n> SELECT '' AS to_char_14, to_char(q2, 'FM9999999999999999.999') FROM INT8_TBL;\n> to_char_14 | to_char \n> ! ------------+---------\n> | 456\n> ! (1 row)\n> \n> SELECT '' AS to_char_15, to_char(q2, 'S 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 . 
9 9 9') FROM INT8_TBL;\n> to_char_15 | to_char \n> ------------+-------------------------------------------\n> | +4 5 6 . 0 0 0 \n> ! (1 row)\n> \n> SELECT '' AS to_char_16, to_char(q2, '99999 \"text\" 9999 \"9999\" 999 \"\\\\\"text between quote marks\\\\\"\" 9999') FROM INT8_TBL;\n> to_char_16 | to_char \n> ------------+-----------------------------------------------------------\n> | text 9999 \"text between quote marks\" 456\n> ! (1 row)\n> \n> SELECT '' AS to_char_17, to_char(q2, '999999SG9999999999') FROM INT8_TBL;\n> to_char_17 | to_char \n> ------------+-------------------\n> | + 456\n> ! (1 row)\n> \n> \n> ======================================================================\n> \n> *** ./expected/subselect.out\tThu Mar 23 08:42:13 2000\n> --- ./results/subselect.out\tWed Feb 28 16:35:21 2001\n> ***************\n> *** 160,167 ****\n> select q1, float8(count(*)) / (select count(*) from int8_tbl)\n> from int8_tbl group by q1;\n> q1 | ?column? \n> ! ------------------+----------\n> ! 123 | 0.4\n> ! 4567890123456789 | 0.6\n> ! (2 rows)\n> \n> --- 160,166 ----\n> select q1, float8(count(*)) / (select count(*) from int8_tbl)\n> from int8_tbl group by q1;\n> q1 | ?column? \n> ! -----+----------\n> ! 123 | 1\n> ! (1 row)\n> \n> \n> ======================================================================\n> \n> *** ./expected/union.out\tThu Nov 9 03:47:49 2000\n> --- ./results/union.out\tWed Feb 28 16:35:22 2001\n> ***************\n> *** 259,318 ****\n> --\n> SELECT q2 FROM int8_tbl INTERSECT SELECT q1 FROM int8_tbl;\n> q2 \n> ! ------------------\n> ! 123\n> ! 4567890123456789\n> ! (2 rows)\n> \n> SELECT q2 FROM int8_tbl INTERSECT ALL SELECT q1 FROM int8_tbl;\n> q2 \n> ! ------------------\n> ! 123\n> ! 4567890123456789\n> ! 4567890123456789\n> ! (3 rows)\n> \n> SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl;\n> q2 \n> ! -------------------\n> ! -4567890123456789\n> 456\n> ! 
(2 rows)\n> \n> SELECT q2 FROM int8_tbl EXCEPT ALL SELECT q1 FROM int8_tbl;\n> q2 \n> ! -------------------\n> ! -4567890123456789\n> 456\n> ! (2 rows)\n> \n> SELECT q2 FROM int8_tbl EXCEPT ALL SELECT DISTINCT q1 FROM int8_tbl;\n> q2 \n> ! -------------------\n> ! -4567890123456789\n> 456\n> ! 4567890123456789\n> ! (3 rows)\n> \n> SELECT q1 FROM int8_tbl EXCEPT SELECT q2 FROM int8_tbl;\n> q1 \n> ! ----\n> ! (0 rows)\n> \n> SELECT q1 FROM int8_tbl EXCEPT ALL SELECT q2 FROM int8_tbl;\n> q1 \n> ! ------------------\n> 123\n> ! 4567890123456789\n> ! (2 rows)\n> \n> SELECT q1 FROM int8_tbl EXCEPT ALL SELECT DISTINCT q2 FROM int8_tbl;\n> q1 \n> ! ------------------\n> 123\n> ! 4567890123456789\n> ! 4567890123456789\n> ! (3 rows)\n> \n> --\n> -- Mixed types\n> --- 259,307 ----\n> --\n> SELECT q2 FROM int8_tbl INTERSECT SELECT q1 FROM int8_tbl;\n> q2 \n> ! ----\n> ! (0 rows)\n> \n> SELECT q2 FROM int8_tbl INTERSECT ALL SELECT q1 FROM int8_tbl;\n> q2 \n> ! ----\n> ! (0 rows)\n> \n> SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl;\n> q2 \n> ! -----\n> 456\n> ! (1 row)\n> \n> SELECT q2 FROM int8_tbl EXCEPT ALL SELECT q1 FROM int8_tbl;\n> q2 \n> ! -----\n> 456\n> ! (1 row)\n> \n> SELECT q2 FROM int8_tbl EXCEPT ALL SELECT DISTINCT q1 FROM int8_tbl;\n> q2 \n> ! -----\n> 456\n> ! (1 row)\n> \n> SELECT q1 FROM int8_tbl EXCEPT SELECT q2 FROM int8_tbl;\n> q1 \n> ! -----\n> ! 123\n> ! (1 row)\n> \n> SELECT q1 FROM int8_tbl EXCEPT ALL SELECT q2 FROM int8_tbl;\n> q1 \n> ! -----\n> 123\n> ! (1 row)\n> \n> SELECT q1 FROM int8_tbl EXCEPT ALL SELECT DISTINCT q2 FROM int8_tbl;\n> q1 \n> ! -----\n> 123\n> ! (1 row)\n> \n> --\n> -- Mixed types\n> ***************\n> *** 337,396 ****\n> --\n> SELECT q1 FROM int8_tbl INTERSECT SELECT q2 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl;\n> q1 \n> ! -------------------\n> ! 123\n> ! 4567890123456789\n> 456\n> ! 4567890123456789\n> ! 123\n> ! 4567890123456789\n> ! -4567890123456789\n> ! 
(7 rows)\n> \n> SELECT q1 FROM int8_tbl INTERSECT (((SELECT q2 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl)));\n> q1 \n> ! ------------------\n> ! 123\n> ! 4567890123456789\n> ! (2 rows)\n> \n> (((SELECT q1 FROM int8_tbl INTERSECT SELECT q2 FROM int8_tbl))) UNION ALL SELECT q2 FROM int8_tbl;\n> q1 \n> ! -------------------\n> ! 123\n> ! 4567890123456789\n> 456\n> ! 4567890123456789\n> ! 123\n> ! 4567890123456789\n> ! -4567890123456789\n> ! (7 rows)\n> \n> SELECT q1 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl;\n> q1 \n> ! -------------------\n> ! -4567890123456789\n> 456\n> ! (2 rows)\n> \n> SELECT q1 FROM int8_tbl UNION ALL (((SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl)));\n> q1 \n> ! -------------------\n> 123\n> - 123\n> - 4567890123456789\n> - 4567890123456789\n> - 4567890123456789\n> - -4567890123456789\n> 456\n> ! (7 rows)\n> \n> (((SELECT q1 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl))) EXCEPT SELECT q1 FROM int8_tbl;\n> q1 \n> ! -------------------\n> ! -4567890123456789\n> 456\n> ! (2 rows)\n> \n> --\n> -- Subqueries with ORDER BY & LIMIT clauses\n> --- 326,364 ----\n> --\n> SELECT q1 FROM int8_tbl INTERSECT SELECT q2 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl;\n> q1 \n> ! -----\n> 456\n> ! (1 row)\n> \n> SELECT q1 FROM int8_tbl INTERSECT (((SELECT q2 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl)));\n> q1 \n> ! ----\n> ! (0 rows)\n> \n> (((SELECT q1 FROM int8_tbl INTERSECT SELECT q2 FROM int8_tbl))) UNION ALL SELECT q2 FROM int8_tbl;\n> q1 \n> ! -----\n> 456\n> ! (1 row)\n> \n> SELECT q1 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl;\n> q1 \n> ! -----\n> 456\n> ! (1 row)\n> \n> SELECT q1 FROM int8_tbl UNION ALL (((SELECT q2 FROM int8_tbl EXCEPT SELECT q1 FROM int8_tbl)));\n> q1 \n> ! -----\n> 123\n> 456\n> ! (2 rows)\n> \n> (((SELECT q1 FROM int8_tbl UNION ALL SELECT q2 FROM int8_tbl))) EXCEPT SELECT q1 FROM int8_tbl;\n> q1 \n> ! -----\n> 456\n> ! 
(1 row)\n> \n> --\n> -- Subqueries with ORDER BY & LIMIT clauses\n> ***************\n> *** 399,408 ****\n> SELECT q1,q2 FROM int8_tbl EXCEPT SELECT q2,q1 FROM int8_tbl\n> ORDER BY q2,q1;\n> q1 | q2 \n> ! ------------------+-------------------\n> ! 4567890123456789 | -4567890123456789\n> 123 | 456\n> ! (2 rows)\n> \n> -- This should fail, because q2 isn't a name of an EXCEPT output column\n> SELECT q1 FROM int8_tbl EXCEPT SELECT q2 FROM int8_tbl ORDER BY q2 LIMIT 1;\n> --- 367,375 ----\n> SELECT q1,q2 FROM int8_tbl EXCEPT SELECT q2,q1 FROM int8_tbl\n> ORDER BY q2,q1;\n> q1 | q2 \n> ! -----+-----\n> 123 | 456\n> ! (1 row)\n> \n> -- This should fail, because q2 isn't a name of an EXCEPT output column\n> SELECT q1 FROM int8_tbl EXCEPT SELECT q2 FROM int8_tbl ORDER BY q2 LIMIT 1;\n> ***************\n> *** 410,419 ****\n> -- But this should work:\n> SELECT q1 FROM int8_tbl EXCEPT (((SELECT q2 FROM int8_tbl ORDER BY q2 LIMIT 1)));\n> q1 \n> ! ------------------\n> 123\n> ! 4567890123456789\n> ! (2 rows)\n> \n> --\n> -- New syntaxes (7.1) permit new tests\n> --- 377,385 ----\n> -- But this should work:\n> SELECT q1 FROM int8_tbl EXCEPT (((SELECT q2 FROM int8_tbl ORDER BY q2 LIMIT 1)));\n> q1 \n> ! -----\n> 123\n> ! (1 row)\n> \n> --\n> -- New syntaxes (7.1) permit new tests\n> ***************\n> *** 420,430 ****\n> --\n> (((((select * from int8_tbl)))));\n> q1 | q2 \n> ! ------------------+-------------------\n> 123 | 456\n> ! 123 | 4567890123456789\n> ! 4567890123456789 | 123\n> ! 4567890123456789 | 4567890123456789\n> ! 4567890123456789 | -4567890123456789\n> ! (5 rows)\n> \n> --- 386,392 ----\n> --\n> (((((select * from int8_tbl)))));\n> q1 | q2 \n> ! -----+-----\n> 123 | 456\n> ! 
(1 row)\n> \n> \n> ======================================================================\n> \n> \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Wed, 28 Feb 2001 17:44:48 +0100 (MET)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: int8 beta5 broken?" }, { "msg_contents": "Olivier PRENANT <ohp@pyrenet.fr> writes:\n> Testing beta5 on unixware7 gives an error on int8 test while beta4 (I've\n> just retested it) works ok.\n\nThat's odd. int8.c hasn't changed since beta3 (except in the\nfloat-to-int8 routine, which isn't involved here). Is there any\ndifference in the src/include/config.h file generated by the two\nversions?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Feb 2001 11:53:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] int8 beta5 broken? " }, { "msg_contents": "Olivier PRENANT <ohp@pyrenet.fr> writes:\n> Sorry to follow-up on my own post; int8 test passes if open-ssl is not\n> used.\n\nThat's difficult to believe, because int8.c doesn't include anything\nthat even knows SSL exists. Larry, can you confirm this behavior?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Feb 2001 12:04:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: int8 beta5 broken? " }, { "msg_contents": "Working on it.\n\nGive me a couple of hours.\n\nLER\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Wednesday, February 28, 2001 11:04 AM\nTo: ohp@pyrenet.fr\nCc: pgsql-hackers@postgresql.org; Larry Rosenman\nSubject: Re: [HACKERS] Re: int8 beta5 broken? 
\n\n\nOlivier PRENANT <ohp@pyrenet.fr> writes:\n> Sorry to follow-up on my own post; int8 test passes if open-ssl is not\n> used.\n\nThat's difficult to believe, because int8.c doesn't include anything\nthat even knows SSL exists. Larry, can you confirm this behavior?\n\n\t\t\tregards, tom lane\n\n", "msg_date": "Wed, 28 Feb 2001 11:12:48 -0600", "msg_from": "\"Larry Rosenman\" <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "RE: Re: int8 beta5 broken? " }, { "msg_contents": "Olivier PRENANT writes:\n\n> Testing beta5 on unixware7 gives an error on int8 test while beta4 (I've\n> just retested it) works ok regressions.diff follows:\n\nThis doesn't happen to be caused by the compiler bug described in\ndoc/FAQ_SCO?\n\n> Anyway, even after that, there are linking errors on libecpg.so and perl\n> because of a lack of -L/usr/local/ssl/lib\n\nCan you post some screen output, so we can see what the command was that\ncaused the error?\n\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 28 Feb 2001 18:19:53 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] int8 beta5 broken?" }, { "msg_contents": "* Larry Rosenman <ler@lerctr.org> [010228 11:13]:\n> Working on it.\n> \n> Give me a couple of hours.\n> \nOlivier,\n How did you build OpenSSL? I get the following (I only have a\nstatic lib): \n\n\ncc -O -K inline -K PIC -I. 
-I../../../src/include -I/usr/local/include -I/usr/local/ssl/include -DFRONTEND -DSYSCONFDIR='\"/usr/local/pgsql/etc\"' -c -o wchar.o wchar.c\nar crs libpq.a `lorder fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o pqexpbuffer.o dllist.o pqsignal.o wchar.o | tsort`\nUX:tsort: WARNING: Cycle in data\nUX:tsort: INFO: \tfe-connect.o\nUX:tsort: INFO: \tfe-auth.o\nUX:tsort: WARNING: Cycle in data\nUX:tsort: INFO: \tfe-exec.o\nUX:tsort: INFO: \tfe-misc.o\nUX:tsort: INFO: \tfe-connect.o\n: libpq.a\ncc -G -Wl,-z,text -Wl,-h,libpq.so.2 fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o fe-lobj.o pqexpbuffer.o dllist.o pqsignal.o wchar.o -L/usr/local/lib -L/usr/local/ssl/lib -lssl -lcrypto -lresolv -lnsl -lsocket -Wl,-R/usr/local/pgsql/lib -o libpq.so.2.1\nUX:ld: INFO: text relocations referenced from files:\nlibssl.a(s23_meth.o)\nlibssl.a(s23_srvr.o)\nlibssl.a(s23_clnt.o)\nlibssl.a(s23_lib.o)\nlibssl.a(s23_pkt.o)\nlibssl.a(t1_meth.o)\nlibssl.a(t1_srvr.o)\nlibssl.a(t1_clnt.o)\nlibssl.a(t1_lib.o)\nlibssl.a(t1_enc.o)\nlibssl.a(ssl_lib.o)\nlibssl.a(ssl_err2.o)\nlibssl.a(ssl_cert.o)\nlibssl.a(ssl_sess.o)\nlibssl.a(ssl_ciph.o)\nlibssl.a(ssl_algs.o)\nlibssl.a(ssl_err.o)\nlibssl.a(s2_srvr.o)\nlibssl.a(s2_clnt.o)\nlibssl.a(s2_lib.o)\nlibssl.a(s2_enc.o)\nlibssl.a(s2_pkt.o)\nlibssl.a(s3_meth.o)\nlibssl.a(s3_srvr.o)\nlibssl.a(s3_clnt.o)\nlibssl.a(s3_lib.o)\nlibssl.a(s3_enc.o)\nlibssl.a(s3_pkt.o)\nlibssl.a(s3_both.o)\nlibssl.a(ssl_rsa.o)\nlibcrypto.a(cryptlib.o)\nlibcrypto.a(mem.o)\nlibcrypto.a(ex_data.o)\nlibcrypto.a(md5_dgst.o)\nlibcrypto.a(sha1dgst.o)\nlibcrypto.a(hmac.o)\nlibcrypto.a(fcrypt.o)\nlibcrypto.a(bn_lib.o)\nlibcrypto.a(rsa_lib.o)\nlibcrypto.a(rsa_sign.o)\nlibcrypto.a(dsa_vrf.o)\nlibcrypto.a(dsa_sign.o)\nlibcrypto.a(dh_key.o)\nlibcrypto.a(dh_lib.o)\nlibcrypto.a(buffer.o)\nlibcrypto.a(bio_lib.o)\nlibcrypto.a(bss_file.o)\nlibcrypto.a(bss_sock.o)\nlibcrypto.a(bf_buff.o)\nlibcrypto.a(b_print.o)\nlibcrypto.a(stack.o)\nlibcrypto.a(lhash.o)\nlibcrypto.a(rand_lib.o
)\nlibcrypto.a(err.o)\nlibcrypto.a(err_all.o)\nlibcrypto.a(o_names.o)\nlibcrypto.a(obj_dat.o)\nlibcrypto.a(obj_lib.o)\nlibcrypto.a(obj_err.o)\nlibcrypto.a(digest.o)\nlibcrypto.a(evp_enc.o)\nlibcrypto.a(e_des.o)\nlibcrypto.a(e_idea.o)\nlibcrypto.a(e_des3.o)\nlibcrypto.a(e_rc4.o)\nlibcrypto.a(names.o)\nlibcrypto.a(e_rc2.o)\nlibcrypto.a(m_md2.o)\nlibcrypto.a(m_md5.o)\nlibcrypto.a(m_sha1.o)\nlibcrypto.a(m_dss1.o)\nlibcrypto.a(p_sign.o)\nlibcrypto.a(p_verify.o)\nlibcrypto.a(p_lib.o)\nlibcrypto.a(evp_err.o)\nlibcrypto.a(e_null.o)\nlibcrypto.a(evp_lib.o)\nlibcrypto.a(evp_pbe.o)\nlibcrypto.a(a_object.o)\nlibcrypto.a(a_dup.o)\nlibcrypto.a(x_sig.o)\nlibcrypto.a(x_name.o)\nlibcrypto.a(x_x509.o)\nlibcrypto.a(x_x509a.o)\nlibcrypto.a(d2i_r_pr.o)\nlibcrypto.a(i2d_r_pr.o)\nlibcrypto.a(d2i_pr.o)\nlibcrypto.a(i2d_dhp.o)\nlibcrypto.a(d2i_dhp.o)\nlibcrypto.a(asn1_lib.o)\nlibcrypto.a(asn1_err.o)\nlibcrypto.a(evp_asn1.o)\nlibcrypto.a(pem_all.o)\nlibcrypto.a(pem_err.o)\nlibcrypto.a(x509_d2.o)\nlibcrypto.a(x509_cmp.o)\nlibcrypto.a(x509_obj.o)\nlibcrypto.a(x509_vfy.o)\nlibcrypto.a(x509_err.o)\nlibcrypto.a(x509_ext.o)\nlibcrypto.a(x509type.o)\nlibcrypto.a(x509_lu.o)\nlibcrypto.a(x_all.o)\nlibcrypto.a(x509_trs.o)\nlibcrypto.a(by_file.o)\nlibcrypto.a(by_dir.o)\nlibcrypto.a(v3_lib.o)\nlibcrypto.a(v3err.o)\nlibcrypto.a(v3_alt.o)\nlibcrypto.a(v3_skey.o)\nlibcrypto.a(v3_akey.o)\nlibcrypto.a(v3_pku.o)\nlibcrypto.a(v3_enum.o)\nlibcrypto.a(v3_sxnet.o)\nlibcrypto.a(v3_cpols.o)\nlibcrypto.a(v3_crld.o)\nlibcrypto.a(v3_purp.o)\nlibcrypto.a(v3_info.o)\nlibcrypto.a(conf_err.o)\nlibcrypto.a(pkcs7err.o)\nlibcrypto.a(pk12err.o)\nlibcrypto.a(comp_lib.o)\nlibcrypto.a(mem_dbg.o)\nlibcrypto.a(cpt_err.o)\nlibcrypto.a(md2_dgst.o)\nlibcrypto.a(md5_one.o)\nlibcrypto.a(set_key.o)\nlibcrypto.a(ecb_enc.o)\nlibcrypto.a(ecb3_enc.o)\nlibcrypto.a(cfb64enc.o)\nlibcrypto.a(cfb64ede.o)\nlibcrypto.a(ofb64ede.o)\nlibcrypto.a(ofb64enc.o)\nlibcrypto.a(des_enc.o)\nlibcrypto.a(fcrypt_b.o)\nlibcrypto.a(rc2_ecb.o)\nlibcrypto.a(rc2_ske
y.o)\nlibcrypto.a(rc2_cbc.o)\nlibcrypto.a(rc2cfb64.o)\nlibcrypto.a(rc2ofb64.o)\nlibcrypto.a(rc4_skey.o)\nlibcrypto.a(i_cbc.o)\nlibcrypto.a(i_cfb64.o)\nlibcrypto.a(i_ofb64.o)\nlibcrypto.a(i_ecb.o)\nlibcrypto.a(bn_exp.o)\nlibcrypto.a(bn_ctx.o)\nlibcrypto.a(bn_mul.o)\nlibcrypto.a(bn_rand.o)\nlibcrypto.a(bn_word.o)\nlibcrypto.a(bn_blind.o)\nlibcrypto.a(bn_gcd.o)\nlibcrypto.a(bn_err.o)\nlibcrypto.a(bn_sqr.o)\nlibcrypto.a(bn_asm.o)\nlibcrypto.a(bn_recp.o)\nlibcrypto.a(bn_mont.o)\nlibcrypto.a(rsa_eay.o)\nlibcrypto.a(rsa_err.o)\nlibcrypto.a(rsa_pk1.o)\nlibcrypto.a(rsa_ssl.o)\nlibcrypto.a(rsa_none.o)\nlibcrypto.a(rsa_oaep.o)\nlibcrypto.a(dsa_lib.o)\nlibcrypto.a(dsa_asn1.o)\nlibcrypto.a(dsa_err.o)\nlibcrypto.a(dsa_ossl.o)\nlibcrypto.a(dh_err.o)\nlibcrypto.a(dso_err.o)\nlibcrypto.a(buf_err.o)\nlibcrypto.a(bio_err.o)\nlibcrypto.a(md_rand.o)\nlibcrypto.a(rand_err.o)\nlibcrypto.a(rand_win.o)\nlibcrypto.a(evp_pkey.o)\nlibcrypto.a(a_bitstr.o)\nlibcrypto.a(a_utctm.o)\nlibcrypto.a(a_gentm.o)\nlibcrypto.a(a_time.o)\nlibcrypto.a(a_int.o)\nlibcrypto.a(a_octet.o)\nlibcrypto.a(a_print.o)\nlibcrypto.a(a_type.o)\nlibcrypto.a(a_set.o)\nlibcrypto.a(a_d2i_fp.o)\nlibcrypto.a(a_i2d_fp.o)\nlibcrypto.a(a_enum.o)\nlibcrypto.a(a_vis.o)\nlibcrypto.a(a_utf8.o)\nlibcrypto.a(a_sign.o)\nlibcrypto.a(a_digest.o)\nlibcrypto.a(a_verify.o)\nlibcrypto.a(x_algor.o)\nlibcrypto.a(x_pubkey.o)\nlibcrypto.a(x_req.o)\nlibcrypto.a(x_attrib.o)\nlibcrypto.a(x_cinf.o)\nlibcrypto.a(x_crl.o)\nlibcrypto.a(x_info.o)\nlibcrypto.a(x_spki.o)\nlibcrypto.a(nsseq.o)\nlibcrypto.a(d2i_r_pu.o)\nlibcrypto.a(i2d_r_pu.o)\nlibcrypto.a(d2i_s_pr.o)\nlibcrypto.a(i2d_s_pr.o)\nlibcrypto.a(d2i_pu.o)\nlibcrypto.a(i2d_pu.o)\nlibcrypto.a(i2d_pr.o)\nlibcrypto.a(t_x509.o)\nlibcrypto.a(t_x509a.o)\nlibcrypto.a(t_pkey.o)\nlibcrypto.a(p7_i_s.o)\nlibcrypto.a(p7_lib.o)\nlibcrypto.a(i2d_dsap.o)\nlibcrypto.a(d2i_dsap.o)\nlibcrypto.a(x_pkey.o)\nlibcrypto.a(x_exten.o)\nlibcrypto.a(a_bytes.o)\nlibcrypto.a(asn_pack.o)\nlibcrypto.a(p8_pkey.o)\nlibcrypto.a(pem_i
nfo.o)\nlibcrypto.a(pem_lib.o)\nlibcrypto.a(x509_def.o)\nlibcrypto.a(x509name.o)\nlibcrypto.a(x509_v3.o)\nlibcrypto.a(v3_bcons.o)\nlibcrypto.a(v3_bitst.o)\nlibcrypto.a(v3_conf.o)\nlibcrypto.a(v3_extku.o)\nlibcrypto.a(v3_ia5.o)\nlibcrypto.a(v3_prn.o)\nlibcrypto.a(v3_utl.o)\nlibcrypto.a(v3_genn.o)\nlibcrypto.a(conf_lib.o)\nlibcrypto.a(conf_api.o)\nlibcrypto.a(conf_def.o)\nlibcrypto.a(p12_add.o)\nlibcrypto.a(p12_bags.o)\nlibcrypto.a(p12_decr.o)\nlibcrypto.a(p12_sbag.o)\nlibcrypto.a(sha1_one.o)\nlibcrypto.a(bn_add.o)\nlibcrypto.a(bn_div.o)\nlibcrypto.a(bn_print.o)\nlibcrypto.a(bn_shift.o)\nlibcrypto.a(bn_exp2.o)\nlibcrypto.a(err_prn.o)\nlibcrypto.a(encode.o)\nlibcrypto.a(evp_key.o)\nlibcrypto.a(x_val.o)\nlibcrypto.a(d2i_s_pu.o)\nlibcrypto.a(i2d_s_pu.o)\nlibcrypto.a(p7_signd.o)\nlibcrypto.a(p7_evp.o)\nlibcrypto.a(p7_dgst.o)\nlibcrypto.a(p7_s_e.o)\nlibcrypto.a(p7_enc.o)\nlibcrypto.a(a_bool.o)\nlibcrypto.a(a_strnid.o)\nlibcrypto.a(p5_pbe.o)\nlibcrypto.a(p5_pbev2.o)\nlibcrypto.a(x509_req.o)\nlibcrypto.a(x509rset.o)\nlibcrypto.a(x509_att.o)\nlibcrypto.a(pk7_lib.o)\nlibcrypto.a(read_pwd.o)\nlibcrypto.a(a_mbstr.o)\nlibcrypto.a(p7_signi.o)\nlibcrypto.a(p7_recip.o)\nlibcrypto.a(p7_enc_c.o)\nUX:ld: ERROR: relocations remain against non-writeable, allocatable section .text\ngmake[3]: *** [libpq.so.2.1] Error 1\ngmake[3]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces/libpq'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/ler/pg-dev/pgsql/src/interfaces'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/ler/pg-dev/pgsql/src'\ngmake: *** [all] Error 2\n> LER\n> \n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Wednesday, February 28, 2001 11:04 AM\n> To: ohp@pyrenet.fr\n> Cc: pgsql-hackers@postgresql.org; Larry Rosenman\n> Subject: Re: [HACKERS] Re: int8 beta5 broken? 
\n> \n> \n> Olivier PRENANT <ohp@pyrenet.fr> writes:\n> > Sorry to follow-up on my own post; int8 test passes if open-ssl is not\n> > used.\n> \n> That's difficult to believe, because int8.c doesn't include anything\n> that even knows SSL exists. Larry, can you confirm this behavior?\n> \n> \t\t\tregards, tom lane\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Wed, 28 Feb 2001 11:26:59 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Re: int8 beta5 broken?" }, { "msg_contents": "On Wed, 28 Feb 2001, Tom Lane wrote:\n\n> Olivier PRENANT <ohp@pyrenet.fr> writes:\n> > Sorry to follow-up on my own post; int8 test passes if open-ssl is not\n> > used.\n> \n> That's difficult to believe, because int8.c doesn't include anything\n> that even knows SSL exists. Larry, can you confirm this behavior?\nHi, I've been testing a little further and found that (with-openssl):\nif initdb is done with LANG=C, the int8 test succeeds;\nif initdb is done with LANG=fr, the int8 test fails.\n\nI'll try now without openssl and LANG=fr.\n\nRegards\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Thu, 1 Mar 2001 13:00:57 +0100 (MET)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: Re: int8 beta5 broken? " } ]
[ { "msg_contents": "I've been looking at the WAL code and trying to figure out what the\n\"backup block\" mechanism is for. It appears that that can attach\nup to two disk pages of info to a WAL log record. If there are any\ncases where more than one page is really attached to a record, then\nWAL will crash and burn with BLCKSZ = 32K, because the record length\nfield in XLogRecord is only 16 bits: there's not room for 2 * BLCKSZ,\nlet alone any additional fields.\n\nThe define\n#define _INTL_MAXLOGRECSZ\t(3 * MAXLOGRECSZ)\nin xlog.c is even more disturbing, if accurate; that'd mean that\nBLCKSZ = 16k wouldn't work either.\n\nI don't like the idea of restricting the maximum blocksize, since we\ndon't yet have a complete implementation of TOAST (cf indexes). This\nappears to mean that the format of WAL records will have to change;\neither we widen xl_len or arrange for backup blocks to be stored as\nseparate records. Or perhaps just agree that the backup blocks aren't\ncounted in xl_len, although that seems a tad klugy.\n\nOn the other hand, if there can't really be more than one backup block\nper record, there's no issue. Where is this used, and for what?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Feb 2001 13:36:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Doesn't WAL fail at BLCKSZ = 32k?" } ]
[ { "msg_contents": "I just took a close look at the COMP_CRC64 macro in xlog.c.\n\nThis isn't a 64-bit CRC. It's two independent 32-bit CRCs, one done\non just the odd-numbered bytes and one on just the even-numbered bytes\nof the datastream. That's hardly any stronger than a single 32-bit CRC;\nit's certainly not what I thought we had agreed to implement.\n\nWe can't change this algorithm without forcing an initdb, which would be\na rather unpleasant thing to do at this late stage of the release cycle.\nBut I'm not happy with it. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Feb 2001 16:53:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Uh, this is *not* a 64-bit CRC ..." }, { "msg_contents": "On Wed, Feb 28, 2001 at 04:53:09PM -0500, Tom Lane wrote:\n> I just took a close look at the COMP_CRC64 macro in xlog.c.\n> \n> This isn't a 64-bit CRC. It's two independent 32-bit CRCs, one done\n> on just the odd-numbered bytes and one on just the even-numbered bytes\n> of the datastream. That's hardly any stronger than a single 32-bit CRC;\n> it's certainly not what I thought we had agreed to implement.\n> \n> We can't change this algorithm without forcing an initdb, which would be\n> a rather unpleasant thing to do at this late stage of the release cycle.\n> But I'm not happy with it. Comments?\n\nThis might be a good time to update:\n\n The CRC-64 code used in the SWISS-PROT genetic database is (now) at:\n\n ftp://ftp.ebi.ac.uk/pub/software/swissprot/Swissknife/old/SPcrc.tar.gz\n\n From the README:\n\n The code in this package has been derived from the BTLib package\n obtained from Christian Iseli <chris@ludwig-alpha.unil.ch>.\n From his mail:\n\n The reference is: W. H. Press, S. A. Teukolsky, W. T. Vetterling, and\n B. P. Flannery, \"Numerical recipes in C\", 2nd ed., Cambridge University\n Press. 
Pages 896ff.\n\n The generator polynomial is x64 + x4 + x3 + x1 + 1.\n\nI would suggest that if you don't change the algorithm, at least change\nthe name in the sources. Were you to #ifdef in a real crc-64, and make \na compile-time option to select the old one, you could allow users who \nwish to avoid the initdb a way to continue with the existing pair of \nCRC-32s.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Wed, 28 Feb 2001 14:42:58 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Uh, this is *not* a 64-bit CRC ..." }, { "msg_contents": "\nI will just add a TODO item and we can hit it for 7.2.\n\n\n> On Wed, Feb 28, 2001 at 04:53:09PM -0500, Tom Lane wrote:\n> > I just took a close look at the COMP_CRC64 macro in xlog.c.\n> > \n> > This isn't a 64-bit CRC. It's two independent 32-bit CRCs, one done\n> > on just the odd-numbered bytes and one on just the even-numbered bytes\n> > of the datastream. That's hardly any stronger than a single 32-bit CRC;\n> > it's certainly not what I thought we had agreed to implement.\n> > \n> > We can't change this algorithm without forcing an initdb, which would be\n> > a rather unpleasant thing to do at this late stage of the release cycle.\n> > But I'm not happy with it. Comments?\n> \n> This might be a good time to update:\n> \n> The CRC-64 code used in the SWISS-PROT genetic database is (now) at:\n> \n> ftp://ftp.ebi.ac.uk/pub/software/swissprot/Swissknife/old/SPcrc.tar.gz\n> \n> From the README:\n> \n> The code in this package has been derived from the BTLib package\n> obtained from Christian Iseli <chris@ludwig-alpha.unil.ch>.\n> From his mail:\n> \n> The reference is: W. H. Press, S. A. Teukolsky, W. T. Vetterling, and\n> B. P. Flannery, \"Numerical recipes in C\", 2nd ed., Cambridge University\n> Press. 
Pages 896ff.\n> \n> The generator polynomial is x64 + x4 + x3 + x1 + 1.\n> \n> I would suggest that if you don't change the algorithm, at least change\n> the name in the sources. Were you to #ifdef in a real crc-64, and make \n> a compile-time option to select the old one, you could allow users who \n> wish to avoid the initdb a way to continue with the existing pair of \n> CRC-32s.\n> \n> Nathan Myers\n> ncm@zembu.com\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Feb 2001 21:16:33 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Uh, this is *not* a 64-bit CRC ..." }, { "msg_contents": "Added to TODO:\n\n\t* Correct CRC WAL code to be normal CRC32 algorithm \n\n> On Wed, Feb 28, 2001 at 04:53:09PM -0500, Tom Lane wrote:\n> > I just took a close look at the COMP_CRC64 macro in xlog.c.\n> > \n> > This isn't a 64-bit CRC. It's two independent 32-bit CRCs, one done\n> > on just the odd-numbered bytes and one on just the even-numbered bytes\n> > of the datastream. That's hardly any stronger than a single 32-bit CRC;\n> > it's certainly not what I thought we had agreed to implement.\n> > \n> > We can't change this algorithm without forcing an initdb, which would be\n> > a rather unpleasant thing to do at this late stage of the release cycle.\n> > But I'm not happy with it. Comments?\n> \n> This might be a good time to update:\n> \n> The CRC-64 code used in the SWISS-PROT genetic database is (now) at:\n> \n> ftp://ftp.ebi.ac.uk/pub/software/swissprot/Swissknife/old/SPcrc.tar.gz\n> \n> From the README:\n> \n> The code in this package has been derived from the BTLib package\n> obtained from Christian Iseli <chris@ludwig-alpha.unil.ch>.\n> From his mail:\n> \n> The reference is: W. H. Press, S. A. Teukolsky, W. T. 
Vetterling, and\n> B. P. Flannery, \"Numerical recipes in C\", 2nd ed., Cambridge University\n> Press. Pages 896ff.\n> \n> The generator polynomial is x64 + x4 + x3 + x1 + 1.\n> \n> I would suggest that if you don't change the algorithm, at least change\n> the name in the sources. Were you to #ifdef in a real crc-64, and make \n> a compile-time option to select the old one, you could allow users who \n> wish to avoid the initdb a way to continue with the existing pair of \n> CRC-32s.\n> \n> Nathan Myers\n> ncm@zembu.com\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Feb 2001 21:17:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Uh, this is *not* a 64-bit CRC ..." }, { "msg_contents": "On Wed, Feb 28, 2001 at 09:17:19PM -0500, Bruce Momjian wrote:\n> > On Wed, Feb 28, 2001 at 04:53:09PM -0500, Tom Lane wrote:\n> > > I just took a close look at the COMP_CRC64 macro in xlog.c.\n> > > \n> > > This isn't a 64-bit CRC. It's two independent 32-bit CRCs, one done\n> > > on just the odd-numbered bytes and one on just the even-numbered bytes\n> > > of the datastream. That's hardly any stronger than a single 32-bit CRC;\n> > > it's certainly not what I thought we had agreed to implement.\n> > > \n> > > We can't change this algorithm without forcing an initdb, which would be\n> > > a rather unpleasant thing to do at this late stage of the release cycle.\n> > > But I'm not happy with it. 
Comments?\n> > \n> > This might be a good time to update:\n> > \n> > The CRC-64 code used in the SWISS-PROT genetic database is (now) at:\n> > \n> > ftp://ftp.ebi.ac.uk/pub/software/swissprot/Swissknife/old/SPcrc.tar.gz\n> > \n> > From the README:\n> > \n> > The code in this package has been derived from the BTLib package\n> > obtained from Christian Iseli <chris@ludwig-alpha.unil.ch>.\n> > From his mail:\n> > \n> > The reference is: W. H. Press, S. A. Teukolsky, W. T. Vetterling, and\n> > B. P. Flannery, \"Numerical recipes in C\", 2nd ed., Cambridge University\n> > Press. Pages 896ff.\n> > \n> > The generator polynomial is x64 + x4 + x3 + x1 + 1.\n> > \n> > I would suggest that if you don't change the algorithm, at least change\n> > the name in the sources. Were you to #ifdef in a real crc-64, and make \n> > a compile-time option to select the old one, you could allow users who \n> > wish to avoid the initdb a way to continue with the existing pair of \n> > CRC-32s.\n>\n> Added to TODO:\n> \n> \t* Correct CRC WAL code to be normal CRC32 algorithm \n\nUm, how about\n\n * Correct CRC WAL code to be a real CRC64 algorithm\n\ninstead?\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Wed, 28 Feb 2001 18:51:18 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Uh, this is *not* a 64-bit CRC ..." }, { "msg_contents": "> > Added to TODO:\n> > \n> > \t* Correct CRC WAL code to be normal CRC32 algorithm \n> \n> Um, how about\n> \n> * Correct CRC WAL code to be a real CRC64 algorithm\n> \n> instead?\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 28 Feb 2001 22:30:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Uh, this is *not* a 64-bit CRC ..." 
}, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> The CRC-64 code used in the SWISS-PROT genetic database is (now) at:\n> ftp://ftp.ebi.ac.uk/pub/software/swissprot/Swissknife/old/SPcrc.tar.gz\n\n> From the README:\n\n> The code in this package has been derived from the BTLib package\n> obtained from Christian Iseli <chris@ludwig-alpha.unil.ch>.\n> From his mail:\n\n> The reference is: W. H. Press, S. A. Teukolsky, W. T. Vetterling, and\n> B. P. Flannery, \"Numerical recipes in C\", 2nd ed., Cambridge University\n> Press. Pages 896ff.\n\n> The generator polynomial is x64 + x4 + x3 + x1 + 1.\n\nNathan (or anyone else with a copy of \"Numerical recipes in C\", which\nI'm embarrassed to admit I don't own), is there any indication in there\nthat anyone spent any effort on choosing that particular generator\npolynomial? As far as I can see, it violates one of the standard\nguidelines for choosing a polynomial, namely that it be a multiple of\n(x + 1) ... which in modulo-2 land is equivalent to having an even\nnumber of terms, which this ain't got. 
See Ross Williams'\nA PAINLESS GUIDE TO CRC ERROR DETECTION ALGORITHMS, available from\nftp://ftp.rocksoft.com/papers/crc_v3.txt among other places, which is\nby far the most thorough and readable thing I've ever seen on CRCs.\n\nI spent some time digging around the net for standard CRC64 polynomials,\nand the only thing I could find that looked like it might have been\npicked by someone who understood what they were doing is in the DLT\n(digital linear tape) standard, ECMA-182 (available from\nhttp://www.ecma.ch/ecma1/STAND/ECMA-182.HTM):\n\nx^64 + x^62 + x^57 + x^55 + x^54 + x^53 + x^52 + x^47 + x^46 + x^45 +\nx^40 + x^39 + x^38 + x^37 + x^35 + x^33 + x^32 + x^31 + x^29 + x^27 +\nx^24 + x^23 + x^22 + x^21 + x^19 + x^17 + x^13 + x^12 + x^10 + x^9 +\nx^7 + x^4 + x + 1\n\nI'm probably going to go with this one unless someone can come up with\nan authoritative source for another one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 14:00:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Uh, this is *not* a 64-bit CRC ... " }, { "msg_contents": "On Mon, Mar 05, 2001 at 02:00:59PM -0500, Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> > The CRC-64 code used in the SWISS-PROT genetic database is (now) at:\n> > ftp://ftp.ebi.ac.uk/pub/software/swissprot/Swissknife/old/SPcrc.tar.gz\n> \n> > From the README:\n> \n> > The code in this package has been derived from the BTLib package\n> > obtained from Christian Iseli <chris@ludwig-alpha.unil.ch>.\n> > From his mail:\n> \n> > The reference is: W. H. Press, S. A. Teukolsky, W. T. Vetterling, and\n> > B. P. Flannery, \"Numerical recipes in C\", 2nd ed., Cambridge University\n> > Press. 
Pages 896ff.\n> \n> > The generator polynomial is x64 + x4 + x3 + x1 + 1.\n> \n> Nathan (or anyone else with a copy of \"Numerical recipes in C\", which\n> I'm embarrassed to admit I don't own), is there any indication in there\n> that anyone spent any effort on choosing that particular generator\n> polynomial? As far as I can see, it violates one of the standard\n> guidelines for choosing a polynomial, namely that it be a multiple of\n> (x + 1) ... which in modulo-2 land is equivalent to having an even\n> number of terms, which this ain't got. See Ross Williams'\n> A PAINLESS GUIDE TO CRC ERROR DETECTION ALGORITHMS, available from\n> ftp://ftp.rocksoft.com/papers/crc_v3.txt among other places, which is\n> by far the most thorough and readable thing I've ever seen on CRCs.\n> \n> I spent some time digging around the net for standard CRC64 polynomials,\n> and the only thing I could find that looked like it might have been\n> picked by someone who understood what they were doing is in the DLT\n> (digital linear tape) standard, ECMA-182 (available from\n> http://www.ecma.ch/ecma1/STAND/ECMA-182.HTM):\n> \n> x^64 + x^62 + x^57 + x^55 + x^54 + x^53 + x^52 + x^47 + x^46 + x^45 +\n> x^40 + x^39 + x^38 + x^37 + x^35 + x^33 + x^32 + x^31 + x^29 + x^27 +\n> x^24 + x^23 + x^22 + x^21 + x^19 + x^17 + x^13 + x^12 + x^10 + x^9 +\n> x^7 + x^4 + x + 1\n\nI'm sorry to have taken so long to reply. \n\nThe polynomial chosen for SWISS-PROT turns out to be presented, in \nNumerical Recipes, just as an example of a primitive polynomial of \nthat degree; no assertion is made about its desirability for error \nchecking. It is (in turn) drawn from E. J. Watson, \"Mathematics of \nComputation\", vol. 16, pp368-9.\n\nHaving (x + 1) as a factor guarantees to catch all errors in which\nan odd number of bits have been changed. 
Presumably you are then\ninfinitesimally less likely to catch all errors in which an even \nnumber of bits have been changed.\n\nI would have posted the ECMA-182 polynomial if I had found it. (That \nwas good searching!) One hopes that the ECMA polynomial was chosen more \ncarefully than entirely at random. High-degree codes are often chosen \nby Monte Carlo methods, by applying statistical tests to randomly-chosen \nvalues, because the search space is so large.\n\nI have verified that Tom transcribed the polynomial correctly from\nthe PDF image. The ECMA document doesn't say whether their polynomial\nis applied \"bit-reversed\", but the check would be equally strong either\nway.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Mon, 12 Mar 2001 15:44:43 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Uh, this is *not* a 64-bit CRC ..." } ]
[ { "msg_contents": "\n> I just took a close look at the COMP_CRC64 macro in xlog.c.\n> \n> This isn't a 64-bit CRC. It's two independent 32-bit CRCs, one done\n> on just the odd-numbered bytes and one on just the even-numbered bytes\n> of the datastream. That's hardly any stronger than a single \n> 32-bit CRC;\n> it's certainly not what I thought we had agreed to implement.\n\nHmm, strange. I thought that we had agreed upon a 32 bit CRC\non the grounds that it would be strong enough to guard a single\nlog record.\n\nAndreas\n", "msg_date": "Thu, 1 Mar 2001 09:06:46 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Uh, this is *not* a 64-bit CRC ..." }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> it's certainly not what I thought we had agreed to implement.\n\n> Hmm, strange. I thought that we had agreed upon a 32 bit CRC\n> on the grounds that it would be strong enough to guard a single\n> log record.\n\nI thought that, and still think it, but I was outvoted. However I\nsee no value in the present actual implementation ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Mar 2001 10:35:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Uh, this is *not* a 64-bit CRC ... " } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Hiroshi Inoue [mailto:Inoue@tpf.co.jp]\n> Sent: 01 March 2001 02:05\n> To: Patrick Welche\n> Cc: Tom Lane; charpent@bacbuc.dyndns.org; \n> pgsql-hackers@postgresql.org; pgsql-odbc@postgresql.org; \n> pam1001@cus.cam.ac.uk\n> Subject: Re: [ODBC] Re: [HACKERS] Release in 2 weeks ...\n> \n> \n> Patrick Welche wrote:\n> > \n> > On Wed, Feb 28, 2001 at 08:53:31AM +0900, Hiroshi Inoue wrote:\n> > ...\n> > > I think I've fixed this bug at least for MS-Access.\n> > > You could get the latest win32 driver from\n> > > ftp://ftp.greatbridge.org/pub/pgadmin/stable/psqlodbc.zip .\n> > > Please try it.\n> > \n> > How can I just install that file? (ie., M$ Access -> \n> psqlodbc.dll -> real OS)\n> > \n> \n> I don't know if M$-access requires MDAC now(it didn't require\n> MDAC before). I use ADO and don't use M$-access other than\n> testing. ADO requires MDAC and pgAdmin uses ADO AFAIK.\n\nYes, pgAdmin does use MDAC, currently v2.6 \n\n> > ===== aside:\n> > \n> > I just tried installing pgAdmin - the installer says:\n> > \n> > This setup requires at least version 2.5 of the Microsoft \n> Data Access\n> > Components (MDAC) to be installed first. If the MDAC installer\n> > (mdac_typ.exe) is not provided with this setup, you can \n> find it on the\n> > Microsoft web site (www.microsoft.com)\n> > \n> > And after searching said website,\n> > http://www.microsoft.com/data/download2.htm\n> > shows:\n> > \n> > Microsoft Data Access Components MDAC 2.1.1.3711.11 < 2.5...\n> > \n> \n> I can see the following at http://www.microsoft.com/data/download.htm\n> \n> Data Access Components (MDAC) redistribution releases.\n> Five releases of MDAC are available here: The new MDAC\n> 2.6, two of MDAC 2.5, and two of MDAC 2.1. You can\n\nThe message stating that you need MDAC 2.5 is generated by the MS Installer\n- I cannot change. 
The pgAdmin notes on the website (which I wrote)\nrecommend v2.6 of MDAC which is indeed available at www.microsoft.com/data/\n\nRegards, Dave.\n", "msg_date": "Thu, 1 Mar 2001 08:19:05 -0000 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "RE: Re: [HACKERS] Release in 2 weeks ..." } ]
[ { "msg_contents": "As the WAL stuff is currently constructed, the system will refuse to\nstart up unless the checkPoint field of pg_control points at a valid\ncheckpoint record in the WAL log.\n\nNow I know we write and fsync the checkpoint record before we rewrite\npg_control, but this still leaves me feeling mighty uncomfortable.\nSee past discussions about how fsync order doesn't necessarily mean\nanything if the disk drive chooses to reorder writes. Since loss of\nthe checkpoint record means complete loss of the database, I think we\nneed to work harder here.\n\nWhat I'm thinking is that pg_control should have pointers to the last\ntwo checkpoint records, not only the last one. If we fail to read the\nmost recent checkpoint, try the one before it (which, obviously, means\nwe must keep the log files long enough that we still have that one too).\nWe can run forward from there and redo the intervening WAL records the\nsame as we would do anyway.\n\nThis would mean an initdb to change the format of pg_control. 
However\nI already have a couple other reasons in favor of an initdb: the\nrecord-length bug I mentioned yesterday, and the bogus CRC algorithm.\nI'm not finished reviewing the WAL code, either :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 01 Mar 2001 12:24:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "WAL's single point of failure: latest CHECKPOINT record" }, { "msg_contents": "Hi all,\n\nOut of curiosity, does anyone know of any projects that are presently\ncreating PostgreSQL database recovery tools?\n\nFor example database corruption recovery, Point In Time restoration, and\nsuch things?\n\nIt might be a good project for GreatBridge to look into if no-one else\nis doing it already.\n\nRegards and best wishes,\n\nJustin Clift\nDatabase Administrator\n\nTom Lane wrote:\n> \n> As the WAL stuff is currently constructed, the system will refuse to\n> start up unless the checkPoint field of pg_control points at a valid\n> checkpoint record in the WAL log.\n> \n> Now I know we write and fsync the checkpoint record before we rewrite\n> pg_control, but this still leaves me feeling mighty uncomfortable.\n> See past discussions about how fsync order doesn't necessarily mean\n> anything if the disk drive chooses to reorder writes. Since loss of\n> the checkpoint record means complete loss of the database, I think we\n> need to work harder here.\n> \n> What I'm thinking is that pg_control should have pointers to the last\n> two checkpoint records, not only the last one. If we fail to read the\n> most recent checkpoint, try the one before it (which, obviously, means\n> we must keep the log files long enough that we still have that one too).\n> We can run forward from there and redo the intervening WAL records the\n> same as we would do anyway.\n> \n> This would mean an initdb to change the format of pg_control. 
However\n> I already have a couple other reasons in favor of an initdb: the\n> record-length bug I mentioned yesterday, and the bogus CRC algorithm.\n> I'm not finished reviewing the WAL code, either :-(\n> \n> regards, tom lane\n", "msg_date": "Fri, 02 Mar 2001 10:56:58 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: WAL's single point of failure: latest CHECKPOINT record" }, { "msg_contents": "We really need point-in-time recovery, removal of the need to vacuum,\nand more full-featured replication. Hopefully most can be addressed in\n7.2.\n\n> Hi all,\n> \n> Out of curiosity, does anyone know of any projects that are presently\n> creating PostgreSQL database recovery tools?\n> \n> For example database corruption recovery, Point In Time restoration, and\n> such things?\n> \n> It might be a good project for GreatBridge to look into if no-one else\n> is doing it already.\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> Database Administrator\n> \n> Tom Lane wrote:\n> > \n> > As the WAL stuff is currently constructed, the system will refuse to\n> > start up unless the checkPoint field of pg_control points at a valid\n> > checkpoint record in the WAL log.\n> > \n> > Now I know we write and fsync the checkpoint record before we rewrite\n> > pg_control, but this still leaves me feeling mighty uncomfortable.\n> > See past discussions about how fsync order doesn't necessarily mean\n> > anything if the disk drive chooses to reorder writes. Since loss of\n> > the checkpoint record means complete loss of the database, I think we\n> > need to work harder here.\n> > \n> > What I'm thinking is that pg_control should have pointers to the last\n> > two checkpoint records, not only the last one. 
If we fail to read the\n> > most recent checkpoint, try the one before it (which, obviously, means\n> > we must keep the log files long enough that we still have that one too).\n> > We can run forward from there and redo the intervening WAL records the\n> > same as we would do anyway.\n> > \n> > This would mean an initdb to change the format of pg_control. However\n> > I already have a couple other reasons in favor of an initdb: the\n> > record-length bug I mentioned yesterday, and the bogus CRC algorithm.\n> > I'm not finished reviewing the WAL code, either :-(\n> > \n> > regards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 1 Mar 2001 19:22:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL's single point of failure: latest CHECKPOINT record" }, { "msg_contents": "Yes, there is backend functionality on tap for 7.2 (see TODO) that will need to\nbe in place before the tools Justin mentions can be properly built.\n\nWe're very interested in helping out with the tools, and will be talking to the\n-hackers list more about our ideas once 7.1 is out the door.\n\nRegards,\nNed\n\n\nBruce Momjian wrote:\n\n> We really need point-in-time recovery, removal of the need to vacuum,\n> and more full-featured replication. 
Hopefully most can be addressed in\n> 7.2.\n>\n> > Hi all,\n> >\n> > Out of curiosity, does anyone know of any projects that are presently\n> > creating PostgreSQL database recovery tools?\n> >\n> > For example database corruption recovery, Point In Time restoration, and\n> > such things?\n> >\n> > It might be a good project for GreatBridge to look into if no-one else\n> > is doing it already.\n> >\n> > Regards and best wishes,\n> >\n> > Justin Clift\n> > Database Administrator\n> >\n\n", "msg_date": "Thu, 01 Mar 2001 21:03:27 -0500", "msg_from": "Ned Lilly <ned@greatbridge.com>", "msg_from_op": false, "msg_subject": "7.2 tools (was: WAL's single point of failure: latest CHECKPOINT\n\trecord)" } ]
[ { "msg_contents": "7.1beta5 in contrib/retep:\n\nImplementation and README are both empty.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The LORD is my shepherd; I shall not want. He maketh \n me to lie down in green pastures: he leadeth me beside\n the still waters, he restoreth my soul...Surely\n goodness and mercy shall follow me all the days of my\n life; and I will dwell in the house of the LORD for\n ever.\" Psalms 23:1,2,6 \n\n\n", "msg_date": "Thu, 01 Mar 2001 20:37:53 +0000", "msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>", "msg_from_op": true, "msg_subject": "contrib: retep - empty documentation files" }, { "msg_contents": "At 20:37 01/03/01 +0000, Oliver Elphick wrote:\n>7.1beta5 in contrib/retep:\n>\n>Implementation and README are both empty.\n\nHmmm, not sure what happened there. I'm committing in more of the retep \ncontrib stuff over the weekend, so I'll fix them then.\n\nPeter\n\n\n>--\n>Oliver Elphick Oliver.Elphick@lfix.co.uk\n>Isle of Wight http://www.lfix.co.uk/oliver\n>PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\n>GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n> ========================================\n> \"The LORD is my shepherd; I shall not want. He maketh\n> me to lie down in green pastures: he leadeth me beside\n> the still waters, he restoreth my soul...Surely\n> goodness and mercy shall follow me all the days of my\n> life; and I will dwell in the house of the LORD for\n> ever.\" Psalms 23:1,2,6\n\n", "msg_date": "Thu, 01 Mar 2001 21:26:36 +0000", "msg_from": "Peter Mount <peter@retep.org.uk>", "msg_from_op": false, "msg_subject": "Re: contrib: retep - empty documentation files" } ]
[ { "msg_contents": "The following files are empty:\n\n./src/test/bench/query21\n./src/test/bench/query22\n./src/test/bench/query24\n./src/test/bench/query25\n\nIs that intentional?\n\n(I see they have been that way for 4 years.)\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nPGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47 6B 7E 39 CC 56 E4 C1 47\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n ========================================\n \"The LORD is my shepherd; I shall not want. He maketh \n me to lie down in green pastures: he leadeth me beside\n the still waters, he restoreth my soul...Surely\n goodness and mercy shall follow me all the days of my\n life; and I will dwell in the house of the LORD for\n ever.\" Psalms 23:1,2,6 \n\n\n", "msg_date": "Thu, 01 Mar 2001 22:01:54 +0000", "msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>", "msg_from_op": true, "msg_subject": "Empty queries in src/test/bench" } ]
[ { "msg_contents": "On Fri, 2 Mar 2001, Jaruwan Laongmal wrote:\n\n> I had deleted a very large number of records out of my SQL table in order to\n> decrease the harddisk space. But after I use command 'ls -l\n> /usr/local/pgsql/data/base/', it is found that the size of concerning files\n> do not reduce due to the effect of 'delete' SQL command. What should I do\n> if I would like to decrease the harddisk space?\n\nVACUUM\n\n\n", "msg_date": "Fri, 2 Mar 2001 08:04:45 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] why the DB file size does not reduce when 'delete'\n\tthe data in DB?" }, { "msg_contents": "On Fri, 2 Mar 2001, Jaruwan Laongmal wrote:\n\n> I had deleted a very large number of records out of my SQL table in order to\n> decrease the harddisk space. But after I use command 'ls -l\n> /usr/local/pgsql/data/base/', it is found that the size of concerning files\n> do not reduce due to the effect of 'delete' SQL command. What should I do\n> if I would like to decrease the harddisk space?\n\nPostgres will only mark them as deleted, but the rows will stay in the\nDB.\nDo a vacuum on the database and the deleted rows will be eliminated.\n\nSaludos... ;-)\n\nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n\n", "msg_date": "Fri, 2 Mar 2001 09:36:02 -0300 (ART)", "msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: why the DB file size does not reduce when 'delete' the data in\n DB?" }, { "msg_contents": "I had deleted a very large number of records out of my SQL table in order to\ndecrease the harddisk space. 
But after I use command 'ls -l\n/usr/local/pgsql/data/base/', it is found that the size of concerning files\ndo not reduce due to the effect of 'delete' SQL command. What should I do\nif I would like to decrease the harddisk space?\n\nI am looking forward to your response. Thank you very much for any help.\n-\nJaruwan\n\n", "msg_date": "Fri, 2 Mar 2001 19:40:42 +0700", "msg_from": "\"Jaruwan Laongmal\" <jaruwan@gits.net.th>", "msg_from_op": false, "msg_subject": "why the DB file size does not reduce when 'delete' the data in DB?" }, { "msg_contents": "> I had deleted a very large number of records out of my SQL table in order to\n> decrease the harddisk space. But after I use command 'ls -l\n> /usr/local/pgsql/data/base/', it is found that the size of concerning files\n> do not reduce due to the effect of 'delete' SQL command. What should I do\n> if I would like to decrease the harddisk space?\n\nRun \"vacuum\" from SQL or \"vacuumdb\" from the command line. Tables will\nbe reduced in size, though currently indices are not.\n\n - Thomas\n", "msg_date": "Fri, 02 Mar 2001 13:30:38 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] why the DB file size does not reduce when 'delete' the\n\tdata in DB?" }, { "msg_contents": "Jaruwan Laongmal wrote:\n\n> I had deleted a very large number of records out of my SQL table in order to\n> decrease the harddisk space. But after I use command 'ls -l\n> /usr/local/pgsql/data/base/', it is found that the size of concerning files\n> do not reduce due to the effect of 'delete' SQL command. What should I do\n> if I would like to decrease the harddisk space?\n\nRun the command VACUUM;\n\nThis will do the actual removal of deleted records. DELETE just marks \nthem as deleted\n\n> I am looking forward to your response. 
Thank you very much for any help.\n> -\n> Jaruwan\n\n\n", "msg_date": "Fri, 02 Mar 2001 17:03:05 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] why the DB file size does not reduce when 'delete' the\n\tdata in DB?" }, { "msg_contents": "\n----- Original Message ----- \nFrom: The Hermit Hacker <scrappy@hub.org>\nTo: Jaruwan Laongmal <jaruwan@gits.net.th>\nCc: <pgsql-hackers@postgresql.org>; <pgsql-sql@postgresql.org>\nSent: Friday, March 02, 2001 8:04 PM\nSubject: Re: [HACKERS] why the DB file size does not reduce when 'delete'the data in DB?\n\n\n> On Fri, 2 Mar 2001, Jaruwan Laongmal wrote:\n> \n> > I had deleted a very large number of records out of my SQL table in order to\n> > decrease the harddisk space. But after I use command 'ls -l\n> > /usr/local/pgsql/data/base/', it is found that the size of concerning files\n> > do not reduce due to the effect of 'delete' SQL command. What should I do\n> > if I would like to decrease the harddisk space?\n> \n> VACUUM\n> \n> \n\ncould anyone remove this nasty bug in 7.2? this is already a big pain and is the reason \nwhy am I still using MySQL in my product server. another nasty thing is it does not \nallow me to reference table in another database. sigh.\n\nRegards,\nXuYifeng\n\n\n\n\n", "msg_date": "Sun, 4 Mar 2001 10:01:37 +0800", "msg_from": "\"xuyifeng\" <jamexu@telekbird.com.cn>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] why the DB file size does not reduce when 'delete'the\n\tdata in DB?" 
}, { "msg_contents": "On Sun, 4 Mar 2001, xuyifeng wrote:\n\n>\n> ----- Original Message -----\n> From: The Hermit Hacker <scrappy@hub.org>\n> To: Jaruwan Laongmal <jaruwan@gits.net.th>\n> Cc: <pgsql-hackers@postgresql.org>; <pgsql-sql@postgresql.org>\n> Sent: Friday, March 02, 2001 8:04 PM\n> Subject: Re: [HACKERS] why the DB file size does not reduce when 'delete'the data in DB?\n>\n>\n> > On Fri, 2 Mar 2001, Jaruwan Laongmal wrote:\n> >\n> > > I had deleted a very large number of records out of my SQL table in order to\n> > > decrease the harddisk space. But after I use command 'ls -l\n> > > /usr/local/pgsql/data/base/', it is found that the size of concerning files\n> > > do not reduce due to the effect of 'delete' SQL command. What should I do\n> > > if I would like to decrease the harddisk space?\n> >\n> > VACUUM\n> >\n> >\n>\n> could anyone remove this nasty bug in 7.2? this is already a big pain\n> and is the reason why am I still using MySQL in my product server.\n> another nasty thing is it does not allow me to reference table in\n> another database. sigh.\n\nIt's actually not considered a *bug*, but it was a feature that was part of\nan older feature that was removed. Vadim has plans for implementing an\nOverWriting Storage Manager, but the schedule for it is uncertain ... could be\nfor v7.2 ...\n\n\n", "msg_date": "Sat, 3 Mar 2001 23:05:29 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] why the DB file size does not reduce when 'delete'the\n\tdata in DB?" }, { "msg_contents": "\n----- Original Message ----- \nFrom: Roberto Mello <rmello@cc.usu.edu>\nTo: xuyifeng <jamexu@telekbird.com.cn>\nSent: Sunday, March 04, 2001 10:40 AM\nSubject: Re: [HACKERS] why the DB file size does not reduce when 'delete'the data in DB?\n\n\n> On Sun, Mar 04, 2001 at 10:01:37AM +0800, xuyifeng wrote:\n> > \n> > could anyone remove this nasty bug in 7.2? 
this is already a big pain and is the reason \n> > why am I still using MySQL in my product server. another nasty thing is it does not \n> > allow me to reference table in another database. sigh.\n> \n> I don \"vacuum analyze\" in my Postgres database every night very easily\n> using a simple cron job. I create the cron job and forget about it. Really\n> it's not that hard don't you think?\n> Sure, it's not the desired way, but that's not big enough a reason to\n> use PostgreSQL. I could think of a dozen reasons that counter that one and\n> would favor PostgreSQL in technical terms. Compared to those, this is\n> really really a minor issue.\n> As to referring to a table in another DB, I don't know when is that\n> going to be implemented.\n> \n> -Roberto\n> \n> -- \n> +----| http://fslc.usu.edu USU Free Software & GNU/Linux Club|------+\n> Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n> http://www.sdl.usu.edu - Space Dynamics Lab, Web Developer \n> << Politically Un-Correct Tagline Deleted! >>\n> \n\nI know your situations, your DB is not updated and inserted lots of records in few minutes,\nmine is difference, I have a real time Stock Trading system, you know, stock, its price \nis changed every minute or even every second , I need update and insert delta change into DB, \ndraw their trend graphics, suppose there are 3000 stocks in market, there maybe 9000 records \nchanged and inserted in one minutes, because PGSQL's storage manager problem( it does not \nreuse deleted record space), in 4 hours trading period, my harddisk can be full filled. 
because in \nthe period, the table indeed gets very large, doing VACUUM is impossible in realtime, it will lock\nout other clients too long time, my point of view is PGSQL is fit for static or small changed database, \nnot fit for lots of change in short time.\n\nRegards,\nXu Yifeng\n\n\n\n\n", "msg_date": "Sun, 4 Mar 2001 15:15:18 +0800", "msg_from": "\"xuyifeng\" <jamexu@telekbird.com.cn>", "msg_from_op": false, "msg_subject": "Re: why the DB file size does not reduce when 'delete'the data in DB?" }, { "msg_contents": "> I know your situations, your DB is not updated and inserted lots of\nrecords in few minutes,\n> mine is difference, I have a real time Stock Trading system, you know,\nstock, its price\n> is changed every minute or even every second , I need update and insert\ndelta change into DB,\n> draw their trend graphics, suppose there are 3000 stocks in market,\nthere maybe 9000 records\n> changed and inserted in one minutes, because PGSQL's storage manager\nproblem( it does not\n> reuse deleted record space), in 4 hours trading period, my harddisk can\nbe full filled. because in\nthe period, the table indeed gets very large, doing VACUUM is impossible\nin realtime, it will lock\nout other clients too long time, my point of view is PGSQL is fit for\nstatic or small changed database,\nnot fit for lots of change in short time.\n\nIt's admittedly a problem so I don't think you need to convince everyone that\nit's not the best way to handle things :-)\n\nI hate to say it, but your options currently are to upgrade your storage\ndevice or change databases... 
I think I'd fork out some cash for some new\nhardware verses buying a commercial database or putting up with the missing\nfeatures of MySQL..\n\nAll my humble opinion of course, I wish you the best of luck.\n\n-Mitch\n\n", "msg_date": "Sun, 4 Mar 2001 13:37:11 -0500", "msg_from": "\"Mitch Vincent\" <mitch@venux.net>", "msg_from_op": false, "msg_subject": "Re: why the DB file size does not reduce when 'delete'the data in DB?" }, { "msg_contents": "On Sun, Mar 04, 2001 at 10:01:37AM +0800, xuyifeng allegedly wrote:\n> ----- Original Message ----- \n> From: The Hermit Hacker <scrappy@hub.org>\n> To: Jaruwan Laongmal <jaruwan@gits.net.th>\n> Cc: <pgsql-hackers@postgresql.org>; <pgsql-sql@postgresql.org>\n> Sent: Friday, March 02, 2001 8:04 PM\n> Subject: Re: [HACKERS] why the DB file size does not reduce when 'delete'the data in DB?\n> \n> > On Fri, 2 Mar 2001, Jaruwan Laongmal wrote:\n> > \n> > > I had deleted a very large number of records out of my SQL table in order to\n> > > decrease the harddisk space. But after I use command 'ls -l\n> > > /usr/local/pgsql/data/base/', it is found that the size of concerning files\n> > > do not reduce due to the effect of 'delete' SQL command. What should I do\n> > > if I would like to decrease the harddisk space?\n> > \n> > VACUUM\n> \n> could anyone remove this nasty bug in 7.2? this is already a big pain and is the reason \n> why am I still using MySQL in my product server. another nasty thing is it does not \n> allow me to reference table in another database. sigh.\n\nWhy would this be a bug? Sure, maybe it's not what you expected, but I hardly think\nit qualifies as a bug. For instance, Oracle doesn't release storage (datafiles\nspecifically) after it has allocated space for them. In fact, I wish I could force\npgsql to allocate storage it might need in the future. 
It would be great if I could\nforce pgsql to allocate four datafiles spread across four harddisks, so I would\nenjoy a) better database performance and b) rest assured I have the diskspace when\nI need it in the future. Call it a poor man's RAID; I think MySQL can perform this\ntrick. If pgsql can do this, please let me know\n\nBut back to your problem. One way to get the amount of space allocated to shrink is\nby recreating the database. Dump it using pg_dump and recreate it using the backup\nyou just made. This is a fairly simple and quick process. Give it a try on a small\ntest database first; you don't want to risk losing your data.\n\nCheers,\n\nMathijs\n-- \nIt's not that perl programmers are idiots, it's that the language\nrewards idiotic behavior in a way that no other language or tool has\never done.\n Erik Naggum\n", "msg_date": "Wed, 7 Mar 2001 00:46:01 +0100", "msg_from": "Mathijs Brands <mathijs@ilse.nl>", "msg_from_op": false, "msg_subject": "Re: Re: [HACKERS] why the DB file size does not reduce when\n\t'delete'the data in DB?" }, { "msg_contents": "> do you really know the problem of PGSQL storage manager? it DOES NOT\n> reuse deleted record space. it also grows database size when you just\n> update but not insert record. it is a MS ACCESS like storage manager.\n> it is a functional bug. there is logic bug, performance bug...\n\nimho a designed-in feature can not be called a bug, even if you disagree\nwith its intent or implementation. The term \"bug\" should be reserved for\ncode which does not behave as designed.\n\nYou are not quite factually correct above, even given your definition of\n\"bug\". 
PostgreSQL does reuse deleted record space, but requires an\nexplicit maintenance step to do this.\n\nWe have continuing discussions on how to evolve the performance and\nbehavior of PostgreSQL, and you can check the archives on these past\ndiscussions.\n\nRegards.\n\n - Thomas\n", "msg_date": "Wed, 07 Mar 2001 02:25:27 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Re: why the DB file size does not reduce when 'delete'the data\n\tin DB?" }, { "msg_contents": "Hello Mathijs,\n\nWednesday, March 07, 2001, 7:46:01 AM, you wrote:\n\nMB> On Sun, Mar 04, 2001 at 10:01:37AM +0800, xuyifeng allegedly wrote:\n>> ----- Original Message ----- \n>> From: The Hermit Hacker <scrappy@hub.org>\n>> To: Jaruwan Laongmal <jaruwan@gits.net.th>\n>> Cc: <pgsql-hackers@postgresql.org>; <pgsql-sql@postgresql.org>\n>> Sent: Friday, March 02, 2001 8:04 PM\n>> Subject: Re: [HACKERS] why the DB file size does not reduce when 'delete'the data in DB?\n>> \n>> > On Fri, 2 Mar 2001, Jaruwan Laongmal wrote:\n>> > \n>> > > I had deleted a very large number of records out of my SQL table in order to\n>> > > decrease the harddisk space. But after I use command 'ls -l\n>> > > /usr/local/pgsql/data/base/', it is found that the size of concerning files\n>> > > do not reduce due to the effect of 'delete' SQL command. What should I do\n>> > > if I would like to decrease the harddisk space?\n>> > \n>> > VACUUM\n>> \n>> could anyone remove this nasty bug in 7.2? this is already a big pain and is the reason \n>> why am I still using MySQL in my product server. another nasty thing is it does not \n>> allow me to reference table in another database. sigh.\n\nMB> Why would this be a bug? Sure, maybe it's not what you expected, but I hardly think\nMB> it qualifies as a bug. For instance, Oracle doesn't release storage (datafiles\nMB> specifically) after it has allocated space for them. 
In fact, I wish I could force\nMB> pgsql to allocate storage it might need in the future. It would be great if I could\nMB> force pgsql to allocate four datafiles spread across four harddisks, so I would\nMB> enjoy a) better database performance and b) rest assured I have the diskspace when\nMB> I need it in the future. Call it a poor man's RAID; I think MySQL can perform this\nMB> trick. If pgsql can do this, please let me know\n\nMB> But back to your problem. One way to get the amount of space allocated to shrink is\nMB> by recreating the database. Dump it using pg_dump and recreate it using the backup\nMB> you just made. This is a fairly simple and quick process. Give it a try on a small\nMB> test database first; you don't want to risk losing your data.\n\nMB> Cheers,\n\nMB> Mathijs\n\ndo you really know the problem of PGSQL storage manager? it DOES NOT\nreuse deleted record space. it also grows database size when you just\nupdate but not insert record. it is a MS ACCESS like storage manager.\nit is a functional bug. there is logic bug, performance bug...\n\n-- \nBest regards,\nXu Yifeng\n\n\n", "msg_date": "Wed, 7 Mar 2001 11:10:45 +0800", "msg_from": "Xu Yifeng <jamexu@telekbird.com.cn>", "msg_from_op": false, "msg_subject": "Re[2]: Re: [HACKERS] why the DB file size does not reduce when\n\t'delete'the data in DB?" }, { "msg_contents": "On Wed, 7 Mar 2001, Xu Yifeng wrote:\n\n> do you really know the problem of PGSQL storage manager? it DOES NOT\n> reuse deleted record space. it also grows database size when you just\n> update but not insert record. it is a MS ACCESS like storage manager.\n> it is a functional bug. 
there is logic bug, performance bug...\n\nWell, as always, we look forward to seeing patches from you to fix this\nglaring functional bug :)\n\n\n", "msg_date": "Tue, 6 Mar 2001 23:14:42 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re[2]: Re: [HACKERS] why the DB file size does not reduce when\n\t'delete'the data in DB?" }, { "msg_contents": "> do you really know the problem of PGSQL storage manager? it DOES NOT\n> reuse deleted record space. it also grows database size when you just\n> update but not insert record. it is a MS ACCESS like storage manager.\n> it is a functional bug. there is logic bug, performance bug...\n\nIt's not a bug but a feature invented by Michael Stonebraker. Write\nto him why do you think that is a bug:-)\n--\nTatsuo Ishii\n", "msg_date": "Wed, 07 Mar 2001 12:33:07 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Re[2]: Re: [HACKERS] why the DB file size does not\n\treduce when 'delete'the data in DB?" }, { "msg_contents": "On Fri, Mar 16, 2001 at 12:01:36AM +0000, Thomas Lockhart wrote:\n> > > You are not quite factually correct above, even given your definition of\n> > > \"bug\". PostgreSQL does reuse deleted record space, but requires an\n> > > explicit maintenance step to do this.\n> > Could you tell us what that maintenance step is? dumping the db and restoring into a fresh one ? :/\n> \n> :) No, \"VACUUM\" is your friend for this. Look in the reference manual\n> for details.\n> \n> - Thomas\n\nI'm having this problem:\nI have a database that is 3 megabyte in size (measured using pg_dump). When\ni go to the corresponding data directory (eg du -h data/base/mydbase), it\nseems the real disk usage is 135 megabyte! 
Doing a VACUUM doesn't really\nchange the disk usage.\n\nAlso query & updating speed increases when i dump all data and restore\nit into a fresh new database.\n\nI'm running postgresql-7.0.2-6 on a Debian potato.\n\n-Yves\n", "msg_date": "Fri, 16 Mar 2001 02:28:46 +0100", "msg_from": "yves@mail2.vlaanderen.net", "msg_from_op": false, "msg_subject": "Re: Re: [SQL] Re: why the DB file size does not reduce when\n\t'delete'the data in DB?" } ]
[ { "msg_contents": "I am *not* feeling good about pushing out an RC1 release candidate\ntoday.\n\nI've been going through the WAL code, trying to understand it and\ndocument it. I've found a number of minor problems and several major\nones (\"major\" meaning \"can't really fix without an incompatible file\nformat change, hence initdb\"). I've reported the major problems to\nthe mailing lists but gotten almost no feedback about what to do.\n\nIn addition, I'm still looking for the bug that I originally went in to\nfind: Scott Parish's report of being unable to restart after a normal\nshutdown of beta4. Examination of his WAL log shows some pretty serious\nlossage (see attached dump). My current theory is that the\nbuffer-slinging logic in xlog.c dropped one or more whole buffers' worth\nof log records, but I haven't figured out exactly how.\n\nI want to veto putting out an RC1 until these issues are resolved...\ncomments?\n\n\t\t\tregards, tom lane\n\n\n...\n0/00599890: prv 0/00599854; xprv 0/00599854; xid 18871; RM 10 info 00 len 65\n0/005998F4: prv 0/00599890; xprv 0/00599890; xid 18871; RM 11 info 90 len 50\n0/00599948: prv 0/005998F4; xprv 0/005998F4; xid 18871; RM 1 info 00 len 4\ncommit: 2001-02-26 17:19:57\n0/0059996C: prv 0/00599948; xprv 0/00000000; xid 0; RM 0 info 00 len 32\ncheckpoint: redo 0/0059996C; undo 0/00000000; sui 29; nextxid 18903; nextoid 35195; online\n-- this is the last normal-looking checkpoint record. Judging from the\n-- commit timestamps surrounding prior checkpoints, checkpoints were\n-- happening every five minutes approximately on the 5-minute mark, so\n-- this one happened about 17:20. 
(There really should be a timestamp\n-- in the checkpoint records...)\n0/005999AC: prv 0/0059996C; xprv 0/00000000; xid 18923; RM 10 info 08 len 8226; bkpb 1\n0/0059B9FC: prv 0/005999AC; xprv 0/005999AC; xid 18923; RM 11 info 98 len 8226; bkpb 1\n0/0059DA4C: prv 0/0059B9FC; xprv 0/0059B9FC; xid 18923; RM 10 info 00 len 72\n0/0059DAB4: prv 0/0059DA4C; xprv 0/0059DA4C; xid 18923; RM 11 info 90 len 26\n0/0059DAF0: prv 0/0059DAB4; xprv 0/0059DAB4; xid 18923; RM 10 info 00 len 72\n0/0059DB58: prv 0/0059DAF0; xprv 0/0059DAF0; xid 18923; RM 11 info 90 len 26\n0/0059DB94: prv 0/0059DB58; xprv 0/0059DB58; xid 18923; RM 10 info 00 len 72\n0/0059DBFC: prv 0/0059DB94; xprv 0/0059DB94; xid 18923; RM 11 info 90 len 26\n0/0059DC38: prv 0/0059DBFC; xprv 0/0059DBFC; xid 18923; RM 10 info 08 len 8226; bkpb 1\n0/0059FC88: prv 0/0059DC38; xprv 0/0059DC38; xid 18923; RM 11 info 98 len 8226; bkpb 1\n0/005A1CD8: prv 0/0059FC88; xprv 0/0059FC88; xid 18923; RM 1 info 00 len 4\ncommit: 2001-02-26 17:21:10\n0/005A1CFC: prv 0/005A1CD8; xprv 0/00000000; xid 18951; RM 15 info 00 len 100\n0/005A1D80: prv 0/005A1CFC; xprv 0/005A1CFC; xid 18951; RM 10 info 00 len 72\n0/005A1DE8: prv 0/005A1D80; xprv 0/005A1D80; xid 18951; RM 11 info 90 len 26\n0/005A1E24: prv 0/005A1DE8; xprv 0/005A1DE8; xid 18951; RM 10 info 00 len 72\n0/005A1E8C: prv 0/005A1E24; xprv 0/005A1E24; xid 18951; RM 11 info 90 len 26\n0/005A1EC8: prv 0/005A1E8C; xprv 0/005A1E8C; xid 18951; RM 10 info 00 len 72\n0/005A1F30: prv 0/005A1EC8; xprv 0/005A1EC8; xid 18951; RM 11 info 90 len 26\n0/005A1F6C: prv 0/005A1F30; xprv 0/005A1F30; xid 18951; RM 10 info 00 len 72\n0/005A1FD4: prv 0/005A1F6C; xprv 0/005A1F6C; xid 18951; RM 11 info 90 len 26\n0/005A201C: prv 0/005A1FD4; xprv 0/005A1FD4; xid 18951; RM 10 info 00 len 65\n0/005A2080: prv 0/005A201C; xprv 0/005A201C; xid 18951; RM 11 info 98 len 8226; bkpb 1\n0/005A40D0: prv 0/005A2080; xprv 0/005A2080; xid 18951; RM 1 info 00 len 4\ncommit: 2001-02-26 17:21:33\n0/005A40F4: prv 0/005A40D0; 
xprv 0/00000000; xid 18986; RM 10 info 00 len 72\n0/005A415C: prv 0/005A40F4; xprv 0/005A40F4; xid 18986; RM 11 info 90 len 26\n0/005A4198: prv 0/005A415C; xprv 0/005A415C; xid 18986; RM 10 info 00 len 72\n0/005A4200: prv 0/005A4198; xprv 0/005A4198; xid 18986; RM 11 info 90 len 26\n0/005A423C: prv 0/005A4200; xprv 0/005A4200; xid 18986; RM 10 info 00 len 72\n0/005A42A4: prv 0/005A423C; xprv 0/005A423C; xid 18986; RM 11 info 90 len 26\n0/005A42E0: prv 0/005A42A4; xprv 0/005A42A4; xid 18986; RM 10 info 00 len 72\n0/005A4348: prv 0/005A42E0; xprv 0/005A42E0; xid 18986; RM 11 info 90 len 26\n0/005A4384: prv 0/005A4348; xprv 0/005A4348; xid 18986; RM 10 info 00 len 65\n0/005A43E8: prv 0/005A4384; xprv 0/005A4384; xid 18986; RM 11 info 90 len 50\n0/005A443C: prv 0/005A43E8; xprv 0/005A43E8; xid 18986; RM 1 info 00 len 4\ncommit: 2001-02-26 17:22:20\n0/005A4460: prv 0/005A443C; xprv 0/00000000; xid 19020; RM 10 info 00 len 72\n0/005A44C8: prv 0/005A4460; xprv 0/005A4460; xid 19020; RM 11 info 90 len 26\n0/005A4504: prv 0/005A44C8; xprv 0/005A44C8; xid 19020; RM 10 info 00 len 72\n0/005A456C: prv 0/005A4504; xprv 0/005A4504; xid 19020; RM 11 info 90 len 26\n0/005A45A8: prv 0/005A456C; xprv 0/005A456C; xid 19020; RM 10 info 00 len 72\n0/005A4610: prv 0/005A45A8; xprv 0/005A45A8; xid 19020; RM 11 info 90 len 26\n0/005A464C: prv 0/005A4610; xprv 0/005A4610; xid 19020; RM 10 info 00 len 72\n0/005A46B4: prv 0/005A464C; xprv 0/005A464C; xid 19020; RM 11 info 90 len 26\n0/005A46F0: prv 0/005A46B4; xprv 0/005A46B4; xid 19020; RM 10 info 00 len 65\n0/005A4754: prv 0/005A46F0; xprv 0/005A46F0; xid 19020; RM 11 info 90 len 50\n0/005A47A8: prv 0/005A4754; xprv 0/005A4754; xid 19020; RM 1 info 00 len 4\ncommit: 2001-02-26 17:24:34\n0/005A47CC: prv 0/005A47A8; xprv 0/00000000; xid 19115; RM 10 info 00 len 76\n0/005A4838: prv 0/005A47CC; xprv 0/005A47CC; xid 19115; RM 11 info 90 len 26\n0/005A4874: prv 0/005A4838; xprv 0/005A4838; xid 19115; RM 10 info 00 len 80\n0/005A48E4: prv 
0/005A4874; xprv 0/005A4874; xid 19115; RM 11 info 90 len 26\n0/005A4920: prv 0/005A48E4; xprv 0/005A48E4; xid 19115; RM 10 info 00 len 76\n0/005A498C: prv 0/005A4920; xprv 0/005A4920; xid 19115; RM 11 info 90 len 26\n0/005A49C8: prv 0/005A498C; xprv 0/005A498C; xid 19115; RM 10 info 00 len 76\n0/005A4A34: prv 0/005A49C8; xprv 0/005A49C8; xid 19115; RM 11 info 90 len 26\n0/005A4A70: prv 0/005A4A34; xprv 0/005A4A34; xid 19115; RM 10 info 00 len 65\n0/005A4AD4: prv 0/005A4A70; xprv 0/005A4A70; xid 19115; RM 11 info 90 len 50\n0/005A4B28: prv 0/005A4AD4; xprv 0/005A4AD4; xid 19115; RM 1 info 00 len 4\ncommit: 2001-02-26 17:26:02\nReadRecord: record with zero len at 0/005A4B4C\n-- My dump program is unhappy here because the rest of the page is zero.\n-- Given that there is a continuation record at the start of the next\n-- page, there certainly should have been record(s) here. But it's\n-- worse than that: check the commit timestamps and the xid numbers\n-- before and after the discontinuity. Did time go backwards here?\n-- Also notice the back-pointers in the first valid record on the next\n-- page; they point not into the zeroed space, which would suggest a\n-- mere failure to write a buffer after filling it, but into the middle\n-- of one of the valid records on the prior page. 
It almost looks like\n-- page 5A6000 came from a completely different run than page 5A4000.\nUnexpected page info flags 0001 at offset 5A6000\nSkipping unexpected continuation record at offset 5A6000\n0/005A6904: prv 0/005A48B4(?); xprv 0/005A48B4; xid 19047; RM 11 info 98 len 8226; bkpb 1\n0/005A8954: prv 0/005A6904; xprv 0/005A6904; xid 19047; RM 10 info 00 len 72\n0/005A89BC: prv 0/005A8954; xprv 0/005A8954; xid 19047; RM 11 info 90 len 26\n0/005A89F8: prv 0/005A89BC; xprv 0/005A89BC; xid 19047; RM 10 info 00 len 72\n0/005A8A60: prv 0/005A89F8; xprv 0/005A89F8; xid 19047; RM 11 info 90 len 26\n0/005A8A9C: prv 0/005A8A60; xprv 0/005A8A60; xid 19047; RM 10 info 00 len 72\n0/005A8B04: prv 0/005A8A9C; xprv 0/005A8A9C; xid 19047; RM 11 info 90 len 26\n0/005A8B40: prv 0/005A8B04; xprv 0/005A8B04; xid 19047; RM 10 info 08 len 8226; bkpb 1\n0/005AAB90: prv 0/005A8B40; xprv 0/005A8B40; xid 19047; RM 11 info 98 len 8226; bkpb 1\n0/005ACBE0: prv 0/005AAB90; xprv 0/005AAB90; xid 19047; RM 1 info 00 len 4\ncommit: 2001-02-26 17:25:38\n0/005ACC04: prv 0/005ACBE0; xprv 0/00000000; xid 19088; RM 10 info 00 len 72\n0/005ACC6C: prv 0/005ACC04; xprv 0/005ACC04; xid 19088; RM 11 info 90 len 26\n0/005ACCA8: prv 0/005ACC6C; xprv 0/005ACC6C; xid 19088; RM 10 info 00 len 72\n0/005ACD10: prv 0/005ACCA8; xprv 0/005ACCA8; xid 19088; RM 11 info 90 len 26\n0/005ACD4C: prv 0/005ACD10; xprv 0/005ACD10; xid 19088; RM 10 info 00 len 72\n0/005ACDB4: prv 0/005ACD4C; xprv 0/005ACD4C; xid 19088; RM 11 info 90 len 26\n0/005ACDF0: prv 0/005ACDB4; xprv 0/005ACDB4; xid 19088; RM 10 info 00 len 72\n0/005ACE58: prv 0/005ACDF0; xprv 0/005ACDF0; xid 19088; RM 11 info 90 len 26\n0/005ACE94: prv 0/005ACE58; xprv 0/005ACE58; xid 19088; RM 10 info 00 len 65\n0/005ACEF8: prv 0/005ACE94; xprv 0/005ACE94; xid 19088; RM 11 info 90 len 50\n0/005ACF4C: prv 0/005ACEF8; xprv 0/005ACEF8; xid 19088; RM 1 info 00 len 4\ncommit: 2001-02-26 17:26:43\n0/005ACF70: prv 0/005ACF4C; xprv 0/00000000; xid 19109; RM 10 info 00 len 
72\n0/005ACFD8: prv 0/005ACF70; xprv 0/005ACF70; xid 19109; RM 11 info 90 len 26\n0/005AD014: prv 0/005ACFD8; xprv 0/005ACFD8; xid 19109; RM 10 info 00 len 72\n0/005AD07C: prv 0/005AD014; xprv 0/005AD014; xid 19109; RM 11 info 90 len 26\n0/005AD0B8: prv 0/005AD07C; xprv 0/005AD07C; xid 19109; RM 10 info 00 len 72\n0/005AD120: prv 0/005AD0B8; xprv 0/005AD0B8; xid 19109; RM 11 info 90 len 26\n0/005AD15C: prv 0/005AD120; xprv 0/005AD120; xid 19109; RM 10 info 00 len 72\n0/005AD1C4: prv 0/005AD15C; xprv 0/005AD15C; xid 19109; RM 11 info 90 len 26\n0/005AD200: prv 0/005AD1C4; xprv 0/005AD1C4; xid 19109; RM 10 info 00 len 65\n0/005AD264: prv 0/005AD200; xprv 0/005AD200; xid 19109; RM 11 info 98 len 8226; bkpb 1\n0/005AF2B4: prv 0/005AD264; xprv 0/005AD264; xid 19109; RM 1 info 00 len 4\ncommit: 2001-02-26 17:26:59\n0/005AF2D8: prv 0/005AF2B4; xprv 0/00000000; xid 19224; RM 10 info 00 len 72\n0/005AF340: prv 0/005AF2D8; xprv 0/005AF2D8; xid 19224; RM 11 info 90 len 26\n0/005AF37C: prv 0/005AF340; xprv 0/005AF340; xid 19224; RM 10 info 00 len 72\n0/005AF3E4: prv 0/005AF37C; xprv 0/005AF37C; xid 19224; RM 11 info 90 len 26\n0/005AF420: prv 0/005AF3E4; xprv 0/005AF3E4; xid 19224; RM 10 info 00 len 72\n0/005AF488: prv 0/005AF420; xprv 0/005AF420; xid 19224; RM 11 info 90 len 26\n0/005AF4C4: prv 0/005AF488; xprv 0/005AF488; xid 19224; RM 10 info 00 len 72\n0/005AF52C: prv 0/005AF4C4; xprv 0/005AF4C4; xid 19224; RM 11 info 90 len 26\n0/005AF568: prv 0/005AF52C; xprv 0/005AF52C; xid 19224; RM 10 info 00 len 65\n0/005AF5CC: prv 0/005AF568; xprv 0/005AF568; xid 19224; RM 11 info 90 len 50\n0/005AF620: prv 0/005AF5CC; xprv 0/005AF5CC; xid 19224; RM 1 info 00 len 4\ncommit: 2001-02-26 17:28:39\n0/005AF644: prv 0/005AF620; xprv 0/00000000; xid 19229; RM 10 info 00 len 72\n0/005AF6AC: prv 0/005AF644; xprv 0/005AF644; xid 19229; RM 11 info 90 len 26\n0/005AF6E8: prv 0/005AF6AC; xprv 0/005AF6AC; xid 19229; RM 10 info 00 len 72\n0/005AF750: prv 0/005AF6E8; xprv 0/005AF6E8; xid 19229; RM 
11 info 90 len 26\n0/005AF78C: prv 0/005AF750; xprv 0/005AF750; xid 19229; RM 10 info 00 len 72\n0/005AF7F4: prv 0/005AF78C; xprv 0/005AF78C; xid 19229; RM 11 info 90 len 26\n0/005AF830: prv 0/005AF7F4; xprv 0/005AF7F4; xid 19229; RM 10 info 00 len 72\n0/005AF898: prv 0/005AF830; xprv 0/005AF830; xid 19229; RM 11 info 90 len 26\n0/005AF8D4: prv 0/005AF898; xprv 0/005AF898; xid 19229; RM 10 info 00 len 65\n0/005AF938: prv 0/005AF8D4; xprv 0/005AF8D4; xid 19229; RM 11 info 90 len 50\n0/005AF98C: prv 0/005AF938; xprv 0/005AF938; xid 19229; RM 1 info 00 len 4\ncommit: 2001-02-26 17:28:50\n0/005AF9B0: prv 0/005AF98C; xprv 0/00000000; xid 0; RM 0 info 00 len 32\ncheckpoint: redo 0/005AF9B0; undo 0/00000000; sui 30; nextxid 19243; nextoid 43387; online\n-- This is the only checkpoint record present in the log after the\n-- normal-looking one at 17:20. There should have been checkpoints\n-- at 17:25, 17:30, 17:35, 17:40, 17:45, not to mention one from the\n-- eventual shutdown which seems to have been done around 17:49.\n-- From the surrounding timestamps this one must be either 17:30 or 17:35.\n-- What's even nastier (and the immediate cause of Scott's inability to\n-- restart) is that the pg_control file's checkPoint pointer points to\n-- 0/005AF9F0, which is *not* the location of this checkpoint, but of\n-- the record after it.\n-- Is that meaningful, or just random coincidence? 
Can't tell yet.\n-- Oh BTW, the timestamp in the pg_control file is 2001-02-26 17:34:09,\n-- which does not correspond to any scheduled checkpoint.\n0/005AF9F0: prv 0/005AF9B0; xprv 0/00000000; xid 19444; RM 10 info 08 len 8226; bkpb 1\n0/005B1A40: prv 0/005AF9F0; xprv 0/005AF9F0; xid 19444; RM 11 info 98 len 8226; bkpb 1\n0/005B3A90: prv 0/005B1A40; xprv 0/005B1A40; xid 19444; RM 10 info 00 len 80\n0/005B3B00: prv 0/005B3A90; xprv 0/005B3A90; xid 19444; RM 11 info 90 len 26\n0/005B3B3C: prv 0/005B3B00; xprv 0/005B3B00; xid 19444; RM 10 info 00 len 72\n0/005B3BA4: prv 0/005B3B3C; xprv 0/005B3B3C; xid 19444; RM 11 info 90 len 26\n0/005B3BE0: prv 0/005B3BA4; xprv 0/005B3BA4; xid 19444; RM 10 info 00 len 72\n0/005B3C48: prv 0/005B3BE0; xprv 0/005B3BE0; xid 19444; RM 11 info 90 len 26\n0/005B3C84: prv 0/005B3C48; xprv 0/005B3C48; xid 19444; RM 10 info 08 len 8226; bkpb 1\n0/005B5CD4: prv 0/005B3C84; xprv 0/005B3C84; xid 19444; RM 11 info 98 len 8226; bkpb 1\n0/005B7D24: prv 0/005B5CD4; xprv 0/005B5CD4; xid 19444; RM 1 info 00 len 4\ncommit: 2001-02-26 17:35:13\n0/005B7D48: prv 0/005B7D24; xprv 0/00000000; xid 19495; RM 10 info 00 len 72\n0/005B7DB0: prv 0/005B7D48; xprv 0/005B7D48; xid 19495; RM 11 info 90 len 26\n0/005B7DEC: prv 0/005B7DB0; xprv 0/005B7DB0; xid 19495; RM 10 info 00 len 72\n0/005B7E54: prv 0/005B7DEC; xprv 0/005B7DEC; xid 19495; RM 11 info 90 len 26\n0/005B7E90: prv 0/005B7E54; xprv 0/005B7E54; xid 19495; RM 10 info 00 len 72\n0/005B7EF8: prv 0/005B7E90; xprv 0/005B7E90; xid 19495; RM 11 info 90 len 26\n0/005B7F34: prv 0/005B7EF8; xprv 0/005B7EF8; xid 19495; RM 10 info 00 len 72\n0/005B7F9C: prv 0/005B7F34; xprv 0/005B7F34; xid 19495; RM 11 info 90 len 26\n0/005B7FD8: prv 0/005B7F9C; xprv 0/005B7F9C; xid 19495; RM 10 info 00 len 69\n0/005B804C: prv 0/005B7FD8; xprv 0/005B7FD8; xid 19495; RM 11 info 98 len 8226; bkpb 1\n0/005BA09C: prv 0/005B804C; xprv 0/005B804C; xid 19495; RM 1 info 00 len 4\ncommit: 2001-02-26 17:36:32\n0/005BA0C0: prv 0/005BA09C; 
xprv 0/00000000; xid 19527; RM 10 info 00 len 72\n0/005BA128: prv 0/005BA0C0; xprv 0/005BA0C0; xid 19527; RM 11 info 90 len 26\n0/005BA164: prv 0/005BA128; xprv 0/005BA128; xid 19527; RM 10 info 00 len 76\n0/005BA1D0: prv 0/005BA164; xprv 0/005BA164; xid 19527; RM 11 info 90 len 26\n0/005BA20C: prv 0/005BA1D0; xprv 0/005BA1D0; xid 19527; RM 10 info 00 len 72\n0/005BA274: prv 0/005BA20C; xprv 0/005BA20C; xid 19527; RM 11 info 90 len 26\n0/005BA2B0: prv 0/005BA274; xprv 0/005BA274; xid 19527; RM 10 info 00 len 72\n0/005BA318: prv 0/005BA2B0; xprv 0/005BA2B0; xid 19527; RM 11 info 90 len 26\n0/005BA354: prv 0/005BA318; xprv 0/005BA318; xid 19527; RM 10 info 00 len 65\n0/005BA3B8: prv 0/005BA354; xprv 0/005BA354; xid 19527; RM 11 info 90 len 50\n0/005BA40C: prv 0/005BA3B8; xprv 0/005BA3B8; xid 19527; RM 1 info 00 len 4\ncommit: 2001-02-26 17:37:59\n0/005BA430: prv 0/005BA40C; xprv 0/00000000; xid 19540; RM 10 info 00 len 72\n0/005BA498: prv 0/005BA430; xprv 0/005BA430; xid 19540; RM 11 info 90 len 26\n0/005BA4D4: prv 0/005BA498; xprv 0/00000000; xid 19540; RM 15 info 00 len 100\n0/005BA558: prv 0/005BA4D4; xprv 0/005BA4D4; xid 19540; RM 10 info 00 len 76\n0/005BA5C4: prv 0/005BA558; xprv 0/005BA558; xid 19540; RM 11 info 90 len 26\n0/005BA600: prv 0/005BA5C4; xprv 0/005BA5C4; xid 19540; RM 10 info 00 len 72\n0/005BA668: prv 0/005BA600; xprv 0/005BA600; xid 19540; RM 11 info 90 len 26\n0/005BA6A4: prv 0/005BA668; xprv 0/005BA668; xid 19540; RM 10 info 00 len 72\n0/005BA70C: prv 0/005BA6A4; xprv 0/005BA6A4; xid 19540; RM 11 info 90 len 26\n0/005BA748: prv 0/005BA70C; xprv 0/005BA70C; xid 19540; RM 10 info 00 len 65\n0/005BA7AC: prv 0/005BA748; xprv 0/005BA748; xid 19540; RM 11 info 90 len 50\n0/005BA800: prv 0/005BA7AC; xprv 0/005BA7AC; xid 19540; RM 1 info 00 len 4\ncommit: 2001-02-26 17:39:03\n0/005BA824: prv 0/005BA800; xprv 0/00000000; xid 19605; RM 10 info 00 len 72\n0/005BA88C: prv 0/005BA824; xprv 0/005BA824; xid 19605; RM 11 info 90 len 26\n0/005BA8C8: prv 
0/005BA88C; xprv 0/005BA88C; xid 19605; RM 10 info 00 len 72\n0/005BA930: prv 0/005BA8C8; xprv 0/005BA8C8; xid 19605; RM 11 info 90 len 26\n0/005BA96C: prv 0/005BA930; xprv 0/005BA930; xid 19605; RM 10 info 00 len 72\n0/005BA9D4: prv 0/005BA96C; xprv 0/005BA96C; xid 19605; RM 11 info 90 len 26\n0/005BAA10: prv 0/005BA9D4; xprv 0/005BA9D4; xid 19605; RM 10 info 00 len 72\n0/005BAA78: prv 0/005BAA10; xprv 0/005BAA10; xid 19605; RM 11 info 90 len 26\n0/005BAAB4: prv 0/005BAA78; xprv 0/005BAA78; xid 19605; RM 10 info 00 len 65\n0/005BAB18: prv 0/005BAAB4; xprv 0/005BAAB4; xid 19605; RM 11 info 90 len 50\n0/005BAB6C: prv 0/005BAB18; xprv 0/005BAB18; xid 19605; RM 1 info 00 len 4\ncommit: 2001-02-26 17:41:09\n0/005BAB90: prv 0/005BAB6C; xprv 0/00000000; xid 19610; RM 10 info 00 len 72\n0/005BABF8: prv 0/005BAB90; xprv 0/005BAB90; xid 19610; RM 11 info 90 len 26\n0/005BAC34: prv 0/005BABF8; xprv 0/005BABF8; xid 19610; RM 10 info 00 len 72\n0/005BAC9C: prv 0/005BAC34; xprv 0/005BAC34; xid 19610; RM 11 info 90 len 26\n0/005BACD8: prv 0/005BAC9C; xprv 0/005BAC9C; xid 19610; RM 10 info 00 len 72\n0/005BAD40: prv 0/005BACD8; xprv 0/005BACD8; xid 19610; RM 11 info 90 len 26\n0/005BAD7C: prv 0/005BAD40; xprv 0/005BAD40; xid 19610; RM 10 info 00 len 72\n0/005BADE4: prv 0/005BAD7C; xprv 0/005BAD7C; xid 19610; RM 11 info 90 len 26\n0/005BAE20: prv 0/005BADE4; xprv 0/005BADE4; xid 19610; RM 10 info 00 len 65\n0/005BAE84: prv 0/005BAE20; xprv 0/005BAE20; xid 19610; RM 11 info 90 len 50\n0/005BAED8: prv 0/005BAE84; xprv 0/005BAE84; xid 19610; RM 1 info 00 len 4\ncommit: 2001-02-26 17:41:11\n0/005BAEFC: prv 0/005BAED8; xprv 0/00000000; xid 19718; RM 10 info 00 len 72\n0/005BAF64: prv 0/005BAEFC; xprv 0/005BAEFC; xid 19718; RM 11 info 90 len 26\n0/005BAFA0: prv 0/005BAF64; xprv 0/005BAF64; xid 19718; RM 10 info 00 len 72\n0/005BB008: prv 0/005BAFA0; xprv 0/005BAFA0; xid 19718; RM 11 info 90 len 26\n0/005BB044: prv 0/005BB008; xprv 0/005BB008; xid 19718; RM 10 info 00 len 72\n0/005BB0AC: 
prv 0/005BB044; xprv 0/005BB044; xid 19718; RM 11 info 90 len 26\n0/005BB0E8: prv 0/005BB0AC; xprv 0/005BB0AC; xid 19718; RM 10 info 00 len 72\n0/005BB150: prv 0/005BB0E8; xprv 0/005BB0E8; xid 19718; RM 11 info 90 len 26\n0/005BB18C: prv 0/005BB150; xprv 0/005BB150; xid 19718; RM 10 info 00 len 65\n0/005BB1F0: prv 0/005BB18C; xprv 0/005BB18C; xid 19718; RM 11 info 90 len 50\n0/005BB244: prv 0/005BB1F0; xprv 0/005BB1F0; xid 19718; RM 1 info 00 len 4\ncommit: 2001-02-26 17:44:57\n0/005BB268: prv 0/005BB244; xprv 0/00000000; xid 19775; RM 10 info 00 len 72\n0/005BB2D0: prv 0/005BB268; xprv 0/005BB268; xid 19775; RM 11 info 90 len 26\n0/005BB30C: prv 0/005BB2D0; xprv 0/005BB2D0; xid 19775; RM 10 info 00 len 72\n0/005BB374: prv 0/005BB30C; xprv 0/005BB30C; xid 19775; RM 11 info 90 len 26\n0/005BB3B0: prv 0/005BB374; xprv 0/005BB374; xid 19775; RM 10 info 00 len 72\n0/005BB418: prv 0/005BB3B0; xprv 0/005BB3B0; xid 19775; RM 11 info 90 len 26\n0/005BB454: prv 0/005BB418; xprv 0/005BB418; xid 19775; RM 10 info 00 len 72\n0/005BB4BC: prv 0/005BB454; xprv 0/005BB454; xid 19775; RM 11 info 90 len 26\n0/005BB4F8: prv 0/005BB4BC; xprv 0/005BB4BC; xid 19775; RM 10 info 00 len 65\n0/005BB55C: prv 0/005BB4F8; xprv 0/005BB4F8; xid 19775; RM 11 info 90 len 50\n0/005BB5B0: prv 0/005BB55C; xprv 0/005BB55C; xid 19775; RM 1 info 00 len 4\ncommit: 2001-02-26 17:47:38\n0/005BB5D4: prv 0/005BB5B0; xprv 0/00000000; xid 19827; RM 10 info 00 len 72\n0/005BB63C: prv 0/005BB5D4; xprv 0/005BB5D4; xid 19827; RM 11 info 90 len 26\n0/005BB678: prv 0/005BB63C; xprv 0/005BB63C; xid 19827; RM 10 info 00 len 72\n0/005BB6E0: prv 0/005BB678; xprv 0/005BB678; xid 19827; RM 11 info 90 len 26\n0/005BB71C: prv 0/005BB6E0; xprv 0/005BB6E0; xid 19827; RM 10 info 00 len 72\n0/005BB784: prv 0/005BB71C; xprv 0/005BB71C; xid 19827; RM 11 info 90 len 26\n0/005BB7C0: prv 0/005BB784; xprv 0/005BB784; xid 19827; RM 10 info 00 len 72\n0/005BB828: prv 0/005BB7C0; xprv 0/005BB7C0; xid 19827; RM 11 info 90 len 
26\n0/005BB864: prv 0/005BB828; xprv 0/005BB828; xid 19827; RM 10 info 00 len 65\n0/005BB8C8: prv 0/005BB864; xprv 0/005BB864; xid 19827; RM 11 info 90 len 50\n0/005BB91C: prv 0/005BB8C8; xprv 0/005BB8C8; xid 19827; RM 1 info 00 len 4\ncommit: 2001-02-26 17:49:00\n0/005BB940: prv 0/005BB91C; xprv 0/00000000; xid 19832; RM 10 info 00 len 72\n0/005BB9A8: prv 0/005BB940; xprv 0/005BB940; xid 19832; RM 11 info 90 len 26\n0/005BB9E4: prv 0/005BB9A8; xprv 0/005BB9A8; xid 19832; RM 10 info 00 len 72\n0/005BBA4C: prv 0/005BB9E4; xprv 0/005BB9E4; xid 19832; RM 11 info 90 len 26\n0/005BBA88: prv 0/005BBA4C; xprv 0/005BBA4C; xid 19832; RM 10 info 00 len 72\n0/005BBAF0: prv 0/005BBA88; xprv 0/005BBA88; xid 19832; RM 11 info 90 len 26\n0/005BBB2C: prv 0/005BBAF0; xprv 0/005BBAF0; xid 19832; RM 10 info 00 len 72\n0/005BBB94: prv 0/005BBB2C; xprv 0/005BBB2C; xid 19832; RM 11 info 90 len 26\n0/005BBBD0: prv 0/005BBB94; xprv 0/005BBB94; xid 19832; RM 10 info 00 len 65\n0/005BBC34: prv 0/005BBBD0; xprv 0/005BBBD0; xid 19832; RM 11 info 90 len 50\n0/005BBC88: prv 0/005BBC34; xprv 0/005BBC34; xid 19832; RM 1 info 00 len 4\ncommit: 2001-02-26 17:49:06\nReadRecord: record with zero len at 0/005BBCAC\n-- this is where the log actually ends --- zeroes from here out.\n", "msg_date": "Fri, 02 Mar 2001 10:37:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "WAL & RC1 status" }, { "msg_contents": "> I am *not* feeling good about pushing out an RC1 release candidate\n> today.\n> \n> I've been going through the WAL code, trying to understand it and\n> document it. I've found a number of minor problems and several major\n> ones (\"major\" meaning \"can't really fix without an incompatible file\n> format change, hence initdb\"). 
I've reported the major problems to\n> the mailing lists but gotten almost no feedback about what to do.\n> \n> In addition, I'm still looking for the bug that I originally went in to\n> find: Scott Parish's report of being unable to restart after a normal\n> shutdown of beta4. Examination of his WAL log shows some pretty serious\n> lossage (see attached dump). My current theory is that the\n> buffer-slinging logic in xlog.c dropped one or more whole buffers' worth\n> of log records, but I haven't figured out exactly how.\n> \n> I want to veto putting out an RC1 until these issues are resolved...\n> comments?\n\nI was not sure how to respond. Requiring an initdb at this stage seems\nlike it could be a pretty major blow to beta testers. However, if we\nwill have 7.1 problems with WAL that can not be fixed without a file\nformat change, we will have problems down the road. Is there a version\nnumber in the WAL file? Can we put conditional code in there to create\nnew log file records with an updated format?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Mar 2001 10:43:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL & RC1 status" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is there a version number in the WAL file?\n\ncatversion.h will do fine, no?\n\n> Can we put conditional code in there to create\n> new log file records with an updated format?\n\nThe WAL stuff is *far* too complex already. I've spent a week studying\nit and I only partially understand it. 
I will not consent to trying to\nsupport multiple log file formats concurrently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Mar 2001 10:48:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL & RC1 status " }, { "msg_contents": "On Fri, 2 Mar 2001, Tom Lane wrote:\n\n> I am *not* feeling good about pushing out an RC1 release candidate\n> today.\n>\n> I've been going through the WAL code, trying to understand it and\n> document it. I've found a number of minor problems and several major\n> ones (\"major\" meaning \"can't really fix without an incompatible file\n> format change, hence initdb\"). I've reported the major problems to\n> the mailing lists but gotten almost no feedback about what to do.\n>\n> In addition, I'm still looking for the bug that I originally went in to\n> find: Scott Parish's report of being unable to restart after a normal\n> shutdown of beta4. Examination of his WAL log shows some pretty serious\n> lossage (see attached dump). My current theory is that the\n> buffer-slinging logic in xlog.c dropped one or more whole buffers' worth\n> of log records, but I haven't figured out exactly how.\n>\n> I want to veto putting out an RC1 until these issues are resolved...\n> comments?\n\nWill second it ... Vadim is supposed to be back on the 6th, and Peter has\na couple of changes to configure he wants to do this weekend for the JDBC\nstuff ... Thomas and I are in SF the end of next week for some meetings,\nso if you can pop off a summary of what you've found to either of us, and\nassuming that Vadim doesn't get caught up by then, we can bring them up\n\"in person\" at that time ... 
?\n\n\n", "msg_date": "Fri, 2 Mar 2001 11:51:11 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: WAL & RC1 status" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is there a version number in the WAL file?\n> \n> catversion.h will do fine, no?\n> \n> > Can we put conditional code in there to create\n> > new log file records with an updated format?\n> \n> The WAL stuff is *far* too complex already. I've spent a week studying\n> it and I only partially understand it. I will not consent to trying to\n> support multiple log file formats concurrently.\n\nWell, I was thinking a few things. Right now, if we update the\ncatversion.h, we will require a dump/reload. If we can update just the\nWAL version stamp, that will allow us to fix WAL format problems without\nrequiring people to dump/reload. I can imagine this would be valuable\nif we find we need to make changes in 7.1.1, where we can not require\ndump/reload.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Mar 2001 10:54:04 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL & RC1 status" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Well, I was thinking a few things. Right now, if we update the\n> catversion.h, we will require a dump/reload. If we can update just the\n> WAL version stamp, that will allow us to fix WAL format problems without\n> requiring people to dump/reload.\n\nSince there is not a separate WAL version stamp, introducing one now\nwould certainly force an initdb. I don't mind adding one if you think\nit's useful; another 4 bytes in pg_control won't hurt anything. 
But\nit's not going to save anyone's bacon on this cycle.\n\nAt least one of my concerns (single point of failure) would require a\nchange to the layout of pg_control, which would force initdb anyway.\nAnyone want to propose a third version# for pg_control?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Mar 2001 11:03:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL & RC1 status " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Well, I was thinking a few things. Right now, if we update the\n> > catversion.h, we will require a dump/reload. If we can update just the\n> > WAL version stamp, that will allow us to fix WAL format problems without\n> > requiring people to dump/reload.\n> \n> Since there is not a separate WAL version stamp, introducing one now\n> would certainly force an initdb. I don't mind adding one if you think\n> it's useful; another 4 bytes in pg_control won't hurt anything. But\n> it's not going to save anyone's bacon on this cycle.\n\nHaving a version number of binary files has saved me many times because\nI can add a little 'if' to allow upward binary compatibility without\nbreaking old binary files. I think we should have one.\n\nI see our btree files, but I don't see one in heap. I am going to\nrecommend that for 7.2. All our files should have versions just in case\nwe ever need it. Some day, we may be able to skip dump/reload for major\nversions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Mar 2001 11:09:05 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL & RC1 status" }, { "msg_contents": "> I've been going through the WAL code, trying to understand it and\n> document it. 
I've found a number of minor problems and several major\n> ones (\"major\" meaning \"can't really fix without an incompatible file\n> format change, hence initdb\"). I've reported the major problems to\n> the mailing lists but gotten almost no feedback about what to do.\n\nSorry for the \"no feedback\", but I've assumed that this will be more\nproductively discussed with Vadim in the loop. I don't disagree with\nyour observations, but of course that is from a position of happy\nignorance :)\n\n> ... I want to veto putting out an RC1 until these issues are resolved...\n> comments?\n\nOK with me.\n\n - Thomas\n", "msg_date": "Fri, 02 Mar 2001 16:21:29 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: WAL & RC1 status" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Well, I was thinking a few things. Right now, if we update the\n> > catversion.h, we will require a dump/reload. If we can update just the\n> > WAL version stamp, that will allow us to fix WAL format problems without\n> > requiring people to dump/reload.\n> \n> Since there is not a separate WAL version stamp, introducing one now\n> would certainly force an initdb. I don't mind adding one if you think\n> it's useful; another 4 bytes in pg_control won't hurt anything. But\n> it's not going to save anyone's bacon on this cycle.\n> \n> At least one of my concerns (single point of failure) would require a\n> change to the layout of pg_control, which would force initdb anyway.\n> Anyone want to propose a third version# for pg_control?\n\nI now remember Hiroshi complaining about major WAL problems also,\nparticularly corrupt WAL files preventing the database from starting.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Mar 2001 11:37:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL & RC1 status" }, { "msg_contents": "On Fri, Mar 02, 2001 at 10:54:04AM -0500, Bruce Momjian wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Is there a version number in the WAL file?\n> > \n> > catversion.h will do fine, no?\n> > \n> > > Can we put conditional code in there to create\n> > > new log file records with an updated format?\n> > \n> > The WAL stuff is *far* too complex already. I've spent a week studying\n> > it and I only partially understand it. I will not consent to trying to\n> > support multiple log file formats concurrently.\n> \n> Well, I was thinking a few things. Right now, if we update the\n> catversion.h, we will require a dump/reload. If we can update just the\n> WAL version stamp, that will allow us to fix WAL format problems without\n> requiring people to dump/reload. I can imagine this would be valuable\n> if we find we need to make changes in 7.1.1, where we can not require\n> dump/reload.\n\nIt Seems to Me that after an orderly shutdown, the WAL files should be, \neffectively, slag -- they should contain no deltas from the current \ntable contents. In practice that means the only part of the format that \n*should* matter is whatever it takes to discover that they really are \nslag.\n\nThat *should* mean that, at worst, a change to the WAL file format should \nonly require doing an orderly shutdown, and then (perhaps) running a simple\nprogram to generate a new-format empty WAL. It ought not to require an \ninitdb. \n\nOf course the details of the current implementation may interfere with\nthat ideal, but it seems a worthy goal for the next beta, if it's not\npossible already. 
Given the opportunity to change the current WAL format, \nit ought to be possible to avoid even needing to run a program to generate \nan empty WAL.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Fri, 2 Mar 2001 12:54:12 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: WAL & RC1 status" }, { "msg_contents": "> It Seems to Me that after an orderly shutdown, the WAL files should be, \n> effectively, slag -- they should contain no deltas from the current \n> table contents. In practice that means the only part of the format that \n> *should* matter is whatever it takes to discover that they really are \n> slag.\n\n> \n> That *should* mean that, at worst, a change to the WAL file format should \n> only require doing an orderly shutdown, and then (perhaps) running a simple\n> program to generate a new-format empty WAL. It ought not to require an \n> initdb. \n> \n> Of course the details of the current implementation may interfere with\n> that ideal, but it seems a worthy goal for the next beta, if it's not\n> possible already. Given the opportunity to change the current WAL format, \n> it ought to be possible to avoid even needing to run a program to generate \n> an empty WAL.\n\nThis was my question too. If we are just changing WAL, why can't we\njust have them stop the postmaster, install the new binaries, and\nrestart.\n\nTom told me on the phone that there was a magic number in the WAL log\nfile, and I see it now:\n\n\t#define XLOG_PAGE_MAGIC 0x17345168\n\nCouldn't we just have our new beta ignore WAL pages with this entry,\nknowing that startup/shutdown creates new WAL files anyway, \n\nAside from inconveniencing the beta users, people can do testing easier\nif we don't require a dump/reload for every WAL format change.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 2 Mar 2001 17:10:33 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL & RC1 status" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> It Seems to Me that after an orderly shutdown, the WAL files should be, \n> effectively, slag -- they should contain no deltas from the current \n> table contents. In practice that means the only part of the format that \n> *should* matter is whatever it takes to discover that they really are \n> slag.\n\n> That *should* mean that, at worst, a change to the WAL file format should \n> only require doing an orderly shutdown, and then (perhaps) running a simple\n> program to generate a new-format empty WAL. It ought not to require an \n> initdb. \n\nExcellent point, considering that we were already thinking of making a\nhandy-dandy little utility to remove broken WAL files... Shouldn't take\nmuch more than that to build something that also reformats pg_control.\nThanks for the suggestion!\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Mar 2001 19:21:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL & RC1 status " } ]
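Bruce's message above quotes the 7.1-era `XLOG_PAGE_MAGIC 0x17345168` constant. The kind of "is this really a WAL page of the format I understand?" check being discussed can be sketched outside the backend. This is an illustrative stand-alone sketch only: it assumes the magic value sits in the first four bytes of each page as a little-endian unsigned int, whereas the real page header layout (and byte order) is defined by the backend's xlog code.

```python
import struct

# The magic value quoted in the message above (7.1-era XLOG_PAGE_MAGIC).
XLOG_PAGE_MAGIC = 0x17345168

def wal_page_magic_ok(page: bytes) -> bool:
    """Check whether a page begins with the expected magic number.

    Illustrative only: assumes the magic is the first 4 bytes,
    little-endian; the real layout lives in the backend's xlog code.
    """
    if len(page) < 4:
        return False
    (magic,) = struct.unpack("<I", page[:4])
    return magic == XLOG_PAGE_MAGIC

good = struct.pack("<I", XLOG_PAGE_MAGIC) + b"\x00" * 8188
print(wal_page_magic_ok(good))            # True
print(wal_page_magic_ok(b"\x00" * 8192))  # False
```

A separate WAL/pg_control version stamp, as proposed in the thread, would be checked the same way: read the stamp, and refuse (or convert) pages whose stamp does not match the running binary's.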
[ { "msg_contents": "\n\n> From:\tBruce Momjian [SMTP:pgman@candle.pha.pa.us]\n> Sent:\tFriday, March 02, 2001 9:54 AM\n> To:\tTom Lane\n> Cc:\tpgsql-core@postgresql.org; pgsql-hackers@postgresql.org\n> Subject:\tRe: [HACKERS] WAL & RC1 status\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Is there a version number in the WAL file?\n> > \n> > catversion.h will do fine, no?\n> > \n> > > Can we put conditional code in there to create\n> > > new log file records with an updated format?\n> > \n> \n\tWhile it may be unfortunate to have to do an initdb at this point in\nthe beta cycle, it is a beta and that is part of the deal. Postgre has the\nreputation of being the highest quality opensource database and we should do\nnothing to tarnish that. Release it when it's ready and not before.\n", "msg_date": "Fri, 2 Mar 2001 10:22:58 -0600 ", "msg_from": "Matthew <matt@ctlno.com>", "msg_from_op": true, "msg_subject": "RE: WAL & RC1 status" } ]
[ { "msg_contents": "All,\n\nWe're pleased to announce the availablity of binary packaged versions of \nPostgreSQL 7.0.3 for Solaris 7.\n\nWe've set up a project on the GreatBridge.org open source project \nhosting site called \"PgSol,\" which we hope will be a focal point for \nSolaris packaging work. We're looking for volunteers to test the \npackages; any and all comments, criticism, and help welcome.\n\nhttp://www.greatbridge.com/download/pgsol.php\n\nhttp://www.greatbridge.org/project/pgsol/\n\nRegards,\nNed\n\n-- \n----------------------------------------------------\nNed Lilly e: ned@greatbridge.com\nVice President w: www.greatbridge.com\nEvangelism / Hacker Relations v: 757.233.5523\nGreat Bridge, LLC f: 757.233.5555\n\n", "msg_date": "Fri, 02 Mar 2001 17:01:22 -0500", "msg_from": "Ned Lilly <ned@greatbridge.com>", "msg_from_op": true, "msg_subject": "PostgreSQL for Solaris packages" }, { "msg_contents": "While I'm working on both Solaris 7 and 8, I'll check it out later (on heavy\nwork right now).\nNormally I compile, so if you need any help, info, or output from a compilation,\ncall me volunteer number 1!\n\nsaludos... :-)\n\nMensaje citado por: Ned Lilly <ned@greatbridge.com>:\n\n> All,\n> \n> We're pleased to announce the availablity of binary packaged versions of\n> \n> PostgreSQL 7.0.3 for Solaris 7.\n> \n> We've set up a project on the GreatBridge.org open source project \n> hosting site called \"PgSol,\" which we hope will be a focal point for \n> Solaris packaging work. 
We're looking for volunteers to test the \n> packages; any and all comments, criticism, and help welcome.\n> \n> http://www.greatbridge.com/download/pgsol.php\n> \n> http://www.greatbridge.org/project/pgsol/\n> \n> Regards,\n> Ned\n> \n> -- \n> ----------------------------------------------------\n> Ned Lilly e: ned@greatbridge.com\n> Vice President w: www.greatbridge.com\n> Evangelism / Hacker Relations v: 757.233.5523\n> Great Bridge, LLC f: 757.233.5555\n> \n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to\n> majordomo@postgresql.org)\n> \n\n\n\nSystem Administration: It's a dirty job,\nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s email: martin@math.unl.edu.ar\nSanta Fe - Argentina http://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Fri, 02 Mar 2001 19:10:46 -0300 (ART)", "msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL for Solaris packages" } ]
[ { "msg_contents": "Hello,\n\nWe noticed that after upgrading to 7.1beta[245] the execution time for\nsome often used queries went up by a factor of 2 or more. Considering\nthe early beta state I was not alarmed. But since I noticed that\nyesterday's snapshot still has the problem, I'd really like to tell you\nabout it.\n\nHere is one of the queries, it takes about half a second on our computer\n(PII 233 with 256MB RAM) to execute and returns typically 1-4 rows via\ntwo index scans with high selectivity. So it looks to me that planning\ntime outwages execution time by far. 7.0 took about 0.15 seconds (which\nis still much).\n\nHere is the query:\n\nexplain verbose select gaenge , s . artikelid , text from\nschaertabelle s , extartbez e where maschine = int2(109) and\nschaerdatum = '2001-01-13' and s . artikelid = e . artikelid and\nextartbezid = 1 and bezkomptype = 0 order by text limit 10;\n\nAnd the plan for 7.0 and 7.1 (attached).\n\nThe data and schema is accessible via\nhttp://home.wtal.de/petig/pg_test.sql.gz\n\nIf you omit 'int2(' the index scan collapses into a sequential scan.\n(Well known problem with int2 indices)\n\n Christof\n\nOh, I'll attach the schema, too. So if you just want to take a look at\nthe table definition you don't have to download the data.", "msg_date": "Sat, 03 Mar 2001 12:30:58 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": true, "msg_subject": "Query Planning time increased 3 times on 7.1 compared to 7.0.3" } ]
[ { "msg_contents": "> I've reported the major problems to the mailing lists\n> but gotten almost no feedback about what to do.\n\nI can't comment without access to code -:(\n\n> commit: 2001-02-26 17:19:57\n> 0/0059996C: prv 0/00599948; xprv 0/00000000; xid 0;\n> RM 0 info 00 len 32\n> checkpoint: redo 0/0059996C; undo 0/00000000; sui 29;\n> nextxid 18903; nextoid 35195; online\n> -- this is the last normal-looking checkpoint record.\n> -- Judging from the commit timestamps surrounding prior\n> -- checkpoints, checkpoints were happening every five\n> -- minutes approximately on the 5-minute mark, so\n\nYou can't count on this: postmaster runs checkpoint\n\"maker\" in 5 minutes *after* prev checkpoint was created,\nnot from the moment \"maker\" started. And checkpoint can\ntake *minutes*.\n\n> -- this one happened about 17:20.\n> -- (There really should be a timestamp\n> -- in the checkpoint records...)\n\nAgreed.\n\n> commit: 2001-02-26 17:26:02\n> ReadRecord: record with zero len at 0/005A4B4C\n> -- My dump program is unhappy here because the rest\n> -- of the page is zero. Given that there is a\n> -- continuation record at the start of the next\n> -- page, there certainly should have been record(s)\n> -- here. But it's worse than that: check the commit\n> -- timestamps and the xid numbers before and after the\n> -- discontinuity. Did time go backwards here?\n\nCommit timestamps are created *before* XLogInsert call,\nwhich can suspend backend for some time (in multi-user\nenv). Random xid-s are also ok, generally.\n\n> -- Also notice the back-pointers in the first valid\n> -- record on the next page; they point not into the\n> -- zeroed space, which would suggest a mere failure\n> -- to write a buffer after filling it, but into the\n> -- middle of one of the valid records on the prior\n> -- page. 
It almost looks like page 5A6000 came from\n> -- a completely different run than page 5A4000.\n> Unexpected page info flags 0001 at offset 5A6000\n> Skipping unexpected continuation record at offset 5A6000\n> 0/005A6904: prv 0/005A48B4(?); xprv 0/005A48B4; xid 19047;\n ^^^^^^^^^^ ^^^^^^^^^^\nSame. So, TX 19047 really inserted record at 0/005A48B4\nposition.\n\n> -- What's even nastier (and the immediate cause of\n> -- Scott's inability to restart) is that the pg_control\n> -- file's checkPoint pointer points to 0/005AF9F0, which\n> -- is *not* the location of this checkpoint, but of\n> -- the record after it.\n\nWell, well. Checkpoint position is taken from\nMyLastRecord - I wonder how could this internal var\ntake \"invalid\" data from concurrent backend.\n\nOk, we're leaving Krasnoyarsk in 8 hrs and should\narrive SF Feb 5 ~ 10pm.\n\nVadim\n\n-----------------------------------------------\nFREE! The World's Best Email Address @email.com\nReserve your name now at http://www.email.com\n\n\n", "msg_date": "Sat, 3 Mar 2001 13:46:06 -0500 (EST)", "msg_from": "Vadim Mikheev <vadim4o@email.com>", "msg_from_op": true, "msg_subject": "RE: [CORE] WAL & RC1 status" }, { "msg_contents": "Vadim Mikheev <vadim4o@email.com> writes:\n>> -- Judging from the commit timestamps surrounding prior\n>> -- checkpoints, checkpoints were happening every five\n>> -- minutes approximately on the 5-minute mark, so\n\n> You can't count on this: postmaster runs checkpoint\n> \"maker\" in 5 minutes *after* prev checkpoint was created,\n> not from the moment \"maker\" started. And checkpoint can\n> take *minutes*.\n\nGood point, although with so little going on (this is the *whole*\nrelevant section of the log), that seems unlikely.\n\n>> -- here. But it's worse than that: check the commit\n>> -- timestamps and the xid numbers before and after the\n>> -- discontinuity. 
Did time go backwards here?\n\n> Commit timestamps are created *before* XLogInsert call,\n> which can suspend backend for some time (in multi-user\n> env). Random xid-s are also ok, generally.\n\nHmm ... maybe. Though again, this installation doesn't seem to have\nbeen busy enough to cause a commit to be delayed for very long.\n\nWhat I realized after posting that analysis is that the last checkpoint\nrecord has SUI 30 whereas the earlier ones have SUI 29 ... so there was\na system restart in there somewhere. That still leaves me wondering\nabout the discontinuity and broken back-link, but it may account for\nthe \"missing\" checkpoint records --- perhaps they weren't generated\nbecause the system wasn't up the entire interval.\n\n>> -- What's even nastier (and the immediate cause of\n>> -- Scott's inability to restart) is that the pg_control\n>> -- file's checkPoint pointer points to 0/005AF9F0, which\n>> -- is *not* the location of this checkpoint, but of\n>> -- the record after it.\n\n> Well, well. Checkpoint position is taken from\n> MyLastRecord - I wonder how could this internal var\n> take \"invalid\" data from concurrent backend.\n\nI have not been able to figure that one out either.\n\n> Ok, we're leaving Krasnoyarsk in 8 hrs and should\n> arrive SF Feb 5 ~ 10pm.\n\nHave a safe trip!\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 03 Mar 2001 14:06:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [CORE] WAL & RC1 status " } ]
[ { "msg_contents": "Hi,\n\nDid I read allegations here a while ago that someone\nhad a multi-process version of pgbench? I've poked\naround the website and mail archives, but couldn't\nfind it.\n\nI have access to a couple of 4-CPU boxes, and reckon\nthat a single-process benching tool could well prove\na bottleneck.\n\nMatthew.\n\n", "msg_date": "Sun, 4 Mar 2001 12:17:59 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Multi-process pgbench?" }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> I have access to a couple of 4-CPU boxes, and reckon\n> that a single-process benching tool could well prove\n> a bottleneck.\n\nIt's not, as far as I can tell.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 04 Mar 2001 11:22:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Multi-process pgbench? " } ]
[ { "msg_contents": "I posted about this some time back.\n\nMS have published an article on their website\nhttp://support.microsoft.com/support/kb/articles/Q128/8/09.asp\n\nreading\n\nACC: \"#Deleted\" Errors with Linked ODBC Tables\n\n----------------------------------------------------------------------------\n----\nThe information in this article applies to:\n\nMicrosoft Access 2.0\nMicrosoft Access for Windows 95, version 7.0\nMicrosoft Access 97\n\n----------------------------------------------------------------------------\n----\n\n\nSYMPTOMS\nWhen you retrieve, insert, or update records in a linked ODBC table, each\nfield in a record contains the \"#Deleted\" error message. When you retrieve,\ninsert, or update records using code, you receive the error message \"Record\nis deleted.\"\n\n\n\nCAUSE\nThe Microsoft Jet database engine is designed around a keyset-driven model.\nThis means that data is retrieved, inserted, and updated based on key values\n(in the case of a linked ODBC table, the unique index of a table).\n\nAfter Microsoft Access performs an insert or an update of a linked ODBC\ntable, it uses a Where criteria to select the record again to verify the\ninsert or update. The Where criteria is based on the unique index. Although\nnumerous factors can cause the select not to return any records, most often\nthe cause is that the key value Microsoft Access has cached is not the same\nas the actual key value on the ODBC table. Other possible causes are as\nfollows:\n\n\nHaving an update or insert trigger on the table, modifying the key value.\n\n\nBasing the unique index on a float value.\n\n\nUsing a fixed-length text field that may be padded on the server with the\ncorrect amount of spaces.\n\n\nHaving a linked ODBC table containing Null values in any of the fields\nmaking up the unique index.\n\n\nThese factors do not directly cause the \"#Deleted\" error message. 
Instead,\nthey cause Microsoft Access to go to the next step in maintaining the key\nvalues, which is to select the record again, this time with the criteria\nbased on all the other fields in the record. If this step returns more than\none record, Microsoft Access returns the \"#Deleted\" message because it does\nnot have a reliable key value to work with. If you close and re-open the\ntable or choose Show All Records from the Records menu, the \"#Deleted\"\nerrors are removed.\n\nMicrosoft Access uses a similar process to retrieve records from an linked\nODBC table. First, it retrieves the key values and then the rest of the\nfields that match the key values. If Microsoft Access is not able to find\nthat value again when it tries to find the rest of the record, it assumes\nthat the record is deleted.\n\n\n\nRESOLUTION\nThe following are some strategies that you can use to avoid this behavior:\n\n\nAvoid entering records that are exactly the same except for the unique\nindex.\n\n\nAvoid an update that triggers updates of both the unique index and another\nfield.\n\n\nDo not use a Float field as a unique index or as part of a unique index\nbecause of the inherent rounding problems of this data type.\n\n\nDo all the updates and inserts by using SQL pass-through queries so that you\nknow exactly what is sent to the ODBC data source.\n\n\nRetrieve records with an SQL pass-through query. An SQL pass-through query\nis not updateable, and therefore does not cause \"#Delete\" errors.\n\n\nAvoid storing Null values within any field making up the unique index of\nyour linked ODBC table.\n\n\n\n\n\nMORE INFORMATION\nNote: In Microsoft Access 2.0, linked tables were called attached tables.\n\nSteps to Reproduce Behavior\n\n\nOpen the sample database Northwind.mdb (or NWIND.MDB. 
in Microsoft Access\n2.0)\n\n\nUse the Upsizing Tools to upsize the Shippers table.\n\nNOTE: This table contains an AutoNumber field (or Counter field in Microsoft\nAccess 2.0) that is translated on SQL Server by the Upsizing Tools into a\ntrigger that emulates a counter.\n\n\nOpen the linked Shippers table and enter a new record. Make sure that the\nrecord you enter has the same data in the Company Name field as the previous\nrecord.\n\n\nPress TAB to move to a new record. Note that the \"#Deleted\" error fills the\nrecord you entered.\n\n\nClose and re-open the table. Note that the record is correct.\n\n\n=======================================================================\n\nMy database consists of two linked tables. One table has a primary key. The\nsecond table has a different primary key, and a field that is a foreign key\nwhich comes from the primary key of the first table.\n\nThe problem I have experienced was thought to be due to the use of serials\n(similar to Access's Autonumber) to generate unique primary key values,\nparticularly when adding data to the second table which is in a 1 to many\nrelationship with the first table. I tried generating my own unique PK\nvalues and alternately posting the new record before editing it (forces the\nserial to be generated and stored in the record so that it is available to\nthe secondary table). The second approach has not been totally successful,\nnot sure about the first but will experiment with that. Does anyone have\nopinions on ways of resolving these issues?\n\n\n=======================================================================\nPatrick Dunford, Christchurch, NZ - http://pdunford.godzone.net.nz/\n\n Peter replied, \"Repent and be baptized, every one of you, in the\nname of Jesus Christ for the forgiveness of your sins. And you will\nreceive the gift of the Holy Spirit. 
The promise is for you and\nyour children and for all who are far off-for all whom the Lord our\nGod will call.\"\n -- Acts 2:38\nhttp://www.heartlight.org/cgi-shl/todaysverse.cgi?day=20010304\n=======================================================================\nCreated by Mail2Sig - http://pdunford.godzone.net.nz/software/mail2sig/\n\n", "msg_date": "Mon, 5 Mar 2001 13:37:25 +1300", "msg_from": "\"Patrick Dunford\" <dunfordsoft@clear.net.nz>", "msg_from_op": true, "msg_subject": "Access 97 & PostgreSQL ODBC Driver Problems" } ]
[ { "msg_contents": "Hi,\n\nI sometimes encountered SEGV errors in my test case\nwhen I canceled the execution.\nProbably it's due to the almost simultaneous arrival\nof multiple signals and the following patch seems to\nfix the bug. However I'm afraid that the change should\ncause another bug.\n\nComments ?\n\nRegards,\nHiroshi Inoue\n\n\nIndex: proc.c\n===================================================================\nRCS file:\n/home/projects/pgsql/cvsroot/pgsql/src/backend/storage/lmgr/proc.c,v\nretrieving revision 1.98\ndiff -c -c -r1.98 proc.c\n*** proc.c 2001/01/26 18:23:12 1.98\n--- proc.c 2001/03/05 02:28:09\n***************\n*** 327,334 ****\n if (!waitingForLock)\n return false;\n\n- waitingForLock = false;\n-\n /* Turn off the deadlock timer, if it's still running (see\nProcSleep) */\n #ifndef __BEOS__\n {\n--- 327,332 ----\n***************\n*** 345,350 ****\n--- 343,349 ----\n\n /* Unlink myself from the wait queue, if on it (might not be\nanymore!) *\n/\n LockLockTable();\n+ waitingForLock = false;\n if (MyProc->links.next != INVALID_OFFSET)\n RemoveFromWaitQueue(MyProc);\n UnlockLockTable();\n", "msg_date": "Mon, 05 Mar 2001 12:05:16 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "How to handle waitingForLock in LockWaitCancel()" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I sometimes encountered SEGV errors in my test case\n> when I canceled the execution.\n\nCan you provide backtraces from these SEGVs?\n\n> Probably it's due to the almost simultaneous arrival\n> of multiple signals and the following patch seems to\n> fix the bug. However I'm afraid that the change should\n> cause another bug.\n\nI do not like that change at *all*. In the first place, how could it\nstop whatever is causing the SEGV? The waitingForLock flag is not\nexamined anywhere else, so unless things are already broken this cannot\nhave any effect. 
In the second place, postponing the reset of the\nflag has the potential for an infinite loop, because this routine is\ncalled during error exit. Suppose LockLockTable causes an elog(ERROR)?\n\nI think we need to look harder to find the cause of the SEGVs you are\nseeing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 01:16:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to handle waitingForLock in LockWaitCancel() " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I sometimes encountered SEGV errors in my test case\n> > when I canceled the execution.\n> \n> Can you provide backtraces from these SEGVs?\n> \n\nISTM the backtrace isn't sufficient to figure out\nhow the error occurred. As far as I see the error\nwas caused by other backends which terminated\nsafely with the similar sequence of signals.\nI have a (about 7MB) postmaster log with the\ntrace_locks option on. I could send it to you\nif you want. Note that I applied the following\npatch to debug my suspicion. In addition I added\nan SegvException Handler to postgres.c.\n\ndiff -c -r1.35 proc.c\n*** storage/lmgr/proc.c 2001/01/29 10:00:58 1.35\n--- storage/lmgr/proc.c 2001/03/05 07:56:56\n***************\n*** 327,332 ****\n--- 327,334 ----\n if (!waitingForLock)\n return false;\n \n+ if (MyProc->links.next != INVALID_OFFSET)\n+ fprintf(stderr, \"LockWaitCancel pid=%d must\nRemoveFromWaitQueue\\n\", MyProc->pid);\n waitingForLock = false;\n \n /* Turn off the deadlock timer, if it's still running (see\nProcSleep) */\n \nAnd the backtrace is as follows.\n\n#0 0x40130db7 in __libc_pause ()\n#1 0x811981c in SegvExceptionHandler (postgres_signal_arg=11)\n at postgres.c:966\n#2 <signal handler called>\n#3 0x8482 in ?? 
()\n#4 0x811547e in ProcLockWakeup (lockMethodTable=0x8204260,\nlock=0x409d6bdc)\n at proc.c:771\n#5 0x8114500 in LockReleaseAll (lockmethod=1, proc=0x409d9d20, \n allxids=1 '\\001', xid=0) at lock.c:1438\n#6 0x81150b4 in ProcKill () at proc.c:446\n#7 0x810df3b in shmem_exit (code=0) at ipc.c:187\n#8 0x810dead in proc_exit (code=0) at ipc.c:146\n#9 0x815891e in elog (lev=1, fmt=0x81aee71 \"The system is shutting\ndown\")\n at elog.c:465\n#10 0x8119927 in ProcessInterrupts () at postgres.c:1039\n#11 0x810c526 in s_lock (lock=0x40195016 \"\\001\", file=0x81ac4ea\n\"spin.c\", \n line=156) at s_lock.c:140\n#12 0x810fec7 in SpinAcquire (lockid=6) at spin.c:156\n#13 0x8114f74 in LockWaitCancel () at proc.c:349\n#14 0x81197d8 in die (postgres_signal_arg=15) at postgres.c:947\n#15 <signal handler called>\n#16 0x8482 in ?? ()\n#17 0x400ffcd4 in _IO_old_file_xsputn (f=0x81ce3a8, data=0xbfffc288,\nn=49)\n at oldfileops.c:294\n#18 0x400f1acf in buffered_vfprintf (s=0x81ce3a8, \n format=0x81adba0 \"LockWaitCancel pid=%d must RemoveFromWaitQueue\\n\", \n args=0xbfffe8c8) at vfprintf.c:1767\n#19 0x400ed5cc in _IO_vfprintf (s=0x81ce3a8, \n format=0x81adba0 \"LockWaitCancel pid=%d must RemoveFromWaitQueue\\n\", \n ap=0xbfffe8c8) at vfprintf.c:1029\n#20 0x400f4f03 in fprintf (stream=0x81ce3a8, \n format=0x81adba0 \"LockWaitCancel pid=%d must RemoveFromWaitQueue\\n\")\n at fprintf.c:32\n#21 0x8114f25 in LockWaitCancel () at proc.c:331\n#22 0x811988c in QueryCancelHandler (postgres_signal_arg=2) at\npostgres.c:993\n#23 <signal handler called>\n#24 0x8482 in ?? 
()\n#25 0x810e22b in IpcSemaphoreLock (semId=32768, sem=8, interruptOK=1\n'\\001')\n at ipc.c:423\n#26 0x811530f in ProcSleep (lockMethodTable=0x8204260, lockmode=4, \n lock=0x409d726c, holder=0x409d7e08) at proc.c:669\n#27 0x8112eb3 in WaitOnLock (lockmethod=1, lockmode=4, lock=0x409d726c, \n holder=0x409d7e08) at lock.c:995\n#28 0x81124d8 in LockAcquire (lockmethod=1, locktag=0xbfffeb5c,\nxid=373273, \n lockmode=4) at lock.c:779\n#29 0x8111372 in XactLockTableWait (xid=373282) at lmgr.c:309\n#30 0x80cf5ef in EvalPlanQual (estate=0x8260a0c, rti=1, tid=0xbfffebe0)\n at execMain.c:1751\n#31 0x80ceebd in ExecReplace (slot=0x8260bec, tupleid=0xbfffec3c, \n estate=0x8260a0c) at execMain.c:1449\n#32 0x80ceb2e in ExecutePlan (estate=0x8260a0c, plan=0x8260964, \n operation=CMD_UPDATE, numberTuples=0,\ndirection=ForwardScanDirection, \n destfunc=0x82611c0) at execMain.c:1119\n#33 0x80cdf73 in ExecutorRun (queryDesc=0x82609f0, estate=0x8260a0c, \n feature=3, count=0) at execMain.c:233\n#34 0x811ac2b in ProcessQuery (parsetree=0x825379c, plan=0x8260964, \n dest=Remote) at pquery.c:297\n#35 0x811964b in pg_exec_query_string (\n query_string=0x8253400 \"update branches set bbalance = bbalance +\n687 where bid = 1\", dest=Remote, parse_context=0x8237640) at\npostgres.c:810\n#36 0x811a712 in PostgresMain (argc=9, argv=0xbfffee9c, real_argc=12, \n real_argv=0xbffff774, username=0x82104b9 \"reindex\") at\npostgres.c:1900\n#37 0x80ffb03 in DoBackend (port=0x8210250) at postmaster.c:2080\n#38 0x80ff6fa in BackendStartup (port=0x8210250) at postmaster.c:1863\n#39 0x80fea96 in ServerLoop () at postmaster.c:963\n#40 0x80fe527 in PostmasterMain (argc=12, argv=0xbffff774) at\npostmaster.c:662\n#41 0x80df8e9 in main (argc=12, argv=0xbffff774) at main.c:149\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Mon, 05 Mar 2001 17:50:49 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: How to handle waitingForLock in LockWaitCancel()" }, { "msg_contents": 
"Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> [ backtrace snipped ]\n\nHmm, this is definitely not operating as intended: LockWaitCancel is\ngetting interrupted, because ProcessInterrupts may be called when it's\ntrying to acquire the lockmanager spinlock, and ProcessInterrupts will\nsee the ProcDiePending flag already set. I think the correct fix (or\nat least part of it) is in postgres.c's die():\n\n /*\n * If it's safe to interrupt, and we're waiting for input or a lock,\n * service the interrupt immediately\n */\n if (ImmediateInterruptOK && InterruptHoldoffCount == 0 &&\n CritSectionCount == 0)\n {\n+ /* bump holdoff count to make ProcessInterrupts() a no-op */\n+ /* until we are done getting ready for it */\n+ InterruptHoldoffCount++;\n DisableNotifyInterrupt();\n /* Make sure HandleDeadLock won't run while shutting down... */\n LockWaitCancel();\n+ InterruptHoldoffCount--;\n ProcessInterrupts();\n }\n\nQueryCancelHandler probably needs similar additions.\n\nI suspect you will find that these crashes occur during the window just\nafter the semop() call in IpcSemaphoreLock() --- see the comment\nbeginning at line 399 of ipc.c. You could probably make the crash\neasier to reproduce by inserting a delay there, if you want to test\nmore.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 14:43:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to handle waitingForLock in LockWaitCancel() " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > [ backtrace snipped ]\n> \n> Hmm, this is definitely not operating as intended: LockWaitCancel is\n> getting interrupted, because ProcessInterrupts may be called when it's\n> trying to acquire the lockmanager spinlock, and ProcessInterrupts will\n> see the ProcDiePending flag already set. 
I think the correct fix (or\n> at least part of it) is in postgres.c's die():\n> \n> /*\n> * If it's safe to interrupt, and we're waiting for input or a lock,\n> * service the interrupt immediately\n> */\n> if (ImmediateInterruptOK && InterruptHoldoffCount == 0 &&\n> CritSectionCount == 0)\n> {\n> + /* bump holdoff count to make ProcessInterrupts() a no-op */\n> + /* until we are done getting ready for it */\n> + InterruptHoldoffCount++;\n> DisableNotifyInterrupt();\n> /* Make sure HandleDeadLock won't run while shutting down... */\n> LockWaitCancel();\n> + InterruptHoldoffCount--;\n> ProcessInterrupts();\n> }\n> \n> QueryCancelHandler probably needs similar additions.\n> \n\nAgreed. Adding similar code to QueryCancelHandler seems\nsufficient.\n\n> I suspect you will find that these crashes occur during the window just\n> after\n\nSorry what does 'just after' mean ?\nIsn't it during the semop() ?\n\n> the semop() call in IpcSemaphoreLock() --- see the comment\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Tue, 06 Mar 2001 11:14:32 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: How to handle waitingForLock in LockWaitCancel()" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> I suspect you will find that these crashes occur during the window just\n>> after\n\n> Sorry what does 'just after' mean ?\n> Isn't it during the semop() ?\n\n>> the semop() call in IpcSemaphoreLock() --- see the comment\n\nIf an interrupt during the semop led to a crash, it would be easy to\nreproduce. I suspect that the crash condition arises only when the\ninterrupt occurs in a narrow time window ... such as the few\ninstructions just before or just after the semop call. 
It's just a\nhunch though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 21:25:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to handle waitingForLock in LockWaitCancel() " }, { "msg_contents": "I Inoue wrote:\n> \n> Tom Lane wrote:\n> >\n> > Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > > [ backtrace snipped ]\n> >\n> > Hmm, this is definitely not operating as intended: LockWaitCancel is\n> > getting interrupted, because ProcessInterrupts may be called when it's\n> > trying to acquire the lockmanager spinlock, and ProcessInterrupts will\n> > see the ProcDiePending flag already set. I think the correct fix (or\n> > at least part of it) is in postgres.c's die():\n> >\n> > /*\n> > * If it's safe to interrupt, and we're waiting for input or a lock,\n> > * service the interrupt immediately\n> > */\n> > if (ImmediateInterruptOK && InterruptHoldoffCount == 0 &&\n> > CritSectionCount == 0)\n> > {\n> > + /* bump holdoff count to make ProcessInterrupts() a no-op */\n> > + /* until we are done getting ready for it */\n> > + InterruptHoldoffCount++;\n> > DisableNotifyInterrupt();\n> > /* Make sure HandleDeadLock won't run while shutting down... */\n> > LockWaitCancel();\n> > + InterruptHoldoffCount--;\n> > ProcessInterrupts();\n> > }\n> >\n> > QueryCancelHandler probably needs similar additions.\n> >\n> \n> Agreed. 
Adding similar code to QueryCancelHandler seems\n> sufficient.\n> \n\nIs it OK to commit the change before 7.1 release ?\nI want to do it before forgetting this issue.\n(I've completely forgotten the CheckPoint hang problem\nI reported once until I see your report today).\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Fri, 09 Mar 2001 11:23:58 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: How to handle waitingForLock in LockWaitCancel()" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Is it OK to commit the change before 7.1 release ?\n> I want to do it before forgetting this issue.\n\nIf that fixes the problem for you, then commit it. I was waiting to\nhear back whether you still saw a crash or not...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 23:33:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to handle waitingForLock in LockWaitCancel() " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Is it OK to commit the change before 7.1 release ?\n> > I want to do it before forgetting this issue.\n> \n> If that fixes the problem for you, then commit it. I was waiting to\n> hear back whether you still saw a crash or not...\n> \n\nI see no crash in my test case.\nI would commit the change.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Fri, 09 Mar 2001 15:33:15 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: How to handle waitingForLock in LockWaitCancel()" } ]
[ { "msg_contents": "Hello, all\n\nI used an example of PL/pgSQL, but I found some errors when I execute \nthis code. The details are as follows.\n\nFirst, I create an exam.sql that includes this code:\n\nCREATE TABLE emp (\n empname text,\n salary int4,\n last_date datetime,\n last_user name);\n\nCREATE FUNCTION emp_stamp () RETURNS OPAQUE AS'\n BEGIN\n -- Check that empname and salary are given\n IF NEW.empname ISNULL THEN\n RAISE EXCEPTION ''empname cannot be NULL value'';\n END IF;\n IF NEW.salary ISNULL THEN\n RAISE EXCEPTION ''% cannot have NULL salary'', NEW.empname;\n END IF;\n\n -- Who works for us when she must pay for?\n IF NEW.salary < 0 THEN\n RAISE EXCEPTION ''% cannot have a negative salary'', NEW.empname;\n END IF;\n\n -- Remember who changed the payroll when\n NEW.last_date := ''now'';\n NEW.last_user := getpgusername();\n RETURN NEW;\n END;\n' LANGUAGE 'plpgsql';\n\nCREATE TRIGGER emp_stamp BEFORE INSERT OR UPDATE ON emp\n FOR EACH ROW EXECUTE PROCEDURE emp_stamp();\n\n\n\nSecondly, I execute exam.sql and postgres creates the table emp,\nthe function emp_stamp() and the trigger emp_stamp successfully. But when I \ninsert one record into table emp, there are some errors on the screen.\n The insert statement is:\n INSERT INTO emp Values('','','20001220','raymond');\n\nThe error on the screen is:\nNOTICE: plpgsql: ERROR during compile of emp_stamp near line 1\n\"RROR: parse error at or near \"\n\nWhy? What is wrong? Please give me a reply as soon as you can. 
\nThanks!\n\n\n\n\n\n\n\n\n\n_________________________________________________________________________\nGet Your Private, Free E-mail from MSN Hotmail at http://www.hotmail.com.\n\n", "msg_date": "Mon, 05 Mar 2001 06:02:01 -0000", "msg_from": "\"Lu Raymond\" <raymond_lu99@hotmail.com>", "msg_from_op": true, "msg_subject": "There is error at the examples in PL/pgSQL" }, { "msg_contents": "\"Lu Raymond\" <raymond_lu99@hotmail.com> writes:\n> NOTICE: plpgsql: ERROR during compile of emp_stamp near line 1\n> \"RROR: parse error at or near \"\n\nSave your script with Unix-style newlines, not DOS-style (LF not CR/LF).\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 10:51:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: There is error at the examples in PL/pgSQL " } ]
[ { "msg_contents": "People will have seen my post on problems with PostgreSQL ODBC driver and MS\nAccess 97.\n\nAccess 97 has some problems when a record is added that contains a primary\nkey field of type SERIAL. This has something to do with the fact that the\nvalue of the primary key is not actually generated until the record is sent\nto the server.\n\nIt seems it is easiest for me to get the unique ID from the server myself\nand insert it into the record when Access creates it.\n\nIn the realm of file based databases on a local machine it is easy to do\nthis: store the unique variable into a special table, read it out, increment\nit and store it back. Very quick and there may only ever be one user.\n\nThings become different on an SQL server because there may be multiple users\nsimultaneously accessing the database. Two SQL operations are required to\nretrieve the variable's value and update it: a SELECT and UPDATE. Depending\non how fast your connection is, between the SELECT and UPDATE, someone else\ncould have run the same SELECT and got the same value back. Then when both\nrecords are sent to the server with duplicate values in the same primary\nkey, one will fail.\n\nWhat I need is some foolproof way of getting and updating the variable in\none operation. Is it going to be an Int4 stored in a special table, or can\nit be a serial? Do I use a stored procedure or what? How do I get its value\nfrom Access?\n\nWhatever you think of Access, the alternative seems to be clunky PHP forms\nwith lots of code behind them for data entry and editing.\n\n=======================================================================\nPatrick Dunford, Christchurch, NZ - http://pdunford.godzone.net.nz/\n\n Peter replied, \"Repent and be baptized, every one of you, in the\nname of Jesus Christ for the forgiveness of your sins. And you will\nreceive the gift of the Holy Spirit. 
The promise is for you and\nyour children and for all who are far off-for all whom the Lord our\nGod will call.\"\n -- Acts 2:38\nhttp://www.heartlight.org/cgi-shl/todaysverse.cgi?day=20010304\n=======================================================================\nCreated by Mail2Sig - http://pdunford.godzone.net.nz/software/mail2sig/\n\n", "msg_date": "Mon, 5 Mar 2001 20:11:28 +1300", "msg_from": "\"Patrick Dunford\" <dunfordsoft@clear.net.nz>", "msg_from_op": true, "msg_subject": "Getting unique ID through SQL" }, { "msg_contents": "Hi Patrick,\n\nWith PostgreSQL, I do this inside PL/PGSQL functions (but I'll do it\noutside a function here to make it simpler) :\n\nLets say you have :\n\nfoobar=# create table demonstration (barfoo serial, data varchar(10));\nNOTICE: CREATE TABLE will create implicit sequence\n'demonstration_barfoo_seq' for SERIAL column 'demonstration.barfoo'\nNOTICE: CREATE TABLE/UNIQUE will create implicit index\n'demonstration_barfoo_key' for table 'demonstration'\nCREATE\nfoobar=# \\d demonstration\n Table \"demonstration\"\n Attribute | Type | Modifier\n-----------+-------------+------------------------------------------------------------\n barfoo | integer | not null default\nnextval('demonstration_barfoo_seq'::text)\n data | varchar(10) |\nIndex: demonstration_barfoo_key\n \nfoobar=#\n\nThe way I insert data in a scalable manner is :\n\nfoobar=# select nextval('demonstration_barfoo_seq'); /* Put this\nreturned value in a variable */\n nextval\n---------\n 1\n(1 row)\n \nfoobar=# insert into demonstration (barfoo, data) values (1, 'Some\ndata'); /* Insert the data using the previously generated serial number\n*/ \nINSERT 28776302 1\nfoobar=#\n\nPretty simple eh? No two clients can get the same value, and therefore\nthere's no conflict. It's even transaction safe, as rolling back a\ntransaction won't let the same value be generated again. 
This does mean\nyou will get gaps in the sequence numbering after a while, but for my\napplications that's not a problem.\n\nRegards and best wishes,\n\nJustin Clift\nDatabase Administrator\n\n\nPatrick Dunford wrote:\n> \n> People will have seen my post on problems with PostgreSQL ODBC driver and MS\n> Access 97.\n> \n> Access 97 has some problems when a record is added that contains a primary\n> key field of type SERIAL. This has something to do with the fact that the\n> value of the primary key is not actually generated until the record is sent\n> to the server.\n> \n> It seems it is easiest for me to get the unique ID from the server myself\n> and insert it into the record when Access creates it.\n> \n> In the realm of file based databases on a local machine it is easy to do\n> this: store the unique variable into a special table, read it out, increment\n> it and store it back. Very quick and there may only ever be one user.\n> \n> Things become different on an SQL server because there may be multiple users\n> simultaneously accessing the database. Two SQL operations are required to\n> retrieve the variable's value and update it: a SELECT and UPDATE. Depending\n> on how fast your connection is, between the SELECT and UPDATE, someone else\n> could have run the same SELECT and got the same value back. Then when both\n> records are sent to the server with duplicate values in the same primary\n> key, one will fail.\n> \n> What I need is some foolproof way of getting and updating the variable in\n> one operation. Is it going to be an Int4 stored in a special table, or can\n> it be a serial? Do I use a stored procedure or what? 
How do I get its value\n> from Access?\n> \n> Whatever you think of Access, the alternative seems to be clunky PHP forms\n> with lots of code behind them for data entry and editing.\n> \n> =======================================================================\n> Patrick Dunford, Christchurch, NZ - http://pdunford.godzone.net.nz/\n> \n> Peter replied, ?Repent and be baptized, every one of you, in the\n> name of Jesus Christ for the forgiveness of your sins. And you will\n> receive the gift of the Holy Spirit. The promise is for you and\n> your children and for all who are far off-for all whom the Lord our\n> God will call.?\n> -- Acts 2:38\n> http://www.heartlight.org/cgi-shl/todaysverse.cgi?day=20010304\n> =======================================================================\n> Created by Mail2Sig - http://pdunford.godzone.net.nz/software/mail2sig/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Mon, 05 Mar 2001 21:39:09 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: Getting unique ID through SQL" }, { "msg_contents": "Does anyone has pointers on CORBA and PostgreSQL?\n\nWhat is the story ?\n\nCheers...\nFranck@sopac.org\n\n", "msg_date": "Mon, 05 Mar 2001 23:48:00 +1200", "msg_from": "Franck Martin <franck@sopac.org>", "msg_from_op": false, "msg_subject": "CORBA and PG" }, { "msg_contents": "Quoting Franck Martin <franck@sopac.org>:\n\n> Does anyone has pointers on CORBA and PostgreSQL?\n> \n> What is the story ?\n\nThere's some old stubs for one of the orbs somewhere in the source (C/C++)\n\nAlso the old JDBC/Corba example is still there \n(src/interfaces/jdbc/example/corba)\n\nPeter\n\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: 
http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n", "msg_date": "Mon, 05 Mar 2001 10:51:45 -0500 (EST)", "msg_from": "Peter T Mount <peter@retep.org.uk>", "msg_from_op": false, "msg_subject": "Re: CORBA and PG" } ]
[ { "msg_contents": "Hello,\n\nWe noticed that after upgrading to 7.1beta[245] the execution time for\nsome often used queries went up by a factor of 2 or more. Considering\nthe early beta state I was not alarmed. But since I noticed that\nyesterday's snapshot still has the problem, I'd really like to tell you\nabout it.\n\nHere is one of the queries, it takes about half a second on our computer\n(PII 233 with 256MB RAM) to execute and returns typically 1-4 rows via\ntwo index scans with high selectivity. So it looks to me that planning\ntime outweighs execution time by far. 7.0 took about 0.15 seconds (which\nis still much).\n\nHere is the query:\n\nexplain verbose select gaenge , s . artikelid , text from\nschaertabelle s , extartbez e where maschine = int2(109) and\nschaerdatum = '2001-01-13' and s . artikelid = e . artikelid and\nextartbezid = 1 and bezkomptype = 0 order by text limit 10;\n\nAnd the plan for 7.0 and 7.1 (attached).\n\nThe data and schema is accessible via\nhttp://home.wtal.de/petig/pg_test.sql.gz\n\nIf you omit 'int2(' the index scan collapses into a sequential scan.\n(Well known problem with int2 indices)\n\n Christof\n\nOh, I'll attach the schema, too. So if you just want to take a look at\nthe table definition you don't have to download the data.", "msg_date": "Mon, 05 Mar 2001 08:27:39 +0100", "msg_from": "Christof Petig <christof.petig@wtal.de>", "msg_from_op": true, "msg_subject": "Query Planning time increased 3 times on 7.1 compared to 7.0.3" }, { "msg_contents": "Justin Clift wrote:\n\n> Hi Christof,\n>\n> I'm not aware of the problem with int2 indexes collapsing. 
Can you give\n> me some more info, and I'll put it on the techdocs.postgresql.org\n> website.\n\nOh, I'm sorry for my strange wording.\n\nI said that the index search collapses to a sequential scan if you do not\ncast the number to int2.\n\nBecause an int2 index is not used to look up an int4.\nAnd untyped numbers are int4 or numeric the int2 index is never used unless\nexplicitely specified (by a type cast).\nYes this is a known bug in PostgreSQL 7.1 and below. Hopefully this will\nget addressed in 7.2?\nWhy don't I code it? I'm busy working on ecpg (dyn. SQL) at the moment.\n\nChristof\n\n\n", "msg_date": "Mon, 05 Mar 2001 15:00:47 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: Query Planning time increased 3 times on 7.1 compared to\n 7.0.3" }, { "msg_contents": "Christof Petig <christof.petig@wtal.de> writes:\n> We noticed that after upgrading to 7.1beta[245] the execution time for\n> some often used queries went up by a factor of 2 or more.\n\nI get the desired plan after doing VACUUM ANALYZE ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 11:26:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Query Planning time increased 3 times on 7.1 compared to 7.0.3 " }, { "msg_contents": "Tom Lane wrote:\n\n> Christof Petig <christof.petig@wtal.de> writes:\n> > We noticed that after upgrading to 7.1beta[245] the execution time for\n> > some often used queries went up by a factor of 2 or more.\n>\n> I get the desired plan after doing VACUUM ANALYZE ...\n>\n> regards, tom lane\n\nI apologize. I must have been smoking something when I did the vacuum\nanalyze. And my nightly script did not work. 7.1 is much faster.\n\nChristof\n\n\n", "msg_date": "Wed, 07 Mar 2001 16:08:47 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: Query Planning time increased 3 times on 7.1 compared to\n 7.0.3" } ]
[ { "msg_contents": " Hi\n I try this \"construction\" : select myfield from mytable where\nmyfield=-1\nAnd get this:\nERROR: Unable to identify an operator '=-' for types 'numeric' and\n'int4'\n You will have to retype this query using an explicit cast\n I don't like this!\nVic\n\n", "msg_date": "Mon, 05 Mar 2001 11:33:35 +0300", "msg_from": "Vic <vic@dcc.dp.ua>", "msg_from_op": true, "msg_subject": "Oops! Its bug in parser???? " }, { "msg_contents": "> What version of PostgreSQL are you using ?\n>\n\n PostgreSQL 7.0.0 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66\n\n>\n> I believe this is fixed a long time ago.\n\nIf you don't want to upgrade just put spaces in\n\n> > select myfield from mytable where myfield = -1\n\n Ya - I don't want to upgrade the whole system for this little bug (and\ncan't make it - I'm not root),\n but maybe I want a patch for it.\nVic.\n\n", "msg_date": "Mon, 05 Mar 2001 15:01:33 +0300", "msg_from": "Vic <vic@dcc.dp.ua>", "msg_from_op": true, "msg_subject": "Re: Oops! Its bug in parser????" }, { "msg_contents": "Vic wrote:\n\n> Hi\n> I try this \"construction\" : select myfield from mytable where\n> myfield=-1\n> And get this:\n> ERROR: Unable to identify an operator '=-' for types 'numeric' and\n> 'int4'\n\nWhat version of PostgreSQL are you using ?\n\nI believe this is fixed a long time ago.\n\nIf you don't want to upgrade just put spaces in\n\n > select myfield from mytable where myfield = -1\n\n----------------\nHannu\n\n", "msg_date": "Mon, 05 Mar 2001 14:27:45 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Oops! Its bug in parser????" }, { "msg_contents": "Vic <vic@dcc.dp.ua> writes:\n>> What version of PostgreSQL are you using ?\n\n> PostgreSQL 7.0.0 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66\n\nEvidently not a released version, but some beta, since this behavior was\nchanged before 7.0 release (cf. scan.l CVS log for 18-Mar-2000).\n\nI *strongly* suggest an update to 7.0.3 ... 
there are some pretty nasty\ncritters waiting to bite you in 7.0 beta.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 10:51:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Oops! Its bug in parser???? " } ]
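[Editor's note] The `'=-'` error in the thread above is a lexing issue: with no whitespace, the scanner's greedy ("maximal munch") operator rule groups `=-` into a single operator name, so no plain `=` comparison is found. A toy Python sketch of that behavior (illustrative only — not the real scan.l logic; the operator character set and tokenizer are invented):

```python
# Toy lexer sketch: greedily groups operator characters the way the old
# scanner did, so "=-1" yields the operator "=-", not "=" followed by "-".
OPER_CHARS = set("=<>!~+-*/%^|&")

def tokenize(sql):
    tokens, i = [], 0
    while i < len(sql):
        c = sql[i]
        if c.isspace():
            i += 1
        elif c in OPER_CHARS:
            j = i
            while j < len(sql) and sql[j] in OPER_CHARS:
                j += 1          # maximal munch: take the longest operator run
            tokens.append(sql[i:j])
            i = j
        elif c.isalnum() or c == '_':
            j = i
            while j < len(sql) and (sql[j].isalnum() or sql[j] == '_'):
                j += 1
            tokens.append(sql[i:j])
            i = j
        else:
            tokens.append(c)
            i += 1
    return tokens

print(tokenize("myfield=-1"))    # ['myfield', '=-', '1'] -> unknown operator '=-'
print(tokenize("myfield = -1"))  # ['myfield', '=', '-', '1'] -> parses fine
```

This is why Hannu's "just put spaces in" advice works on the old beta: the space ends the operator run after `=`.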
[ { "msg_contents": "\n> Since there is not a separate WAL version stamp, introducing one now\n> would certainly force an initdb. I don't mind adding one if you think\n> it's useful; another 4 bytes in pg_control won't hurt anything. But\n> it's not going to save anyone's bacon on this cycle.\n\nYes, if initdb, that would probably be a good idea.\nImho the initdb now is not a real issue, since all beta testers\nknow that for serious issues there might be an initdb after beta started.\n\n> At least one of my concerns (single point of failure) would require a\n> change to the layout of pg_control, which would force initdb anyway.\n\nWas that the \"only one checkpoint back in time in pg_control\" issue ?\nOne issue about too many checkpoints in pg_control, is that you then need \nto keep more logs, and in my pgbench tests the log space was a real issue,\neven for the one checkpoint case. I think a utility to recreate a busted pg_control\nwould add a lot more stability, than one more checkpoint in pg_control.\n\nWe should probably have additional criteria to time, that can trigger a \ncheckpoint, like N logs filled since last checkpoint. I do not think \nreducing the checkpoint interval is a solution for once in a while heavy activity.\n\nAndreas\n", "msg_date": "Mon, 5 Mar 2001 10:46:50 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@Wien.Spardat.at>", "msg_from_op": true, "msg_subject": "AW: WAL & RC1 status " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@Wien.Spardat.at> writes:\n>> At least one of my concerns (single point of failure) would require a\n>> change to the layout of pg_control, which would force initdb anyway.\n\n> Was that the \"only one checkpoint back in time in pg_control\" issue ?\n\nYes.\n\n> One issue about too many checkpoints in pg_control, is that you then\n> need to keep more logs, and in my pgbench tests the log space was a\n> real issue, even for the one checkpoint case. 
I think a utility to\n> recreate a busted pg_control would add a lot more stability, than one\n> more checkpoint in pg_control.\n\nWell, there is a big difference between 1 and 2 checkpoints stored in\npg_control. I don't intend to go further than 2. But I disagree about\na log-reset utility being more useful than an extra checkpoint. The\nutility would be for manual recovery after a disaster, and it wouldn't\noffer 100% recovery: you couldn't be sure that the last few transactions\nhad been applied atomically, ie, all or none. (Perhaps pg_log got\nupdated to show them committed, but not all of their tuple changes made\nit to disk; how will you know?) If you can back up to the prior\ncheckpoint and then roll forward, you *do* have a shot at guaranteeing\na consistent database state after loss of the primary checkpoint.\n\n> We should probably have additional criteria to time, that can trigger a \n> checkpoint, like N logs filled since last checkpoint.\n\nPerhaps. I don't have time to work on that now, but we can certainly\nimprove the strategy in future releases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 11:13:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: WAL & RC1 status " }, { "msg_contents": "> Zeugswetter Andreas SB <ZeugswetterA@Wien.Spardat.at> writes:\n> >> At least one of my concerns (single point of failure) would require a\n> >> change to the layout of pg_control, which would force initdb anyway.\n> \n> > Was that the \"only one checkpoint back in time in pg_control\" issue ?\n> \n> Yes.\n\nIs changing pg_control the thing that is going to require the initdb?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Mar 2001 11:25:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: WAL & RC1 status" } ]
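[Editor's note] Andreas's suggestion in this thread — trigger a checkpoint not only on a timer but also once N log segments have filled since the last checkpoint — can be sketched as a simple policy. Names and default values here are illustrative, not actual server parameters:

```python
# Illustrative checkpoint-trigger policy: fire on a timeout OR once enough
# WAL segments have filled since the last checkpoint, whichever comes first.
def should_checkpoint(seconds_since_last, segments_since_last,
                      timeout_secs=300, max_segments=3):
    return (seconds_since_last >= timeout_secs
            or segments_since_last >= max_segments)
```

The segment criterion bounds how much log must be kept (and replayed) after a burst of heavy activity, without shortening the interval during quiet periods.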
[ { "msg_contents": "\n> Here is one of the queries, it takes about half a second on our computer\n> (PII 233 with 256MB RAM) to execute and returns typically 1-4 rows via\n> two index scans with high selectivity. So it looks to me that planning\n> time outweighs execution time by far. 7.0 took about 0.15 \n> seconds (which is still much).\n\nThe plans show two different indexes and different statistics for the \ntwo different versions. No wonder you see different response times.\n\nIs the \"vacuum [analyze]\" up to date in both versions ?\n\nAndreas \n", "msg_date": "Mon, 5 Mar 2001 16:18:35 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Query Planning time increased 3 times on 7.1 compar\n\ted to 7.0.3" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n\n> > Here is one of the queries, it takes about half a second on our computer\n> > (PII 233 with 256MB RAM) to execute and returns typically 1-4 rows via\n> > two index scans with high selectivity. So it looks to me that planning\n> > time outweighs execution time by far. 7.0 took about 0.15\n> > seconds (which is still much).\n>\n> The plans show two different indexes and different statistics for the\n> two different versions. No wonder you see different response times.\n>\n> Is the \"vacuum [analyze]\" up to date in both versions ?\n\nI cannot guarantee that I did vacuum analyze right before I issued the\nexplain verbose (one is on a busy server, one on a development machine) but I\nhave seen the _visible_ slowdown of 7.1 compared to 7.0 too often to be a\nlack of vacuum analyze.\n\nIt seems that in most cases a sort is involved. 
0.5 seconds is far too much\nfor a two-row return via two index scans.\n\nBut I will try again (empty database, vacuum analyze, issue query) and\nreport.\n\nWhat startled me most was that both versions agree that index scan is the\nfastest method but it took 0.2 secs on one and 0.5 secs on the other.\nThe tables do not carry soooo much data.\n\n-------------\n\nTom Lane wrote:\n\n> I get the desired plan after doing VACUUM ANALYZE ...\n\nBoth versions agree on the best plan, but to me it looks like 7.0 gets this\nclue first (in about half/third of the time).\nAnd that unmodified programs take twice the time with 7.1 _after_ a fresh\ndb load and analyze is strange.\n\nChristof\n\n\n\n\n", "msg_date": "Mon, 05 Mar 2001 23:17:24 +0100", "msg_from": "Christof Petig <christof@petig-baender.de>", "msg_from_op": false, "msg_subject": "Re: Query Planning time increased 3 times on 7.1 compared to\n 7.0.3" } ]
[ { "msg_contents": "> > One issue about too many checkpoints in pg_control, is that you then\n> > need to keep more logs, and in my pgbench tests the log space was a\n> > real issue, even for the one checkpoint case. I think a utility to\n> > recreate a busted pg_control would add a lot more stability, than one\n> > more checkpoint in pg_control.\n> \n> Well, there is a big difference between 1 and 2 checkpoints stored in\n> pg_control. I don't intend to go further than 2. But I disagree about\n> a log-reset utility being more useful than an extra checkpoint.\n\nYes I agree, I thought there was already one additional checkpoint info in \npg_control.\n\n> The\n> utility would be for manual recovery after a disaster, and it wouldn't\n> offer 100% recovery: you couldn't be sure that the last few transactions\n> had been applied atomically, ie, all or none. (Perhaps pg_log got\n> updated to show them committed, but not all of their tuple changes made\n> it to disk; how will you know?) If you can back up to the prior\n> checkpoint and then roll forward, you *do* have a shot at guaranteeing\n> a consistent database state after loss of the primary checkpoint.\n\nYes, but a consistent db can only be guaranteed if all txlog logs up to the \ncrash are either rolled forward or at least the physical log pages are written \nback to disk.\n\nThe consequence, imho, is that a good utility to reset the logs should keep\nall \"physical log\" pages, and only clear the log from all other records\n[optionally starting at the position that hinders rollforward].\n\nAndreas\n", "msg_date": "Mon, 5 Mar 2001 17:47:33 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: WAL & RC1 status " } ]
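[Editor's note] The value of keeping two checkpoint records in pg_control, as argued in the thread above, is that recovery can fall back to the prior checkpoint and roll forward when the newest record turns out to be unreadable. A minimal sketch — the pg_control field names and read interface are invented for illustration:

```python
# Recovery sketch: try the most recent checkpoint first; if its record is
# damaged (e.g. lost in a crash), fall back to the previous one and start
# the roll-forward (redo) from there instead.
def choose_redo_start(pg_control, read_checkpoint):
    """read_checkpoint(loc) -> checkpoint record, or None if unreadable."""
    for loc in (pg_control["checkpoint"], pg_control["prev_checkpoint"]):
        record = read_checkpoint(loc)
        if record is not None:
            return record  # redo begins at this checkpoint
    raise RuntimeError("both checkpoints unreadable; manual log reset needed")
```

The trade-off discussed above follows directly: the older the fallback checkpoint, the more log segments must be retained to be able to roll forward from it.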
[ { "msg_contents": "Consider the following scenario:\n\n1. A new transaction inserts a tuple. The tuple is entered into its\nheap file with the new transaction's XID, and an associated WAL log\nentry is made. Neither one of these are on disk yet --- the heap tuple\nis in a shmem disk buffer, and the WAL entry is in the shmem WAL buffer.\n\n2. Now do a lot of read-only operations, in the same or another backend.\nThe WAL log stays where it is, but eventually the shmem disk buffer will\nget flushed to disk so that the buffer can be re-used for some other\ndisk page.\n\n3. Assume we now crash. Now, we have a heap tuple on disk with an XID\nthat does not correspond to any XID visible in the on-disk WAL log.\n\n4. Upon restart, WAL will initialize the XID counter to the first XID\nnot seen in the WAL log. Guess which one that is.\n\n5. We will now run a new transaction with the same XID that was in use\nbefore the crash. If that transaction commits, then we have a tuple on\ndisk that will be considered valid --- and should not be.\n\n\nAfter thinking about this for a little, it seems to me that XID\nassignment should be handled more like OID assignment: rather than\nhanding out XIDs one-at-a-time, varsup.c should allocate them in blocks,\nand should write an XLOG record to reflect the allocation of each block\nof XIDs. Furthermore, the above example demonstrates that *we must\nflush that XLOG entry to disk* before we can start to actually hand out\nthe XIDs. This ensures that the next system cycle won't re-use any XIDs\nthat may have been in use at the time of a crash.\n\nOID assignment is not quite so critical. Consider again the scenario\nabove: we don't really care if after restart we reuse the OID that was\nassigned to the crashed transaction's inserted tuple. As long as the\ntuple itself is not considered committed, it doesn't matter what OID it\ncontains. 
So, it's not necessary to force XLOG flush for OID-assignment\nXLOG entries.\n\nIn short then: make the XID allocation machinery just like the OID\nallocation machinery presently is, plus an XLogFlush() after writing\nthe NEXTXID XLOG record.\n\nComments?\n\n\t\t\tregards, tom lane\n\nPS: oh, another thing: redo of a checkpoint record ought to advance the\nXID and OID counters to be at least what the checkpoint record shows.\n", "msg_date": "Mon, 05 Mar 2001 13:31:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "WAL-based allocation of XIDs is insecure" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> After thinking about this for a little, it seems to me that XID\n> assignment should be handled more like OID assignment: rather than\n> handing out XIDs one-at-a-time, varsup.c should allocate them in blocks,\n> and should write an XLOG record to reflect the allocation of each block\n> of XIDs. Furthermore, the above example demonstrates that *we must\n> flush that XLOG entry to disk* before we can start to actually hand out\n> the XIDs. This ensures that the next system cycle won't re-use any XIDs\n> that may have been in use at the time of a crash.\n\nI think your example demonstrates something slightly different. I\nthink it demonstrates that Postgres must flush the XLOG entry to disk\nbefore it flushes any buffer to disk which uses an XID which was just\nallocated.\n\nFor each buffer, heap_update could record the last XID stored into\nthat buffer. When a buffer is forced out to disk, Postgres could make\nsure that the XLOG entry which uses the XID is previously forced out\nto disk.\n\nA simpler and less accurate approach: when any dirty buffer is forced\nto disk in order to allocate a buffer, make sure that any XLOG entry\nwhich allocates new XIDs is flushed to disk first.\n\nI don't know if these are better. 
I raise them because you are\nsuggesting putting an occasional fsync at transaction start to avoid\nan unlikely scenario. A bit of bookkeeping can be used instead to\nnotice the unlikely scenario when it occurs.\n\nIan\n", "msg_date": "05 Mar 2001 11:35:56 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: WAL-based allocation of XIDs is insecure" }, { "msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> I think your example demonstrates something slightly different. I\n> think it demonstrates that Postgres must flush the XLOG entry to disk\n> before it flushes any buffer to disk which uses an XID which was just\n> allocated.\n\nThat would be an alternative solution, but it's considerably more\ncomplex to implement and I'm not convinced it is more efficient.\n\nThe above could result, worst case, in double the normal number of\nfsyncs --- each new transaction might need an fsync to dump its first\nfew XLOG records (in addition to the fsync for its commit), if the\nshmem disk buffer traffic is not in your favor. This worst case is\nnot even difficult to produce: consider a series of standalone\ntransactions that each touch more than -B pages (-B = # of buffers).\n\nIn contrast, syncing NEXTXID records will require exactly one extra\nfsync every few thousand transactions. That seems quite acceptable\nto me, and better than an fsync load that we can't predict. Perhaps\nthe average case of fsync-on-buffer-flush would be better than that,\nor perhaps not, but the worst case is definitely far worse.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 15:02:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL-based allocation of XIDs is insecure " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Ian Lance Taylor <ian@airs.com> writes:\n> > I think your example demonstrates something slightly different. 
I\n> > think it demonstrates that Postgres must flush the XLOG entry to disk\n> > before it flushes any buffer to disk which uses an XID which was just\n> > allocated.\n> \n> That would be an alternative solution, but it's considerably more\n> complex to implement and I'm not convinced it is more efficient.\n> \n> The above could result, worst case, in double the normal number of\n> fsyncs --- each new transaction might need an fsync to dump its first\n> few XLOG records (in addition to the fsync for its commit), if the\n> shmem disk buffer traffic is not in your favor. This worst case is\n> not even difficult to produce: consider a series of standalone\n> transactions that each touch more than -B pages (-B = # of buffers).\n> \n> In contrast, syncing NEXTXID records will require exactly one extra\n> fsync every few thousand transactions. That seems quite acceptable\n> to me, and better than an fsync load that we can't predict. Perhaps\n> the average case of fsync-on-buffer-flush would be better than that,\n> or perhaps not, but the worst case is definitely far worse.\n\nI described myself unclearly. I was suggesting an addition to what\nyou are suggesting. The worst case can not be worse.\n\nIf you are going to allocate a few thousand XIDs each time, then I\nagree that my suggested addition is not worth it. But how do you deal\nwith XID wraparound on an unstable system?\n\nIan\n", "msg_date": "05 Mar 2001 12:07:28 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: WAL-based allocation of XIDs is insecure" }, { "msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> I described myself unclearly. I was suggesting an addition to what\n> you are suggesting. The worst case can not be worse.\n\nThen I didn't (and still don't) understand your suggestion. Want to\ntry again?\n\n> If you are going to allocate a few thousand XIDs each time, then I\n> agree that my suggested addition is not worth it. 
But how do you deal\n> with XID wraparound on an unstable system?\n\nAbout the same as we do now: not very well. But if your system is that\nunstable, XID wrap is the least of your worries, I think.\n\nUp through 7.0, Postgres allocated XIDs a thousand at a time, and not\nonly did the not-yet-used XIDs get lost in a crash, they'd get lost in\na normal shutdown too. What I propose will waste XIDs in a crash but\nnot in a normal shutdown, so it's still an improvement over prior\nversions as far as XID consumption goes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 15:15:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL-based allocation of XIDs is insecure " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Ian Lance Taylor <ian@airs.com> writes:\n> > I described myself unclearly. I was suggesting an addition to what\n> > you are suggesting. The worst case can not be worse.\n> \n> Then I didn't (and still don't) understand your suggestion. Want to\n> try again?\n\nYour suggestion requires an obligatory fsync at an occasional\ntransaction start.\n\nI was observing that in most cases, that fsync is not needed. It can\nbe avoided with a bit of additional bookkeeping.\n\nI was assuming, incorrectly, that you would not want to allocate that\nmany XIDs at once. If you allocate 1000s of XIDs at once, the\nobligatory fsync is not that bad, and my suggestion should be ignored.\n\n> > If you are going to allocate a few thousand XIDs each time, then I\n> > agree that my suggested addition is not worth it. But how do you deal\n> > with XID wraparound on an unstable system?\n> \n> About the same as we do now: not very well. But if your system is that\n> unstable, XID wrap is the least of your worries, I think.\n> \n> Up through 7.0, Postgres allocated XIDs a thousand at a time, and not\n> only did the not-yet-used XIDs get lost in a crash, they'd get lost in\n> a normal shutdown too. 
What I propose will waste XIDs in a crash but\n> not in a normal shutdown, so it's still an improvement over prior\n> versions as far as XID consumption goes.\n\nI find this somewhat troubling, since I like to think in terms of\nlong-running systems--like, decades. But I guess it's OK (for me) if\nit is fixed in the next couple of years.\n\nIan\n", "msg_date": "05 Mar 2001 12:22:27 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: WAL-based allocation of XIDs is insecure" }, { "msg_contents": "Ian Lance Taylor <ian@airs.com> writes:\n> Tom Lane <tgl@sss.pgh.pa.us> writes:\n>> Up through 7.0, Postgres allocated XIDs a thousand at a time, and not\n>> only did the not-yet-used XIDs get lost in a crash, they'd get lost in\n>> a normal shutdown too. What I propose will waste XIDs in a crash but\n>> not in a normal shutdown, so it's still an improvement over prior\n>> versions as far as XID consumption goes.\n\n> I find this somewhat troubling, since I like to think in terms of\n> long-running systems--like, decades. But I guess it's OK (for me) if\n> it is fixed in the next couple of years.\n\nAgreed, we need to do something about the XID-wrap problem pretty soon.\nBut we're not solving it for 7.1, and in the meantime I don't think\nthese changes make much difference either way.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 15:29:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL-based allocation of XIDs is insecure " } ]
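[Editor's note] The block-wise XID allocation proposed in this thread can be modeled in a few lines: before handing out any XID from a new block, a NEXTXID record covering the whole block is written and flushed, so a restart can never re-issue an XID that might already sit in an on-disk heap tuple. This is a simplified model, not the varsup.c implementation; the block size and names are invented:

```python
BLOCK = 1024  # XIDs per logged block (the post suggests a few thousand)

class XidAllocator:
    def __init__(self, flush_log):
        self.flush_log = flush_log  # durable write of a NEXTXID record
        self.next_xid = 0
        self.limit = 0              # first XID not covered by a flushed record

    def get_xid(self):
        if self.next_xid >= self.limit:
            self.limit = self.next_xid + BLOCK
            self.flush_log(self.limit)  # fsync BEFORE any XID is handed out
        xid = self.next_xid
        self.next_xid += 1
        return xid

    def recover(self, logged_limit):
        # After a crash, restart above everything ever promised to disk;
        # unused XIDs from the last block are simply wasted.
        self.next_xid = self.limit = logged_limit
```

One extra flush per BLOCK transactions, as the thread notes, instead of an unpredictable flush on every buffer eviction.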
[ { "msg_contents": "Sounds good.\n\n\n> Could we use some other letter for the pg_restore '-U' option? psql and\n> wrapper scripts use -U to select the user name to connect as (different\n> from -u), and I'd like to implement this sometime for the pg_dump tools as\n> well. How does -L (\"list\") sound?\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Mar 2001 16:47:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_restore -U" }, { "msg_contents": "Could we use some other letter for the pg_restore '-U' option? psql and\nwrapper scripts use -U to select the user name to connect as (different\nfrom -u), and I'd like to implement this sometime for the pg_dump tools as\nwell. How does -L (\"list\") sound?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Mon, 5 Mar 2001 22:51:10 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "pg_restore -U" }, { "msg_contents": "At 22:51 5/03/01 +0100, Peter Eisentraut wrote:\n>Could we use some other letter for the pg_restore '-U' option? psql and\n>wrapper scripts use -U to select the user name to connect as (different\n>from -u), and I'd like to implement this sometime for the pg_dump tools as\n>well. 
How does -L (\"list\") sound?\n\nI'm about to prevent some disabling of triggers, so I may as well make this\nchange as well, unless you are already doing it?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 06 Mar 2001 13:08:14 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_restore -U" } ]
[ { "msg_contents": "Trying example from:\n\nhttp://www.postgresql.org/devel-corner/docs/user/functions-datetime.html\n\npatrimoine=# SELECT EXTRACT(EPOCH FROM TIMESTAMP '2001-02-16 20:38:40');\nERROR: parser: parse error at or near \"epoch\"\npatrimoine=# select version();\n version \n-------------------------------------------------------------------------------\n PostgreSQL 7.1beta3 on i386-unknown-netbsdelf1.5Q, compiled by GCC egcs-1.1.2\n(1 row)\n\npatrimoine=# select date_part('epoch','2001-02-16 20:38:40'::timestamp);\n date_part \n-----------\n 982355920\n(1 row)\n\nIs my version already too old?\n\nCheers,\n\nPatrick\n", "msg_date": "Mon, 5 Mar 2001 22:09:14 +0000", "msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>", "msg_from_op": true, "msg_subject": "epoch" }, { "msg_contents": "Patrick Welche writes:\n\n> http://www.postgresql.org/devel-corner/docs/user/functions-datetime.html\n>\n> patrimoine=# SELECT EXTRACT(EPOCH FROM TIMESTAMP '2001-02-16 20:38:40');\n> ERROR: parser: parse error at or near \"epoch\"\n> patrimoine=# select version();\n> version\n> -------------------------------------------------------------------------------\n> PostgreSQL 7.1beta3 on i386-unknown-netbsdelf1.5Q, compiled by GCC egcs-1.1.2\n> (1 row)\n\n> Is my version already too old?\n\nYes.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 6 Mar 2001 21:39:13 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: epoch" } ]
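[Editor's note] `EXTRACT(EPOCH FROM ...)` and `date_part('epoch', ...)` compute the same value; the beta3 parser simply did not yet accept the EXTRACT spelling for this field. As a quick cross-check of the number returned above (a sketch assuming the server interpreted the timestamp as UTC):

```python
from datetime import datetime, timezone

# Cross-check the epoch value from the thread: 2001-02-16 20:38:40,
# taken as UTC, is 982355920 seconds after 1970-01-01 00:00:00 UTC.
ts = datetime(2001, 2, 16, 20, 38, 40, tzinfo=timezone.utc)
epoch = int(ts.timestamp())
print(epoch)  # 982355920
```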
[ { "msg_contents": "I guess these stubs are for accessing PG as a corba server...\n\nI'm trying to look to see if I can store CORBA objects inside PG, any\nideas...\n\nFranck Martin\nNetwork and Database Development Officer\nSOPAC South Pacific Applied Geoscience Commission\nFiji\nE-mail: franck@sopac.org <mailto:franck@sopac.org> \nWeb site: http://www.sopac.org/\n<http://www.sopac.org/> Support FMaps: http://fmaps.sourceforge.net/\n<http://fmaps.sourceforge.net/> \n\nThis e-mail is intended for its addresses only. Do not forward this e-mail\nwithout approval. The views expressed in this e-mail may not be necessarily\nthe views of SOPAC.\n\n\n\n-----Original Message-----\nFrom: Peter T Mount [mailto:peter@retep.org.uk]\nSent: Tuesday, 6 March 2001 3:52 \nTo: Franck Martin\nCc: PostgreSQL List\nSubject: Re: [HACKERS] CORBA and PG\n\n\nQuoting Franck Martin <franck@sopac.org>:\n\n> Does anyone has pointers on CORBA and PostgreSQL?\n> \n> What is the story ?\n\nThere's some old stubs for one of the orbs somewhere in the source (C/C++)\n\nAlso the old JDBC/Corba example is still there \n(src/interfaces/jdbc/example/corba)\n\nPeter\n\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n", "msg_date": "Tue, 6 Mar 2001 10:14:21 +1200 ", "msg_from": "Franck Martin <Franck@sopac.org>", "msg_from_op": true, "msg_subject": "RE: CORBA and PG" }, { "msg_contents": "> I'm trying to look to see if I can store CORBA objects inside PG, any\n> ideas...\n\nCORBA has mechanisms for locating and executing remote objects. Some\nservices, like the naming service, could use a database as a persistant\nstore. 
Other services, like the implementation repository, could use a\ndatabase to hold rules for *how* to start a service, as well as holding\npersistant info.\n\nCORBA IORs are glue holding clients and servers together; storing those\nin a database would make them persistant (as mentioned above for the\nnaming service). An actual CORBA object typically is an executable,\nwhich would need to be stored as a binary object. Not sure what storing\nthat in a database would do for you; perhaps you could give us a use\ncase?\n\n - Thomas\n", "msg_date": "Tue, 06 Mar 2001 02:31:32 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: CORBA and PG" }, { "msg_contents": "Quoting Franck Martin <Franck@sopac.org>:\n\n> I guess these stubs are for accessing PG as a corba server...\n> \n> I'm trying to look to see if I can store CORBA objects inside PG, any\n> ideas...\n\nAlthough I've not tried it (yet) it should be possible to access Java EJB's \nfrom corba.\n\nIf so, then using an EJB server (JBoss www.jboss.org) you could then store them \nas Entity beans. Each one would then have its own table in the database.\n\nPeter\n\n> \n> Franck Martin\n> Network and Database Development Officer\n> SOPAC South Pacific Applied Geoscience Commission\n> Fiji\n> E-mail: franck@sopac.org <mailto:franck@sopac.org> \n> Web site: http://www.sopac.org/\n> <http://www.sopac.org/> Support FMaps: http://fmaps.sourceforge.net/\n> <http://fmaps.sourceforge.net/> \n> \n> This e-mail is intended for its addresses only. Do not forward this\n> e-mail\n> without approval. 
The views expressed in this e-mail may not be\n> necessarily\n> the views of SOPAC.\n> \n> \n> \n> -----Original Message-----\n> From: Peter T Mount [mailto:peter@retep.org.uk]\n> Sent: Tuesday, 6 March 2001 3:52 \n> To: Franck Martin\n> Cc: PostgreSQL List\n> Subject: Re: [HACKERS] CORBA and PG\n> \n> \n> Quoting Franck Martin <franck@sopac.org>:\n> \n> > Does anyone has pointers on CORBA and PostgreSQL?\n> > \n> > What is the story ?\n> \n> There's some old stubs for one of the orbs somewhere in the source\n> (C/C++)\n> \n> Also the old JDBC/Corba example is still there \n> (src/interfaces/jdbc/example/corba)\n> \n> Peter\n> \n> \n> -- \n> Peter Mount peter@retep.org.uk\n> PostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\n> RetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n> \n\n\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n", "msg_date": "Tue, 06 Mar 2001 04:39:09 -0500 (EST)", "msg_from": "Peter T Mount <peter@retep.org.uk>", "msg_from_op": false, "msg_subject": "RE: CORBA and PG" }, { "msg_contents": "> I'm trying to look to see if I can store CORBA objects inside PG, any\n> ideas...\n\nCORBA has several mechanisms for finding CORBA objects, including the\nnaming service and the implementation repository. The naming service\nprovides a directory for objects, returning IORs to allow a client to\ncontact a server. A database could be used to provide a persistant store\nfor this information. One could use a database to store rules for an\nimplementation repository, as well as IOR info.\n\nA CORBA object itself is an executable. So it could be stored as a\nbinary object, but I'm not sure what the benefits of storage in a\ndatabase would be. 
Some time ago I saw an article on using PostgreSQL to\nimplement a versioned file system, which might have some aspects similar\nto what you are asking about.\n\nDo you have a use case to help us out?\n\n - Thomas\n", "msg_date": "Tue, 06 Mar 2001 15:07:18 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: CORBA and PG" }, { "msg_contents": "Hi,\n\nThis was mentioned a while back on this list (pg hackers) - thanks to whoever\nprovided the pointer :-) I have not yet looked at it in depth, though that is high\non my list of TO-DO's. It is released under an Apache-style licence. Any reason\nwhy there are no pointers to it on the PostgreSQL related projects or interfaces\npages?\n\nproject page: http://4suite.org/index.epy\ndocs on ODMG support: http://services.4Suite.org/documents/4Suite/4ODS-userguide\n\n From project page:\n\"4Suite is a collection of Python tools for XML processing and object database\nmanagement. It provides support for XML parsing, several transient and persistent\nDOM implementations, XPath expressions, XPointer, XSLT transforms, XLink, RDF and\nODMG object databases.\n\n4Suite server ... features an XML data repository, a rules-based engine, and XSLT\ntransforms, XPath and RDF-based indexing and query, XLink resolution and many\nother XML services. It also supports related services such as distributed\ntransactions and access control lists. 
Along with basic console and command-line\nmanagement, it supports remote, cross-platform and cross-language access through\nCORBA, WebDAV, HTTP and other request protocols to be added shortly.\"\n\nDrivers for PostgreSQL and Oracle are provided.\n\nBTW, page pays postgresql quite a compliment too: \"PostgresQL is a brilliant,\nenterprise-quality, open-source, SQL DBMS.\" :-)\n\nPeter T Mount wrote:\n\n> Quoting Franck Martin <Franck@sopac.org>:\n>\n> > I guess these stubs are for accessing PG as a corba server...\n> >\n> > I'm trying to look to see if I can store CORBA objects inside PG, any\n> > ideas...\n\n> Although I've not tried it (yet) it should be possible to access Java EJB's\n> from corba.\n>\n> If so, then using an EJB server (JBoss www.jboss.org) you could then store them\n> as Entity beans. Each one would then have its own table in the database.\n>\n> Peter\n>\n> >\n> > Franck Martin\n> > Network and Database Development Officer\n> > SOPAC South Pacific Applied Geoscience Commission\n> > Fiji\n> > E-mail: franck@sopac.org <mailto:franck@sopac.org>\n> > Web site: http://www.sopac.org/\n> > <http://www.sopac.org/> Support FMaps: http://fmaps.sourceforge.net/\n> > <http://fmaps.sourceforge.net/>\n> >\n> > This e-mail is intended for its addresses only. Do not forward this\n> > e-mail\n> > without approval. 
The views expressed in this e-mail may not be\n> > necessarily\n> > the views of SOPAC.\n> >\n> >\n> >\n> > -----Original Message-----\n> > From: Peter T Mount [mailto:peter@retep.org.uk]\n> > Sent: Tuesday, 6 March 2001 3:52\n> > To: Franck Martin\n> > Cc: PostgreSQL List\n> > Subject: Re: [HACKERS] CORBA and PG\n> >\n> >\n> > Quoting Franck Martin <franck@sopac.org>:\n> >\n> > > Does anyone has pointers on CORBA and PostgreSQL?\n> > >\n> > > What is the story ?\n> >\n> > There's some old stubs for one of the orbs somewhere in the source\n> > (C/C++)\n> >\n> > Also the old JDBC/Corba example is still there\n> > (src/interfaces/jdbc/example/corba)\n> >\n> > Peter\n> >\n> >\n> > --\n> > Peter Mount peter@retep.org.uk\n> > PostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\n> > RetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n> >\n>\n> --\n> Peter Mount peter@retep.org.uk\n> PostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\n> RetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n--\n----------------------------------------------------------------------\njohn reid e-mail john_reid@uow.edu.au\ntechnical officer room G02, building 41\nschool of geosciences phone +61 02 4221 3963\nuniversity of wollongong fax +61 02 4221 4250\n\nuproot your questions from their ground and the dangling roots will be\nseen. 
more questions!\n -mentat zensufi\n\napply standard disclaimers as desired...\n----------------------------------------------------------------------\n\n\n", "msg_date": "Wed, 07 Mar 2001 12:37:26 +1100", "msg_from": "John Reid <jgreid@uow.edu.au>", "msg_from_op": false, "msg_subject": "Re: CORBA and PG" }, { "msg_contents": "At 12:37 07/03/01 +1100, John Reid wrote:\n>Hi,\n>\n>This was mentioned a while back on this list (pg hackers) - thanks to whoever\n>provided the pointer :-) I have not yet looked at it in depth, though \n>that is high\n>on my list of TO-DO's. It is released under an apache style licence. Any \n>reason\n>why there are no pointers to it on the PostgreSQL related projects or \n>interfaces\n>pages?\n\nProbably no one's asked to put it on there ;-)\n\nActually there's quite a few projects out there that use PostgreSQL and \ndon't say so here or register it on the web site, hence the lack of links...\n\nPeter\n\n\n>project page: http://4suite.org/index.epy\n>docs on ODMG support: \n>http://services.4Suite.org/documents/4Suite/4ODS-userguide\n>\n> From project page:\n>\"4Suite is a collection of Python tools for XML processing and object database\n>management. It provides support for XML parsing, several transient and \n>persistent\n>DOM implementations, XPath expressions, XPointer, XSLT transforms, XLink, \n>RDF and\n>ODMG object databases.\n\nHmmm, nothing to do with postgres but I think I may have seen a demo of \nthis about a month back. If it was that, it was pretty interesting...\n\nPeter\n\n", "msg_date": "Mon, 12 Mar 2001 20:08:11 +0000", "msg_from": "Peter Mount <peter@retep.org.uk>", "msg_from_op": false, "msg_subject": "Re: CORBA and PG" } ]
[ { "msg_contents": "I have spent several days now puzzling over the corrupted WAL logfile\nthat Scott Parish was kind enough to send me from a 7.1beta4 crash.\nIt looks a lot like two different series of transactions were getting\nwritten into the same logfile. I'd been digging like mad in the WAL\ncode to try to explain this as a buffer-management logic error, but\nafter a fresh exchange of info it turns out that I was barking up the\nwrong tree. There *were* two different series of transactions.\nSpecifically, here's what happened:\n\n1. Scott (or actually his associate) shut down and restarted the\npostmaster using the /etc/rc.d/init.d/pgsql script that ships with\nour RPMs. That script shuts down the old postmaster with\n\tkillproc postmaster\nIt turns out that at least on Scott's machine (RedHat 6.1), the default\nkill level for the killproc function is kill -9. (This is clearly a bad\nbug in the init script, but I digress.)\n\n2. So, the old postmaster was killed with kill -9, but its child\nbackends were still running. The new postmaster will start up\nsuccessfully because it'll think the old postmaster crashed, and\nso it will go through the usual recovery procedure.\n\n3. Now we have two sets of backends running in different shmem blocks\n(7.0 might have choked on that part, but 7.1 doesn't care) and running\ndifferent sets of transactions. But they're writing to the same WAL\nlog. Result: guaranteed corruption of the log.\n\nIt actually took two iterations of this to expose the bug: the third\nattempted postmaster start went looking for the checkpoint record last\nwritten by the second one, which meanwhile had got overwritten by\nactivity of the first backend set.\n\n\nNow, killing the postmaster -9 and not cleaning up the backends has\nalways been a good way to shoot yourself in the foot, but up to now the\nworst thing that was likely to happen to you was isolated corruption in\nspecific tables. 
In the brave new world of WAL the stakes are higher,\nbecause the system will refuse to start up if it finds a corrupted\ncheckpoint record. Clueless admins who resort to kill -9 as a routine\nadmin tool *will* lose their databases. Moreover, the init scripts\nthat are running around now are dangerous weapons if used with 7.1.\n\nI think we need a stronger interlock to prevent this scenario, but I'm\nunsure what it should be. Ideas?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 17:30:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "At 3/5/2001 04:30 PM, you wrote:\n>Now, killing the postmaster -9 and not cleaning up the backends has\n>always been a good way to shoot yourself in the foot, but up to now the\n>worst thing that was likely to happen to you was isolated corruption in\n>specific tables. In the brave new world of WAL the stakes are higher,\n>because the system will refuse to start up if it finds a corrupted\n>checkpoint record. Clueless admins who resort to kill -9 as a routine\n>admin tool *will* lose their databases. Moreover, the init scripts\n>that are running around now are dangerous weapons if used with 7.1.\n>\n>I think we need a stronger interlock to prevent this scenario, but I'm\n>unsure what it should be. 
Ideas?\n\nIs there any way to see if the other processes (child) have a lock on the \nlog file?\n\nOn a lot of systems, when a daemon starts, it will record the PID in a file so \nit/'the admin' can do a 'shutdown' script with the PID listed.\nCan child processes list themselves like child.PID in a configurable \ndirectory, and have the starting process look for all of these and shut the \n\"orphaned\" child processes down?\n\nJust thoughts...\n\nThomas\n", "msg_date": "Mon, 05 Mar 2001 17:19:34 -0600", "msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010305 14:51] wrote:\n> \n> I think we need a stronger interlock to prevent this scenario, but I'm\n> unsure what it should be. Ideas?\n\nRe having multiple postmasters active by accident.\n\nThe sysV IPC stuff has some hooks in it that may help you.\n\nOne idea is to check the 'struct shmid_ds' field 'shm_nattch',\nbasically at startup if it's not 1 (or 0) then you have more than\none postgresql instance messing with it and it should not proceed.\n\nI'd also suggest looking into using sysV semaphores and the semundo\nstuff, afaik it can be used to track the number of consumers of\na resource.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n", "msg_date": "Mon, 5 Mar 2001 15:47:51 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Tom Lane wrote:\n> checkpoint record. Clueless admins who resort to kill -9 as a routine\n> admin tool *will* lose their databases. Moreover, the init scripts\n> that are running around now are dangerous weapons if used with 7.1.\n\nThanks for the headsup, Tom. Time to nix killproc and do something\ncleaner -- compatible, but cleaner. 
I'll have to research what the\ndefaults are for later RH's -- but, as 6.1 is one of my target platforms\nat this time, I have to fix that issue for sure.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 20:46:41 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Thanks for the headsup, Tom. Time to nix killproc and do something\n> cleaner -- compatible, but cleaner.\n\nAs far as I could tell from the 6.1 scripts, it would work to do\n\n\tkillproc postmaster -TERM\n\nThe problem is just that killproc has an overenthusiastic default...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 20:49:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "killproc should send a kill -15 to the process, wait a few seconds for\nit to exit. If it does not, try kill -1, and if that doesn't kill it,\nthen kill -9.\n\n> Tom Lane wrote:\n> > checkpoint record. Clueless admins who resort to kill -9 as a routine\n> > admin tool *will* lose their databases. Moreover, the init scripts\n> > that are running around now are dangerous weapons if used with 7.1.\n> \n> Thanks for the headsup, Tom. Time to nix killproc and do something\n> cleaner -- compatible, but cleaner. 
I'll have to research what the\n> defaults are for later RH's -- but, as 6.1 is one of my target platforms\n> at this time, I have to fix that issue for sure.\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Mar 2001 20:52:13 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Thanks for the headsup, Tom. Time to nix killproc and do something\n> > cleaner -- compatible, but cleaner.\n> \n> As far as I could tell from the 6.1 scripts, it would work to do\n> \n> \tkillproc postmaster -TERM\n> \n\nYes, amazing it has a -9 default.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Mar 2001 20:52:40 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> killproc should send a kill -15 to the process, wait a few seconds for\n> it to exit. If it does not, try kill -1, and if that doesn't kill it,\n> then kill -9.\n\nTell it to the Linux people ... 
this is their boot-script code we're\ntalking about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 20:55:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > killproc should send a kill -15 to the process, wait a few seconds for\n> > it to exit. If it does not, try kill -1, and if that doesn't kill it,\n> > then kill -9.\n> \n> Tell it to the Linux people ... this is their boot-script code we're\n> talking about.\n\nRedHat, in particular. I can't vouch for any others.\n\nOn my RH 6.2 box, with initscripts-5.00-1 loaded, here's what killproc\ndoes if no killlevel is set (even though a default $killlevel is set to\n-9, it's not used in this code):\n($pid is the pid of the proc to kill, $base is the name of the proc,\netc)\n\n if [ \"$notset\" = \"1\" ] ; then\n if ps h $pid>/dev/null 2>&1; then\n # TERM first, then KILL if not dead\n kill -TERM $pid\n usleep 100000\n if ps h $pid >/dev/null 2>&1 ; then\n sleep 1\n if ps h $pid >/dev/null 2>&1 ; then\n sleep 3\n if ps h $pid >/dev/null 2>&1 ; then\n kill -KILL $pid\n fi\n fi\n fi\n fi\n ps h $pid >/dev/null 2>&1\n RC=$?\n [ $RC -eq 0 ] && failure \"$base shutdown\" || success \"$base\nshutdown\"\n RC=$((! $RC))\n # use specified level only\n else\n if ps h $pid >/dev/null 2>&1; then\n kill $killlevel $pid\n RC=$?\n [ $RC -eq 0 ] && success \"$base $killlevel\" || failure \"$base\n$killlevel\"\n fi\n fi\n\n\nIs 6.1 this different from 6.2? This code on the surface seems\nreasonable to me -- am I missing something? 
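For reference, the escalation that killproc performs can be written out as a self-contained sh function. This is an illustrative sketch only -- it is not the Red Hat code, and the retry count and one-second waits are arbitrary choices:

```shell
# Sketch of killproc-style shutdown escalation: SIGTERM first, poll for
# exit, SIGKILL only as a last resort. Delay values here are guesses.
stop_proc() {
    pid="$1"
    kill -TERM "$pid" 2>/dev/null || return 0   # not running; nothing to do
    for i in 1 2 3; do
        kill -0 "$pid" 2>/dev/null || return 0  # it exited after SIGTERM
        sleep 1
    done
    kill -KILL "$pid" 2>/dev/null || true       # still there; force it
}
```

An init script would call it as, say, `stop_proc "$(cat $PGDATA/postmaster.pid)"` (path illustrative) -- and note that the final SIGKILL line is exactly the step this thread warns against reaching.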
The 6.2 code (found in\n/etc/rc.d/init.d/functions, for those who might not know where to find\nkillproc) sets a default killlevel but never uses it -- ignorant but not\nstupid.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 21:11:38 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "> if [ \"$notset\" = \"1\" ] ; then\n> if ps h $pid>/dev/null 2>&1; then\n> # TERM first, then KILL if not dead\n> kill -TERM $pid\n> usleep 100000\n> if ps h $pid >/dev/null 2>&1 ; then\n> sleep 1\n> if ps h $pid >/dev/null 2>&1 ; then\n> sleep 3\n> if ps h $pid >/dev/null 2>&1 ; then\n> kill -KILL $pid\n> fi\n> fi\n> fi\n> fi\n\nYes, this seems like the proper way to do it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Mar 2001 21:14:16 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "On Mon, Mar 05, 2001 at 08:55:41PM -0500, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > killproc should send a kill -15 to the process, wait a few seconds for\n> > it to exit. If it does not, try kill -1, and if that doesn't kill it,\n> > then kill -9.\n> \n> Tell it to the Linux people ... this is their boot-script code we're\n> talking about.\n\nNot to be a zealot, but this isn't _Linux_ boot-script code, it's\n_Red Hat_ boot-script code. Red Hat would like for us all to confuse\nthe two, but they jes' ain't the same. (As a rule of thumb, where it\nworks right, credit Linux; where it doesn't, blame Red Hat. 
:-)\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Mon, 5 Mar 2001 18:19:25 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Tom Lane wrote:\n> \n> Now, killing the postmaster -9 and not cleaning up the backends has\n> always been a good way to shoot yourself in the foot, but up to now the\n> worst thing that was likely to happen to you was isolated corruption in\n> specific tables. In the brave new world of WAL the stakes are higher,\n> because the system will refuse to start up if it finds a corrupted\n> checkpoint record. Clueless admins who resort to kill -9 as a routine\n> admin tool *will* lose their databases. Moreover, the init scripts\n> that are running around now are dangerous weapons if used with 7.1.\n> \n> I think we need a stronger interlock to prevent this scenario, but I'm\n> unsure what it should be. Ideas?\n> \n\nSeems the simplest way is to inhibit starting postmaster\nif the pid file exists.\nAnother way is to use flock() if flock() is available.\nWe could flock() the pid file so that another postmaster\ncould detect the lock of the file.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Tue, 06 Mar 2001 11:19:33 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Bruce Momjian wrote:\n> > # TERM first, then KILL if not dead\n> Yes, this seems like the proper way to do it.\n\nNow to verify that 6.1 is the same....or different.... Hmmmm.... The\nmirrors of ftp.redhat.com (and, in fact, RedHat.com itself) no longer\nhave the updates or the original for 6.1's initscripts-4.70 package. \nCan a RedHat 6.1 user (using as close as possible to 6.1's release\ninitscripts package) send me a copy of /etc/rc.d/init.d/functions, or\nverify how that initscripts package defines killproc? 
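The pid-file flock() idea above can be sketched from the shell with the util-linux flock(1) wrapper -- illustrative only: the pid-file path is made up, flock(1) is assumed available, and (as Tom's reply below points out) every backend would also have to hold the lock for this to catch orphaned backends:

```shell
# Hypothetical pid-file interlock: take an exclusive advisory lock on the
# pid file and refuse to start if some other process already holds it.
PIDFILE="${PIDFILE:-/tmp/postmaster.demo.pid}"   # illustrative path only

exec 9>>"$PIDFILE"         # fd 9 stays open for the life of the process
if ! flock -n 9; then      # non-blocking exclusive lock
    echo "pid file is locked; postmaster or leftover backends still alive" >&2
    exit 1
fi
echo "$$" 1>&9             # we hold the lock; note our pid (sketch only)
```

The attraction for this problem is that the kernel drops the lock when the holder dies -- even after kill -9 -- so a stale lock cannot block a legitimate restart.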
I cannot at this\nmoment locate my RH 6.1 SRPMS CD. Found my RH _4_.1 CD, but that's just\na _little_ old :-).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 21:23:55 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> I think we need a stronger interlock to prevent this scenario, but I'm\n>> unsure what it should be. Ideas?\n\n> Seems the simplest way is to inhibit starting postmaster\n> if the pid file exists.\n\nThen we're unable to recover from a crash without manual intervention.\n\nThe tricky part of this is not to give up the ability to restart when\nthere *has* been a crash.\n\n> Another way is to use flock() if flock() is available.\n> We could flock() the pid file so that another postmaster\n> could detect the lock of the file.\n\nThis would only work if every backend is holding flock on the file,\nwhich would mean they'd all have to keep it open all the time. Kind\nof annoying to use up that many file descriptors on it. Might be the\nbest answer though; I haven't thought of anything I like better...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 21:28:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Nathan Myers wrote:\n> Not to be a zealot, but this isn't _Linux_ boot-script code, it's\n> _Red Hat_ boot-script code. Red Hat would like for us all to confuse\n> the two, but they jes' ain't the same. (As a rule of thumb, where it\n> works right, credit Linux; where it doesn't, blame Red Hat. :-)\n\nSo we're going to credit Linux for PostgreSQL being shipped as part of\nthe RedHat distribution since RH 5.0, then? 
:-0\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 21:33:19 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Is 6.1 this different from 6.2?\n\nScott sent me a copy of /etc/init.d/functions from his box, and it has\nlargely the same behavior (I hadn't read the whole code to notice that\nit doesn't use the default killlevel...). What's actually happening\nhere is that the init script sends SIGTERM, and then SIGKILL four\nseconds later if the postmaster hasn't shut down yet. Unfortunately,\nunless your clients are very short-lived four seconds isn't going to\nbe enough for a \"polite\" shutdown. (It's pretty marginal even for\nan impolite one, since a checkpoint will take at least a couple of\nseconds.)\n\nHowever, with an explicit kill level that doesn't happen: you get one\nsignal of the specified value, no more. Possibly it would be better for\nthe init script to send SIGINT (forcibly disconnect clients) instead of\nSIGTERM, however. 
So I'm now leaning to \"killproc postmaster -INT\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 21:36:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Tom Lane wrote:\n> The tricky part of this is not to give up the ability to restart when\n> there *has* been a crash.\n\nBut kill -9 effectively _is_ an admin-initiated crash.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 21:36:56 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Tom Lane wrote:\n>> The tricky part of this is not to give up the ability to restart when\n>> there *has* been a crash.\n\n> But kill -9 effectively _is_ an admin-initiated crash.\n\nYeah, but only a partial crash. If the admin finishes the job by\nkilling the backends too, we're fine. Postmaster down, backends alive\nis not a scenario we're currently prepared for. We need a way to plug\nthat gap.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 21:40:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Tom Lane wrote:\n> However, with an explicit kill level that doesn't happen: you get one\n> signal of the specified value, no more. Possibly it would be better for\n> the init script to send SIGINT (forcibly disconnect clients) instead of\n> SIGTERM, however. 
> So I'm now leaning to \"killproc postmaster -INT\".\n\nOk, since I can't seem to count on killproc's exact behavior, istm that\nI can:\nkillproc postmaster -INT\nwait some number of seconds\nif postmaster still up\n killproc postmaster -TERM\nwait some number of seconds\nif postmaster STILL up\n killproc postmaster #and let the grim reaper do its dirty work.\n\nAfter all, the system shutdown is relying on this script to properly and\nthoroughly shut things down, or it WILL do the 'kill -9\npid-of-postmaster' for you.\n\nNow, what's a good delay here? Or is there a better metric than a\nsimple delay? After all, I want to avoid the kill -9 unless we have an\nemergency hard lock situation -- what's a good indicator of the backend\nfleet of processes actually _doing_ something? Or should I key on an\nindicator of processor speed (Linux does provide a nice bogus metric\nknown as BogoMIPS for such a purpose)? The last thing I want to do is\nwait too long on some platforms and not long enough on others.\n", "msg_date": "Mon, 05 Mar 2001 21:45:08 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> The last thing I want to do is\n> wait too long on some platforms and not long enough on others.\n\nThe difficulty is to know how long the final checkpoint will take.\nThis depends on (at least) your hard disk speed and the number of\ndirty buffers, so I think you're going to have some difficulty\nestimating it with any reliability. BogoMIPS won't help, for sure.\n\nHowever, if you do SIGINT and then wait a few seconds, you can be fairly\nsure that all the extant backends are dead (if not frozen up...) and\nthat the checkpoint is in progress. 
That may be about the best you can\ndo.\n\nI do not agree that this script should take it on itself to kill -9 the\npostmaster. Please note that the reason we're having this discussion at\nall is that the init script may be used for purposes other than system\nshutdown. So the argument that \"it's going to happen anyway\" is wrong.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 21:53:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Tom Lane wrote:\n> Yeah, but only a partial crash. If the admin finishes the job by\n> killing the backends too, we're fine. Postmaster down, backends alive\n> is not a scenario we're currently prepared for. We need a way to plug\n> that gap.\n\nPostmaster can easily enough find out if zombie backends are 'out there'\nduring startup, right? What can postmaster _do_ about it, though? It\nwon't necessarily be able to kill them -- but it also can't control\nthem. If it _can_ kill them, should it try?\n\nAfter all, if those zombies are out there on this PGDATA there's going\nto be big trouble if we even try to start. If we can't kill the zombies\n(that might still be doing something useful with their clients) from our\nstarting postmaster, how can we possibly start up underneath running\nbackends?\n\nShould the backend look for the presence of its parent postmaster\nperiodically and gracefully come down if postmaster goes away without\nthe proper handshake? A watchdog semaphore (or shared memory flag) that\nthe backend resets and then checks periodically for it being set by its\nparent postmaster?\n\nShould a set of backends detect a new postmaster coming up and try to\n'sync up' with that postmaster, like the baroque GEMM handshake dance\nperformed by 386 memory managers when Windows needs to start its own\nVMM?\n\nOr should we spend that much time protecting Barney Fife's from their\nown single bullet? 
:-)\n\nJust a nor-easter of a brainstorm....\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 21:55:11 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "> Ok, since I can't seem to count on killproc's exact behavior, istm that\n> I can:\n> killproc postmaster -INT\n> wait some number of seconds\n> if postmaster still up\n> killproc postmaster -TERM\n> wait some number of seconds\n> if postmaster STILL up\n> killproc postmaster #and let the grim reaper do its dirty work.\n> \n> After all, the system shutdown is relying on this script to properly and\n> thoroughly shut things down, or it WILL do the 'kill -9\n> pid-of-postmaster' for you.\n> \n> Now, what's a good delay here? Or is there a better metric than a\n> simple delay? After all, I want to avoid the kill -9 unless we have an\n> emergency hard lock situation -- what's a good indicator of the backend\n> fleet of processes actually _doing_ something? Or should I key on an\n> indicator of processor speed (Linux does provide a nice bogus metric\n> known as BogoMIPS for such a purpose)? The last thing I want to do is\n> wait too long on some platforms and not long enough on others.\n\nIn remembering how other databases handle it, I think you should use\npg_ctl to shut it down. You need to enable wait mode, not sure if that\nis the default or not. That will wait for it to shut down before\ncontinuing. I realize a hung shutdown would stop the kernel from\nshutting down. You could put a sleep 100 in there and call a trap on a\ntimeout.\n\nHere is some shell code:\n\n\tTIME=60\n\tpg_ctl -w stop &\n\tBG=\"$!\"; export BG\n\n\t(sleep \"$TIME\"; kill \"$BG\" ) &\n\tBG2=\"$!\"; export BG2\n\n\twait \"$BG\"\n\tif kill -0 \"$BG2\" 2>/dev/null; then\n\t\tkill \"$BG2\"\n\tfi\n\n\nThis will try a pg_ctl shutdown for 60 seconds, then kill pg_ctl. 
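That watchdog pattern can also be packaged as a reusable function -- a sketch only; on current systems timeout(1) does the same job, but it was not widely available then:

```shell
# run_with_timeout SECS CMD...: run CMD, killing it if it outlives the
# deadline. Returns CMD's exit status (non-zero if the watchdog fired).
run_with_timeout() {
    secs="$1"; shift
    "$@" &                                        # the real command
    cmd=$!
    ( sleep "$secs"; kill "$cmd" 2>/dev/null ) &  # the watchdog
    dog=$!
    rc=0
    wait "$cmd" || rc=$?
    kill "$dog" 2>/dev/null || true               # cancel unfired watchdog
    return "$rc"
}
```

The init script's stop action could then read something like `run_with_timeout 60 pg_ctl -w stop`, falling back to harsher measures only on a non-zero return.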
You\nwould then need a kill of your own.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Mar 2001 21:57:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Tom Lane wrote:\n> Please note that the reason we're having this discussion at\n> all is that the init script may be used for purposes other than system\n> shutdown. So the argument that \"it's going to happen anyway\" is wrong.\n\nBelieve it or not, you just disproved your own statement that the\ninitscript should not take it upon itself to issue the kill -9. So,\nwhat if I issue '/etc/rc.d/init.d/postgresql restart' -- and backends\ndon't go away during the 'stop' phase, while postmaster may actually\nhave died? Or is it even possible for postmaster to drop out with a\nrunning backend out there?\n\nNo, more is needed. But I think a careful reap through the running\nbackends to kill those that need killing if postmaster won't go down\nmight be prudent. Currently it is not possible to run multiple\npostmasters with the RPM install (I am working on that little problem,\nbut it won't be for 7.1's RPMset yet), so all backends that are running\non the RPM PGDATA location (which I am looking at making configurable as\nwell) will belong to the one postmaster. Of course, that would be an\nabsolute last resort.\n\nOh well -- the real solution is elsewhere, anyway. I just have to make\nsure it is not data-corruption broken. 
And, if leaving the -9 out\ncompletely is the only solution, then, well, it's the only solution.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 22:03:49 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Tom Lane wrote:\n>> Postmaster down, backends alive is not a scenario we're currently\n>> prepared for. We need a way to plug that gap.\n\n> Postmaster can easily enough find out if zombie backends are 'out there'\n> during startup, right?\n\nIf you think it's easy enough, enlighten the rest of us ;-). Be sure\nyour solution only finds leftover backends from the previous instance of\nthe same postmaster, else it will prevent running multiple postmasters\non one system.\n\n> What can postmaster _do_ about it, though? It\n> won't necessarily be able to kill them -- but it also can't control\n> them. If it _can_ kill them, should it try?\n\nI think refusal to start is sufficient. They should go away by\nthemselves as their clients disconnect, and forcing the issue doesn't\nseem like it will improve matters. The admin can kill them (hopefully\nwith just a SIGTERM ;-)) if he wants to move things along ... but I'd\nnot like to see a newly-starting postmaster do that automatically.\n\n> Should the backend look for the presence of its parent postmaster\n> periodically and gracefully come down if postmaster goes away without\n> the proper handshake?\n\nUnless we checked just before every disk write, this wouldn't represent\na safe failure mode. The onus has to be on the newly-starting\npostmaster, I think, not on the old backends.\n\n> Should a set of backends detect a new postmaster coming up and try to\n> 'sync up' with that postmaster,\n\nNice try ;-). 
How will you persuade the kernel that these processes are\nnow children of the new postmaster?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 22:04:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Bruce Momjian wrote:\n> This will try a pg_ctl shutdown for 60 seconds, then kill pg_ctl. You\n> would then need a kill of your own.\n\nI missed something somewhere: wasn't the consensus a few weeks ago that\npg_ctl shouldn't be used for a system initscript? Or did I black out\nthat day? :-) I certainly have no problem using pg_ctl for this purpose\n-- as I have been using pg_ctl to start postmaster all along (then why\nam I not using it to stop -- don't answer that :-))......\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 22:06:41 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "> Bruce Momjian wrote:\n> > This will try a pg_ctl shutdown for 60 seconds, then kill pg_ctl. You\n> > would then need a kill of your own.\n> \n> I missed something somewhere: wasn't the consensus a few weeks ago that\n> pg_ctl shouldn't be used for a system initscript? Or did I black out\n> that day? :-) I certainly have no problem using pg_ctl for this purpose\n> -- as I have been using pg_ctl to start postmaster all along (then why\n> am I not using it to stop -- don't answer that :-))......\n\nI don't remember that discussion. My guess was that you didn't want\npg_ctl to hang forever. My script handles that, I think.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Mar 2001 22:08:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Tom Lane wrote:\n>> Please note that the reason we're having this discussion at\n>> all is that the init script may be used for purposes other than system\n>> shutdown. So the argument that \"it's going to happen anyway\" is wrong.\n\n> Believe it or not, you just disproved your own statement that the\n> initscript should not take it upon itself to issue the kill -9.\n\nHow?\n\n> So, what if I issue '/etc/rc.d/init.d/postgresql restart' -- and\n> backends don't go away during the 'stop' phase, while postmaster may\n> actually have died? Or is it even possible for postmaster to drop out\n> with a running backend out there?\n\nThe postmaster will certainly not do so voluntarily. If you kill -9 it,\nof course, that's the situation you're left with ... but your reasoning\nseems circular to me. \"I should kill -9 the postmaster to prevent the\nsituation where I've kill -9'd the postmaster.\"\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 22:10:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> I missed something somehwere: wasn't the consensus a few weeks ago that\n> pg_ctl shouldn't be used for a system initscript?\n\nI thought there was some concern about whether pg_ctl is really \"ready\nfor prime time\". 
But I don't recall the details either.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 22:11:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Tom Lane wrote:\n> Lamar Owen wrote:\n> > Postmaster can easily enough find out if zombie backends are 'out there'\n> > during startup, right?\n \n> If you think it's easy enough, enlighten the rest of us ;-).\n\nIf postgres reported PGDATA on the command line it would be easy enough.\n\n> > What can postmaster _do_ about it, though? It\n> > won't necessarily be able to kill them -- but it also can't control\n> > them. If it _can_ kill them, should it try?\n \n> I think refusal to start is sufficient. They should go away by\n> themselves as their clients disconnect, and forcing the issue doesn't\n\n???? I have misunderstood your previous statement about not wanting to\nforce a manual crash recovery, then.\n\n> > Should a set of backends detect a new postmaster coming up and try to\n> > 'sync up' with that postmaster,\n \n> Nice try ;-). How will you persuade the kernel that these processes are\n> now children of the new postmaster?\n\nYeah, that's the kicker.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 22:12:00 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Tom Lane wrote:\n>> If you think it's easy enough, enlighten the rest of us ;-).\n\n> If postgres reported PGDATA on the command line it would be easy enough.\n\nIn ps status you mean? I don't think we are prepared to require ps\nstatus functionality to let the system start up... we'd lose a number\nof supported platforms that way.\n\n\n>> I think refusal to start is sufficient. 
They should go away by\n>> themselves as their clients disconnect, and forcing the issue doesn't\n\n> ???? I have misunderstood your previous statement about not wanting to\n> force a manual crash recovery, then.\n\nIn the case of an actual crash and restart, postgres should come back up\nwithout help. However, the situation here is not a crash, it is\nincomplete admin intervention. I don't think that expecting the admin\nto complete his intervention is the same thing as manual crash recovery.\nI especially don't think that we should second-guess what the admin\nwants us to do by auto-killing backends that are still serving clients.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 22:17:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Tom Lane wrote:\n> of course, that's the situation you're left with ... but your reasoning\n> seems circular to me. \"I should kill -9 the postmaster to prevent the\n> situation where I've kill -9'd the postmaster.\"\n\nOk, while the script can certainly be used from the command line, its\nprimary purpose is system shutdown.\n\nAnd, I am thinking kind of circuitously at this point -- I only now\nrealize just how circuitously. If I keep slapping my forehead like\nthis, I'm going to be bald in a few years....\n\nI don't want to reap the postmaster off -- I want to reap off the\nbackends associated with that particular postmaster, allowing that\npostmaster to die on its own. Duh. Doing this in a safe manner is not\ngoing to be easy, given that the PGDATA is not on the command line to\nthe backend as echoed by ps. Although I could key on PPID for the\nbackends.... I'll have to experiment. But not tonight -- last week was\nmore taxing than I thought. 
:-(.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 22:24:11 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Tom Lane wrote:\n> >> If you think it's easy enough, enlighten the rest of us ;-).\n> > If postgres reported PGDATA on the command line it would be easy enough.\n \n> In ps status you mean? I don't think we are prepared to require ps\n> status functionality to let the system start up... we'd lose a number\n> of supported platforms that way.\n\nThat is one downside. A major downside. Again, a lot of work to protect\nthe Barney Fifes out there.\n\n> In the case of an actual crash and restart, postgres should come back up\n> without help. However, the situation here is not a crash, it is\n> incomplete admin intervention. I don't think that expecting the admin\n\nIs it a correct assumption that this is the only time postmaster might\ndrop out?\n\nBut, thanks for the clarification, as I had misunderstood what you\nmeant.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 22:27:23 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> I don't want to reap the postmaster off -- I want to reap off the\n> backends associated with that particular postmaster, allowing that\n> postmaster to die on its own. Duh. Doing this in a safe manner is not\n> going to be easy, given that the PGDATA is not on the command line to\n> the backend as echoed by ps. Although I could key on PPID for the\n> backends.... I'll have to experiment.\n\nPPID should work fine, actually. 
Keep in mind though that SIGINT'ing\nthe postmaster will already have sent a terminate signal to its children\n(barring postmaster breakage), and that if you wait around for awhile\nand then kill off remaining children, you may well accomplish nothing\nexcept to kill off the checkpoint process :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 22:31:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> Is it a correct assumption that this is the only time postmaster might\n> drop out?\n\nWell, there's always the possibility of a bug leading to postmaster\ncoredump. Historically those have been pretty rare though.\n\nIn any case, I'm not sure that the init script is the place to be\nsolving these problems. We do need some internal mechanism to protect\nagainst a crashed or kill -9'd postmaster.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 22:33:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Tom Lane wrote:\n> Well, there's always the possibility of a bug leading to postmaster\n> coredump. Historically those have been pretty rare though.\n\nI have never personally seen one, since 6.1.1.\n \n> In any case, I'm not sure that the init script is the place to be\n> solving these problems.\n\nWell, I do kindof have the responsibility to allow the system to shut\ndown..... I'll have to double check -- there may be a timeout mechanism\nin the RedHat init to reap off shutdown scripts -- but I haven't yet\nfound it. 
Better to gracefully yank the plugs than have the grim reaper\nyank them in the wrong order for you, in any case.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 05 Mar 2001 22:44:36 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010305 19:13] wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Tom Lane wrote:\n> >> Postmaster down, backends alive is not a scenario we're currently\n> >> prepared for. We need a way to plug that gap.\n> \n> > Postmaster can easily enough find out if zombie backends are 'out there'\n> > during startup, right?\n> \n> If you think it's easy enough, enlighten the rest of us ;-). Be sure\n> your solution only finds leftover backends from the previous instance of\n> the same postmaster, else it will prevent running multiple postmasters\n> on one system.\n\nI'm sure some sort of encoding of the PGDATA directory along with\nthe pids stored in the shm segment...\n\n> > What can postmaster _do_ about it, though? It\n> > won't necessarily be able to kill them -- but it also can't control\n> > them. If it _can_ kill them, should it try?\n> \n> I think refusal to start is sufficient. They should go away by\n> themselves as their clients disconnect, and forcing the issue doesn't\n> seem like it will improve matters. The admin can kill them (hopefully\n> with just a SIGTERM ;-)) if he wants to move things along ... but I'd\n> not like to see a newly-starting postmaster do that automatically.\n\nI agree, shooting down processes incorrectly should be left up to\nvendors braindead scripts. :)\n\n> > Should the backend look for the presence of its parent postmaster\n> > periodically and gracefully come down if postmaster goes away without\n> > the proper handshake?\n> \n> Unless we checked just before every disk write, this wouldn't represent\n> a safe failure mode. 
The onus has to be on the newly-starting\n> postmaster, I think, not on the old backends.\n> \n> > Should a set of backends detect a new postmaster coming up and try to\n> > 'sync up' with that postmaster,\n> \n> Nice try ;-). How will you persuade the kernel that these processes are\n> now children of the new postmaster?\n\nOh, easy, use ptrace. :)\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n", "msg_date": "Mon, 5 Mar 2001 21:43:13 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "> I especially don't think that we should second-guess what the admin\n> wants us to do by auto-killing backends that are still serving\n> clients.\n\n Sure. But it would be nice anyway if pg_ctl could do this with a\nspecific command line switch. \n\n-- \n<< Tout n'y est pas parfait, mais on y honore certainement les jardiniers >>\n\n\t\t\tDominique Quatravaux <dom@kilimandjaro.dyndns.org>\n", "msg_date": "Tue, 6 Mar 2001 13:30:06 +0100", "msg_from": "dom@idealx.com", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > This will try a pg_ctl shutdown for 60 seconds, then kill pg_ctl. You\n> > would then need a kill of you own.\n> \n> pg_ctl automatically times out after 60 seconds.\n\nOh, yea, that's right, I saw that in the documenation. Forget my\nscript. Just run pg_ctl first, then kill the postmaster if it is still\nthere. Much safer than doing kill and checking because pg_ctl knows\nwhen the system cleanly shuts down and exits.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 6 Mar 2001 11:54:49 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Bruce Momjian writes:\n\n> This will try a pg_ctl shutdown for 60 seconds, then kill pg_ctl. You\n> would then need a kill of you own.\n\npg_ctl automatically times out after 60 seconds.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 6 Mar 2001 17:57:48 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Lamar Owen writes:\n\n> I missed something somehwere: wasn't the consensus a few weeks ago that\n> pg_ctl shouldn't be used for a system initscript?\n\nThe consensus(?) was that there was some work to do in pg_ctl before it\nwas robust enough to be used (for anything). That work has been done.\nAn example Linux init.d script is at contrib/start-scripts/linux. The\nonly fault in that script that I can see is that it has no recipe for the\ncase when the postmaster does not come down after 60 seconds. But this is\nreally no problem for the issue at hand because if you do a normal\nrunlevel switch then the postmaster will simply keep running, and during a\nsystem shutdown all the backends are going to die anyway.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 6 Mar 2001 18:04:44 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Peter Eisentraut wrote:\n> \n> Lamar Owen writes:\n> \n> > I missed something somehwere: wasn't the consensus a few weeks ago that\n> > pg_ctl shouldn't be used for a system initscript?\n> \n> The consensus(?) 
was that there was some work to do in pg_ctl before it\n> was robust enough to be used (for anything). That work has been done.\n\nThat was the detail I missed.\n\n> case when the postmaster does not come down after 60 seconds. But this is\n> really no problem for the issue at hand because if you do a normal\n> runlevel switch then the postmaster will simply keep running, and during a\n> system shutdown all the backends are going to die anyway.\n\nOnly if each and every shutdown script succeeds in its task. And I have\nto make sure that the RPM's shipping script successfully pulls down the\nsystem in an orderly fashion -- of course, I don't have to worry about\nthe case where a postmaster is going to be started back up if we are in\nsystem shutdown -- but, as Tom also stated, I can't assume I'm in the\nsystem's death throes when called with the stop parameter.\n\nAnd it _is_ possible for an admin to set up the runlevels such that a\nlevel is set aside where even networking isn't running (actually, that\nlevel already exists, and is called 'single user mode') -- or a run\nlevel for website maintenance where networking is still up, but the\nwebserver and postgresql (and other associated) processes are to be shut\ndown. I personally use this -- I have set up runlevel 4 as a 'remote\nsingle user mode' of sorts where I still have sshd running (and the\nnetworking stack, obviously), but AOLserver, postgresql, and RealServer\nare shut down. I then switch runlevels back to 3 to return to normal. \nMuch easier than manually stopping and restarting (in the correct order,\nas AOLserver is not a happy camper if postmaster drops out from\nunderneath it) all the necessary pieces.\n\nSo I can't assume anything. 
The default RPM installation used to\nautomatically configure runlevels 3, 4, and 5 (not any more), but my\nscript can't assume that the system is actually in that state by any\nmeans.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 06 Mar 2001 12:55:54 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n> I'm sure some sort of encoding of the PGDATA directory along with\n> the pids stored in the shm segment...\n\nI thought about this too, but it strikes me as not very trustworthy.\nThe problem is that there's no guarantee that the new postmaster will\neven notice the old shmem segment: it might select a different shmem\nkey. (The 7.1 coding of shmem key selection makes this more likely\nthan it used to be, but even under 7.0, it will certainly fail to work\nif I choose to start the new postmaster using a different port number\nthan the old one had. The shmem key is driven primarily by port number\nnot data directory ...)\n\nThe interlock has to be tightly tied to the PGDATA directory, because\nwhat we're trying to protect is the files in and under that directory.\nIt seems that something based on file(s) in that directory is the way\nto go.\n\nThe best idea I've seen so far is Hiroshi's idea of having all the\nbackends hold fcntl locks on the same file (probably postmaster.pid\nwould do fine). Then the new postmaster can test whether any backends\nare still alive by trying to lock the old postmaster.pid file.\nUnfortunately, I read in the fcntl man page:\n\n Locks are not inherited by a child process in a fork(2) system call.\n\nThis makes the idea much less attractive than I originally thought:\na new backend would not automatically inherit a lock on the\npostmaster.pid file from the postmaster, but would have to open/lock it\nfor itself. 
That means there's a window where the new backend exists\nbut would be invisible to a hypothetical new postmaster.\n\nWe could work around this with the following, very ugly protocol:\n\n1. Postmaster normally maintains fcntl read lock on its postmaster.pid\nfile. Each spawned backend immediately opens and read-locks\npostmaster.pid, too, and holds that file open until it dies. (Thus\nwasting a kernel FD per backend, which is one of the less attractive\nthings about this.) If the backend is unable to obtain read lock on\npostmaster.pid, then it complains and dies. We must use read locks\nhere so that all these processes can hold them separately.\n\n2. If a newly started postmaster sees a pre-existing postmaster.pid\nfile, it tries to obtain a *write* lock on that file. If it fails,\nconclude that an old postmaster or backend is still alive; complain\nand quit. If it succeeds, sit for say 1 second before deleting the file\nand creating a new one. (The delay here is to allow any just-started\nold backends to fail to acquire read lock and quit. A possible\nobjection is that we have no way to guarantee 1 second is enough, though\nit ought to be plenty if the lock acquisition is just after the fork.)\n\nOne thing that worries me a little bit is that this means an fcntl\nread-lock request will exist inside the kernel for each active backend.\nDoes anyone know of any performance problems or hard kernel limits we\nmight run into with large numbers of backends (lots and lots of fcntl\nlocks)? At least the locks are on a file that we don't actually touch\nin the normal course of business.\n\nA small savings is that the backends don't actually need to open new FDs\nfor the postmaster.pid file; they can use the one they inherit from the\npostmaster, even though they do need to lock it again. 
I'm not sure how\nmuch that saves inside the kernel, but at least something.\n\nThere are also the usual set of concerns about portability of flock,\nthough this time we're locking a plain file and not a socket, so it\nshouldn't be as much trouble as it was before.\n\nComments? Does anyone see a better way to do it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 13:10:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Lamar Owen writes:\n\n> > case when the postmaster does not come down after 60 seconds. But this is\n> > really no problem for the issue at hand because if you do a normal\n> > runlevel switch then the postmaster will simply keep running, and during a\n> > system shutdown all the backends are going to die anyway.\n>\n> Only if each and every shutdown script succeeds in its task. And I have\n> to make sure that the RPM's shipping script successfully pulls down the\n> system in an orderly fashion -- of course, I don't have to worry about\n> the case where a postmaster is going to be started back up if we are in\n> system shutdown -- but, as Tom also stated, I can't assume I'm in the\n> system's death throes when called with the stop parameter.\n\nWell, if you have something clever you want to do if the postmaster\ndoesn't come down after an orderly shutdown then please share it. The\ncurrent alternatives are 'leave running' or 'kill -9'. 
I know I'd prefer\nthe former.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 6 Mar 2001 19:20:40 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010306 10:10] wrote:\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> > I'm sure some sort of encoding of the PGDATA directory along with\n> > the pids stored in the shm segment...\n> \n> I thought about this too, but it strikes me as not very trustworthy.\n> The problem is that there's no guarantee that the new postmaster will\n> even notice the old shmem segment: it might select a different shmem\n> key. (The 7.1 coding of shmem key selection makes this more likely\n> than it used to be, but even under 7.0, it will certainly fail to work\n> if I choose to start the new postmaster using a different port number\n> than the old one had. The shmem key is driven primarily by port number\n> not data directory ...)\n\nThis seems like a mistake. \n\nI'm surprised you guys aren't just using some form of the FreeBSD\nftok() algorithm for this:\n\nFTOK(3) FreeBSD Library Functions Manual FTOK(3)\n\n...\n\n The ftok() function attempts to create a unique key suitable for use with\n the msgget(3), semget(2) and shmget(2) functions given the path of an ex-\n isting file and a user-selectable id.\n\n The specified path must specify an existing file that is accessible to\n the calling process or the call will fail. Also, note that links to\n files will return the same key, given the same id.\n\nBUGS\n The returned key is computed based on the device minor number and inode\n of the specified path in combination with the lower 8 bits of the given\n id. 
Thus it is quite possible for the routine to return duplicate keys.\n\nThe \"BUGS\" seems to be exactly what you guys are looking for, a somewhat\nreliable method of obtaining a system id. If that sounds evil, read \nbelow for an alternate suggestion.\n\n> The interlock has to be tightly tied to the PGDATA directory, because\n> what we're trying to protect is the files in and under that directory.\n> It seems that something based on file(s) in that directory is the way\n> to go.\n> \n> The best idea I've seen so far is Hiroshi's idea of having all the\n> backends hold fcntl locks on the same file (probably postmaster.pid\n> would do fine). Then the new postmaster can test whether any backends\n> are still alive by trying to lock the old postmaster.pid file.\n> Unfortunately, I read in the fcntl man page:\n> \n> Locks are not inherited by a child process in a fork(2) system call.\n> \n> This makes the idea much less attractive than I originally thought:\n> a new backend would not automatically inherit a lock on the\n> postmaster.pid file from the postmaster, but would have to open/lock it\n> for itself. That means there's a window where the new backend exists\n> but would be invisible to a hypothetical new postmaster.\n> \n> We could work around this with the following, very ugly protocol:\n> \n> 1. Postmaster normally maintains fcntl read lock on its postmaster.pid\n> file. Each spawned backend immediately opens and read-locks\n> postmaster.pid, too, and holds that file open until it dies. (Thus\n> wasting a kernel FD per backend, which is one of the less attractive\n> things about this.) If the backend is unable to obtain read lock on\n> postmaster.pid, then it complains and dies. We must use read locks\n> here so that all these processes can hold them separately.\n> \n> 2. If a newly started postmaster sees a pre-existing postmaster.pid\n> file, it tries to obtain a *write* lock on that file. 
If it fails,\n> conclude that an old postmaster or backend is still alive; complain\n> and quit. If it succeeds, sit for say 1 second before deleting the file\n> and creating a new one. (The delay here is to allow any just-started\n> old backends to fail to acquire read lock and quit. A possible\n> objection is that we have no way to guarantee 1 second is enough, though\n> it ought to be plenty if the lock acquisition is just after the fork.)\n> \n> One thing that worries me a little bit is that this means an fcntl\n> read-lock request will exist inside the kernel for each active backend.\n> Does anyone know of any performance problems or hard kernel limits we\n> might run into with large numbers of backends (lots and lots of fcntl\n> locks)? At least the locks are on a file that we don't actually touch\n> in the normal course of business.\n> \n> A small savings is that the backends don't actually need to open new FDs\n> for the postmaster.pid file; they can use the one they inherit from the\n> postmaster, even though they do need to lock it again. I'm not sure how\n> much that saves inside the kernel, but at least something.\n> \n> There are also the usual set of concerns about portability of flock,\n> though this time we're locking a plain file and not a socket, so it\n> shouldn't be as much trouble as it was before.\n> \n> Comments? Does anyone see a better way to do it?\n\nPossibly...\n\nWhat about encoding the shm id in the pidfile? Then one can just ask\nhow many processes are attached to that segment? 
(if it doesn't\nexist, one can assume all backends have exited)\n\nyou want the field 'shm_nattch'\n\n The shmid_ds struct is defined as follows:\n\n struct shmid_ds {\n struct ipc_perm shm_perm; /* operation permission structure */\n int shm_segsz; /* size of segment in bytes */\n pid_t shm_lpid; /* process ID of last shared memory op */\n pid_t shm_cpid; /* process ID of creator */\n short shm_nattch; /* number of current attaches */\n time_t shm_atime; /* time of last shmat() */\n time_t shm_dtime; /* time of last shmdt() */\n time_t shm_ctime; /* time of last change by shmctl() */\n void *shm_internal; /* sysv stupidity */\n };\n\n\n--\n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n", "msg_date": "Tue, 6 Mar 2001 10:22:46 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n> * Tom Lane <tgl@sss.pgh.pa.us> [010306 10:10] wrote:\n>> The shmem key is driven primarily by port number\n>> not data directory ...)\n\n> This seems like a mistake. \n\n> I'm suprised you guys aren't just using some form of the FreeBSD\n> ftok() algorithm for this:\n\nThis has been discussed before --- see the archives. The conclusion was\nthat since ftok doesn't guarantee uniqueness, it adds nothing except\nlack of predictability to the shmem key selection process. We'd still\nneed logic to cope with key collisions, and given that, we might as well\nselect keys that have some obvious relationship to user-visible\nparameters, viz the port number. As is, you can fairly easily tell\nwhich shmem segment belongs to which postmaster from the shmem key;\nwith ftok-derived keys, you couldn't tell a thing.\n\n>> Comments? Does anyone see a better way to do it?\n\n> What about encoding the shm id in the pidfile? Then one can just ask\n> how many processes are attached to that segment? 
(if it doesn't\n> exist, one can assume all backends have exited)\n\nHmm ... that might actually be a pretty good idea. A small problem is\nthat the shm key isn't yet selected at the time we initially create the\nlockfile, but I can't think of any reason that we could not go back and\nappend the key to the lockfile afterwards.\n\n> you want the field 'shm_nattch'\n\nAre there any portability problems with relying on shm_nattch to be\navailable? If not, I like this a lot...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 13:35:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010306 10:35] wrote:\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> \n> > What about encoding the shm id in the pidfile? Then one can just ask\n> > how many processes are attached to that segment? (if it doesn't\n> > exist, one can assume all backends have exited)\n> \n> Hmm ... that might actually be a pretty good idea. A small problem is\n> that the shm key isn't yet selected at the time we initially create the\n> lockfile, but I can't think of any reason that we could not go back and\n> append the key to the lockfile afterwards.\n> \n> > you want the field 'shm_nattch'\n> \n> Are there any portability problems with relying on shm_nattch to be\n> available? If not, I like this a lot...\n\nWell it's available on FreeBSD and Solaris, I'm sure Redhat has\nsome daemon that resets the value to 0 periodically just for kicks\nso it might not be viable... :)\n\nSeriously, there's some dispute on the type that 'shm_nattch' is,\nunder Solaris it's \"shmatt_t\" (unsigned long afaik), under FreeBSD\nit's 'short' (i should fix this. 
:)).\n\nBut since you're really only testing for 0'ness then it shouldn't\nreally be a problem.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n", "msg_date": "Tue, 6 Mar 2001 10:44:47 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n>> Are there any portability problems with relying on shm_nattch to be\n>> available? If not, I like this a lot...\n\n> Well it's available on FreeBSD and Solaris, I'm sure Redhat has\n> some deamon that resets the value to 0 periodically just for kicks\n> so it might not be viable... :)\n\nI notice that our BeOS and QNX emulations of shmctl() don't support\nIPC_STAT, but that could be dealt with, at least to the extent of\nstubbing it out.\n\nThis does raise the question of what to do if shmctl(IPC_STAT) fails\nfor a reason other than EINVAL. I think the conservative thing to do\nis refuse to start up. On EPERM, for example, it's possible that there\nis a postmaster running in your PGDATA but with a different userid.\n\n\n> Seriously, there's some dispute on the type that 'shm_nattch' is,\n> under Solaris it's \"shmatt_t\" (unsigned long afaik), under FreeBSD\n> it's 'short' (i should fix this. 
:)).\n\n> But since you're really only testing for 0'ness then it shouldn't\n> really be a problem.\n\nWe need not copy the value anywhere, so as long as the struct is\ncorrectly declared in the system header files I don't think it matters\nwhat the field type is ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 13:57:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Peter Eisentraut wrote:\n> Well, if you have something clever you want to do if the postmaster\n> doesn't come down after an orderly shutdown then please share it. The\n> current alternatives are 'leave running' or 'kill -9'. I know I'd prefer\n> the former.\n\nWell, my preferences aren't really relevant here. I have a job to do as\nan initscript in the RPMish environment -- and I really have to meet my\nobligations (using the first personal pronoun there to anthropomorph the\ninitscript to a person, allowing us to have a little sympathy for the\npoor shell script's plight :-)).\n\nMy preference is to let it float in limbo -- if it's in limbo and won't\ncome out, then we have bigger issues.\n\nHowever, I could do something really sneaky in the RedHat environment\nand let init do the dirty work for me -- but, again, I am not at all\nguaranteed that things will come down orderly -- if it is at all\npossible for me to bring about an orderly (if slow) shutdown that does\nterminate as the rest of the system needs it to do, then I'll attempt to\ndo so.\n\nBut, the immediate issue is preventing chaotic stops within the\ninitscript, so I'm going to experiment with things and see if I can make\nthe initscript hang -- if I can't, then I'll likely put in the 'killproc\npostmaster -INT' with escalation to -TERM if it doesn't come down within\nsixty seconds (and, no, I am not going to sleep 60 then check things --\nI am going to sleep 1 and loop sixty times) -- no need to 
unnecessarily\ndelay system shutdown (and potential restart). And I won't put in the\n-KILL unless I can find a safe and thorough way to do so.\n\nOr I may go ahead and pg_ctl-ize things and let pg_ctl do the dirty\nwork, as that IS what pg_ctl is supposed to accomplish.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 06 Mar 2001 14:07:32 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010306 11:03] wrote:\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> >> Are there any portability problems with relying on shm_nattch to be\n> >> available? If not, I like this a lot...\n> \n> > Well it's available on FreeBSD and Solaris, I'm sure Redhat has\n> > some deamon that resets the value to 0 periodically just for kicks\n> > so it might not be viable... :)\n> \n> I notice that our BeOS and QNX emulations of shmctl() don't support\n> IPC_STAT, but that could be dealt with, at least to the extent of\n> stubbing it out.\n\nWell since we already have spinlocks, I can't see why we can't\nkeep the refcount and spinlock in a special place in the shm\nfor all cases?\n\n> This does raise the question of what to do if shmctl(IPC_STAT) fails\n> for a reason other than EINVAL. I think the conservative thing to do\n> is refuse to start up. On EPERM, for example, it's possible that there\n> is a postmaster running in your PGDATA but with a different userid.\n\nYes, if possible a more meaningfull error message and pointer to\nsome docco would be nice or even a nice \"i don't care, i killed\nall the backends, just start darnit\" flag, it's really no fun at\nall to have to attempt to decypher some cryptic error message at\n3am when the database/system is acting up. 
:)\n\n> > Seriously, there's some dispute on the type that 'shm_nattch' is,\n> > under Solaris it's \"shmatt_t\" (unsigned long afaik), under FreeBSD\n> > it's 'short' (i should fix this. :)).\n> \n> > But since you're really only testing for 0'ness then it shouldn't\n> > really be a problem.\n> \n> We need not copy the value anywhere, so as long as the struct is\n> correctly declared in the system header files I don't think it matters\n> what the field type is ...\n\nYup, my point exactly.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n", "msg_date": "Tue, 6 Mar 2001 11:12:16 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Alfred Perlstein writes:\n\n> Seriously, there's some dispute on the type that 'shm_nattch' is,\n> under Solaris it's \"shmatt_t\" (unsigned long afaik), under FreeBSD\n> it's 'short' (i should fix this. :)).\n\nWhat I don't like is that my /usr/include/sys/shm.h (through other\nheaders) has:\n\ntypedef unsigned long int shmatt_t;\n\n/* Data structure describing a set of semaphores. 
*/\nstruct shmid_ds\n {\n struct ipc_perm shm_perm; /* operation permission struct */\n size_t shm_segsz; /* size of segment in bytes */\n __time_t shm_atime; /* time of last shmat() */\n unsigned long int __unused1;\n __time_t shm_dtime; /* time of last shmdt() */\n unsigned long int __unused2;\n __time_t shm_ctime; /* time of last change by shmctl() */\n unsigned long int __unused3;\n __pid_t shm_cpid; /* pid of creator */\n __pid_t shm_lpid; /* pid of last shmop */\n shmatt_t shm_nattch; /* number of current attaches */\n unsigned long int __unused4;\n unsigned long int __unused5;\n };\n\nwhereas /usr/src/linux/include/shm.h has:\n\nstruct shmid_ds {\n struct ipc_perm shm_perm; /* operation perms */\n int shm_segsz; /* size of segment (bytes) */\n __kernel_time_t shm_atime; /* last attach time */\n __kernel_time_t shm_dtime; /* last detach time */\n __kernel_time_t shm_ctime; /* last change time */\n __kernel_ipc_pid_t shm_cpid; /* pid of creator */\n __kernel_ipc_pid_t shm_lpid; /* pid of last operator */\n unsigned short shm_nattch; /* no. of current attaches */\n unsigned short shm_unused; /* compatibility */\n void *shm_unused2; /* ditto - used by DIPC */\n void *shm_unused3; /* unused */\n};\n\n\nNot only note the shm_nattch type, but also shm_segsz, and the \"unused\"\nfields in between. 
I don't know a thing about the Linux kernel sources,\nbut this doesn't seem right.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 6 Mar 2001 20:19:12 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n> * Tom Lane <tgl@sss.pgh.pa.us> [010306 11:03] wrote:\n>> I notice that our BeOS and QNX emulations of shmctl() don't support\n>> IPC_STAT, but that could be dealt with, at least to the extent of\n>> stubbing it out.\n\n> Well since we already have spinlocks, I can't see why we can't\n> keep the refcount and spinlock in a special place in the shm\n> for all cases?\n\nNo, we mustn't go there. If the kernel isn't keeping the refcount\nthen it's worse than useless: as soon as some process crashes without\ndecrementing its refcount, you have a condition that you can't recover\nfrom without reboot.\n\nWhat I'm currently imagining is that the stub implementations will just\nreturn a failure code for IPC_STAT, and the outer code will in turn fail\nwith a message along the lines of \"It looks like there's a pre-existing\nshmem block (id XXX) still in use. 
If you're sure there are no old\nbackends still running, remove the shmem block with ipcrm(1), or just\ndelete $PGDATA/postmaster.pid.\" I dunno what shmem management tools\nexist on BeOS/QNX, but deleting the lockfile will definitely suppress\nthe startup interlock ;-).\n\n> Yes, if possible a more meaningfull error message and pointer to\n> some docco would be nice\n\nIs the above good enough?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 14:24:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Peter Eisentraut wrote:\n> Not only note the shm_nattch type, but also shm_segsz, and the \"unused\"\n> fields in between. I don't know a thing about the Linux kernel sources,\n> but this doesn't seem right.\n\nRed Hat 7, right? My RedHat 7 system isn't running RH 7 right now (it's\nthis notebook that I'm running Win95 on right now), but see which RPM's\nown the two headers. You may be in for a shock. IIRC, the first system\ninclude is from the 2.4 kernel, and the second in the kernel source tree\nis from the 2.2 kernel.\n\nOdd, but not really broken. Should be fixed in the latest public beta\nof RedHat, that actually has the 2.4 kernel. 
I can't really say any\nmore about that, however.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 06 Mar 2001 14:27:12 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010306 11:30] wrote:\n> Alfred Perlstein <bright@wintelcom.net> writes:\n> > * Tom Lane <tgl@sss.pgh.pa.us> [010306 11:03] wrote:\n> >> I notice that our BeOS and QNX emulations of shmctl() don't support\n> >> IPC_STAT, but that could be dealt with, at least to the extent of\n> >> stubbing it out.\n> \n> > Well since we already have spinlocks, I can't see why we can't\n> > keep the refcount and spinlock in a special place in the shm\n> > for all cases?\n> \n> No, we mustn't go there. If the kernel isn't keeping the refcount\n> then it's worse than useless: as soon as some process crashes without\n> decrementing its refcount, you have a condition that you can't recover\n> from without reboot.\n\nNot if the postmaster outputs the following:\n\n> What I'm currently imagining is that the stub implementations will just\n> return a failure code for IPC_STAT, and the outer code will in turn fail\n> with a message along the lines of \"It looks like there's a pre-existing\n> shmem block (id XXX) still in use. If you're sure there are no old\n> backends still running, remove the shmem block with ipcrm(1), or just\n> delete $PGDATA/postmaster.pid.\" I dunno what shmem management tools\n> exist on BeOS/QNX, but deleting the lockfile will definitely suppress\n> the startup interlock ;-).\n> \n> > Yes, if possible a more meaningfull error message and pointer to\n> > some docco would be nice\n> \n> Is the above good enough?\n\nSure. 
:)\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n", "msg_date": "Tue, 6 Mar 2001 11:34:32 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> What I don't like is that my /usr/include/sys/shm.h (through other\n> headers) has [foo]\n> whereas /usr/src/linux/include/shm.h has [bar]\n\nAre those declarations perhaps bit-compatible? Looks a tad endian-\ndependent, though ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 14:36:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "* Lamar Owen <lamar.owen@wgcr.org> [010306 11:39] wrote:\n> Peter Eisentraut wrote:\n> > Not only note the shm_nattch type, but also shm_segsz, and the \"unused\"\n> > fields in between. I don't know a thing about the Linux kernel sources,\n> > but this doesn't seem right.\n> \n> Red Hat 7, right? My RedHat 7 system isn't running RH 7 right now (it's\n> this notebook that I'm running Win95 on right now), but see which RPM's\n> own the two headers. You may be in for a shock. IIRC, the first system\n> include is from the 2.4 kernel, and the second in the kernel source tree\n> is from the 2.2 kernel.\n> \n> Odd, but not really broken. Should be fixed in the latest public beta\n> of RedHat, that actually has the 2.4 kernel. I can't really say any\n> more about that, however.\n\nY'know, I was only kidding about Linux going out of its way to\ndefeat the 'shm_nattch' trick... *sigh*\n\nAs a FreeBSD developer I'm wondering if Linux keeps compatibility\ncalls around for old binaries or not. 
Any idea?\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n", "msg_date": "Tue, 6 Mar 2001 11:49:45 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010306 11:49] wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > What I don't like is that my /usr/include/sys/shm.h (through other\n> > headers) has [foo]\n> > whereas /usr/src/linux/include/shm.h has [bar]\n> \n> Are those declarations perhaps bit-compatible? Looks a tad endian-\n> dependent, though ...\n\nOf course not, the size of the struct changed (short->unsigned\nlong, basically int16_t -> uint32_t), because the kernel and userland\nin Linux are hardly in sync you have the fun of guessing if you\nget:\n\nold struct -> old syscall (ok)\nnew struct -> old syscall (boom)\nold struct -> new syscall (boom)\nnew struct -> new syscall (ok)\n\nHonestly I think this problem should be left to the vendor to fix\nproperly (if it needs fixing), the sysV API was published at least\n6 years ago, they ought to have it mostly correct by now.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n", "msg_date": "Tue, 6 Mar 2001 11:54:44 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Alfred Perlstein <bright@wintelcom.net> writes:\n> Of course not, the size of the struct changed (short->unsigned\n> long, basically int16_t -> uint32_t), because the kernel and userland\n> in Linux are hardly in sync you have the fun of guessing if you\n> get:\n\n> old struct -> old syscall (ok)\n> new struct -> old syscall (boom)\n> old struct -> new syscall (boom)\n> new struct -> new syscall (ok)\n\nUgh. 
However, it looks like it might be fairly fail-soft: if we\nhave the wrong declaration then we pick up some other field of the\nstruct, and probably end up complaining because nattch appears nonzero.\nRecovery method (clean up the shm seg or delete lockfile) is the same.\n\nI'm still inclined to go with this; it beats corrupting the WAL log,\nand the fcntl(SETLK) alternative has its own set of portability\nbooby-traps.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 15:20:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "On Tue, Mar 06, 2001 at 08:19:12PM +0100, Peter Eisentraut wrote:\n> Alfred Perlstein writes:\n> \n> > Seriously, there's some dispute on the type that 'shm_nattch' is,\n> > under Solaris it's \"shmatt_t\" (unsigned long afaik), under FreeBSD\n> > it's 'short' (i should fix this. :)).\n> \n> What I don't like is that my /usr/include/sys/shm.h (through other\n> headers) has:\n> \n> typedef unsigned long int shmatt_t;\n> \n> /* Data structure describing a set of semaphores. 
*/\n> struct shmid_ds\n> {\n> struct ipc_perm shm_perm; /* operation permission struct */\n> size_t shm_segsz; /* size of segment in bytes */\n> __time_t shm_atime; /* time of last shmat() */\n> unsigned long int __unused1;\n> __time_t shm_dtime; /* time of last shmdt() */\n> unsigned long int __unused2;\n> __time_t shm_ctime; /* time of last change by shmctl() */\n> unsigned long int __unused3;\n> __pid_t shm_cpid; /* pid of creator */\n> __pid_t shm_lpid; /* pid of last shmop */\n> shmatt_t shm_nattch; /* number of current attaches */\n> unsigned long int __unused4;\n> unsigned long int __unused5;\n> };\n> \n> whereas /usr/src/linux/include/shm.h has:\n> \n> struct shmid_ds {\n> struct ipc_perm shm_perm; /* operation perms */\n> int shm_segsz; /* size of segment (bytes) */\n> __kernel_time_t shm_atime; /* last attach time */\n> __kernel_time_t shm_dtime; /* last detach time */\n> __kernel_time_t shm_ctime; /* last change time */\n> __kernel_ipc_pid_t shm_cpid; /* pid of creator */\n> __kernel_ipc_pid_t shm_lpid; /* pid of last operator */\n> unsigned short shm_nattch; /* no. of current attaches */\n> unsigned short shm_unused; /* compatibility */\n> void *shm_unused2; /* ditto - used by DIPC */\n> void *shm_unused3; /* unused */\n> };\n> \n> \n> Not only note the shm_nattch type, but also shm_segsz, and the \"unused\"\n> fields in between. I don't know a thing about the Linux kernel sources,\n> but this doesn't seem right.\n\nOn Linux, /usr/src/linux/include is meaningless for anything in userland; \nit's meant only for building the kernel and kernel modules. That Red Hat \ntends to expose it to user-level builds is a long-standing bug in Red \nHat's distribution, in violation of the File Hierarchy Standard as well \nas explicit instructions from Linus & crew and from the maintainer of the \nC library.\n\nUser-level programs see what's in /usr/include, which only has to match \nwhat the C library wants. 
It's the C library's job to do any mapping \nneeded, and it does. The C library is maintained very, very carefully\nto keep binary compatibility with all old versions. (One sometimes\nencounters commercial programs that rely on a bug or undocumented/\nunsupported feature that disappears in a later library version.)\n\nThat is why there is no problem with version skew in the syscall\nargument structures on a correctly-configured Linux system. (On a\nRed Hat system it is very easy to get them out of sync, but RH fans \nare used to problems.)\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 6 Mar 2001 12:46:24 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Nathan Myers wrote:\n> That is why there is no problem with version skew in the syscall\n> argument structures on a correctly-configured Linux system. (On a\n> Red Hat system it is very easy to get them out of sync, but RH fans\n> are used to problems.)\n\nIs RedHat bashing really necessary here? At least they are payrolling\nSecond Chair on the Linux kernel hierarchy. And they are very\nsupportive of PostgreSQL (by shipping us with their distribution).\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 06 Mar 2001 16:20:13 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "On Tue, Mar 06, 2001 at 12:46:24PM -0800, Nathan Myers wrote:\n> \n> On Linux, /usr/src/linux/include is meaningless for anything in userland; \n> it's meant only for building the kernel and kernel modules. 
That Red Hat \n> tends to expose it to user-level builds is a long-standing bug in Red \n> Hat's distribution, in violation of the File Hierarchy Standard as well \n> as explicit instructions from Linus & crew and from the maintainer of the \n> C library.\n> \nRed Hat's Fisher Beta has split the 2 includes, which caused an error trying\nto compile a (I guess badly configured) kernel module. The header files in\n/usr/include now give an error if you try to build a kernel module that gets\nheader files from there.\n\nSo whether they were wrong in the past or not, they are now doing things the\nway you say is proper.\n\n", "msg_date": "Tue, 6 Mar 2001 13:56:25 -0800", "msg_from": "Samuel Sieb <samuel@sieb.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "On Tue 06 Mar 2001 18:56, Samuel Sieb wrote:\n> On Tue, Mar 06, 2001 at 12:46:24PM -0800, Nathan Myers wrote:\n> > On Linux, /usr/src/linux/include is meaningless for anything in userland;\n> > it's meant only for building the kernel and kernel modules. That Red Hat\n> > tends to expose it to user-level builds is a long-standing bug in Red\n> > Hat's distribution, in violation of the File Hierarchy Standard as well\n> > as explicit instructions from Linus & crew and from the maintainer of the\n> > C library.\n>\n> Red Hat's Fisher Beta has split the 2 includes, which caused an error\n> trying to compile a (I guess badly configured) kernel module. The header\n> files in /usr/include now give an error if you try to build a kernel module\n> that gets header files from there.\n>\n> So whether they were wrong in the past or not, they are now doing things\n> the way you say is proper.\n\nI am very happy to see Red Hat putting out beta releases of their \ndistribution. 
That's what's important about it all.\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told me I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nSystems administrator at math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Tue, 6 Mar 2001 19:01:48 -0300", "msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "* Lamar Owen <lamar.owen@wgcr.org> [010306 13:27] wrote:\n> Nathan Myers wrote:\n> > That is why there is no problem with version skew in the syscall\n> > argument structures on a correctly-configured Linux system. (On a\n> > Red Hat system it is very easy to get them out of sync, but RH fans\n> > are used to problems.)\n> \n> Is RedHat bashing really necessary here? At least they are payrolling\n> Second Chair on the Linux kernel hierarchy. And they are very\n> supportive of PostgreSQL (by shipping us with their distribution).\n\nJust because they do some really nice things and have some really\nnice stuff doesn't mean they should really get cut any slack for\ndoing things like shipping out of sync kernel/system headers, kill\n-9'ing databases and having programs like 'tmpwatch' running on\nthe boxes. It really shows a lack of understanding of how Unix is\nsupposed to run.\n\nWhat they really need to do is hire some grey beards (old school\nUnix folks) to QA the releases and keep stuff like this from\nhappening/shipping. 
\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n", "msg_date": "Tue, 6 Mar 2001 14:43:05 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "\n BeOS doesn't have this stat (I have a bunch of others but not this one).\n\n If I understand correctly, you want to check if there is some backend \nstill attached to shared mem segment of a given key ? In this case, I have an \neasy solution to fake the stat, because all segments have an encoded name \ncontaining this key, so I can count them.\n\n\n cyril\n\n>\n>Alfred Perlstein <bright@wintelcom.net> writes:\n>>> Are there any portability problems with relying on shm_nattch to be\n>>> available? If not, I like this a lot...\n>\n>> Well it's available on FreeBSD and Solaris, I'm sure Redhat has\n>> some deamon that resets the value to 0 periodically just for kicks\n>> so it might not be viable... :)\n>\n>I notice that our BeOS and QNX emulations of shmctl() don't support\n>IPC_STAT, but that could be dealt with, at least to the extent of\n>stubbing it out.\n>\n>This does raise the question of what to do if shmctl(IPC_STAT) fails\n>for a reason other than EINVAL. I think the conservative thing to do\n>is refuse to start up. On EPERM, for example, it's possible that there\n>is a postmaster running in your PGDATA but with a different userid.\n>\n>\n>> Seriously, there's some dispute on the type that 'shm_nattch' is,\n>> under Solaris it's \"shmatt_t\" (unsigned long afaik), under FreeBSD\n>> it's 'short' (i should fix this. 
:)).\n\n>\n>> But since you're really only testing for 0'ness then it shouldn't\n>> really be a problem.\n>\n>We need not copy the value anywhere, so as long as the struct is\n>correctly declared in the system header files I don't think it matters\n>what the field type is ...\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Wed, 07 Mar 2001 01:04:20 +0100", "msg_from": "Cyril VELTER <cyril.velter@libertysurf.fr>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "> >Alfred Perlstein <bright@wintelcom.net> writes:\n> >>> Are there any portability problems with relying on shm_nattch to be\n> >>> available? If not, I like this a lot...\n> >\n> >> Well it's available on FreeBSD and Solaris, I'm sure Redhat has\n> >> some deamon that resets the value to 0 periodically just for kicks\n> >> so it might not be viable... :)\n> >\n> >I notice that our BeOS and QNX emulations of shmctl() don't support\n> >IPC_STAT, but that could be dealt with, at least to the extent of\n> >stubbing it out.\n\n* Cyril VELTER <cyril.velter@libertysurf.fr> [010306 16:15] wrote:\n> \n> BeOS doesn't have this stat (I have a bunch of others but not this one).\n> \n> If I understand correctly, you want to check if there is some backend \n> still attached to shared mem segment of a given key ? 
In this case, I have an \n> easy solution to fake the stat, because all segment have an encoded name \n> containing this key, so I can count them.\n\nWe need to be able to take a single shared memory segment and\ndetermine if any other process is using it.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n", "msg_date": "Tue, 6 Mar 2001 16:22:10 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "Alfred Perlstein wrote:\n> What they really need to do is hire some grey beards (old school\n> Unix folks) to QA the releases and keep stuff like this from\n> happening/shipping.\n\nLike the 250-strong RedHat Beta Team, of which I am a member? :-) I\ncan't disclose the discussions on that list, but, suffice to say the\ntraffic there is at least as great as the traffic on this one.\n\nOf course, 7.1 hasn't shipped with a RedHat release yet -- and it's my\njob to make sure the postmaster gets killed properly in my initscript\ninside the package for 7.1 -- there will be no kill -9 unless it is an\nemergency to do so for postmaster.\n\nI've seen the advisories and the bug lists -- RedHat is not alone with\nbugs -- not even unusual with bugs. And every OS I know of (and you\ntoo) has had a brown paper bag release before. Even PostgreSQL, given\nits high release quality standards, has had a brown paper bag release --\nwe all still make mistakes (I know -- I've made more than my share of\nthem).\n\nAnyway, that's more than what the rest of the list wanted to read.\nReplies to private e-mail, please. 
\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 06 Mar 2001 19:29:41 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "On Tue, Mar 06, 2001 at 04:20:13PM -0500, Lamar Owen wrote:\n> Nathan Myers wrote:\n> > That is why there is no problem with version skew in the syscall\n> > argument structures on a correctly-configured Linux system. (On a\n> > Red Hat system it is very easy to get them out of sync, but RH fans\n> > are used to problems.)\n> \n> Is RedHat bashing really necessary here? \n\nI recognize that my last seven words above contributed nothing.\nIn the future I will only post strictly factual statements about\nRed Hat and similarly charged topics, and keep the opinions to\nmyself. I value the collegiality of this list too much to risk \nit further. I offer my apologies for violating it.\n\nBy the way... do they call Red Hat \"RedHat\" at Red Hat? \n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Tue, 6 Mar 2001 16:58:16 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Red Hat bashing" }, { "msg_contents": "Nathan Myers wrote:\n> it further. I offer my apologies for violating it.\n\nWell, an apology is not really necessary -- but I do get a little tired\nat the treatment a good open source company gets at the hands of open\nsource advocates. Yes, they make mistakes. Everyone does.\n \n> By the way... do they call Red Hat \"RedHat\" at Red Hat?\n\nNo, they don't. I don't know how I got into the habit of leaving out\nthe space, but the space is supposed to be there -- unless you are on\nthe Red Hat CD, where you will find a directory called 'RedHat'.\n\nOh well. Totally off topic. If the from header had your personal\naddress in it (Reply-All only lets me reply to the list for that\nmessage) I wouldn't grieve the list further with it.\n\nMy last words on that subject. 
Let's go on making PostgreSQL better. \nAnd preventing the kill -9 will make PostgreSQL better, even if it is\nmasking a certain amount of shortsightedness on a certain initscripts\nauthor's part. :-)\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Tue, 06 Mar 2001 20:16:18 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Red Hat bashing" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n\n> On Linux, /usr/src/linux/include is meaningless for anything in userland; \n> it's meant only for building the kernel and kernel modules. That Red Hat \n> tends to expose it to user-level builds is a long-standing bug in Red \n> Hat's distribution\n\n1) it isn't this way anymore\n2) this was so for most distributions for a loong time, not a \"Red\n   Hat\" bug.\n\n> in violation of the File Hierarchy Standard as well as explicit\n> instructions from Linus & crew and from the maintainer of the C\n> library.\n\nWhich obviously hasn't always been the case - the FHS isn't exactly old. \nThings have changed since then, we have followed.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "06 Mar 2001 20:50:52 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> The interlock has to be tightly tied to the PGDATA directory, because\n> what we're trying to protect is the files in and under that directory.\n> It seems that something based on file(s) in that directory is the way\n> to go.\n> \n> The best idea I've seen so far is Hiroshi's idea of having all the\n> backends hold fcntl locks on the same file (probably postmaster.pid\n> would do fine). 
Then the new postmaster can test whether any backends\n> are still alive by trying to lock the old postmaster.pid file.\n> Unfortunately, I read in the fcntl man page:\n> \n> Locks are not inherited by a child process in a fork(2) system call.\n> \n\nYes flock() works well here but fcntl() doesn't.\n\n> This makes the idea much less attractive than I originally thought:\n> a new backend would not automatically inherit a lock on the\n> postmaster.pid file from the postmaster, but would have to open/lock it\n> for itself. That means there's a window where the new backend exists\n> but would be invisible to a hypothetical new postmaster.\n> \n> We could work around this with the following, very ugly protocol:\n> \n> 1. Postmaster normally maintains fcntl read lock on its postmaster.pid\n> file. Each spawned backend immediately opens and read-locks\n> postmaster.pid, too, and holds that file open until it dies. (Thus\n> wasting a kernel FD per backend, which is one of the less attractive\n> things about this.) If the backend is unable to obtain read lock on\n> postmaster.pid, then it complains and dies. We must use read locks\n> here so that all these processes can hold them separately.\n> \n> 2. If a newly started postmaster sees a pre-existing postmaster.pid\n> file, it tries to obtain a *write* lock on that file. If it fails,\n> conclude that an old postmaster or backend is still alive; complain\n> and quit. If it succeeds, sit for say 1 second before deleting the file\n> and creating a new one. (The delay here is to allow any just-started\n> old backends to fail to acquire read lock and quit. A possible\n> objection is that we have no way to guarantee 1 second is enough, though\n> it ought to be plenty if the lock acquisition is just after the fork.)\n> \n\nI have another idea. My main point is to not remove the existent\npidfile. For example\n1) A newly started postmaster tries to obtain a write lock on the\n first byte of the pidfile. 
If it fails the postmaster quit.\n2) The postmaster tries to obtain a write lock on the second byte\n of the pidfile. If it fails the postmaster quit.\n3) The postmaster releases the lock of 2).\n4) Each backend obtains a read-lock on the second byte of the\n pidfile.\n\nRegards,\nHiroshi Inoue \n", "msg_date": "Wed, 7 Mar 2001 13:38:36 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Samuel Sieb <samuel@sieb.net> writes:\n\n> On Tue, Mar 06, 2001 at 12:46:24PM -0800, Nathan Myers wrote:\n> > \n> > On Linux, /usr/src/linux/include is meaningless for anything in userland; \n> > it's meant only for building the kernel and kernel modules. That Red Hat \n> > tends to expose it to user-level builds is a long-standing bug in Red \n> > Hat's distribution, in violation of the File Hierarchy Standard as well \n> > as explicit instructions from Linus & crew and from the maintainer of the \n> > C library.\n> > \n> Red Hat's Fisher Beta has split the 2 includes, which caused an error trying\n> to compile a (I guess badly configured) kernel module. \n\nIt was split in Red Hat Linux 7 as well.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "08 Mar 2001 11:43:16 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" } ]
[ { "msg_contents": "It used to be that if you had a pg_dump file without ACL checks, you\ncould load it as an unprivileged user. Now you get a ton of complaints\nabout those handy little \"update pg_class\" commands.\n\nI suppose this is only a cosmetic issue, but we're going to get\nquestions/complaints about it... is there any way to avoid needing\nthose UPDATEs?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 20:00:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pg_dump scripts are no longer ordinary-user friendly" }, { "msg_contents": "At 20:00 5/03/01 -0500, Tom Lane wrote:\n>I suppose this is only a cosmetic issue, but we're going to get\n>questions/complaints about it... is there any way to avoid needing\n>those UPDATEs?\n\nI definitely prefer it to match the old behaviour, and since by default we\nput triggers at the end, we could go back to the old model of only updating\npg_class in a data-only dump/restore.\n\nThis only has a problem if the user reorders the restore to put triggers at\nthe start, and then I suspect they may want them enabled anyway. If anybody\ncan see another case where this will be a problem, speak up...\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 06 Mar 2001 13:07:03 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump scripts are no longer ordinary-user friendly" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> ... 
we could go back to the old model of only updating\n> pg_class in a data-only dump/restore.\n\nWorks for me ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 21:37:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: pg_dump scripts are no longer ordinary-user friendly " }, { "msg_contents": "At 21:37 5/03/01 -0500, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> ... we could go back to the old model of only updating\n>> pg_class in a data-only dump/restore.\n>\n>Works for me ...\n>\n\nShould we have an option to turn off this feature entirely?\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 06 Mar 2001 14:23:14 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump scripts are no longer ordinary-user\n friendly" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 21:37 5/03/01 -0500, Tom Lane wrote:\n>> Philip Warner <pjw@rhyme.com.au> writes:\n>>> ... we could go back to the old model of only updating\n>>> pg_class in a data-only dump/restore.\n>> \n>> Works for me ...\n\n> Should we have an option to turn off this feature entirely?\n\nNow that you mention it, is it a feature at all? Or a bug? 
ISTM poor\nform for a data-only restore to assume it may turn off all pre-existing\ntriggers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 22:26:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: pg_dump scripts are no longer ordinary-user friendly" }, { "msg_contents": "At 22:26 5/03/01 -0500, Tom Lane wrote:\n>\n>> Should we have an option to turn off this feature entirely?\n>\n>Now that you mention it, is it a feature at all? Or a bug? ISTM poor\n>form for a data-only restore to assume it may turn off all pre-existing\n>triggers.\n\nDo you recall any of the history - why was it added in the first place? I\nvaguely recall something about doing a schema restore then data restore. In\nthis case, you need to disable triggers, but maybe that should be an option\nonly. ie. default to no messing with pg_class, but if the user requests it,\noutput code to disable triggers. \n\nThe only thing that worries me in this, is that we are changing the\nbehaviour from 7.0.\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 06 Mar 2001 14:32:15 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump scripts are no longer ordinary-user\n friendly" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 22:26 5/03/01 -0500, Tom Lane wrote:\n>> Now that you mention it, is it a feature at all? Or a bug? ISTM poor\n>> form for a data-only restore to assume it may turn off all pre-existing\n>> triggers.\n\n> Do you recall any of the history - why was it added in the first place?\n\nNo, I don't recall. 
It might be worth digging in the archives.\n\n> I vaguely recall something about doing a schema restore then data\n> restore. In this case, you need to disable triggers, but maybe that\n> should be an option only. ie. default to no messing with pg_class, but\n> if the user requests it, output code to disable triggers.\n\nWell, mumble. I guess the question is what are the triggers going to\n*do*? If they are going to cross-check against tables that may not be\nrestored yet, then you have a problem if you don't turn them off. OTOH\nit's easy to imagine that this may allow you to load inconsistent data.\n'Tis a puzzlement.\n\nFor now, I'd be happy if the normal case of a simple restore doesn't\ngenerate warnings. Improving on that probably takes more thought and\nrisk than we should be putting in at the end of beta.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 22:40:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: pg_dump scripts are no longer ordinary-user friendly" }, { "msg_contents": "On Mon, 5 Mar 2001, Tom Lane wrote:\n\n> Philip Warner <pjw@rhyme.com.au> writes:\n> > At 21:37 5/03/01 -0500, Tom Lane wrote:\n> >> Philip Warner <pjw@rhyme.com.au> writes:\n> >>> ... we could go back to the old model of only updating\n> >>> pg_class in a data-only dump/restore.\n> >> \n> >> Works for me ...\n> \n> > Should we have an option to turn off this feature entirely?\n> \n> Now that you mention it, is it a feature at all? Or a bug? 
ISTM poor\n> form for a data-only restore to assume it may turn off all pre-existing\n> triggers.\n\nThe problem is that in general if you do a schema dump and data dump\nseparately (which was the case that was put in for really), you're already\nscrewed if you've got triggers that alter or check other data unless\nyou do manual work for restore.\n\nIf you've got a trigger that logs changes, you don't want to log the \nreinserted data if you're also restoring the data for the log table.\nYou can't not restore the log table if it logs modifications because\nyou'll lose the modification data.\n\nIf you're doing any triggers that are doing anything like fk (say you\nwant to do something other than direct comparisons) you run into the\nissue of having the data not be there.\n\nIf you're doing an insert/update trigger that sets a modification date,\nyou probably don't want to blow away the modification dates on a restore.\n\nI don't think turning off triggers is a good idea, but I'm not certain\nthat turning them on always will actually be better for the average user. \nI think an option is a good idea though.\n\n\n", "msg_date": "Mon, 5 Mar 2001 19:51:49 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump scripts are no longer ordinary-user friendly" }, { "msg_contents": "At 22:40 5/03/01 -0500, Tom Lane wrote:\n>\n>For now, I'd be happy if the normal case of a simple restore doesn't\n>generate warnings. \n\nI'll commit the changes shortly.\n\n\n>Improving on that probably takes more thought and\n>risk than we should be putting in at the end of beta.\n\nAgreed.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 06 Mar 2001 14:56:24 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump scripts are no longer ordinary-user\n friendly" }, { "msg_contents": "On Mon, 5 Mar 2001, Tom Lane wrote:\n\n> Philip Warner <pjw@rhyme.com.au> writes:\n> > At 22:26 5/03/01 -0500, Tom Lane wrote:\n> >> Now that you mention it, is it a feature at all? Or a bug? ISTM poor\n> >> form for a data-only restore to assume it may turn off all pre-existing\n> >> triggers.\n> \n> > Do you recall any of the history - why was it added in the first place?\n> \n> No, I don't recall. It might be worth digging in the archives.\n\nForeign key constraints with data following the full constraint definition\nif the data was in the wrong order. \n\nUnfortunately it does allow invalid data to be loaded, but for circular\ncases I'm not sure how you can do this safely. I guess for fk, if all the\ndata loading was in a single transaction and you did something to override\nthe normal deferrable-ness of the constraint and forced the constraints to\nbe deferred, it would check at the end of the full load. This still\nbreaks for multiple dump files per table and for other random user\ntriggers that are unsafe on restore though.\n\n\n\n", "msg_date": "Mon, 5 Mar 2001 20:04:11 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Re: pg_dump scripts are no longer ordinary-user friendly" } ]
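[Editor's note] For readers wondering what the "handy little update pg_class commands" at issue look like: 7.0-era pg_dump data-only output bracketed each table's COPY with direct catalog updates of roughly the shape below (reconstructed from memory, so treat the exact spelling as approximate). Because only a superuser may update pg_class, a script containing them cannot be restored by an ordinary user — which is the complaint that opens the thread.

```
-- Disable triggers before loading (requires superuser):
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'mytable';

COPY mytable FROM stdin;
-- ... data ...
\.

-- Re-enable by recounting the table's triggers afterwards:
UPDATE pg_class SET reltriggers = (
    SELECT count(*) FROM pg_trigger WHERE pg_class.oid = tgrelid
) WHERE relname = 'mytable';
```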
[ { "msg_contents": "I wonder if the new Tips at the bottom of email messages can be enabled\nfor users during their first 30 days of mailing list subscription, then\nnot appear?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Mar 2001 22:00:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "mailing list messages" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I wonder if the new Tips at the bottom of email messages can be enabled\n> for users during their first 30 days of mailing list subscription, then\n> not appear?\n\nI'm pretty durn tired of 'em, and it's not been 30 days yet ;-)\n\nA subscription option to turn 'em off seems sufficient, though.\nWe don't need to automate the choice for people.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 22:23:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mailing list messages " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I wonder if the new Tips at the bottom of email messages can be enabled\n> > for users during their first 30 days of mailing list subscription, then\n> > not appear?\n> \n> I'm pretty durn tired of 'em, and it's not been 30 days yet ;-)\n\nI hear you.\n\n> A subscription option to turn 'em off seems sufficient, though.\n> We don't need to automate the choice for people.\n\nI figured it is a pain to have people do that when 95% of current users\ndon't need it now anyway because they have been around for >30 days.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Mar 2001 22:35:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: mailing list messages" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I wonder if the new Tips at the bottom of email messages can be enabled\n> > for users during their first 30 days of mailing list subscription, then\n> > not appear?\n> \n> I'm pretty durn tired of 'em, and it's not been 30 days yet ;-)\n\nI think the tips would be greatly enhanced if there was a 25% chance\nthat they included the output of the fortune program.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 57: I have discovered the art of deceiving diplomats. I tell them the truth\nand they never believe me.\n\t\t-- Camillo Di Cavour\n", "msg_date": "06 Mar 2001 09:06:26 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: mailing list messages" }, { "msg_contents": "Bruce Momjian writes:\n\n> I wonder if the new Tips at the bottom of email messages can be enabled\n> for users during their first 30 days of mailing list subscription, then\n> not appear?\n\nI once again refer people to RFC 2369\n<http://www.faqs.org/rfcs/rfc2369.html> about how to embed email list\nmanagement information into messages. Secondly, I would also tolerate the\n\"monthly reminders\" that some list managers send out (e.g., GNU, Great\nBridge). 
At the very least, though, the tips should be preceded by a\n'<dash><dash><space><newline>' sequence so that some mail readers can\nstrip them off in replies.\n\nI filter out the tips anyway, so I really don't care a lot.\n\n# for procmail users\n:0 Bbf\n| sed -n -e '/^---------------------------(end of broadcast)---------------------------$/q' -e 'p'\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 6 Mar 2001 18:18:06 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: mailing list messages" }, { "msg_contents": "\noh, let me add that as a TIP :)\n\nOn Tue, 6 Mar 2001, Peter Eisentraut wrote:\n\n> Bruce Momjian writes:\n>\n> > I wonder if the new Tips at the bottom of email messages can be enabled\n> > for users during their first 30 days of mailing list subscription, then\n> > not appear?\n>\n> I once again refer people to RFC 2369\n> <http://www.faqs.org/rfcs/rfc2369.html> about how to embed email list\n> management information into messages. Secondly, I would also tolerate the\n> \"monthly reminders\" that some list managers send out (e.g., GNU, Great\n> Bridge). At the very least, though, the tips should be preceded by a\n> '<dash><dash><space><newline>' sequence so that some mail readers can\n> strip them off in replies.\n>\n> I filter out the tips anyway, so I really don't care a lot.\n>\n> # for procmail users\n> :0 Bbf\n> | sed -n -e '/^---------------------------(end of broadcast)---------------------------$/q' -e 'p'\n>\n> --\n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\nMarc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Tue, 6 Mar 2001 14:06:29 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: mailing list messages" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> I wonder if the new Tips at the bottom of email messages can be enabled\n> for users during their first 30 days of mailing list subscription, then\n> not appear?\n\nWhat about having some basic _PostgreSQL_ tips in there? This would be\nespecially cute for -novice, I think.\n\nWe must all be able to come up with 100 or so little one or two liners\nabout PostgreSQL can't we?\n\nJust a thought,\n Andrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n", "msg_date": "Wed, 07 Mar 2001 07:47:45 +1300", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: mailing list messages" }, { "msg_contents": "On Wed, 7 Mar 2001, Andrew McMillan wrote:\n\n> Bruce Momjian wrote:\n> >\n> > I wonder if the new Tips at the bottom of email messages can be enabled\n> > for users during their first 30 days of mailing list subscription, then\n> > not appear?\n>\n> What about having some basic _PostgreSQL_ tips in there? 
This would be\n> especially cute for -novice, I think.\n>\n> We must all be able to come up with 100 or so little one or two liners\n> about PostgreSQL can't we?\n\nSince Peter has shown how easy it is to get rid of the TIPs for those that\ndont' like it, I think that's a cool idea :)\n\n\n\n", "msg_date": "Tue, 6 Mar 2001 14:56:32 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: mailing list messages" } ]
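[Editor's note] Peter's pointer to RFC 2369 concerns machine-readable list-management headers rather than footer text. The RFC defines List-Help, List-Subscribe, List-Unsubscribe, List-Post, List-Owner and List-Archive; applied to this list they would look something like the following (addresses here are illustrative, not the list server's documented configuration):

```
List-Help: <mailto:majordomo@postgresql.org?body=help>
List-Post: <mailto:pgsql-hackers@postgresql.org>
List-Unsubscribe: <mailto:majordomo@postgresql.org?body=unsubscribe%20pgsql-hackers>
```

A mail client that understands these can offer help/unsubscribe actions directly, making per-message tips unnecessary.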
[ { "msg_contents": "I have just sent to the pgsql-patches list a rather large set of\nproposed diffs for the WAL code. These changes:\n\n* Store two past checkpoint locations, not just one, in pg_control.\n On startup, we fall back to the older checkpoint if the newer one\n is unreadable. Also, a physical copy of the newest checkpoint record\n is kept in pg_control for possible use in disaster recovery (ie,\n complete loss of pg_xlog). Also add a version number for pg_control\n itself. Remove archdir from pg_control; it ought to be a GUC\n parameter, not a special case (not that it's implemented yet anyway).\n\n* Suppress successive checkpoint records when nothing has been entered\n in the WAL log since the last one. This is not so much to avoid I/O\n as to make it actually useful to keep track of the last two\n checkpoints. If the things are right next to each other then there's\n not a lot of redundancy gained...\n\n* Change CRC scheme to a true 64-bit CRC, not a pair of 32-bit CRCs\n on alternate bytes.\n\n* Fix XLOG record length handling so that it will work at BLCKSZ = 32k.\n\n* Change XID allocation to work more like OID allocation, so that we\n can flush XID alloc info to the log before there is any chance an XID\n will appear in heap files.\n\n* Fix a number of minor bugs, such as off-by-one logic for XLOG file\n wraparound at the 4 gig mark.\n\n* Add documentation and clean up some coding infelicities; move file\n format declarations out to include files where planned contrib\n utilities can get at them.\n\n\nBefore committing this stuff, I intend to prepare a contrib utility that\ncan be used to reset pg_control and pg_xlog. 
This is mainly for\ndisaster recovery purposes, but as a side benefit it will allow people\nto update 7.1beta installations to this new code without doing initdb.\nI need to update contrib/pg_controldata, too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 05 Mar 2001 23:39:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Proposed WAL changes" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> I have just sent to the pgsql-patches list a rather large set of\n> proposed diffs for the WAL code. These changes:\n> \n> * Store two past checkpoint locations, not just one, in pg_control.\n> On startup, we fall back to the older checkpoint if the newer one\n> is unreadable. Also, a physical copy of the newest checkpoint record\n> is kept in pg_control for possible use in disaster recovery (ie,\n> complete loss of pg_xlog). Also add a version number for pg_control\n> itself. Remove archdir from pg_control; it ought to be a GUC\n> parameter, not a special case (not that it's implemented yet anyway).\n> \n\nIs archdir really a GUC parameter ?\n\nRegards,\nHiroshi Inoue \n", "msg_date": "Tue, 6 Mar 2001 23:55:35 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: Proposed WAL changes" }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n>> Remove archdir from pg_control; it ought to be a GUC\n>> parameter, not a special case (not that it's implemented yet anyway).\n\n> Is archdir really a GUC parameter ?\n\nWhy shouldn't it be? I see nothing wrong with changing it on-the-fly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 10:33:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Proposed WAL changes " }, { "msg_contents": "Please forgive me if I'm misunderstanding something of these rather\ncomplex issues. 
But I think this is an important question from the\nperspective of a sytem administrator responsible for the safety and\nuncorruptedness of his users' data.\n\nIf I understand correctly, it is possible, through power failure,\nPostgres or OS crash, or disk failure, to end up with a situation\nwhere Postgres cannot put the database in a consistent state. Rather\nthan failing to start at all, something will be put in place that\nallows PG to partially recover and start up, so that you can get to\nyour data. I think that leaves DBAs wondering two things:\n\nFirst, how will I know that my database is corrupted? I may not be\npresent to witness the power failure, unattended reboot, and automatic\nrestart/quasi-recovery. If the DB has become inconsistent, it is\ncritical that users do not continue to use it. I'm all for having\nPostgres throw up its hands and refuse to start until I put it in\ndisaster-dump mode.\n\nSecondly, since disaster-dump seems to be a good term for it, is there\nsome way to know the extent of the damage? I.e. in the typical case\nof power failure, is the inconsistent part just the \"recent\" data,\nand would it be possible to find out which records are part of that\ndamaged set?\n-- \nChristopher Masto Senior Network Monkey NetMonger Communications\nchris@netmonger.net info@netmonger.net http://www.netmonger.net\n\nFree yourself, free your machine, free the daemon -- http://www.freebsd.org/\n", "msg_date": "Thu, 8 Mar 2001 17:47:48 -0500", "msg_from": "Christopher Masto <chris@netmonger.net>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes" } ]
[ { "msg_contents": "\n> 1. A new transaction inserts a tuple. The tuple is entered into its\n> heap file with the new transaction's XID, and an associated WAL log\n> entry is made. Neither one of these are on disk yet --- the heap tuple\n> is in a shmem disk buffer, and the WAL entry is in the shmem \n> WAL buffer.\n> \n> 2. Now do a lot of read-only operations, in the same or another backend.\n> The WAL log stays where it is, but eventually the shmem disk buffer will\n> get flushed to disk so that the buffer can be re-used for some other\n> disk page.\n> \n> 3. Assume we now crash. Now, we have a heap tuple on disk with an XID\n> that does not correspond to any XID visible in the on-disk WAL log.\n> \n> 4. Upon restart, WAL will initialize the XID counter to the first XID\n> not seen in the WAL log. Guess which one that is.\n> \n> 5. We will now run a new transaction with the same XID that was in use\n> before the crash. If that transaction commits, then we have a tuple on\n> disk that will be considered valid --- and should not be.\n\nI do not think this is true. Before any modification to a page the original page will be \nwritten to the log (aka physical log).\nOn startup rollforward this original page, that does not contain the inserted\ntuple with the stale XID is rewritten over the modified page.\n\nAndreas\n\nPS: I thus object to your proposed XID allocation change\n", "msg_date": "Tue, 6 Mar 2001 09:25:08 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: WAL-based allocation of XIDs is insecure" }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > 1. A new transaction inserts a tuple. The tuple is entered into its\n> > heap file with the new transaction's XID, and an associated WAL log\n> > entry is made. Neither one of these are on disk yet --- the heap tuple\n> > is in a shmem disk buffer, and the WAL entry is in the shmem\n> > WAL buffer.\n> >\n> > 2. 
Now do a lot of read-only operations, in the same or another backend.\n> > The WAL log stays where it is, but eventually the shmem disk buffer will\n> > get flushed to disk so that the buffer can be re-used for some other\n> > disk page.\n> >\n> > 3. Assume we now crash. Now, we have a heap tuple on disk with an XID\n> > that does not correspond to any XID visible in the on-disk WAL log.\n> >\n> > 4. Upon restart, WAL will initialize the XID counter to the first XID\n> > not seen in the WAL log. Guess which one that is.\n> >\n> > 5. We will now run a new transaction with the same XID that was in use\n> > before the crash. If that transaction commits, then we have a tuple on\n> > disk that will be considered valid --- and should not be.\n> \n> I do not think this is true. Before any modification to a page the original page will be\n> written to the log (aka physical log).\n\nYes there must be XLogFlush() before writing buffers.\nBTW how do we get the next XID if WAL files are corrupted ?\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Tue, 06 Mar 2001 19:21:26 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: AW: WAL-based allocation of XIDs is insecure" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> 5. We will now run a new transaction with the same XID that was in use\n>> before the crash. If that transaction commits, then we have a tuple on\n>> disk that will be considered valid --- and should not be.\n\n> I do not think this is true. Before any modification to a page the\n> original page will be written to the log (aka physical log).\n\nHmm. Actually, what is written to the log is the *modified* page not\nits original contents. However, on studying the buffer manager I see\nthat it tries to fsync the log entry describing the last mod to a data\npage before it writes out the page itself. 
So perhaps that can be\nrelied on to ensure all XIDs known in the heap are known in the log.\n\nHowever, I'd just as soon have the NEXTXID log records too to be doubly\nsure. I do now agree that we needn't fsync the NEXTXID records,\nhowever.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 10:20:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: WAL-based allocation of XIDs is insecure " }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Yes there must be XLogFlush() before writing buffers.\n> BTW how do we get the next XID if WAL files are corrupted ?\n\nMy not-yet-committed changes include storing the latest CheckPoint\nrecord in pg_control (as well as in the WAL files). Recovery from\nXLOG disaster will consist of generating a new XLOG that's empty\nexcept for a CheckPoint record based on the one cached in pg_control.\nIn particular we can extract the nextOid and nextXid fields.\n\nIt might be that writing NEXTXID or NEXTOID log records should update\npg_control too with new nextXid/nextOid values --- what do you think?\nOtherwise there's a possibility that the stored checkpoint is too far\nback to cover all the values used since then. OTOH, we are not going\nto be able to guarantee absolute consistency in this disaster recovery\nscenario anyway; duplicate XIDs may be the least of one's worries.\n\nOf course, if you lose both XLOG and pg_control, you're still in big\ntrouble. So it seems we should minimize the number of writes to\npg_control, which is an argument not to update it more than we must.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 10:28:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: WAL-based allocation of XIDs is insecure " } ]
[ { "msg_contents": "\nThis just came to the webmaster mailbox:\n\n-------\nMost of the top banner links on http://jdbc.postgresql.org (like\nDocumentation, Tutorials, Resources, Development) throw up 404s if\nfollowed. Thought you ought to know.\n\nStill trying to find the correct driverClass/connectString for the\nPostgres JDBC driver...\n-------\n\nWho maintains this site? It's certainly not me. From looking\nat the page I'm guessing Peter Mount, can we get some kind of\nprominent contact info on it? I've had a few emails on it so\nfar.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 6 Mar 2001 06:11:46 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Banner links not working (fwd)" } ]
[ { "msg_contents": "\n> > > 5. We will now run a new transaction with the same XID that was in use\n> > > before the crash. If that transaction commits, then we have a tuple on\n> > > disk that will be considered valid --- and should not be.\n> > \n> > I do not think this is true. Before any modification to a page the original page will be\n> > written to the log (aka physical log).\n> \n> Yes there must be XLogFlush() before writing buffers.\n> BTW how do we get the next XID if WAL files are corrupted ?\n\nNormally:\n1. pg_control checkpoint info\n2. checkpoint record in WAL ?\n3. then rollforward of WAL\n\nIf WAL is corrupt the only way to get a consistent state is to bring the\ndb into a state as it was during last good checkpoint. But this is only possible\nif you can at least read all \"physical log\" records from WAL.\n\nFailing that, the only way would probably be to scan all heap files for XID's that are \ngreater than the XID from checkpoint.\n\nI think the utility Tom has in mind, that resets WAL, will allow you to dump the db\nso you can initdb and reload. I don't think it is intended that you can immediately \nresume operation, (unless of course for the mentioned case of an upgrade with\na good checkpoint as last WAL record (== proper shutdown)).\n\nAndreas\n", "msg_date": "Tue, 6 Mar 2001 12:16:34 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: WAL-based allocation of XIDs is insecure" } ]
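The normal recovery sequence Andreas lists in the thread above (checkpoint info from pg_control, the checkpoint record in WAL, then rollforward) reduces to a small redo loop: restore the checkpoint snapshot, then blindly re-apply every later record. This is a toy model — the control-file and record formats are invented for illustration.

```python
# Toy rollforward: start from the last checkpoint recorded in a control file,
# then re-apply every log record with a later LSN.

def recover(control, log):
    """control: {'checkpoint_lsn': int, 'snapshot': pages as of the checkpoint}
    log: list of (lsn, page_id, contents) records in LSN order."""
    pages = dict(control["snapshot"])      # state as of the checkpoint
    for lsn, page_id, contents in log:
        if lsn <= control["checkpoint_lsn"]:
            continue                       # already reflected in the snapshot
        pages[page_id] = contents          # redo is blind re-application
    return pages
```

If the tail of the log is unreadable, stopping the loop early yields the consistent-as-of-some-earlier-point state the thread discusses; what is lost is only work after the last readable record.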
[ { "msg_contents": "\n> >> 5. We will now run a new transaction with the same XID that was in use\n> >> before the crash. If that transaction commits, then we have a tuple on\n> >> disk that will be considered valid --- and should not be.\n> \n> > I do not think this is true. Before any modification to a page the\n> > original page will be written to the log (aka physical log).\n> \n> Hmm. Actually, what is written to the log is the *modified* page not\n> its original contents.\n\nWell, that sure is not what was discussed on the list for implementation !!\nThe physical log page should be the page as it was during the last checkpoint. \nAnything else would also not have the benefit of fixing the index page problem \nthis solution was intended to fix in the first place. I thus really doubt above statement.\n\n> However, on studying the buffer manager I see\n> that it tries to fsync the log entry describing the last mod to a data\n> page before it writes out the page itself. So perhaps that can be\n> relied on to ensure all XIDs known in the heap are known in the log.\n\nEach page about to be modified should be written to the txlog once,\nand only once before the first modification after each checkpoint.\n\nDuring rollforward the pages are written back to the heap, thus no open\nXIDs can be in heap pages.\n\n> However, I'd just as soon have the NEXTXID log records too to be doubly\n> sure. I do now agree that we needn't fsync the NEXTXID records,\n> however.\n\nI do not really see an additional benefit. If the WAL is busted those records are \nlikely busted too.\n\nAndreas\n", "msg_date": "Tue, 6 Mar 2001 17:05:22 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: WAL-based allocation of XIDs is insecure " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> Hmm. 
Actually, what is written to the log is the *modified* page not\n>> its original contents.\n\n> I thus really doubt above statement.\n\nRead the code.\n\n> Each page about to be modified should be written to the txlog once,\n> and only once before the first modification after each checkpoint.\n\nYes, there's only one page dump per page per checkpoint. But the\nsequence is (1) make the modification in shmem buffers then (2) make\nthe XLOG entry. \n\nI believe this is OK since the XLOG entry is flushed before any of\nthe pages it affects are written out from shmem. Since we have not\nchanged the storage management policy, it's OK if heap pages contain\nchanges from uncommitted transactions --- all we must avoid is\ninconsistencies (eg not all three pages of a btree split written out),\nand redo of the XLOG entry will ensure that for us.\n\n>> However, I'd just as soon have the NEXTXID log records too to be doubly\n>> sure. I do now agree that we needn't fsync the NEXTXID records,\n>> however.\n\n> I do not really see an additional benefit. If the WAL is busted those\n> records are likely busted too.\n\nThe point is to make the allocation of XIDs and OIDs work the same way.\nIn particular, if we are forced to reset the XLOG using what's stored in\npg_control, it would be good if what's stored in pg_control is a value\nbeyond the last-used XID/OID, not a value less than the last-used ones.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 11:17:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: WAL-based allocation of XIDs is insecure " } ]
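The "only one page dump per page per checkpoint" bookkeeping Tom describes can be sketched as follows. In the real code the decision is folded into XLogInsert and, as Tom notes, it is the modified page that gets logged; this toy shows only the once-per-checkpoint tracking, with invented record shapes.

```python
# Toy "physical log": the first time a page is touched after a checkpoint,
# a full page image goes to the log; later touches log only the change.

class PhysicalLog:
    def __init__(self):
        self.records = []
        self.dumped_since_ckpt = set()

    def checkpoint(self):
        self.records.append(("CHECKPOINT",))
        self.dumped_since_ckpt.clear()     # next touch of any page dumps again

    def log_change(self, page_id, page_image, change):
        if page_id not in self.dumped_since_ckpt:
            self.records.append(("FULL_PAGE", page_id, page_image))
            self.dumped_since_ckpt.add(page_id)
        self.records.append(("CHANGE", page_id, change))
```

The full-page image is what protects against the torn-page problem the thread alludes to: redo can restore the whole page before applying the change records that follow it.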
[ { "msg_contents": "\n> >> Remove archdir from pg_control; it ought to be a GUC\n> >> parameter, not a special case (not that it's implemented \n> yet anyway).\n> \n> > Is archdir really a GUC parameter ?\n> \n> Why shouldn't it be? I see nothing wrong with changing it on-the-fly.\n\nYes, I think this is a good change, like all others except XID assignment :-)\n\nAndreas\n", "msg_date": "Tue, 6 Mar 2001 17:09:30 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Proposed WAL changes " }, { "msg_contents": "> -----Original Message-----\n> From: Zeugswetter Andreas SB\n> \n> > >> Remove archdir from pg_control; it ought to be a GUC\n> > >> parameter, not a special case (not that it's implemented \n> > yet anyway).\n> > \n> > > Is archdir really a GUC parameter ?\n> > \n> > Why shouldn't it be? I see nothing wrong with changing it on-the-fly.\n> \n> Yes, I think this is a good change, like all others except XID \n> assignment :-)\n> \n\nCould GUC parameters be changed permanently e.g. by SET command ?\n\nFor example,\n1) start postmaster\n2) set archdir to '....'\n3) shutdown postmaster\n\nDoes PostgreSQL remember the archdir parameter ?\n\nRegards,\nHiroshi Inoue\n \n", "msg_date": "Wed, 7 Mar 2001 13:38:42 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: Proposed WAL changes " }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> Could GUC parameters be changed permanently e.g. by SET command ?\n\nThat's what postgresql.conf is for ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 00:16:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > Could GUC parameters be changed permanently e.g. 
by SET command ?\n> \n> That's what postgresql.conf is for ...\n> \n\nDo I have to send SIGHUP after changing postgresql.conf ?\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Wed, 07 Mar 2001 16:06:25 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Do I have to send SIGHUP after changing postgresql.conf ?\n\nIn general, yes. I think that for the (still vaporware) archdir option,\nyou might not need to: archdir will only be looked at by the checkpoint\nsubprocess, and I think that a newly spawned backend will reread\npostgresql.conf anyway. Peter, is that correct?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 10:36:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes " }, { "msg_contents": "Tom Lane writes:\n\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Do I have to send SIGHUP after changing postgresql.conf ?\n>\n> In general, yes. I think that for the (still vaporware) archdir option,\n> you might not need to: archdir will only be looked at by the checkpoint\n> subprocess, and I think that a newly spawned backend will reread\n> postgresql.conf anyway. Peter, is that correct?\n\nNope. The configuration file is only read at postmaster start and after\nSIGHUP. If any starting backend would read it automatically, the admin\ncould never be sure about his edits.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 7 Mar 2001 17:57:42 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes " } ]
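The thread above settles on postgresql.conf — read at postmaster start and on SIGHUP — as the home for settings like the proposed archdir. A minimal parser for that file's `name = value` format looks like this; it is deliberately simplified (full-line `#` comments only, single quotes stripped), and real GUC parsing handles much more.

```python
# Minimal sketch of parsing a postgresql.conf-style settings file.
# Simplifications: no trailing comments, no escape rules inside quotes.

def parse_conf(text):
    settings = {}
    for raw in text.splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue                        # blank line or comment
        name, _, value = line.partition("=")
        settings[name.strip()] = value.strip().strip("'")
    return settings
```

A reload-on-SIGHUP daemon would simply call something like `parse_conf` again from its signal handler; the point of the thread is that nothing rereads the file automatically in between.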
[ { "msg_contents": "\n> >> Hmm. Actually, what is written to the log is the *modified* page not\n> >> its original contents.\n> > Well, that sure is not what was discussed on the list for implementation !!\n> > I thus really doubt above statement.\n> \n> Read the code.\n\nOk, sad.\n\n> \n> > Each page about to be modified should be written to the txlog once,\n> > and only once before the first modification after each checkpoint.\n> \n> Yes, there's only one page dump per page per checkpoint. But the\n> sequence is (1) make the modification in shmem buffers then (2) make\n> the XLOG entry. \n> \n> I believe this is OK since the XLOG entry is flushed before any of\n> the pages it affects are written out from shmem. Since we have not\n> changed the storage management policy, it's OK if heap pages contain\n> changes from uncommitted transactions\n\nSure, but the other way would be a lot less complex.\n \n> --- all we must avoid is\n> inconsistencies (eg not all three pages of a btree split written out),\n> and redo of the XLOG entry will ensure that for us.\n\nIs it so hard to swap ? First write page to log then modify in shmem. \nThen those pages would have additional value, because\nthen utilities could do all sorts of things with those pages.\n\n1. Create a consistent state of the db by only applying \"physical log\" pages\n\tafter checkpoint (in case a complete WAL rollforward bails out)\n2. Create a consistent online backup snapshot, by first doing something like an \n\tordinary tar, and after that save all \"physical log\" pages.\n\nAndreas\n", "msg_date": "Tue, 6 Mar 2001 17:38:50 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: WAL-based allocation of XIDs is insecure " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> Is it so hard to swap ? First write page to log then modify in shmem. 
\n> Then those pages would have additional value, because\n> then utilities could do all sorts of things with those pages.\n\nAfter thinking about this a little, I believe I see why Vadim did it\nthe way he did. Suppose we tried to make the code sequence be\n\n\tobtain write lock on buffer;\n\tXLogOriginalPage(buffer); // copy page to xlog if first since ckpt\n\tmodify buffer;\n\tXLogInsert(xlog entry for modification);\n\tmark buffer dirty and release write lock;\n\nso that the saving of the original page is a separate xlog entry from\nthe modification data. Looks easy, and it'd sure simplify XLogInsert\na lot. The only problem is it's wrong. What if a checkpoint occurs\nbetween the two XLOG records?\n\nThe decision whether to log the whole buffer has to be atomic with the\nactual entry of the xlog record. Unless we want to hold the xlog insert\nlock for the entire time that we're (eg) splitting a btree page, that\nmeans we log the buffer after the modification work is done, not before.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 11:58:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: WAL-based allocation of XIDs is insecure " }, { "msg_contents": "I wrote:\n> The decision whether to log the whole buffer has to be atomic with the\n> actual entry of the xlog record. Unless we want to hold the xlog insert\n> lock for the entire time that we're (eg) splitting a btree page, that\n> means we log the buffer after the modification work is done, not before.\n\nOn third thought --- we could still log the original page contents and\nthe modification log record atomically, if what were logged in the xlog\nrecord were (essentially) the parameters to the operation being logged,\nnot its results. That is, make the log entry before you start doing the\nmod work, not after. This might also simplify redo, since redo would be\nno different from the normal case. 
I'm not sure why Vadim didn't choose\nto do it that way; maybe there's some other fine point I'm missing.\n\nIn any case, it'd be a big code change and not something I'd want to\nundertake at this point in the release cycle ... maybe we can revisit\nthis issue for 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 12:31:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: WAL-based allocation of XIDs is insecure " } ]
[ { "msg_contents": "\n> After thinking about this a little, I believe I see why Vadim did it\n> the way he did. Suppose we tried to make the code sequence be\n> \n> \tobtain write lock on buffer;\n> \tXLogOriginalPage(buffer); // copy page to xlog if first since ckpt\n> \tmodify buffer;\n> \tXLogInsert(xlog entry for modification);\n> \tmark buffer dirty and release write lock;\n> \n> so that the saving of the original page is a separate xlog entry from\n> the modification data. Looks easy, and it'd sure simplify XLogInsert\n> a lot. The only problem is it's wrong. What if a checkpoint occurs\n> between the two XLOG records?\n> \n> The decision whether to log the whole buffer has to be atomic with the\n> actual entry of the xlog record. Unless we want to hold the xlog insert\n> lock for the entire time that we're (eg) splitting a btree page, that\n> means we log the buffer after the modification work is done, not before.\n\nYes, I see. Can't currently come up with a workaround either. 
Hmm ..\nDuplicating the buffer is probably not a workable solution.\n\nI do not however see how the current solution fixes the original problem,\nthat we don't have a rollback for index modifications.\nThe index would potentially point to an empty heaptuple slot.\nWhen this slot, because marked empty is reused after startup, the index points \nto the wrong record.\nUnless of course startup rollforward visits all heap pages pointed at\nby index xlog records and inserts a tuple into heap marked deleted.\n\nAdditionally I do not see how this all works for userland index types.\n\nIn short I do not think that the current implementation of \"physical log\" does\nwhat it was intended to do :-(\n\nAndreas\n", "msg_date": "Tue, 6 Mar 2001 18:46:30 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: AW: WAL-based allocation of XIDs is insecur\n\te" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> I do not however see how the current solution fixes the original problem,\n> that we don't have a rollback for index modifications.\n> The index would potentially point to an empty heaptuple slot.\n\nHow? There will be an XLOG entry inserting the heap tuple before the\nXLOG entry that updates the index. Rollforward will redo both. The\nheap tuple might not get committed, but it'll be there.\n\n> Additionally I do not see how this all works for userland index types.\n\nNone of it works for index types that don't do XLOG entries (which I\nthink may currently be true for everything except btree :-( ...). I\ndon't see how that changes if we alter the way this bit is done.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 13:18:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: AW: WAL-based allocation of XIDs is insecur e " } ]
[ { "msg_contents": "Why do I get a message about COBOL when doing a gmake install and what\nam I supposed to do about it?\n\nThis is on a Solaris 2.7 system.\n\nThanks,\n\nJarom\n\nP.S. The make was successful.\n\ngmake[3]: `SUBSYS.o' is up to date.\ngmake[3]: Leaving directory\n`/data/postgresql-7.0.3/src/backend/utils/time'\ngmake[2]: Leaving directory `/data/postgresql-7.0.3/src/backend/utils'\n/usr/local/bin/install -c -m 555 postgres /usr/local/pgsql/bin/postgres\nYou must have a COBOL system present to install this product\ngmake[1]: *** [install-bin] Error 1\ngmake[1]: Leaving directory `/data/postgresql-7.0.3/src/backend'\ngmake: *** [install] Error 2\n", "msg_date": "Tue, 06 Mar 2001 14:02:40 -0500", "msg_from": "Jarom Hagen <jhagen@telematch.com>", "msg_from_op": true, "msg_subject": "COBOL" }, { "msg_contents": "Jarom Hagen <jhagen@telematch.com> writes:\n> /usr/local/bin/install -c -m 555 postgres /usr/local/pgsql/bin/postgres\n> You must have a COBOL system present to install this product\n\nWeird. It looks like you have some exceedingly nonstandard program\nin /usr/local/bin/install --- certainly not what configure thought that\nthat program would do, anyway. Do you know where that program came from\n(perhaps a Sun COBOL package)?\n\nA nondestructive workaround would be to hand-edit src/Makefile.global's\nINSTALL variable to refer to our install-sh script (also in src/) rather\nthan /usr/local/bin/install. However, that install is going to bite a\nlot of other open-source packages that expect to find a standard-ish\ninstall script available, so I'd suggest deleting or at least renaming\nit...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 19:38:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: COBOL " }, { "msg_contents": "Jarom Hagen <jhagen@telematch.com> writes:\n> Yes, we have an evil COBOL compiler from MicroFocus that put that install\n> script there. 
I was really confused why postgres wanted a COBOL system. :-)\n\nI've suggested a couple of times that since we include install-sh in our\ndistro anyway, it's pointless and unnecessarily risky to go looking for\na platform-supplied install program. However, I could never quite get\nanyone else to see the reasoning. Now that I have this sterling example\nto point to, I'm going to start rattling the cage again. Why don't we\nget rid of the configure-time search for 'install', and just always use\nour own script?\n\n\t\t\tregards, tom lane\n\n\n> On Wed, Mar 07, 2001 at 07:38:30PM -0500, Tom Lane wrote:\n>> Jarom Hagen <jhagen@telematch.com> writes:\n>>> /usr/local/bin/install -c -m 555 postgres /usr/local/pgsql/bin/postgres\n>>> You must have a COBOL system present to install this product\n>> \n>> Weird. It looks like you have some exceedingly nonstandard program\n>> in /usr/local/bin/install --- certainly not what configure thought that\n>> that program would do, anyway. Do you know where that program came from\n>> (perhaps a Sun COBOL package)?\n>> \n>> A nondestructive workaround would be to hand-edit src/Makefile.global's\n>> INSTALL variable to refer to our install-sh script (also in src/) rather\n>> than /usr/local/bin/install. However, that install is going to bite a\n>> lot of other open-source packages that expect to find a standard-ish\n>> install script available, so I'd suggest deleting or at least renaming\n>> it...\n", "msg_date": "Thu, 08 Mar 2001 09:59:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Depending on system install scripts (was Re: COBOL)" }, { "msg_contents": "Tom Lane writes:\n\n> I've suggested a couple of times that since we include install-sh in our\n> distro anyway, it's pointless and unnecessarily risky to go looking for\n> a platform-supplied install program. However, I could never quite get\n> anyone else to see the reasoning. 
Now that I have this sterling example\n> to point to, I'm going to start rattling the cage again. Why don't we\n> get rid of the configure-time search for 'install', and just always use\n> our own script?\n\nI've sent this to the Autoconf list for some comment, but in general I\nagree with you.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 9 Mar 2001 00:13:10 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Depending on system install scripts (was Re: COBOL)" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Tom Lane writes:\n> \n> > I've suggested a couple of times that since we include install-sh in our\n> > distro anyway, it's pointless and unnecessarily risky to go looking for\n> > a platform-supplied install program. However, I could never quite get\n> > anyone else to see the reasoning. Now that I have this sterling example\n> > to point to, I'm going to start rattling the cage again. Why don't we\n> > get rid of the configure-time search for 'install', and just always use\n> > our own script?\n> \n> I've sent this to the Autoconf list for some comment, but in general I\n> agree with you.\n\nAll the programs which use the Cygnus configure tree (e.g., gdb, GNU\nbinutils) always use install-sh rather than the system install\nprogram.\n\nThe system install program can be faster. But it isn't standardized,\nso if you want to be highly portable, using the shell script really is\nbest.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 83: Drop the vase and it will become a Ming of the past.\n\t\t-- The Adventurer\n", "msg_date": "08 Mar 2001 16:18:43 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Depending on system install scripts (was Re: COBOL)" } ]
[ { "msg_contents": "I like this idea too. How about TIP #1: Don't 'kill -9' the postmaster ;-)\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tThe Hermit Hacker [SMTP:scrappy@hub.org]\nSent:\tTuesday, March 06, 2001 1:57 PM\nTo:\tAndrew McMillan\nCc:\tPostgreSQL-development\nSubject:\tRe: [HACKERS] mailing list messages\n\nOn Wed, 7 Mar 2001, Andrew McMillan wrote:\n\n> Bruce Momjian wrote:\n> >\n> > I wonder if the new Tips at the bottom of email messages can be enabled\n> > for users during their first 30 days of mailing list subscription, then\n> > not appear?\n>\n> What about having some basic _PostgreSQL_ tips in there? This would be\n> especially cute for -novice, I think.\n>\n> We must all be able to come up with 100 or so little one or two liners\n> about PostgreSQL can't we?\n\nSince Peter has shown how easy it is to get rid of the TIPs for those that\ndont' like it, I think that's a cool idea :)\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 3: if posting/reading through Usenet, please send an appropriate\nsubscribe-nomail command to majordomo@postgresql.org so that your\nmessage can get through to the mailing list cleanly\n\n", "msg_date": "Tue, 6 Mar 2001 14:38:18 -0500", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "RE: mailing list messages" }, { "msg_contents": "\nthere, added ... and even gave an attribute to you :)\n\nOn Tue, 6 Mar 2001, Mike Mascari wrote:\n\n> I like this idea too. 
How about TIP #1: Don't 'kill -9' the postmaster ;-)\n>\n> Mike Mascari\n> mascarm@mascari.com\n>\n> -----Original Message-----\n> From:\tThe Hermit Hacker [SMTP:scrappy@hub.org]\n> Sent:\tTuesday, March 06, 2001 1:57 PM\n> To:\tAndrew McMillan\n> Cc:\tPostgreSQL-development\n> Subject:\tRe: [HACKERS] mailing list messages\n>\n> On Wed, 7 Mar 2001, Andrew McMillan wrote:\n>\n> > Bruce Momjian wrote:\n> > >\n> > > I wonder if the new Tips at the bottom of email messages can be enabled\n> > > for users during their first 30 days of mailing list subscription, then\n> > > not appear?\n> >\n> > What about having some basic _PostgreSQL_ tips in there? This would be\n> > especially cute for -novice, I think.\n> >\n> > We must all be able to come up with 100 or so little one or two liners\n> > about PostgreSQL can't we?\n>\n> Since Peter has shown how easy it is to get rid of the TIPs for those that\n> dont' like it, I think that's a cool idea :)\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Tue, 6 Mar 2001 15:51:39 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "RE: mailing list messages" } ]
[ { "msg_contents": "I was trying to add a column to a table and fill it but ran into a big\nerror. Apparently now Postgres can't open this table to vacuum or to\nselect although it does show up when I ask psql to describe the table\n(i.e. db01=# /d center_out_analog_proc).\n\nI'm using Postgres 7.0.3 on a PII/400 MHz with RedHat Linux (kernel\n2.2.14-5).\n\nThe command that started the problem was from the script:\n\n-- Re-arranges the columns in a table\n--\n-- Tony Reina\n-- Created: 6 March 2001\n\n-- The BEGIN and COMMIT statements ensure that either all statements are\ndone or none are done\nBEGIN WORK;\n\n-- ADD THE NEW COLUMN TO THE TABLE\nALTER TABLE center_out_analog_proc ADD COLUMN name text;\n\n-- SELECT the columns from the table in whatever new format you wish.\nPlace into a temporary table.\nSELECT subject, arm, target, rep, channel, name, cut_off_freq,\n quality, analog_data INTO temp_table FROM\ncenter_out_analog_proc;\n\n-- DROP THE OLD TABLE\nDROP TABLE center_out_analog_proc;\n\n-- MAKE THE NEW TABLE INTO THE OLD TABLE\nALTER TABLE temp_table RENAME TO center_out_analog_proc;\n\n-- FILL THE NEW COLUMN WITH THE CORRECT DATA\nUPDATE center_out_analog_proc SET name = (SELECT name FROM\ncenter_out_analog AS a WHERE\n a.subject = center_out_analog_proc.subject AND a.arm =\ncenter_out_analog_proc.arm AND\n a.target = center_out_analog_proc.target AND a.rep =\ncenter_out_analog_proc.rep AND\n a.channel = center_out_analog_proc.channel);\n\n-- VACUUM THE TABLE\nVACUUM VERBOSE ANALYZE center_out_analog_proc;\n\nCOMMIT WORK;\n\n-----------------------------------------------------------------------\n\n\nWhen I ran this, I had an error in the UPDATE command (so the entire\ntransaction aborted). I assumed that becuase the transaction aborted\nthat nothing would have changed in the db. 
However, after this happened,\nI corrected the UPDATE command but ran into this error when I re-ran the\nscript:\n\ndb01=# \\i alter_table_format.sql\nBEGIN\npsql:alter_table_format.sql:14: NOTICE: mdopen: couldn't open\ncenter_out_analog_proc: No such file or directory\npsql:alter_table_format.sql:14: NOTICE: mdopen: couldn't open\ncenter_out_analog_proc: No such file or directory\npsql:alter_table_format.sql:14: NOTICE: mdopen: couldn't open\ncenter_out_analog_proc: No such file or directory\npsql:alter_table_format.sql:14: NOTICE: mdopen: couldn't open\ncenter_out_analog_proc: No such file or directory\npsql:alter_table_format.sql:14: ERROR: cannot open relation\ncenter_out_analog_proc\npsql:alter_table_format.sql:17: NOTICE: current transaction is aborted,\nqueries ignored until end of transaction block\n*ABORT STATE*\npsql:alter_table_format.sql:20: NOTICE: current transaction is aborted,\nqueries ignored until end of transaction block\n*ABORT STATE*\npsql:alter_table_format.sql:26: NOTICE: mdopen: couldn't open\ncenter_out_analog_proc: No such file or directory\npsql:alter_table_format.sql:26: NOTICE: mdopen: couldn't open\ncenter_out_analog_proc: No such file or directory\npsql:alter_table_format.sql:26: NOTICE: mdopen: couldn't open\ncenter_out_analog_proc: No such file or directory\npsql:alter_table_format.sql:26: NOTICE: mdopen: couldn't open\ncenter_out_analog_proc: No such file or directory\npsql:alter_table_format.sql:26: NOTICE: current transaction is aborted,\nqueries ignored until end of transaction block\n*ABORT STATE*\npsql:alter_table_format.sql:29: NOTICE: current transaction is aborted,\nqueries ignored until end of transaction block\n*ABORT STATE*\nCOMMIT\n\nWhen I try to vacuum the table or the database I get:\n\nNOTICE: Pages 190: Changed 0, reaped 0, Empty 0, New 0; Tup 9280: Vac\n0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 652, MaxLen 652; Re-using:\nFree/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. 
CPU 0.07s/0.14u sec.\nNOTICE: --Relation circles_analog_proc --\nNOTICE: Pages 187: Changed 0, reaped 0, Empty 0, New 0; Tup 9140: Vac\n0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 652, MaxLen 652; Re-using:\nFree/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 0.08s/0.13u sec.\nNOTICE: mdopen: couldn't open center_out_analog_proc: No such file or\ndirectory\nNOTICE: RelationIdBuildRelation: smgropen(center_out_analog_proc): No\nsuch file or directory\nNOTICE: --Relation center_out_analog_proc --\nNOTICE: mdopen: couldn't open center_out_analog_proc: No such file or\ndirectory\nERROR: cannot open relation center_out_analog_proc\ndb01=# select distinct monkey from center_out_analog_proc;\nNOTICE: mdopen: couldn't open center_out_analog_proc: No such file or\ndirectory\nNOTICE: mdopen: couldn't open center_out_analog_proc: No such file or\ndirectory\nNOTICE: mdopen: couldn't open center_out_analog_proc: No such file or\ndirectory\nNOTICE: mdopen: couldn't open center_out_analog_proc: No such file or\ndirectory\nERROR: cannot open relation center_out_analog_proc\n\n\nLikewise, a select gives me:\n\ndb01=# select distinct arm from center_out_analog_proc;\nNOTICE: mdopen: couldn't open center_out_analog_proc: No such file or\ndirectory\nNOTICE: mdopen: couldn't open center_out_analog_proc: No such file or\ndirectory\nNOTICE: mdopen: couldn't open center_out_analog_proc: No such file or\ndirectory\nNOTICE: mdopen: couldn't open center_out_analog_proc: No such file or\ndirectory\nERROR: cannot open relation center_out_analog_proc\n\n\n\nCould someone help? Apparently something has gotten corrupted.\n\nThanks.\n\n-Tony\n\n\n\n\n", "msg_date": "Tue, 06 Mar 2001 11:46:57 -0800", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "ERROR: cannot open relation center_out_analog_proc" }, { "msg_contents": "\"G. 
Anthony Reina\" <reina@nsi.edu> writes:\n> BEGIN WORK;\n> ...\n> DROP TABLE center_out_analog_proc;\n> ...\n> [fail transaction]\n\n> psql:alter_table_format.sql:14: NOTICE: mdopen: couldn't open\n> center_out_analog_proc: No such file or directory\n\nYou can't roll back a DROP TABLE under pre-7.1 releases (and 7.0 has\na big fat warning notice to tell you so!). The physical table file\nis deleted immediately by the DROP, so rolling back the system catalog\nchanges doesn't get you back to a working table.\n\nThe only way to clean up at this point is to drop the table for real.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 16:34:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ERROR: cannot open relation center_out_analog_proc " }, { "msg_contents": "Tom Lane wrote:\n\n> You can't roll back a DROP TABLE under pre-7.1 releases (and 7.0 has\n> a big fat warning notice to tell you so!). The physical table file\n> is deleted immediately by the DROP, so rolling back the system catalog\n> changes doesn't get you back to a working table.\n>\n> The only way to clean up at this point is to drop the table for real.\n>\n\nOkay, so then you are saying that even though the DROP TABLE and ALTER\nTABLE RENAME went through correctly, the line after that bombed out,\ntried to rollback the transaction, and gave me the error?\n\nI definitely missed that warning. Are there any big warnings for things\nthat don't work so well within a transaction (BEGIN WORK; COMMIT WORK)?\n\nThanks. Off to rebuild a table.\n-Tony\n\n\n\n", "msg_date": "Tue, 06 Mar 2001 13:46:27 -0800", "msg_from": "\"G. Anthony Reina\" <reina@nsi.edu>", "msg_from_op": true, "msg_subject": "Re: ERROR: cannot open relation center_out_analog_proc" }, { "msg_contents": "\"G. Anthony Reina\" <reina@nsi.edu> writes:\n> Tom Lane wrote:\n>> You can't roll back a DROP TABLE under pre-7.1 releases (and 7.0 has\n>> a big fat warning notice to tell you so!). 
The physical table file\n>> is deleted immediately by the DROP, so rolling back the system catalog\n>> changes doesn't get you back to a working table.\n\n> Okay, so then you are saying that even though the DROP TABLE and ALTER\n> TABLE RENAME went through correctly, the line after that bombed out,\n> tried to rollback the transaction, and gave me the error?\n\nRight. The system catalogs roll back just fine, but the Unix filesystem\ndoesn't know from rollbacks.\n\n> I definitely missed that warning. Are there any big warnings for things\n> that don't work so well within a transaction (BEGIN WORK; COMMIT WORK)?\n\nALTER TABLE RENAME is another one...\n\nThis is all fixed in 7.1 btw.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 06 Mar 2001 16:50:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ERROR: cannot open relation center_out_analog_proc " } ]
[ { "msg_contents": "Greetings,\n\nI wrote a few simple programs to log Apache access_log entries to pg. If \nthis is something anyone would be interested in or if there is someplace I \ncan submit these to, please let me know.\n\nThanks,\nMatthew\n\n", "msg_date": "Tue, 06 Mar 2001 17:52:21 -0500", "msg_from": "Matthew Hagerty <mhagerty@voyager.net>", "msg_from_op": true, "msg_subject": "Contributions?" }, { "msg_contents": "Hi Matthew,\n\nI would love to get this stuff...\n\nCould you send it to me or tell me where it is ?\n\nTIA\nOn Tue, 6 Mar 2001, Matthew Hagerty wrote:\n\n> Greetings,\n> \n> I wrote a few simple programs to log Apache access_log entries to pg. If \n> this is something anyone would be interested in or if there is someplace I \n> can submit these to, please let me know.\n> \n> Thanks,\n> Matthew\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Wed, 7 Mar 2001 16:17:09 +0100", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": false, "msg_subject": "Re: Contributions?" }, { "msg_contents": "On Wed, Mar 07, 2001 at 04:17:09PM +0100, Olivier PRENANT wrote:\n> Hi Matthew,\n> \n> I would love to get this stuff...\n> \n> Could you send it to me or tell me where it is ?\n> \n> TIA\n> On Tue, 6 Mar 2001, Matthew Hagerty wrote:\n> \n> > Greetings,\n> > \n> > I wrote a few simple programs to log Apache access_log entries to pg. 
If \n> > this is something anyone would be interested in or if there is someplace I \n> > can submit these to, please let me know.\n> > \n\n The simple inducement for this is already in the contrib/ tree.\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Wed, 7 Mar 2001 16:40:55 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Contributions?" } ]
[ { "msg_contents": "\nWhat about (optionally) printing the type of the column data?\n\ni.e:\n\n\n\n io | tu | tipo | data \n int | int | int2 | date \n--------+-------+------+------------\n 102242 | 26404 | 1203 | 2000-11-22 \n(1 row)\n\n\n", "msg_date": "Wed, 7 Mar 2001 01:15:49 +0100", "msg_from": "\"Michal Maruška\" <mmc@maruska.dyndns.org>", "msg_from_op": true, "msg_subject": "psql missing feature" }, { "msg_contents": "Michal Maruška writes:\n\n> What about (optionally) printing the type of the column data?\n\n> io | tu | tipo | data\n> int | int | int2 | date\n> --------+-------+------+------------\n> 102242 | 26404 | 1203 | 2000-11-22\n> (1 row)\n\nI've been meaning to implement this for a while. Now that someone is\nseemingly interested I might prioritize it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 7 Mar 2001 16:41:45 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: psql missing feature" }, { "msg_contents": "\n\n\nPeter Eisentraut writes:\n > Michal Maruška writes:\n > \n > > What about (optionally) printing the type of the column data?\n > \n > > io | tu | tipo | data\n > > int | int | int2 | date\n > > --------+-------+------+------------\n > > 102242 | 26404 | 1203 | 2000-11-22\n > > (1 row)\n > \n > I've been meaning to implement this for a while. Now that someone is\n > seemingly interested I might prioritize it.\n > \n > -- \n > Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n > \n > \n > ---------------------------(end of broadcast)---------------------------\n > TIP 4: Don't 'kill -9' the postmaster\n\n\n\n\nWait. As stated in my last message, I need _much_ more information. The \"table of\norigin\" just to start.\n\nIIRC the PGresult doesn't provide more info than the types, so the feature is\nsensible at this level, but for my work it is too little. 
I want to know the tree/path made by\nthe data.\n\nMaybe I am not clear enough, I must think about it, just wanted to note the\ntypes will not help me that much.\n\n\n", "msg_date": "Wed, 7 Mar 2001 17:00:58 +0100 (MET)", "msg_from": "\"Michal Maruška\" <mmc@maruska.dyndns.org>", "msg_from_op": true, "msg_subject": "Re: psql missing feature" } ]
[ { "msg_contents": "\nI am writing an Emacs based navigation/editing tool. I would like to have the\nbackend parse a query, EXPLAIN VERBOSE it .... I want to know for every non\nfunctional column the source (table/attribute). How to interpret the parse tree?\n\n", "msg_date": "Wed, 7 Mar 2001 06:23:03 +0100", "msg_from": "\"Michal Maruška\" <mmc@maruska.dyndns.org>", "msg_from_op": true, "msg_subject": "output from EXPLAIN VERBOSE " } ]
[ { "msg_contents": "Hi!\n\nSnow in New York -> I'm arrived only today.\nReading mail...\n\nVadim\n\n\n", "msg_date": "Tue, 6 Mar 2001 21:57:24 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": true, "msg_subject": "I'm back" } ]
[ { "msg_contents": "> This isn't a 64-bit CRC. It's two independent 32-bit CRCs, one done\n> on just the odd-numbered bytes and one on just the even-numbered bytes\n> of the datastream. That's hardly any stronger than a single 32-bit CRC;\n\nI believe that the longer data the more chance to get same CRC/hash\nfor different data sets (if data length > CRC/hash length). Or am I wrong?\nHaving no crc64 implementation (see below) I decided to use 2 CRC32 instead\nof one - it looked better, without any additional cost (record header is\n8 byte aligned anyway, on, mmm, most platform).\n\n> it's certainly not what I thought we had agreed to implement.\n\nI've asked if anyone can send crc64 impl to me and got only one from\nNathan Myers. Unfortunately, SWISS-PROT impl assumes that long long\nis 8 bytes - is it portable?\n\nVadim\n\n\n", "msg_date": "Tue, 6 Mar 2001 23:40:13 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": true, "msg_subject": "Re: Uh, this is *not* a 64-bit CRC ..." }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> I've asked if anyone can send crc64 impl to me and got only one from\n> Nathan Myers. Unfortunately, SWISS-PROT impl assumes that long long\n> is 8 bytes - is it portable?\n\nNo, it's not. I have written an implementation that uses uint64 if\navailable (per configure results) otherwise a pair of uint32 registers.\n(See pg_crc.h in as-yet-uncommitted patches I posted to pgpatches.)\n\nInterestingly, gcc -O on HP-PA generates essentially the same code\nsequence for either the 64- or dual-32-bit versions of the macro...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 10:34:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Uh, this is *not* a 64-bit CRC ... " } ]
[ { "msg_contents": "\n> Could GUC parameters be changed permanently e.g. by SET command ?\n> \n> For example,\n> 1) start postmaster\n> 2) set archdir to '....'\n> 3) shutdown postmaster\n\nI thought the intended way to change a GUC parameter permanently was to \nedit data/postgresql.conf . No ?\n\nAndreas\n", "msg_date": "Wed, 7 Mar 2001 08:56:58 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Proposed WAL changes " }, { "msg_contents": "Zeugswetter Andreas SB wrote:\n> \n> > Could GUC parameters be changed permanently e.g. by SET command ?\n> >\n> > For example,\n> > 1) start postmaster\n> > 2) set archdir to '....'\n> > 3) shutdown postmaster\n> \n> I thought the intended way to change a GUC parameter permanently was to\n> edit data/postgresql.conf . No ?\n> \n\nWhat I've thought is to implement a new command to\nchange archdir under WAL's control.\nIf it's different from Vadim's plan I don't object.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Wed, 07 Mar 2001 17:09:34 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: AW: Proposed WAL changes" }, { "msg_contents": "> > I thought the intended way to change a GUC parameter permanently was to\n> > edit data/postgresql.conf . No ?\n> > \n> \n> What I've thought is to implement a new command to\n> change archdir under WAL's control.\n> If it's different from Vadim's plan I don't object.\n\nActually, I have no concrete plans for archdir yet - this\none is for WAL based BAR we should discuss in future.\nSo, I don't see why to remove archdir from pg_control now.\n\nVadim\n\n\n", "msg_date": "Wed, 7 Mar 2001 00:28:34 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: AW: Proposed WAL changes" }, { "msg_contents": "> > > I thought the intended way to change a GUC parameter permanently was to\n> > > edit data/postgresql.conf . 
No ?\n> > > \n> > \n> > What I've thought is to implement a new command to\n> > change archdir under WAL's control.\n> > If it's different from Vadim's plan I don't object.\n> \n> Actually, I have no concrete plans for archdir yet - this\n> one is for WAL based BAR we should discuss in future.\n> So, I don't see why to remove archdir from pg_control now.\n\nMaybe we can get BAR in 7.1.X so maybe we should have the option to add\nit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 09:50:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Proposed WAL changes" }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> So, I don't see why to remove archdir from pg_control now.\n\nI didn't like the space consumption. I think it's important that the\npg_control data fit in less than 512 bytes so that it doesn't cross\nphysical sectors on the disk. This reduces the odds of being left\nwith a corrupted pg_control due to partial write during power loss.\n\nThat's a second-order consideration, possibly, but I can see no\nredeeming social advantage whatever to having archdir in pg_control\nrather than in postgresql.conf where all the other system parameters\nlive. Unless you've got one, it's coming out...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 10:47:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Proposed WAL changes " } ]
[ { "msg_contents": "\n> > I do not however see how the current solution fixes the original problem,\n> > that we don't have a rollback for index modifications.\n> > The index would potentially point to an empty heaptuple slot.\n> \n> How? There will be an XLOG entry inserting the heap tuple before the\n> XLOG entry that updates the index. Rollforward will redo both. The\n> heap tuple might not get committed, but it'll be there.\n\nBefore commit or rollback the xlog is not flushed to disk, thus you can loose\nthose xlog entries, but the index page might already be on disk because of\nLRU buffer reuse, no ?\nAnother example would be a btree reorg, like adding a level, that is partway \nthrough before a crash.\n\n> > Additionally I do not see how this all works for userland index types.\n> \n> None of it works for index types that don't do XLOG entries (which I\n> think may currently be true for everything except btree :-( ...). I\n> don't see how that changes if we alter the way this bit is done.\n\nI really think that xlog entries should be done by a layer below the userland\nfunctions. I would not like to risc WAL integrity by allowing userland to \nwrite a messed up log record. The record would be something like:\ncalled userland index insert for \"key\" and \"ctid\". With that info you can \neasily redo, but undo would probably be hard. 
Thus the physical log.\nActually I am not sure index changes need to be (or are currently) logged at all.\nYou can deduce all necessary info from the heap xlog record \n(plus maybe the original record from disk).\n\nAndreas\n", "msg_date": "Wed, 7 Mar 2001 09:44:24 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: AW: AW: WAL-based allocation of XIDs is ins\n\tecur e" }, { "msg_contents": "> Before commit or rollback the xlog is not flushed to disk, thus you can loose\n> those xlog entries, but the index page might already be on disk because of\n> LRU buffer reuse, no ?\n\nNo. Buffer page is written to disk *only after corresponding records are flushed\nto log* (WAL means Write-Ahead-Log - write log before modifying data pages).\n\n> Another example would be a btree reorg, like adding a level, that is partway \n> through before a crash.\n\nAnd this is what I hopefully fixed recently with btree runtime recovery.\n\nVadim\n\n\n", "msg_date": "Wed, 7 Mar 2001 00:53:40 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: AW: WAL-based allocation of XIDs is insecur e " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> I really think that xlog entries should be done by a layer below the\n> userland functions.\n\nThat seems somewhere between impractical and impossible: how will you\ntie the functional xlog entries (\"insert foo into index bar\") to the\nresulting page modifications, unless the entries are made from code\nthat knows all about which pages contain what index entries? Don't\nforget these things need to go into the xlog atomically.\n\n> I would not like to risc WAL integrity by allowing\n> userland to write a messed up log record.\n\nIndex access method code is just as critical a part of the system as\nanything else. 
The above makes no more sense than saying that you don't\nwant to trust heapam.c to generate correct WAL records.\n\n> Actually I am not sure index changes need to be (or are currently)\n> logged at all. You can deduce all necessary info from the heap xlog\n> record (plus maybe the original record from disk).\n\nThis assumes that pg_index, pg_am and friends are (a) not corrupt; (b)\nin the same state that they were in when the portion of the XLOG being\nreplayed was made. Neither of these assumptions is acceptable for WAL\nrecovery.\n\nI do think there's something to your notion that XLOG should be logging\nthe pre-modification pages rather than post-modification, but that's\nsomething we will have to come back to in 7.2 or later. For 7.1's\npurposes there is nothing wrong with the current scheme, and I have no\ndesire to postpone release another few months to change it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 11:00:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: AW: AW: WAL-based allocation of XIDs is ins ecur e " } ]
[ { "msg_contents": "\n> > Before commit or rollback the xlog is not flushed to disk, thus you can loose\n> > those xlog entries, but the index page might already be on disk because of\n> > LRU buffer reuse, no ?\n> \n> No. Buffer page is written to disk *only after corresponding records are flushed\n> to log* (WAL means Write-Ahead-Log - write log before modifying data pages).\n\nYou mean, that for each dirty buffer that is reused, the reusing backend fsyncs\nthe xlog before writing the buffer to disk ?\n\nAndreas\n", "msg_date": "Wed, 7 Mar 2001 10:06:50 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: AW: AW: AW: WAL-based allocation of XIDs is ins\n\tecur e" }, { "msg_contents": "> > > Before commit or rollback the xlog is not flushed to disk, thus you can loose\n> > > those xlog entries, but the index page might already be on disk because of\n> > > LRU buffer reuse, no ?\n> > \n> > No. Buffer page is written to disk *only after corresponding records are flushed\n> > to log* (WAL means Write-Ahead-Log - write log before modifying data pages).\n> \n> You mean, that for each dirty buffer that is reused, the reusing backend fsyncs\n> the xlog before writing the buffer to disk ?\n\nIn short - yes.\nTo be accurate - XLogFlush is called to ensure that records reflecting buffer' modifications\nare on disk. That's how it works everywhere. And that's why LRU is not good policy for\nbufmgr anymore (we discussed this already).\n\nVadim\n\n\n", "msg_date": "Wed, 7 Mar 2001 01:12:42 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: AW: AW: AW: AW: WAL-based allocation of XIDs is insecur e " } ]
[ { "msg_contents": "> I have just sent to the pgsql-patches list a rather large set of\n\nPlease send it to me directly - pgsql-patches' archieve is dated by Feb -:(\n\n> proposed diffs for the WAL code. These changes:\n> \n> * Store two past checkpoint locations, not just one, in pg_control.\n> On startup, we fall back to the older checkpoint if the newer one\n> is unreadable. Also, a physical copy of the newest checkpoint record\n\nAnd what to do if older one is unreadable too?\n(Isn't it like using 2 x CRC32 instead of CRC64 ? -:))\nAnd what to do if pg_control was lost? (We already discussed that we\nshould read all logs from newest to oldest ones to find checkpoint).\nAnd why to keep old log files with older checkpoint?\n\n> is kept in pg_control for possible use in disaster recovery (ie,\n> complete loss of pg_xlog). Also add a version number for pg_control\n\nMmmm, how recovery is possible if log was lost? All what could be done\nwith DB in the event of corrupted/lost log is dumping data from tables\n*asis*, without any guarantee about consistency. How checkpoint' content\ncould be useful?\n\nI feel that the fact that\n\nWAL can't help in the event of disk errors\n\nis often overlooked.\n\n> itself. Remove archdir from pg_control; it ought to be a GUC\n> parameter, not a special case (not that it's implemented yet anyway).\n\nI would discuss WAL based BAR management before deciding how to\nstore/assign archdir. 
On the other hand it's easy to add archdir\nto pg_control later -:)\n\n> * Change CRC scheme to a true 64-bit CRC, not a pair of 32-bit CRCs\n> on alternate bytes.\n\nGreat if you've found reliable CRC64 impl!\n\n> * Fix XLOG record length handling so that it will work at BLCKSZ = 32k.\n\nCase I've overlooked -:(\n(Though, I always considered BLCKSZ > 8K as temp hack -:))\n\n> * Change XID allocation to work more like OID allocation, so that we\n> can flush XID alloc info to the log before there is any chance an XID\n> will appear in heap files.\n\nI didn't read you postings about this yet.\n\n> * Add documentation and clean up some coding infelicities; move file\n> format declarations out to include files where planned contrib\n> utilities can get at them.\n\nThanks for that!\n \n> Before committing this stuff, I intend to prepare a contrib utility that\n> can be used to reset pg_control and pg_xlog. This is mainly for\n> disaster recovery purposes, but as a side benefit it will allow people\n\nOnce again, I would call this \"disaster *dump* purposes\" -:)\nAfter such operation DB shouldn't be used for anything but dump!\n\n> to update 7.1beta installations to this new code without doing initdb.\n> I need to update contrib/pg_controldata, too.\n\nVadim\n\n\n", "msg_date": "Wed, 7 Mar 2001 01:07:53 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": true, "msg_subject": "Re: Proposed WAL changes" }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n>> I have just sent to the pgsql-patches list a rather large set of\n> Please send it to me directly - pgsql-patches' archieve is dated by Feb -:(\n\nDone under separate cover.\n\n>> proposed diffs for the WAL code. These changes:\n>> \n>> * Store two past checkpoint locations, not just one, in pg_control.\n>> On startup, we fall back to the older checkpoint if the newer one\n>> is unreadable. 
Also, a physical copy of the newest checkpoint record\n\n> And what to do if older one is unreadable too?\n> (Isn't it like using 2 x CRC32 instead of CRC64 ? -:))\n\nThen you lose --- but two checkpoints gives you twice the chance of\nrecovery (probably more, actually, since it's much more likely that\nthe previous checkpoint will have reached disk safely).\n\n> And what to do if pg_control was lost? (We already discussed that we\n> should read all logs from newest to oldest ones to find checkpoint).\n\nIf you have valid WAL files and broken pg_control, then reading the WAL\nfiles is a way to recover. If you have valid pg_control and broken WAL\nfiles, you have a big problem, but using pg_control to generate a new\nempty WAL will at least let you get at your heap files.\n\n> And why to keep old log files with older checkpoint?\n\nNot much point in remembering the older checkpoint location if the\nassociated WAL file is removed...\n\n> Mmmm, how recovery is possible if log was lost? All what could be done\n> with DB in the event of corrupted/lost log is dumping data from tables\n> *asis*, without any guarantee about consistency.\n\nExactly. That is still better than not being able to dump the data at\nall.\n\n>> * Change XID allocation to work more like OID allocation, so that we\n>> can flush XID alloc info to the log before there is any chance an XID\n>> will appear in heap files.\n\n> I didn't read you postings about this yet.\n\nSee later discussion --- Andreas convinced me that flushing NEXTXID\nrecords to disk isn't really needed after all. (I didn't take the flush\nout of my patch yet, but will do so.) I still want to leave the NEXTXID\nrecords in there, though, because I think that XID and OID assignment\nought to work as nearly alike as possible.\n\n>> Before committing this stuff, I intend to prepare a contrib utility that\n>> can be used to reset pg_control and pg_xlog. 
This is mainly for\n>> disaster recovery purposes, but as a side benefit it will allow people\n\n> Once again, I would call this \"disaster *dump* purposes\" -:)\n> After such operation DB shouldn't be used for anything but dump!\n\nFair enough. But we need it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 11:09:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes " }, { "msg_contents": "On Wed, 7 Mar 2001, Vadim Mikheev wrote:\n\n> > I have just sent to the pgsql-patches list a rather large set of\n>\n> Please send it to me directly - pgsql-patches' archieve is dated by Feb -:(\n\nHuh?\n\nhttp://www.postgresql.org/mhonarc/pgsql-patches/2001-03/index.html\n\n\n", "msg_date": "Wed, 7 Mar 2001 12:57:02 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes" }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n\n> I feel that the fact that\n> \n> WAL can't help in the event of disk errors\n> \n> is often overlooked.\n\nThis is true in general. But, nevertheless, WAL can be written to\nprotect against predictable disk errors, when possible. Failing to\nwrite a couple of disk blocks when the system crashes is a reasonably\npredictable disk error. 
WAL should ideally be written to work\ncorrectly in that situation.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 102: An atom-blaster is a good weapon, but it can point both ways.\n\t\t-- Isaac Asimov\n", "msg_date": "07 Mar 2001 09:37:49 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes" }, { "msg_contents": "On Wed, Mar 07, 2001 at 11:09:25AM -0500, Tom Lane wrote:\n> \"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> >> * Store two past checkpoint locations, not just one, in pg_control.\n> >> On startup, we fall back to the older checkpoint if the newer one\n> >> is unreadable. Also, a physical copy of the newest checkpoint record\n> \n> > And what to do if older one is unreadable too?\n> > (Isn't it like using 2 x CRC32 instead of CRC64 ? -:))\n> \n> Then you lose --- but two checkpoints gives you twice the chance of\n> recovery (probably more, actually, since it's much more likely that\n> the previous checkpoint will have reached disk safely).\n\nActually far more: if the checkpoints are minutes apart, even the \nworst disk drive will certainly have flushed any blocks written for \nthe earlier checkpoint.\n\n--\nNathan Myers\nncm@zembu.com\n", "msg_date": "Wed, 7 Mar 2001 13:10:52 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes" } ]
[ { "msg_contents": "> I have spent several days now puzzling over the corrupted WAL logfile\n> that Scott Parish was kind enough to send me from a 7.1beta4 crash.\n> It looks a lot like two different series of transactions were getting\n> written into the same logfile. I'd been digging like mad in the WAL\n> code to try to explain this as a buffer-management logic error, but\n> after a fresh exchange of info it turns out that I was barking up the\n> wrong tree.\n\nSorry about that. This is the same situation I was in myself couple\nof times and \"fresh exchange of info\" was saving too -:)\nAnyway it's good to know that it wasn't buffer/etc logic error -:)\n(Actually, logs from you looked sooo grave so it becomes unclear\nhow WAL worked at all -:).\n\nNevertheless, subj is rised. BTW, does anybody know results of kill -9\nin Oracle/Informix/etc? Just curious -:)\n\nVadim\n\n\n", "msg_date": "Wed, 7 Mar 2001 01:32:38 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": true, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster " }, { "msg_contents": "Vadim Mikheev wrote:\n> \n> Nevertheless, subj is rised. BTW, does anybody know results of kill -9\n> in Oracle/Informix/etc? Just curious -:)\n\nProgress has no problem with it that I have ever seen.\n\nRegards,\n\t\t\t\t\tAndrew.\n-- \n_____________________________________________________________________\n Andrew McMillan, e-mail: Andrew@catalyst.net.nz\nCatalyst IT Ltd, PO Box 10-225, Level 22, 105 The Terrace, Wellington\nMe: +64 (21) 635 694, Fax: +64 (4) 499 5596, Office: +64 (4) 499 2267\n", "msg_date": "Wed, 07 Mar 2001 23:39:28 +1300", "msg_from": "Andrew McMillan <andrew@catalyst.net.nz>", "msg_from_op": false, "msg_subject": "Re: How to shoot yourself in the foot: kill -9 postmaster" } ]
[ { "msg_contents": "\n> Nevertheless, subj is rised. BTW, does anybody know results of kill -9\n> in Oracle/Informix/etc? Just curious -:)\n\nInformix has no problem with it. Oracle dba's fear it, to say the least.\n\nAndreas \n", "msg_date": "Wed, 7 Mar 2001 11:25:20 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: How to shoot yourself in the foot: kill -9 postmaster" } ]
[ { "msg_contents": "> Consider the following scenario:\n> \n> 1. A new transaction inserts a tuple. The tuple is entered into its\n> heap file with the new transaction's XID, and an associated WAL log\n> entry is made. Neither one of these are on disk yet --- the heap tuple\n> is in a shmem disk buffer, and the WAL entry is in the shmem WAL buffer.\n> \n> 2. Now do a lot of read-only operations, in the same or another backend.\n> The WAL log stays where it is, but eventually the shmem disk buffer will\n> get flushed to disk so that the buffer can be re-used for some other\n> disk page.\n> \n> 3. Assume we now crash. Now, we have a heap tuple on disk with an XID\n> that does not correspond to any XID visible in the on-disk WAL log.\n\nImpossible (with fsync ON -:)).\n\nSeems my description of core WAL rule was bad, I'm sorry -:(\nWAL = Write-*Ahead*-Log = Write data pages *only after* log records\nreflecting data pages modifications are *flushed* on disk =\nIf a modification was not logged then it's neither in data pages.\n\nNo matter when bufmgr writes data buffer (at commit time or to re-use\nit) bufmgr first ensures that buffer' modifications are logged.\n\nVadim\n\n\n", "msg_date": "Wed, 7 Mar 2001 02:29:49 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": true, "msg_subject": "Re: WAL-based allocation of XIDs is insecure " } ]
[ { "msg_contents": "> The point is to make the allocation of XIDs and OIDs work the same way.\n> In particular, if we are forced to reset the XLOG using what's stored in\n> pg_control, it would be good if what's stored in pg_control is a value\n> beyond the last-used XID/OID, not a value less than the last-used ones.\n\nIf we're forced to reset log (ie it's corrupted/lost) then we're forced\nto dump, and only dump, data *because of they are not consistent*.\nSo, I wouldn't worry about XID/OID/anything - we can only provide user\nwith way to restore data ... *manually*.\n\nIf user really cares about his data he must\n\nU1. Buy good disks for WAL (data may be on not so good disks).\nU2. Set up distributed DB if U1. is not enough.\n\nTo help user with above we must\n\nD1. Avoid bugs in WAL\nD2. Implement WAL based BAR (so U1 will have sence).\nD3. Implement distributed DB.\n\nThere will be no D2 & D3 in 7.1, and who knows about D1. \nSo, manual restoring data is the best we can do for 7.1.\nAnd actually, \"manual restoring\" is what we had before,\nanyway.\n\nVadim\n\n\n", "msg_date": "Wed, 7 Mar 2001 02:55:52 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": true, "msg_subject": "Re: WAL-based allocation of XIDs is insecure " } ]
[ { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> wrote:\n>\n> In short I do not think that the current implementation of\n> \"physical log\" does what it was intended to do :-(\n\nHm, wasn't it handling non-atomic disk writes, Andreas?\nAnd for what else \"physical log\" could be used?\n\nThe point was - copy entire page content on first after\ncheckpoint modification, so on recovery first restore page\nto consistent state, so all subsequent logged modifications\ncould be applied without fear about page inconsistency.\n\nNow, why should we log page as it was *before* modification?\nWe would log modification anyway (yet another log record!) and\nwould apply it to page, so result would be the same as now when\nwe log page after modification - consistent *modifyed* page.\n\n?\n\nVadim\n\n\n", "msg_date": "Wed, 7 Mar 2001 03:28:11 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": true, "msg_subject": "Re: WAL-based allocation of XIDs is insecure" } ]
[ { "msg_contents": "> On third thought --- we could still log the original page contents and\n> the modification log record atomically, if what were logged in the xlog\n> record were (essentially) the parameters to the operation being logged,\n> not its results. That is, make the log entry before you start doing the\n> mod work, not after. This might also simplify redo, since redo would be\n> no different from the normal case. I'm not sure why Vadim didn't choose\n> to do it that way; maybe there's some other fine point I'm missing.\n\nThere is one - indices over user defined data types: catalog is not\navailable at the time of recovery, so, eg, we can't know how to order\nkeys of \"non-standard\" types. (This is also why we have to recover\naborted index split ops at runtime, when catalog is already available.)\n\nAlso, there is no point why should we log original page content and\nthe next modification record separately.\n\nVadim\n\n\n", "msg_date": "Wed, 7 Mar 2001 03:45:52 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": true, "msg_subject": "Re: WAL-based allocation of XIDs is insecure " } ]
[ { "msg_contents": "\n> > In short I do not think that the current implementation of\n> > \"physical log\" does what it was intended to do :-(\n> \n> Hm, wasn't it handling non-atomic disk writes, Andreas?\n\nYes, but for me, that was only one (for me rather minor) issue.\nI still think that the layout of PostgreSQL pages was designed to\nreduce the risc of a (heap) page beeing inconsistent because it is \nonly partly written to an acceptable minimum. If your hw and os can \nguarantee that it does not overwrite one [OS] block with data that was \nnot supplied (== junk data), the risc is zero.\n\n> And for what else \"physical log\" could be used?\n\n1. create a consistent state if rollforward bails out for some reason\n\tbut log is still readable\n2. have an easy way to handle index rollforward/abort\n\t(might need to block some index modifications during checkpoint though)\n3. ease the conversion to overwrite smgr\n4. ease the creation of BAR to create consistent snapshot without \n\tneed for log rollforward\n\n> Now, why should we log page as it was *before* modification?\n> We would log modification anyway (yet another log record!) and\n\nOh, so currently you only do eighter ? 
I would at least add the \ninfo about which slot was inserted/modified (maybe that is already there (XID)).\n\n> would apply it to the page, so the result would be the same as now when\n> we log the page after modification - a consistent *modified* page.\n\nMaybe I am too focused on the implementation of one particular db,\nso that I am not able to see this without prejudice, and all is well as is :-)\n\nAndreas\n", "msg_date": "Wed, 7 Mar 2001 12:58:37 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: WAL-based allocation of XIDs is insecure" }, { "msg_contents": "> > Hm, wasn't it handling non-atomic disk writes, Andreas?\n> \n> Yes, but for me, that was only one (for me rather minor) issue.\n> I still think that the layout of PostgreSQL pages was designed to\n> reduce the risk of a (heap) page being inconsistent because it is \n> only partly written to an acceptable minimum. If your hw and os can \n\nI believe that I explained why it's not a minor issue (and never was).\nEg - PageRepairFragmentation \"compacts\" the page exactly like other,\noverwriting, DBMSes do, and a partial write of the modified page means\nlost page content.\n\n> > And for what else \"physical log\" could be used?\n> \n> 1. create a consistent state if rollforward bails out for some reason\n> but log is still readable\n\nWhat is the difference between the consistent state as it was before a checkpoint and\nafter it? Why should we log old page images? New (after modification) page\nimages are also consistent and can be used to create a consistent state.\n\n> 2. have an easy way to handle index rollforward/abort\n> (might need to block some index modifications during checkpoint though)\n\nThere are no problems now. The page is either split (new page created/\nproperly initialized, right sibling updated) or not.\n\n> 3. ease the conversion to overwrite smgr\n\n?\n\n> 4. 
ease the creation of BAR to create consistent snapshot without \n> need for log rollforward\n\nIsn't it the same as 1. with \"snapshot\" == \"state\"?\n\n> > Now, why should we log the page as it was *before* modification?\n> > We would log the modification anyway (yet another log record!) and\n> \n> Oh, so currently you only do either ? I would at least add the \n> info about which slot was inserted/modified (maybe that is already there (XID)).\n\nRelfilenode + TID are saved, as well as anything else that would be required\nto UNDO the operation, in future.\n\n> > would apply it to the page, so the result would be the same as now when\n> > we log the page after modification - a consistent *modified* page.\n> \n> Maybe I am too focused on the implementation of one particular db,\n> that I am not able to see this without prejudice,\n> and all is well as is :-)\n ^^^^^^^^^^^^^^^^^^^^^\nI hope so -:)\n\nVadim\n\n", "msg_date": "Wed, 7 Mar 2001 04:43:41 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: WAL-based allocation of XIDs is insecure" } ]
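For reference, the recovery side of the "physical log" discussed in this thread is straightforward: the page image saved at the first post-checkpoint modification simply replaces whatever (possibly torn) version is on disk, and later logical records are replayed only afterwards. The following toy sketch, with an invented page-size constant, is illustrative rather than actual PostgreSQL code.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 8192

/*
 * Restore a page from its backed-up image.  A torn write may have left
 * disk_page half old and half new; the logged image is consistent by
 * construction, so copying it over makes the page safe for further redo.
 */
void
redo_full_page_image(uint8_t *disk_page, const uint8_t *logged_image)
{
    memcpy(disk_page, logged_image, PAGE_SIZE);
}
```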
[ { "msg_contents": "Is anyone on this list in Hannover for CeBit? Maybe we could arrange a\nmeeting.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 7 Mar 2001 13:20:02 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "CeBit" }, { "msg_contents": "Hello,\n\nmaybe I missed something, but in last days I was thinking how would I\nwrite my own sql server. I got several ideas and because these are not\nused in PG they are probably bad - but I can't figure why.\n\n1) WAL\nWe have buffer manager, ok. So why not to use WAL as part of it and don't\nlog INSERT/UPDATE/DELETE xlog records but directly changes into buffer\npages ? When someone dirties page it has to inform bmgr about dirty region\nand bmgr would formulate xlog record. The record could be for example\nfixed bitmap where each bit corresponds to part of page (of size\npgsize/no-of-bits) which was changed. These changed regions follows.\nMultiple writes (by multiple backends) can be coalesced together as long\nas their transactions overlaps and there is enough memory to keep changed \nbuffer pages in memory.\n\nPros: \tupper layers can think thet buffers are always safe/logged and there\n\tis no special handling for indices; very simple/fast redo\nCons:\tcan't implement undo - but in non-overwriting is not needed (?)\n\n2) SHM vs. MMAP\nWhy don't use mmap to share pages (instead of shm) ? There would be no\nproblem with tuning pg's buffer cache size - it is balanced by OS.\nWhen using SHM there are often two copies of page: one in OS' page cache\nand one in SHM (vaste of memory).\nWhen using mmap the data goes (almost) directly from HDD into your memory\npage - now you need to copy it from OS' page to PG's page.\nThere is one problem: how to assure that dirtied page is not flushed\nbefore its xlog. One can use mlock but you often need root privileges to\nuse it. 
Another way is to implement our own COW (copy on write) to create\nintermediate buffers used only until xlog is flushed.\n\nAre these considerations correct ?\n\nregards, devik\n\n", "msg_date": "Wed, 7 Mar 2001 13:47:07 +0100 (CET)", "msg_from": "Martin Devera <devik@cdi.cz>", "msg_from_op": false, "msg_subject": "WAL & SHM principles" }, { "msg_contents": "> 2) SHM vs. MMAP\n> Why not use mmap to share pages (instead of shm) ? There would be no\n> problem with tuning pg's buffer cache size - it is balanced by OS.\n> When using SHM there are often two copies of a page: one in OS' page cache\n> and one in SHM (a waste of memory).\n> When using mmap the data goes (almost) directly from HDD into your memory\n> page - now you need to copy it from OS' page to PG's page.\n> There is one problem: how to ensure that a dirtied page is not flushed\n> before its xlog. One can use mlock but you often need root privileges to\n> use it. Another way is to implement our own COW (copy on write) to create\n> intermediate buffers used only until xlog is flushed.\n\nThis was brought up a week ago, and I consider it an interesting idea. \nThe only problem is that we would no longer have control over which\npages made it to disk. The OS would perhaps write pages as we modified\nthem. Not sure how important that is.\n\nThe good news is that most/all OS's are smart enough that if two\nprocesses mmap() the same file, they see each other's changes, so in a\nsense it is shared memory, but a much larger, smarter pool of shared\nmemory than what we have now. We would still need buffer headers and\nstuff because we need to synchronize access to the buffers.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 09:59:55 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "> This was brought up a week ago, and I consider it an interesting idea. \n> The only problem is that we would no longer have control over which\n> pages made it to disk. The OS would perhaps write pages as we modified\n> them. Not sure how important that is.\n\nYes. As I work on the Linux kernel I know something about it. When a page is\naccessed the CPU sets one bit in the PTE. The OS writes the page when it\nneeds a page frame. It also tries to launder pages periodically but the actual\nalgorithm changes too often in recent kernels ;-)\nAlso a page write is not atomic - several buffer heads are filled for the\npage and asynchronously posted for write. The elevator then sorts and coalesces\nthese buffer heads and creates the actual scsi/ide write requests. But there\nis no guarantee that buffer heads from one page will be coalesced into one\nwrite request ...\nYou can call mlock (PageLock on Win32) to lock a page in memory. You can\npostpone the write using it. It is ok under Win32 and many unices but under\nlinux only admin or one with CAP_MEMLOCK (not exact name) can mlock. \n\n> The good news is that most/all OS's are smart enough that if two\n> processes mmap() the same file, they see each other's changes, so in a\n\nyes, when using the SHARED flag to mmap then IMHO it is mandatory for an OS\n\n> sense it is shared memory, but a much larger, smarter pool of shared\n> memory than what we have now. We would still need buffer headers and\n> stuff because we need to synchronize access to the buffers.\n\nAlso some smart algorithm is needed which tries to mmap several pages in one\ncontinuous block. You can mmap each page on its own but OSes store mmap\ninformation per page range. 
You need to minimize number of such ranges.\n\ndevik\n\n", "msg_date": "Wed, 7 Mar 2001 16:39:38 +0100 (CET)", "msg_from": "Martin Devera <devik@cdi.cz>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The only problem is that we would no longer have control over which\n> pages made it to disk. The OS would perhaps write pages as we modified\n> them. Not sure how important that is.\n\nUnfortunately, this alone is a *fatal* objection. See nearby\ndiscussions about WAL behavior: we must be able to control the relative\ntiming of WAL write/flush and data page writes.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 11:21:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The only problem is that we would no longer have control over which\n> > pages made it to disk. The OS would perhaps write pages as we modified\n> > them. Not sure how important that is.\n> \n> Unfortunately, this alone is a *fatal* objection. See nearby\n> discussions about WAL behavior: we must be able to control the relative\n> timing of WAL write/flush and data page writes.\n\nBummer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 11:22:58 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "On Wed, Mar 07, 2001 at 11:21:37AM -0500, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The only problem is that we would no longer have control over which\n> > pages made it to disk. The OS would perhaps write pages as we modified\n> > them. 
Not sure how important that is.\n> \n> Unfortunately, this alone is a *fatal* objection. See nearby\n> discussions about WAL behavior: we must be able to control the relative\n> timing of WAL write/flush and data page writes.\n\nNot so fast!\n\nIt is possible to build a logging system so that you mostly don't care\nwhen the data blocks get written; a particular data block on disk is \nconsidered garbage until the next checkpoint, so that you might as well \nallow the blocks to be written any time, even before the log entry.\n\nLetting the OS manage sharing of disk block images via mmap should be \nan enormous win vs. a fixed shm and manual scheduling by PG. If that\nrequires changes in the logging protocol, it's worth it.\n\n(What supported platforms don't have mmap?)\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Wed, 7 Mar 2001 13:21:08 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > The only problem is that we would no longer have control over which\n> > > pages made it to disk. The OS would perhaps write pages as we modified\n> > > them. Not sure how important that is.\n> > \n> > Unfortunately, this alone is a *fatal* objection. See nearby\n> > discussions about WAL behavior: we must be able to control the relative\n> > timing of WAL write/flush and data page writes.\n> \n> Bummer.\n> \nBTW, what means \"bummer\" ?\nBut for many OSes you CAN control when to write data - you can mlock\nindividual pages.\n\n", "msg_date": "Thu, 8 Mar 2001 09:55:09 +0100 (CET)", "msg_from": "Martin Devera <devik@cdi.cz>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "On Thu, 8 Mar 2001, Martin Devera wrote:\n\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Unfortunately, this alone is a *fatal* objection. 
See nearby\n> > > discussions about WAL behavior: we must be able to control the relative\n> > > timing of WAL write/flush and data page writes.\n> > \n> > Bummer.\n> > \n> BTW, what means \"bummer\" ?\n\nIt's a Postgres-specific extension to the SQL standard. It means \"I am\ndisappointed\". As far as I can tell, you _may_ use it as a column or table\nname. :-)\n\nTim\n\n-- \n-----------------------------------------------\nTim Allen tim@proximity.com.au\nProximity Pty Ltd http://www.proximity.com.au/\n http://www4.tpg.com.au/users/rita_tim/\n\n", "msg_date": "Thu, 8 Mar 2001 20:44:13 +1100 (EST)", "msg_from": "Tim Allen <tim@proximity.com.au>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > > The only problem is that we would no longer have control over which\n> > > > pages made it to disk. The OS would perhaps write pages as we modified\n> > > > them. Not sure how important that is.\n> > > \n> > > Unfortunately, this alone is a *fatal* objection. See nearby\n> > > discussions about WAL behavior: we must be able to control the relative\n> > > timing of WAL write/flush and data page writes.\n> > \n> > Bummer.\n> > \n> BTW, what means \"bummer\" ?\n\nSorry, it means, \"Oh, I am disappointed.\"\n\n> But for many OSes you CAN control when to write data - you can mlock\n> individual pages.\n\nmlock() controls locking in physical memory. I don't see it controlling\nwrite().\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Mar 2001 11:45:16 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "> > BTW, what means \"bummer\" ?\n> \n> Sorry, it means, \"Oh, I am disappointed.\"\n\nthanks :)\n\n> > But for many OSes you CAN control when to write data - you can mlock\n> > individual pages.\n> \n> mlock() controls locking in physical memory. I don't see it controlling\n> write().\n\nWhen you mmap, you don't use write() !\nmlock actually locks page in memory and as long as the page is locked\nthe OS doesn't attempt to store the dirty page. It is intended also\nfor security apps to ensure that sensitive data are not written to unsecure\nstorage (hdd). It is definition of mlock so that you can be probably sure\nwith it.\n\nThere is a way to do it without mlock (fallback):\nYou definitely need some kind of page headers. The header should have info on\nwhether the page can be mmaped or is in the \"dirty pool\". Pages in the dirty pool\nare pages which are dirty but not written yet and are waiting for the\nappropriate log record to be flushed. When the log is flushed the data in the\ndirty pool can be copied to its regular mmap location and discarded.\n\nIf the dirty pool is too large, simply sync the log and the whole pool can be\ndiscarded.\n\nThe mmap version could be faster when loading data from hdd and will result in\nbetter utilization of memory (because you are directly working with data\nat OS' page-cache instead of having duplicates in pg's buffer cache).\nAlso page cache expiration is handled by the OS and it will allow pg to use as\nmuch memory as is available (no need to specify buffer page size).\n\ndevik\n\n", "msg_date": "Fri, 9 Mar 2001 15:49:46 +0100 (CET)", "msg_from": "Martin Devera <devik@cdi.cz>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "\n> When you mmap, you don't use write() ! 
mlock actually locks page in\n> memory and as long as the page is locked the OS doesn't attempt to\n> store the dirty page. It is intended also for security app to\n> ensure that sensitive data are not written to unsecure storage\n> (hdd). It is definition of mlock so that you can be probably sure\n> with it.\n\nNews to me ... can you please point to such a definition? I see no\nreference to such semantics in the mlock() manual page in UNIX98\n(Single Unix Standard, version 2).\n\nmlock() guarantees that the locked address space is in memory. This\ndoesn't imply that updates are not written to the backing file.\n\nI would expect an OS that doesn't have a unified buffer cache but\ntries to keep a consistent view for mmap() and read()/write() to\nupdate the file.\n\nRegards,\n\nGiles\n", "msg_date": "Sun, 11 Mar 2001 09:00:53 +1100", "msg_from": "Giles Lean <giles@nemeton.com.au>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles " }, { "msg_contents": "> > When you mmap, you don't use write() ! mlock actually locks page in\n> > memory and as long as the page is locked the OS doesn't attempt to\n> > store the dirty page. It is intended also for security app to\n> > ensure that sensitive data are not written to unsecure storage\n> > (hdd). It is definition of mlock so that you can be probably sure\n> > with it.\n> \n> News to me ... can you please point to such a definition? I see no\n> reference to such semantics in the mlock() manual page in UNIX98\n> (Single Unix Standard, version 2).\n\nsorry, maybe I'm biased toward Linux. The statement above is from Linux's\nman page and as I looked into mm code it seems to be right.\nI'm not sure about other unices.\n\n> mlock() guarantees that the locked address space is in memory. This\n> doesn't imply that updates are not written to the backing file.\n\nyes, probably it depends on the OS in question. 
In the Linux kernel the page is\nnot written when mlocked (but I'm not sure about msync here).\n\n> I would expect an OS that doesn't have a unified buffer cache but\n> tries to keep a consistent view for mmap() and read()/write() to\n> update the file.\n\nhmm but why mlock the page then ? Only to be sure the page is not swapped\nout ?\n\nregards, devik\n\n", "msg_date": "Mon, 12 Mar 2001 12:26:47 +0100 (CET)", "msg_from": "Martin Devera <devik@cdi.cz>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles " }, { "msg_contents": "Giles Lean <giles@nemeton.com.au> wrote:\n\n> > When you mmap, you don't use write() ! mlock actually locks page in\n> > memory and as long as the page is locked the OS doesn't attempt to\n> > store the dirty page. It is intended also for security app to\n> > ensure that sensitive data are not written to unsecure storage\n> > (hdd). It is definition of mlock so that you can be probably sure\n> > with it.\n>\n> News to me ... can you please point to such a definition? I see no\n> reference to such semantics in the mlock() manual page in UNIX98\n> (Single Unix Standard, version 2).\n>\n> mlock() guarantees that the locked address space is in memory. This\n> doesn't imply that updates are not written to the backing file.\n\nI've wondered about this myself. It _is_ true on Linux that mlock prevents\nwrites to the backing store, and this is used as a security feature for\ncryptography software. The code for gnupg assumes that if you have mlock()\non any operating system, it does mean this--which doesn't mean it's true,\nbut perhaps whoever wrote it does have good reason to think so.\n\nBut I don't know about other systems. 
Does anybody know what the POSIX.1b\nstandard says?\n\nIt was even suggested to me on the linux-fsdev mailing list that mlock() was\na good way to insure the write-ahead condition.\n\nKen Hirsch\n\n\n", "msg_date": "Tue, 13 Mar 2001 12:38:24 -0500", "msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "On Tue, 13 Mar 2001, Ken Hirsch wrote:\n\n> > mlock() guarantees that the locked address space is in memory. This\n> > doesn't imply that updates are not written to the backing file.\n>\n> I've wondered about this myself. It _is_ true on Linux that mlock\n> prevents writes to the backing store,\n\nI don't believe that this is true. The manpage offers no\nsuch promises, and the semantics are not useful.\n\n> and this is used as a security feature for cryptography software.\n\nmlock() is used to prevent pages being swapped out. Its\nuse for crypto software is essentially restricted to anon\nmemory (allocated via brk() or mmap() of /dev/zero).\n\nIf my understanding is accurate, before 2.4 Linux would\nnever swap out pages which had a backing store. It would\nsimply write them back or drop them (if clean). 
(This is\nwhy you need around twice as much swap with 2.4.)\n\n> The code for gnupg assumes that if you have mlock() on any operating\n> system, it does mean this--which doesn't mean it's true, but perhaps\n> whoever wrote it does have good reason to think so.\n\nstrace on gpg startup says:\n\nmmap(0, 16384, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40015000\ngetuid() = 500\nmlock(0x40015000) = -1 EPERM (Operation not permitted)\n\nso whatever the authors think, it does not require this semantic.\n\nMatthew.\n\n", "msg_date": "Tue, 13 Mar 2001 21:10:09 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "* Matthew Kirkwood <matthew@hairy.beasts.org> [010313 13:12] wrote:\n> On Tue, 13 Mar 2001, Ken Hirsch wrote:\n> \n> > > mlock() guarantees that the locked address space is in memory. This\n> > > doesn't imply that updates are not written to the backing file.\n> >\n> > I've wondered about this myself. It _is_ true on Linux that mlock\n> > prevents writes to the backing store,\n> \n> I don't believe that this is true. The manpage offers no\n> such promises, and the semantics are not useful.\n\nAfaik FreeBSD's Linux emulator:\n\nrevision 1.13\ndate: 2001/02/28 04:30:27; author: dillon; state: Exp; lines: +3 -1\nLinux does not filesystem-sync file-backed writable mmap pages on\na regular basis. Adjust our linux emulation to conform. This will\ncause more dirty pages to be left for the pagedaemon to deal with,\nbut our new low-memory handling code can deal with it. 
The linux\nway appears to be a trend, and we may very well make MAP_NOSYNC the\ndefault for FreeBSD as well (once we have reasonable sequential\nwrite-behind heuristics for random faults).\n(will be MFC'd prior to 4.3 freeze)\n\nSuggested by: Andrew Gallatin\n\nBasically any mmap'd data doesn't seem to get sync()'d out on\na regular basis.\n\n> > and this is used as a security feature for cryptography software.\n> \n> mlock() is used to prevent pages being swapped out. Its\n> use for crypto software is essentially restricted to anon\n> memory (allocated via brk() or mmap() of /dev/zero).\n\nWhat about userland device drivers that want to send parts\nof a disk backed file to a driver's dma routine?\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n\n", "msg_date": "Tue, 13 Mar 2001 13:21:54 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "On Tue, 13 Mar 2001, Alfred Perlstein wrote:\n\n[..]\n> Linux does not filesystem-sync file-backed writable mmap pages on a\n> regular basis.\n\nVery interesting. I'm not sure that is necessarily the case in\n2.4, though -- my understanding is that the new all-singing,\nall-dancing page cache makes very little distinction between\nmapped and unmapped dirty pages.\n\n> Basically any mmap'd data doesn't seem to get sync()'d out on\n> a regular basis.\n\nHmm.. I'd call that a bug, anyway.\n\n> > > and this is used as a security feature for cryptography software.\n> >\n> > mlock() is used to prevent pages being swapped out. Its\n> > use for crypto software is essentially restricted to anon\n> > memory (allocated via brk() or mmap() of /dev/zero).\n>\n> What about userland device drivers that want to send parts\n> of a disk backed file to a driver's dma routine?\n\nAnd realtime software. 
I'm not disputing that mlock is useful,\nbut what it can do for security software is not that huge. The\nLinux manpage says:\n\n Memory locking has two main applications: real-time\n algorithms and high-security data processing.\n\nMatthew.\n\n", "msg_date": "Tue, 13 Mar 2001 21:54:08 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "Michael Meskes wrote:\n> Is anyone on this list in Hannover for CeBit? Maybe we could arrange a\n> meeting.\n\n Looks pretty much like I'll still be in Hamburg by then. What\n are the days you planned?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Thu, 15 Mar 2001 09:44:53 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: CeBit" } ]
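Bruce's observation earlier in this thread, that two processes which mmap() the same file with MAP_SHARED see each other's changes, is easy to demonstrate. The sketch below is minimal, with most error handling omitted; the file path is whatever the caller supplies.

```c
#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns 0 if the parent observes the child's write through its own
 * MAP_SHARED mapping of the same file, 1 on mismatch, -1 on error. */
int
shared_map_demo(const char *path)
{
    int fd = open(path, O_RDWR | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, 4096) < 0)
        return -1;

    char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (page == MAP_FAILED)
        return -1;

    pid_t pid = fork();
    if (pid == 0) {                      /* child dirties the shared mapping */
        strcpy(page, "hello from child");
        _exit(0);
    }
    waitpid(pid, NULL, 0);

    /* the parent sees the child's write without any read() or write() */
    int ok = (strcmp(page, "hello from child") == 0);

    munmap(page, 4096);
    close(fd);
    unlink(path);
    return ok ? 0 : 1;
}
```

This is exactly the "much larger, smarter pool of shared memory" idea: the kernel's page cache is the shared buffer, and both processes address the same physical pages.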
[ { "msg_contents": "hey friends can u give me a master password so that i can create user in the postgresql.\nthanx\nShailendra Kumar\n\n\n\n\n\n\n\n\nhey friends can u give me a master password so that \ni can create user in the postgresql.\nthanx\nShailendra Kumar", "msg_date": "Wed, 7 Mar 2001 18:44:24 +0530", "msg_from": "\"shailendra\" <shailendra@iicenter.com>", "msg_from_op": true, "msg_subject": "user name n password" } ]
[ { "msg_contents": "\n> > > Hm, wasn't it handling non-atomic disk writes, Andreas?\n> > \n> > Yes, but for me, that was only one (for me rather minor) issue.\n> > I still think that the layout of PostgreSQL pages was designed to\n> > reduce the risc of a (heap) page beeing inconsistent because it is \n> > only partly written to an acceptable minimum. If your hw and os can \n> \n> I believe that I explained why it's not minor issue (and never was).\n> Eg - PageRepaireFragmentation \"compacts\" page exactly like other,\n\nBut this is currently only done during vacuum and as such a special case, no ? \n\n> overwriting, DBMSes do and partial write of modified page means\n> lost page content.\n\nYes, if contents move around. Not with the original Postgres 4 heap page design \nin combination with non overwrite smgr. Maybe this has changed because someone \noversaw the consequences ?\nThis certainly changes when converting to overwrite smgr, because\nthen you reuse a slot that might not be the correct size and contents need to be \nshifted around. For this case your \"physical log\" is also good, of course :-)\n\nAndreas\n", "msg_date": "Wed, 7 Mar 2001 14:27:39 +0100 ", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: WAL-based allocation of XIDs is insecure" } ]
[ { "msg_contents": "I have started coding a PostgreSQL performance monitor. It will be like\ntop, but allow you to click on a backend to see additional information.\n\nIt will be written in Tcl/Tk. I may ask to add something to 7.1 so when\na backend receives a special signal, it dumps a file in /tmp with some\nbackend status. It would be done similar to how we handle Cancel\nsignals.\n\nHow do people feel about adding a single handler to 7.1? Is it\nsomething I can slip into the current CVS, or will it have to exist as a\npatch to 7.1. Seems it would be pretty isolated unless someone sends\nthe signal, but it is clearly a feature addition.\n\nWe don't really have any way of doing process monitoring except ps, so I\nthink this is needed. I plan to have something done in the next week or\ntwo.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 10:56:33 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Performance monitor" }, { "msg_contents": "On Wed, 7 Mar 2001, Bruce Momjian wrote:\n\n> I have started coding a PostgreSQL performance monitor. It will be like\n> top, but allow you to click on a backend to see additional information.\n>\n> It will be written in Tcl/Tk. I may ask to add something to 7.1 so when\n> a backend receives a special signal, it dumps a file in /tmp with some\n> backend status. It would be done similar to how we handle Cancel\n> signals.\n>\n> How do people feel about adding a single handler to 7.1? Is it\n> something I can slip into the current CVS, or will it have to exist as a\n> patch to 7.1. Seems it would be pretty isolated unless someone sends\n> the signal, but it is clearly a feature addition.\n\nTotally dead set against it ...\n\n... 
the only hold up on RC1 right now was awaiting Vadim getting back so\nthat he and Tom could work out the WAL related issues ... adding a new\nsignal handler *definitely* counts as \"adding a new feature\" ...\n\n\n\n", "msg_date": "Wed, 7 Mar 2001 14:44:24 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "> > How do people feel about adding a single handler to 7.1? Is it\n> > something I can slip into the current CVS, or will it have to exist as a\n> > patch to 7.1. Seems it would be pretty isolated unless someone sends\n> > the signal, but it is clearly a feature addition.\n> \n> Totally dead set against it ...\n> \n> ... the only hold up on RC1 right now was awaiting Vadim getting back so\n> that he and Tom could work out the WAL related issues ... adding a new\n> signal handler *definitely* counts as \"adding a new feature\" ...\n\nOK, I will distribute it as a patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 16:41:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n>> How do people feel about adding a single handler to 7.1?\n\n> Totally dead set against it ...\n\nDitto. Particularly a signal handler that performs I/O. That's going\nto create all sorts of re-entrancy problems.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 16:42:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> How do people feel about adding a single handler to 7.1? 
Is it\n> something I can slip into the current CVS, or will it have to exist as a\n> patch to 7.1. Seems it would be pretty isolated unless someone sends\n> the signal, but it is clearly a feature addition.\n\n> OK, I will distribute it as a patch.\n\nPatch or otherwise, this approach seems totally unworkable. A signal\nhandler cannot do I/O safely, it cannot look at shared memory safely,\nit cannot even look at the backend's own internal state safely. How's\nit going to do any useful status reporting?\n\nFiring up a separate backend process that looks at shared memory seems\nlike a more useful design in the long run. That will mean exporting\nmore per-backend status into shared memory, however, and that means that\nthis is not a trivial change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 17:30:08 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > How do people feel about adding a single handler to 7.1? Is it\n> > something I can slip into the current CVS, or will it have to exist as a\n> > patch to 7.1. Seems it would be pretty isolated unless someone sends\n> > the signal, but it is clearly a feature addition.\n> \n> > OK, I will distribute it as a patch.\n> \n> Patch or otherwise, this approach seems totally unworkable. A signal\n> handler cannot do I/O safely, it cannot look at shared memory safely,\n> it cannot even look at the backend's own internal state safely. How's\n> it going to do any useful status reporting?\n\nWhy can't we do what we do with Cancel, where we set a flag and check it\nat safe places?\n\n> Firing up a separate backend process that looks at shared memory seems\n> like a more useful design in the long run. 
That will mean exporting\n> more per-backend status into shared memory, however, and that means that\n> this is not a trivial change.\n\nRight, that is a lot of work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 17:42:05 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Patch or otherwise, this approach seems totally unworkable. A signal\n>> handler cannot do I/O safely, it cannot look at shared memory safely,\n>> it cannot even look at the backend's own internal state safely. How's\n>> it going to do any useful status reporting?\n\n> Why can't we do what we do with Cancel, where we set a flag and check it\n> at safe places?\n\nThere's a lot of assumptions hidden in that phrase \"safe places\".\nI don't think that everyplace we check for Cancel is going to be safe,\nfor example. Cancel is able to operate in places where the internal\nstate isn't completely self-consistent, because it knows we are just\ngoing to clean up and throw away intermediate status anyhow if the\ncancel occurs.\n\nAlso, if you are expecting the answers to come back in a short amount of\ntime, then you do have to be able to do the work in the signal handler\nin cases where the backend is blocked on a lock or something like that.\nSo that introduces a set of issues about how you know when it's\nappropriate to do that and how to be sure that the signal handler\ndoesn't screw things up when it tries to do the report in-line.\n\nAll in all, I do not see this as an easy task that you can whip out and\nthen release as a 7.1 patch without extensive testing. 
And given that,\nI'd rather see it done with what I consider the right long-term approach,\nrather than a dead-end hack. I think doing it in a signal handler is\nultimately going to be a dead-end hack.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 18:02:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor " }, { "msg_contents": "> All in all, I do not see this as an easy task that you can whip out and\n> then release as a 7.1 patch without extensive testing. And given that,\n> I'd rather see it done with what I consider the right long-term approach,\n> rather than a dead-end hack. I think doing it in a signal handler is\n> ultimately going to be a dead-end hack.\n\nWell, the signal stuff will get me going at least.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 18:05:47 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "At 18:05 7/03/01 -0500, Bruce Momjian wrote:\n>> All in all, I do not see this as an easy task that you can whip out and\n>> then release as a 7.1 patch without extensive testing. And given that,\n>> I'd rather see it done with what I consider the right long-term approach,\n>> rather than a dead-end hack. I think doing it in a signal handler is\n>> ultimately going to be a dead-end hack.\n>\n>Well, the signal stuff will get me going at least.\n\nDidn't someone say this can't be done safely - or am I missing something?\n\nISTM that doing the work to put things in shared memory will be much more\nprofitable in the long run. 
You have previously advocated self-tuning\nalgorithms for performance - a prerequisite for these will be performance\ndata in shared memory.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 08 Mar 2001 10:51:41 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Hi all,\n\nWouldn't another approach be to write a C function that does the\nnecessary work, then just call it like any other C function?\n\ni.e. Connect to the database and issue a \"select\nperf_stats('/tmp/stats-2001-03-08-01.txt')\" ?\n\nOr similar?\n\nSure, that means another database connection which would change the\nresource count but it sounds like a more consistent approach.\n\nRegards and best wishes,\n\nJustin Clift\n\nPhilip Warner wrote:\n> \n> At 18:05 7/03/01 -0500, Bruce Momjian wrote:\n> >> All in all, I do not see this as an easy task that you can whip out and\n> >> then release as a 7.1 patch without extensive testing. And given that,\n> >> I'd rather see it done with what I consider the right long-term approach,\n> >> rather than a dead-end hack. I think doing it in a signal handler is\n> >> ultimately going to be a dead-end hack.\n> >\n> >Well, the signal stuff will get me going at least.\n> \n> Didn't someone say this can't be done safely - or am I missing something?\n> \n> ISTM that doing the work to put things in shared memory will be much more\n> profitable in the long run. 
You have previously advocated self-tuning\n> algorithms for performance - a prerequisite for these will be performance\n> data in shared memory.\n> \n> ----------------------------------------------------------------\n> Philip Warner | __---_____\n> Albatross Consulting Pty. Ltd. |----/ - \\\n> (A.B.N. 75 008 659 498) | /(@) ______---_\n> Tel: (+61) 0500 83 82 81 | _________ \\\n> Fax: (+61) 0500 83 82 82 | ___________ |\n> Http://www.rhyme.com.au | / \\|\n> | --________--\n> PGP key available upon request, | /\n> and from pgp5.ai.mit.edu:11371 |/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Thu, 08 Mar 2001 11:33:45 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "At 11:33 8/03/01 +1100, Justin Clift wrote:\n>Hi all,\n>\n>Wouldn't another approach be to write a C function that does the\n>necessary work, then just call it like any other C function?\n>\n>i.e. Connect to the database and issue a \"select\n>perf_stats('/tmp/stats-2001-03-08-01.txt')\" ?\n>\n\nI think Bruce wants per-backend data, and this approach would seem to only\nget the data for the current backend. \n\nAlso, I really don't like the proposal to write files to /tmp. If we want a\nperf tool, then we need to have something like 'top', which will\ncontinuously update. With 40 backends, the idea of writing 40 files to /tmp\nevery second seems a little excessive to me.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 08 Mar 2001 11:42:28 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "> I think Bruce wants per-backend data, and this approach would seem to only\n> get the data for the current backend. \n> \n> Also, I really don't like the proposal to write files to /tmp. If we want a\n> perf tool, then we need to have something like 'top', which will\n> continuously update. With 40 backends, the idea of writing 40 file to /tmp\n> every second seems a little excessive to me.\n\nMy idea was to use 'ps' to gather most of the information, and just use\nthe internal stats when someone clicked on a backend and wanted more\ninformation.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 22:06:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "> At 18:05 7/03/01 -0500, Bruce Momjian wrote:\n> >> All in all, I do not see this as an easy task that you can whip out and\n> >> then release as a 7.1 patch without extensive testing. And given that,\n> >> I'd rather see it done with what I consider the right long-term approach,\n> >> rather than a dead-end hack. 
I think doing it in a signal handler is\n> >> ultimately going to be a dead-end hack.\n> >\n> >Well, the signal stuff will get me going at least.\n> \n> Didn't someone say this can't be done safely - or am I missing something?\n\nOK, I will write just the all-process display part, that doesn't need\nany per-backend info because it gets it all from 'ps'. Then maybe\nsomeone will come up with a nifty idea, or I will play with my local\ncopy to see how it can be done.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 22:12:16 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Mike Mascari's idea (er... his assembling of the other ideas) still\nsounds like the Best Solution though.\n\n:-)\n\n+ Justin\n\n+++\n\nI like the idea of updating shared memory with the performance\nstatistics, \ncurrent query execution information, etc., providing a function to fetch \nthose statistics, and perhaps providing a system view (i.e.\npg_performance) \nbased upon such functions which can be queried by the administrator.\n\nFWIW,\n\nMike Mascari\nmascarm@mascari.com\n\n+++\n\nBruce Momjian wrote:\n> \n> > I think Bruce wants per-backend data, and this approach would seem to only\n> > get the data for the current backend.\n> >\n> > Also, I really don't like the proposal to write files to /tmp. If we want a\n> > perf tool, then we need to have something like 'top', which will\n> > continuously update. 
With 40 backends, the idea of writing 40 file to /tmp\n> > every second seems a little excessive to me.\n> \n> My idea was to use 'ps' to gather most of the information, and just use\n> the internal stats when someone clicked on a backend and wanted more\n> information.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 08 Mar 2001 14:14:40 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Yes, seems that is best. I will probably hack something up here so I\ncan do some testing of the app itself.\n\n> Mike Mascari's idea (er... his assembling of the other ideas) still\n> sounds like the Best Solution though.\n> \n> :-)\n> \n> + Justin\n> \n> +++\n> \n> I like the idea of updating shared memory with the performance\n> statistics, \n> current query execution information, etc., providing a function to fetch \n> those statistics, and perhaps providing a system view (i.e.\n> pg_performance) \n> based upon such functions which can be queried by the administrator.\n> \n> FWIW,\n> \n> Mike Mascari\n> mascarm@mascari.com\n> \n> +++\n> \n> Bruce Momjian wrote:\n> > \n> > > I think Bruce wants per-backend data, and this approach would seem to only\n> > > get the data for the current backend.\n> > >\n> > > Also, I really don't like the proposal to write files to /tmp. If we want a\n> > > perf tool, then we need to have something like 'top', which will\n> > > continuously update. 
With 40 backends, the idea of writing 40 file to /tmp\n> > > every second seems a little excessive to me.\n> > \n> > My idea was to use 'ps' to gather most of the information, and just use\n> > the internal stats when someone clicked on a backend and wanted more\n> > information.\n> > \n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 22:17:51 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "\nOn 2001.03.07 22:06 Bruce Momjian wrote:\n> > I think Bruce wants per-backend data, and this approach would seem to\n> only\n> > get the data for the current backend. \n> > \n> > Also, I really don't like the proposal to write files to /tmp. If we\n> want a\n> > perf tool, then we need to have something like 'top', which will\n> > continuously update. With 40 backends, the idea of writing 40 file to\n> /tmp\n> > every second seems a little excessive to me.\n> \n> My idea was to use 'ps' to gather most of the information, and just use\n> the internal stats when someone clicked on a backend and wanted more\n> information.\n\nMy own experience is that parsing ps can be difficult if you want to be\nportable and want more than basic information. Quite clearly, I could just\nbe dense, but if it helps, you can look at the configure.in in the CVS tree\nat http://sourceforge.net/projects/netsaintplug (GPL, sorry. 
But if you\nfind anything worthwhile, and borrowing concepts results in similar code, I\nwon't complain).\n\nI wouldn't be at all surprised if you found a better approach - my\nconfiguration above, to my mind at least, is not pretty. I hope you do find\na better approach - I know I'll be peeking at your code to see. \n\n-- \nKarl\n\n", "msg_date": "Wed, 7 Mar 2001 22:32:48 -0500", "msg_from": "Karl DeBisschop <karl@debisschop.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "> I wouldn't be at all surprised if you found a better approach - my\n> configuration above, to my mind at least, is not pretty. I hope you do find\n> a better approach - I know I'll be peeking at your code to see. \n\nYes, I have an idea and hope it works.\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 22:34:39 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "On Wed, Mar 07, 2001 at 10:06:38PM -0500, Bruce Momjian wrote:\n> > I think Bruce wants per-backend data, and this approach would seem to only\n> > get the data for the current backend. \n> > \n> > Also, I really don't like the proposal to write files to /tmp. If we want a\n> > perf tool, then we need to have something like 'top', which will\n> > continuously update. With 40 backends, the idea of writing 40 file to /tmp\n> > every second seems a little excessive to me.\n> \n> My idea was to use 'ps' to gather most of the information, and just use\n> the internal stats when someone clicked on a backend and wanted more\n> information.\n\n Are you sure about 'ps' stuff portability? 
I don't know what data you\nwant to read from 'ps', but /proc utils are very OS specific and, for example,\non Linux libproc has been overhauled several times within a few years.\n I spent several years with /proc stuff (processes manager: \nhttp://home.zf.jcu.cz/~zakkr/kim).\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 8 Mar 2001 11:26:02 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "On Wednesday 07 March 2001 21:56, Bruce Momjian wrote:\n> I have started coding a PostgreSQL performance monitor. It will be like\n> top, but allow you to click on a backend to see additional information.\n>\n> It will be written in Tcl/Tk. I may ask to add something to 7.1 so when\n> a backend receives a special signal, it dumps a file in /tmp with some\n> backend status. It would be done similar to how we handle Cancel\n> signals.\n\nSmall question... Will it work in console? Or it will be X only?\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Thu, 8 Mar 2001 16:52:44 +0600", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "> On Wed, Mar 07, 2001 at 10:06:38PM -0500, Bruce Momjian wrote:\n> > > I think Bruce wants per-backend data, and this approach would seem to only\n> > > get the data for the current backend. \n> > > \n> > > Also, I really don't like the proposal to write files to /tmp. If we want a\n> > > perf tool, then we need to have something like 'top', which will\n> > > continuously update. 
With 40 backends, the idea of writing 40 file to /tmp\n> > > every second seems a little excessive to me.\n> > \n> > My idea was to use 'ps' to gather most of the information, and just use\n> > the internal stats when someone clicked on a backend and wanted more\n> > information.\n> \n> Are you sure about 'ps' stuff portability? I don't known how data you\n> want read from 'ps', but /proc utils are very OS specific and for example\n> on Linux within a few years was libproc several time overhauled.\n> I spent several years with /proc stuff (processes manager: \n> http://home.zf.jcu.cz/~zakkr/kim).\n\nI am not going to do a huge amount with the actual ps columns, except\nallow you to sort on them. I will do most on the ps status display we\nuse as the command line in ps.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Mar 2001 12:18:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Are you sure about 'ps' stuff portability?\n\n> I am not going to do a huge amount with the actual ps columns, except\n> allow you to sort on them. I will do most on the ps status display we\n> use as the command line in ps.\n\n... which in itself is not necessarily portable. How many of our\nsupported platforms actually have working ps-status code? 
(This is\nan honest question: I don't know.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 12:25:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Are you sure about 'ps' stuff portability?\n> \n> > I am not going to do a huge amount with the actual ps columns, except\n> > allow you to sort on them. I will do most on the ps status display we\n> > use as the command line in ps.\n> \n> ... which in itself is not necessarily portable. How many of our\n> supported platforms actually have working ps-status code? (This is\n> an honest question: I don't know.)\n\nNo idea. My first version will probably only work on a few platforms.\n\nThe problem I see with the shared memory idea is that some of the\ninformation needed may be quite large. For example, query strings can\nbe very long. Do we just allocate 512 bytes and clip off the rest? And\nas I add more info, I need more shared memory per backend. I just liked\nthe file system dump solution because I could modify it pretty easily,\nand because the info only appears when you click on the process, it\ndoesn't happen often.\n\nOf course, if we start getting the full display partly from each\nbackend, we will have to use shared memory.\n\nI could have started on a user admin tool, or GUC config tool, but a\nperformance monitor is the one item we really don't have yet. Doing\n'ps' over and over is sort of lame.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Mar 2001 12:35:54 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Bruce Momjian writes:\n...\n > The problem I see with the shared memory idea is that some of the\n > information needed may be quite large. For example, query strings can\n > be very long. Do we just allocate 512 bytes and clip off the rest. And\n > as I add more info, I need more shared memory per backend. I just liked\n > the file system dump solution because I could modify it pretty easily,\n > and because the info only appears when you click on the process, it\n > doesn't happen often.\n > \nHave you thought about using a named pipe? They've been around for quite a\nwhile, and should (he said with a :-)) be available on most-if-not-all\ncurrently supported systems.\n-- \nRichard Kuhns\t\t\trjk@grauel.com\nPO Box 6249\t\t\tTel: (765)477-6000 \\\n100 Sawmill Road\t\t\t\t x319\nLafayette, IN 47903\t\t (800)489-4891 /\n", "msg_date": "Thu, 8 Mar 2001 13:44:05 -0500", "msg_from": "Richard J Kuhns <rjk@grauel.com>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "> Bruce Momjian writes:\n> ...\n> > The problem I see with the shared memory idea is that some of the\n> > information needed may be quite large. For example, query strings can\n> > be very long. Do we just allocate 512 bytes and clip off the rest. And\n> > as I add more info, I need more shared memory per backend. I just liked\n> > the file system dump solution because I could modify it pretty easily,\n> > and because the info only appears when you click on the process, it\n> > doesn't happen often.\n> > \n> Have you thought about using a named pipe? 
They've been around for quite a\n> while, and should (he said with a :-)) be available on most-if-not-all\n> currently supported systems.\n\nNifty idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Mar 2001 14:46:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Tom Lane writes:\n\n> How many of our supported platforms actually have working ps-status\n> code? (This is an honest question: I don't know.)\n\nBeOS, DG/UX, and Cygwin don't have support code, the rest *should* work.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Thu, 8 Mar 2001 22:54:31 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor " }, { "msg_contents": "> Tom Lane writes:\n> \n> > How many of our supported platforms actually have working ps-status\n> > code? (This is an honest question: I don't know.)\n> \n> BeOS, DG/UX, and Cygwin don't have support code, the rest *should* work.\n\nSeems we will find out when people complain my performance monitor\ndoesn't show the proper columns. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Mar 2001 17:06:13 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "\nI don't believe that UnixWare will take the PS change without having \nROOT.\n\nLER\n\n>>>>>>>>>>>>>>>>> Original Message <<<<<<<<<<<<<<<<<<\n\nOn 3/8/01, 3:54:31 PM, Peter Eisentraut <peter_e@gmx.net> wrote regarding \nRe: [HACKERS] Performance monitor :\n\n\n> Tom Lane writes:\n\n> > How many of our supported platforms actually have working ps-status\n> > code? (This is an honest question: I don't know.)\n\n> BeOS, DG/UX, and Cygwin don't have support code, the rest *should* work.\n\n> --\n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Thu, 08 Mar 2001 22:25:06 GMT", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Performance monitor " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> \n> Seems we will find out when people complain my performance monitor\n> doesn't show the proper columns. :-)\n> \n\nSo what are the proper columns? Or let me rephrase: what will your DBA be\nable to monitor using the performance monitor? \n\nQuery stats, IO stats, cache hit/miss ratios? \n\nAs someone often recommending the database, I would like to have more\nprecise info about where the problem is when pgsql hits the roof - but this\nmight go more into auditing land than straight performance\nterritory. Anyway it would be very nice to have tools that could help to\nidentify the database bottlenecks of your apps. 
I've already got some tools\non the Java side, but getting recorded data from the database side could\nonly help me analyze the system and blast bottlenecks.\n\nregards, \n\n\tGunnar\n", "msg_date": "09 Mar 2001 05:05:37 +0100", "msg_from": "Gunnar Rønning <gunnar@candleweb.no>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "> So what is the proper columns ? Or let me rephrase, what will your DBA be\n> able to monitor using the performance monitor ? \n> \n> Query stats, IO stats, cache hit/miss ratios ? \n\nFor starters, it will be more of a wrapper around ps. In the future, I\nhope to add more features.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 8 Mar 2001 23:13:29 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "In article <200103081735.MAA06567@candle.pha.pa.us>, \"Bruce Momjian\"\n<pgman@candle.pha.pa.us> wrote:\n> The problem I see with the shared memory idea is that some of the\n> information needed may be quite large. For example, query strings can\n> be very long. Do we just allocate 512 bytes and clip off the rest. And\n> as I add more info, I need more shared memory per backend. I just liked\n> the file system dump solution because I could modify it pretty easily,\n> and because the info only appears when you click on the process, it\n> doesn't happen often.\n> \n> Of course, if we start getting the full display partly from each\n> backend, we will have to use shared memory.\n\nLong-term, perhaps a monitor server (like Sybase ASE uses) might \nbe a reasonable approach. 
That way, only one process (and a well-\nregulated one at that) would be accessing the shared memory, which\nshould make it safer and have less of an impact performance-wise\nif semaphores are needed to regulate access to the various regions\nof shared memory.\n\nThen, 1-N clients may access the monitor server to get performance\ndata w/o impacting the backends.\n\nGordon.\n-- \nIt doesn't get any easier, you just go faster.\n -- Greg LeMond\n", "msg_date": "Fri, 09 Mar 2001 20:29:29 -0500", "msg_from": "\"Gordon A. Runkle\" <gar@integrated-dynamics.com>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "[ Charset KOI8-R unsupported, converting... ]\n> On Wednesday 07 March 2001 21:56, Bruce Momjian wrote:\n> > I have started coding a PostgreSQL performance monitor. It will be like\n> > top, but allow you to click on a backend to see additional information.\n> >\n> > It will be written in Tcl/Tk. I may ask to add something to 7.1 so when\n> > a backend receives a special signal, it dumps a file in /tmp with some\n> > backend status. It would be done similar to how we handle Cancel\n> > signals.\n> \n> Small question... Will it work in console? Or it will be X only?\n\nIt will be tck/tk, so I guess X only.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 12 Mar 2001 17:08:24 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "> > Small question... Will it work in console? Or it will be X only?\n>\n> It will be tck/tk, so I guess X only.\n\nThat's bad. Cause it will be unuseful for people having databases far away...\nLike me... :-((( Another point is that it is a little bit strange to have \nX-Window on machine with database server... 
At least if it is not for play, \nbut production one...\n\nAlso there should be a possibility of remote monitoring of the database. But \nthat's just dream... :-)))\n\n-- \nSincerely Yours,\nDenis Perchine\n\n----------------------------------\nE-Mail: dyp@perchine.com\nHomePage: http://www.perchine.com/dyp/\nFidoNet: 2:5000/120.5\n----------------------------------\n", "msg_date": "Tue, 13 Mar 2001 16:06:48 +0600", "msg_from": "Denis Perchine <dyp@perchine.com>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Denis Perchine <dyp@perchine.com> writes:\n\n> > > Small question... Will it work in console? Or it will be X only?\n> >\n> > It will be tck/tk, so I guess X only.\n> \n> That's bad. Cause it will be unuseful for people having databases far away...\n> Like me... :-((( Another point is that it is a little bit strange to have \n> X-Window on machine with database server... At least if it is not for play, \n> but production one...\n\nWell, I use X all the time over far distances - it depends on your\nconnection... And you are not running an X server on the database server, only\nan X client.\n\nBut I agree that it would be nice to have monitoring architecture that\nallowed a multitude of clients...\n\nregards, \n\n\tGunnar\n\n\n", "msg_date": "13 Mar 2001 11:56:05 +0100", "msg_from": "Gunnar R|nning <gunnar@candleweb.no>", "msg_from_op": false, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "[ Charset KOI8-R unsupported, converting... ]\n> > > Small question... Will it work in console? Or it will be X only?\n> >\n> > It will be tck/tk, so I guess X only.\n> \n> That's bad. Cause it will be unuseful for people having databases far away...\n> Like me... :-((( Another point is that it is a little bit strange to have \n> X-Window on machine with database server... At least if it is not for play, \n> but production one...\n> \n> Also there should be a possibility of remote monitoring of the database. 
But \n> that's just dream... :-)))\n\nWhat about remote-X using the DISPLAY variable?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 09:57:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "Denis Perchine <dyp@perchine.com> writes:\n>>> Small question... Will it work in console? Or it will be X only?\n>> \n>> It will be tck/tk, so I guess X only.\n\n> That's bad.\n\ntcl/tk is cross-platform; there's no reason that a tcl-coded\nperformance monitor client couldn't run on Windows or Mac.\n\nThe real problem with the ps-based implementation that Bruce is\nproposing is that it cannot work remotely at all, because there's\nno way to get the ps data from another machine (unless you're\noldfashioned/foolish enough to be running a finger server that\nallows remote ps). This I think is the key reason why we'll\nultimately want to forget about ps and go to a shared-memory-based\narrangement for performance info. That could support a client/server\narchitecture where the server is a backend process (or perhaps a\nnot-quite-backend process, but anyway attached to shared memory)\nand the client is communicating with it over TCP.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 13 Mar 2001 10:19:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor " }, { "msg_contents": "> Denis Perchine <dyp@perchine.com> writes:\n> \n> > > > Small question... Will it work in console? Or it will be X only?\n> > >\n> > > It will be tck/tk, so I guess X only.\n> > \n> > That's bad. Cause it will be unuseful for people having databases far away...\n> > Like me... 
:-((( Another point is that it is a little bit strange to have \n> > X-Window on machine with database server... At least if it is not for play, \n> > but production one...\n> \n> Well, I use X all the time over far distances - it depends on your\n> connection... And you are not running an X server on the database server, only\n> an X client.\n\nYes, works fine.\n\n> But I agree that it would be nice to have monitoring architecture that\n> allowed a multitude of clients...\n\nRight now, pgmonitor is just 'ps' with some gdb/kill actions. If I add\nanything to the backend, you can be sure I will make it so any app can\naccess it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 10:49:29 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: Performance monitor'" }, { "msg_contents": "> Denis Perchine <dyp@perchine.com> writes:\n> >>> Small question... Will it work in console? Or it will be X only?\n> >> \n> >> It will be tck/tk, so I guess X only.\n> \n> > That's bad.\n> \n> tcl/tk is cross-platform; there's no reason that a tcl-coded\n> performance monitor client couldn't run on Windows or Mac.\n> \n> The real problem with the ps-based implementation that Bruce is\n> proposing is that it cannot work remotely at all, because there's\n> no way to get the ps data from another machine (unless you're\n> oldfashioned/foolish enough to be running a finger server that\n> allows remote ps). This I think is the key reason why we'll\n> ultimately want to forget about ps and go to a shared-memory-based\n> arrangement for performance info. 
That could support a client/server\n> architecture where the server is a backend process (or perhaps a\n> not-quite-backend process, but anyway attached to shared memory)\n> and the client is communicating with it over TCP.\n\nHard to say. 'ps' gives some great information about cpu/memory usage\nthat may be hard/costly to put in shared memory. One idea should be to\nissue periodic 'ps/kill' commands though a telnet/ssh pipe to the\nremote machine, or just to the remote X display option.\n\nOf course, getrusage() gives us much of that information.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 10:56:58 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" }, { "msg_contents": "> Hard to say. 'ps' gives some great information about cpu/memory usage\n> that may be hard/costly to put in shared memory. One idea should be to\n> issue periodic 'ps/kill' commands though a telnet/ssh pipe to the\n> remote machine, or just to the remote X display option.\n> \n> Of course, getrusage() gives us much of that information.\n\nForget getrusage(). Only works on current process, so each backend\nwould have to update its own statistics. Sounds expensive.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 11:37:40 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor" } ]
[ { "msg_contents": "> > > What I've thought is to implement a new command to\n> > > change archdir under WAL's control.\n> > > If it's different from Vadim's plan I don't object.\n> > \n> > Actually, I have no concrete plans for archdir yet - this\n> > one is for WAL based BAR we should discuss in future.\n> > So, I don't see why to remove archdir from pg_control now.\n> \n> Maybe we can get BAR in 7.1.X so maybe we should have the \n> option to add it.\n\nSo, it's better to leave archdir in pg_control now - if we'll\ndecide that GUC is better place then we'll just ignore archdir\nin pg_control. But if it will be better to have it in pg_control\nthen we'll not be able to add it there.\n\nVadim\n", "msg_date": "Wed, 7 Mar 2001 11:43:19 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: AW: Proposed WAL changes" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> So, it's better to leave archdir in pg_control now - if we'll\n> decide that GUC is better place then we'll just ignore archdir\n> in pg_control. But if it will be better to have it in pg_control\n> then we'll not be able to add it there.\n\nBut what possible reason is there for keeping it in pg_control?\nAFAICS that would just mean that we'd need special code for setting it,\ninstead of making use of all of Peter's hard work on GUC.\n\nWe should be moving away from special-case parameter mechanisms, not\ninventing new ones without darn good reasons for them.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 17:15:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Proposed WAL changes " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> > So, it's better to leave archdir in pg_control now - if we'll\n> > decide that GUC is better place then we'll just ignore archdir\n> > in pg_control. But if it will be better to have it in pg_control\n> > then we'll not be able to add it there.\n> \n> But what possible reason is there for keeping it in pg_control?\n> AFAICS that would just mean that we'd need special code for setting it,\n> instead of making use of all of Peter's hard work on GUC.\n>\n\nI don't think it's appropriate to edit archdir by hand.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Thu, 08 Mar 2001 08:14:14 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: AW: Proposed WAL changes" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> But what possible reason is there for keeping it in pg_control?\n>> AFAICS that would just mean that we'd need special code for setting it,\n>> instead of making use of all of Peter's hard work on GUC.\n\n> I don't think it's appropriate to edit archdir by hand.\n\nWhy not? How is this a critical parameter (more critical than, say,\nfsync enable)? I see no reason to forbid the administrator from\nchanging it ... indeed, I think an admin who found out he couldn't\nchange it on-the-fly would be justifiably unhappy. (\"What do you\nMEAN I can't change archdir? I'm about to run out of space in\n/usr/logs/foobar!!!\")\n\nI agree that we don't want random users changing the value via SET and\nthen issuing a CHECKPOINT (which would use whatever they'd SET :-().\nBut that's easily managed by setting an appropriate protection level\non the GUC variable. Looks like SIGHUP level would be appropriate.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 18:59:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Proposed WAL changes " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> >> But what possible reason is there for keeping it in pg_control?\n> >> AFAICS that would just mean that we'd need special code for setting it,\n> >> instead of making use of all of Peter's hard work on GUC.\n> \n> > I don't think it's appropriate to edit archdir by hand.\n> \n> Why not? How is this a critical parameter (more critical than, say,\n> fsync enable)? \n\nI don't think 'fsync enable' is a critical parameter.\nIt's a dangerous parameter and it's not appropriate\nas a GUC paramter either. Does it have any meaning\nother than testing ? IMHO recovery system doesn't\nallow any optimism and archdir is also a part of\nrecovery system though I'm not sure how critical\nthe parameter would be.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Thu, 08 Mar 2001 09:46:39 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: AW: Proposed WAL changes" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> Why not? How is this a critical parameter (more critical than, say,\n>> fsync enable)? \n\n> I don't think 'fsync enable' is a critical parameter.\n> It's a dangerous parameter and it's not appropriate\n> as a GUC paramter either.\n\nThat's also PGC_SIGHUP (recently fixed by me, it was set at a lower level\nbefore).\n\n> Does it have any meaning other than testing ? IMHO recovery system\n> doesn't allow any optimism and archdir is also a part of recovery\n> system though I'm not sure how critical the parameter would be.\n\nI still don't see your point. The admin *can* change these parameters\nif he wishes. Why should we make it more difficult to do so than is\nreasonably necessary? There is certainly no technical reason why we\nshould (say) force an initdb to change archdir.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 19:56:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Proposed WAL changes " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> Why not? How is this a critical parameter (more critical than, say,\n> >> fsync enable)?\n> \n> > Does it have any meaning other than testing ? IMHO recovery system\n> > doesn't allow any optimism and archdir is also a part of recovery\n> > system though I'm not sure how critical the parameter would be.\n> \n> I still don't see your point. The admin *can* change these parameters\n> if he wishes. Why should we make it more difficult to do so than is\n> reasonably necessary? There is certainly no technical reason why we\n> should (say) force an initdb to change archdir.\n> \n\nI've never objected to change archdir on the fly.\nThough GUC is profitable for general purpose it\ncould(must)n't be almighty. As for recovery \nwe must rely on DBA as less as possible. \n\nHiroshi Inoue\n", "msg_date": "Thu, 08 Mar 2001 10:47:58 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: AW: Proposed WAL changes" } ]
[ { "msg_contents": "> > > I have just sent to the pgsql-patches list a rather large set of\n> >\n> > Please send it to me directly - pgsql-patches' archieve is \n> dated by Feb -:(\n> \n> Huh?\n> \n> http://www.postgresql.org/mhonarc/pgsql-patches/2001-03/index.html\n\nBut there was nothing there yesterday -:)\n(Seems archive wasn't updated that time).\n\nVadim\n", "msg_date": "Wed, 7 Mar 2001 12:01:40 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Proposed WAL changes" } ]
[ { "msg_contents": "> > I feel that the fact that\n> > \n> > WAL can't help in the event of disk errors\n> > \n> > is often overlooked.\n> \n> This is true in general. But, nevertheless, WAL can be written to\n> protect against predictable disk errors, when possible. Failing to\n> write a couple of disk blocks when the system crashes is a reasonably\n> predictable disk error. WAL should ideally be written to work\n> correctly in that situation.\n\nBut what can be done if fsync returns before pages flushed?\n\nVadim\n", "msg_date": "Wed, 7 Mar 2001 12:03:41 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Proposed WAL changes" }, { "msg_contents": "On Wed, 7 Mar 2001, Mikheev, Vadim wrote:\n\n> But what can be done if fsync returns before pages flushed?\nNo, it won't. When fsync returns, data is promised by the OS to be on\ndisk.\n\n-alex\n\n", "msg_date": "Wed, 7 Mar 2001 16:06:50 -0500 (EST)", "msg_from": "Alex Pilosov <alex@pilosoft.com>", "msg_from_op": false, "msg_subject": "RE: Proposed WAL changes" }, { "msg_contents": "On Wed, Mar 07, 2001 at 12:03:41PM -0800, Mikheev, Vadim wrote:\n> Ian wrote:\n> > > I feel that the fact that\n> > > \n> > > WAL can't help in the event of disk errors\n> > > \n> > > is often overlooked.\n> > \n> > This is true in general. But, nevertheless, WAL can be written to\n> > protect against predictable disk errors, when possible. Failing to\n> > write a couple of disk blocks when the system crashes \n\nor, more likely, when power drops; a system crash shouldn't keep the\ndisk from draining its buffers ...\n\n> > is a reasonably predictable disk error. WAL should ideally be \n> > written to work correctly in that situation.\n> \n> But what can be done if fsync returns before pages flushed?\n\nJust what Tom has done: preserve a little more history. If it's not\ntoo expensive, then it doesn't hurt you when running on sound hardware,\nbut it offers a good chance of preventing embarrassments for (the \noverwhelming fraction of) users on garbage hardware.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Wed, 7 Mar 2001 13:58:25 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n\n> > > I feel that the fact that\n> > > \n> > > WAL can't help in the event of disk errors\n> > > \n> > > is often overlooked.\n> > \n> > This is true in general. But, nevertheless, WAL can be written to\n> > protect against predictable disk errors, when possible. Failing to\n> > write a couple of disk blocks when the system crashes is a reasonably\n> > predictable disk error. WAL should ideally be written to work\n> > correctly in that situation.\n> \n> But what can be done if fsync returns before pages flushed?\n\nWhen you write out critical information, you keep earlier versions of\nit. On startup, if the critical information is corrupt, you use the\nearlier versions of it. This helps protect against the scenario I\nmentioned: a few disk blocks may not have been written when the power\ngoes out.\n\nMy impression is that that is what Tom is doing with his patches.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 77: A beautiful man is paradise for the eyes, hell for the soul, and\npurgatory for the purse.\n", "msg_date": "07 Mar 2001 16:23:33 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes" } ]
[ { "msg_contents": "> > But what can be done if fsync returns before pages flushed?\n> No, it won't. When fsync returns, data is promised by the OS to be on\n> disk.\n\nSeems you didn't follow discussions about this issue.\n\nVadim\n", "msg_date": "Wed, 7 Mar 2001 13:04:20 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Proposed WAL changes" } ]
[ { "msg_contents": "> It is possible to build a logging system so that you mostly don't care\n> when the data blocks get written; a particular data block on disk is \n> considered garbage until the next checkpoint, so that you \n\nHow to know if a particular data page was modified if there is no\nlog record for that modification?\n(Ie how to know where is garbage? -:))\n\n> might as well allow the blocks to be written any time,\n> even before the log entry.\n\nAnd what to do with index tuples pointing to unupdated heap pages\nafter that?\n\nVadim\n", "msg_date": "Wed, 7 Mar 2001 13:27:34 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: WAL & SHM principles" }, { "msg_contents": "\n\"\"Mikheev, Vadim\"\" <vmikheev@SECTORBASE.COM> wrote in message\nnews:8F4C99C66D04D4118F580090272A7A234D32FA@sectorbase1.sectorbase.com...\n> > It is possible to build a logging system so that you mostly don't care\n> > when the data blocks get written; a particular data block on disk is\n> > considered garbage until the next checkpoint, so that you\n>\n> How to know if a particular data page was modified if there is no\n> log record for that modification?\n> (Ie how to know where is garbage? -:))\n>\n\nYou could store a log sequence number in the data page header that indicates\nthe log address of the last log record that was applied to the page. This is\ndescribed in Bernstein and Newcomer's book (sec 8.5 operation logging).\nSorry if I'm misunderstanding the question. Back to lurking mode...\n\n\n\n\n\n", "msg_date": "Wed, 7 Mar 2001 13:44:12 -0800", "msg_from": "\"Kevin T. Manley \\(Home\\)\" <kmanley@uswest.net>", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" }, { "msg_contents": "Sorry for taking so long to reply...\n\nOn Wed, Mar 07, 2001 at 01:27:34PM -0800, Mikheev, Vadim wrote:\n> Nathan wrote:\n> > It is possible to build a logging system so that you mostly don't care\n> > when the data blocks get written \n [after being changed, as long as they get written by an fsync];\n> > a particular data block on disk is \n> > considered garbage until the next checkpoint, so that you \n> \n> How to know if a particular data page was modified if there is no\n> log record for that modification?\n> (Ie how to know where is garbage? -:))\n\nIn such a scheme, any block on disk not referenced up to (and including) \nthe last checkpoint is garbage, and is either blank or reflects a recent \nlogged or soon-to-be-logged change. Everything written (except in the \nlog) after the checkpoint thus has to happen in blocks not otherwise \nreferenced from on-disk -- except in other post-checkpoint blocks.\n\nDuring recovery, the log contents get written to those pages during\nstartup. Blocks that actually got written before the crash are not\nchanged by being overwritten from the log, but that's ok. If they got\nwritten before the corresponding log entry, too, nothing references\nthem, so they are considered blank.\n\n> > might as well allow the blocks to be written any time,\n> > even before the log entry.\n> \n> And what to do with index tuples pointing to unupdated heap pages\n> after that?\n\nMaybe index pages are cached in shm and copied to mmapped blocks \nafter it is ok for them to be written.\n\nWhat platforms does PG run on that don't have mmap()?\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Mon, 12 Mar 2001 16:45:50 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: WAL & SHM principles" } ]
[ { "msg_contents": "I was going to implement the signal handler like we do with Cancel,\nwhere the signal sets a flag and we check the status of the flag in\nvarious _safe_ places.\n\nCan anyone think of a better way to get information out of a backend?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 7 Mar 2001 16:52:49 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Performance monitor signal handler" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010312 12:12] wrote:\n> I was going to implement the signal handler like we do with Cancel,\n> where the signal sets a flag and we check the status of the flag in\n> various _safe_ places.\n> \n> Can anyone think of a better way to get information out of a backend?\n\nWhy not use a static area of the shared memory segment? Is it possible\nto have a spinlock over it so that an external utility can take a snapshot\nof it with the spinlock held?\n\nAlso, this could work for other stuff as well, instead of overloading\na lot of signal handlers one could just periodically poll a region of\nthe shared segment.\n\njust some ideas..\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! 
http://magazine.daemonnews.org/\n\n", "msg_date": "Mon, 12 Mar 2001 13:34:00 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "At 13:34 12/03/01 -0800, Alfred Perlstein wrote:\n>Is it possible\n>to have a spinlock over it so that an external utility can take a snapshot\n>of it with the spinlock held?\n\nI'd suggest that locking the stats area might be a bad idea; there is only\none writer for each backend-specific chunk, and it won't matter a hell of a\nlot if a reader gets inconsistent views (since I assume they will be\nre-reading every second or so). All the stats area should contain would be\na bunch of counters with timestamps, I think, and the cost up writing to it\nshould be kept to an absolute minimum.\n\n\n>\n>just some ideas..\n>\n\nUnfortunatley, based on prior discussions, Bruce seems quite opposed to a\nshared memory solution.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Tue, 13 Mar 2001 13:22:37 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "* Philip Warner <pjw@rhyme.com.au> [010312 18:56] wrote:\n> At 13:34 12/03/01 -0800, Alfred Perlstein wrote:\n> >Is it possible\n> >to have a spinlock over it so that an external utility can take a snapshot\n> >of it with the spinlock held?\n> \n> I'd suggest that locking the stats area might be a bad idea; there is only\n> one writer for each backend-specific chunk, and it won't matter a hell of a\n> lot if a reader gets inconsistent views (since I assume they will be\n> re-reading every second or so). All the stats area should contain would be\n> a bunch of counters with timestamps, I think, and the cost up writing to it\n> should be kept to an absolute minimum.\n> \n> \n> >\n> >just some ideas..\n> >\n> \n> Unfortunatley, based on prior discussions, Bruce seems quite opposed to a\n> shared memory solution.\n\nOk, here's another nifty idea.\n\nOn reciept of the info signal, the backends collaborate to piece\ntogether a status file. The status file is given a temporay name.\nWhen complete the status file is rename(2)'d over a well known\nfile.\n\nThis ought to always give a consistant snapshot of the file to\nwhomever opens it.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! 
http://magazine.daemonnews.org/\n\n", "msg_date": "Tue, 13 Mar 2001 06:38:19 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": ">\n>This ought to always give a consistant snapshot of the file to\n>whomever opens it.\n>\n\nI think Tom has previously stated that there are technical reasons not to\ndo IO in signal handlers, and I have philosophical problems with\nperformance monitors that ask 50 backends to do file IO. I really do think\nshared memory is TWTG.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 14 Mar 2001 01:42:18 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "> At 13:34 12/03/01 -0800, Alfred Perlstein wrote:\n> >Is it possible\n> >to have a spinlock over it so that an external utility can take a snapshot\n> >of it with the spinlock held?\n> \n> I'd suggest that locking the stats area might be a bad idea; there is only\n> one writer for each backend-specific chunk, and it won't matter a hell of a\n> lot if a reader gets inconsistent views (since I assume they will be\n> re-reading every second or so). All the stats area should contain would be\n> a bunch of counters with timestamps, I think, and the cost up writing to it\n> should be kept to an absolute minimum.\n> \n> \n> >\n> >just some ideas..\n> >\n> \n> Unfortunatley, based on prior discussions, Bruce seems quite opposed to a\n> shared memory solution.\n\nNo, I like the shared memory idea. 
Such an idea will have to wait for\n7.2, and second, there are limits to how much shared memory I can use. \n\nEventually, I think shared memory will be the way to go.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 09:54:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "> >\n> >This ought to always give a consistant snapshot of the file to\n> >whomever opens it.\n> >\n> \n> I think Tom has previously stated that there are technical reasons not to\n> do IO in signal handlers, and I have philosophical problems with\n> performance monitors that ask 50 backends to do file IO. I really do think\n> shared memory is TWTG.\n\nThe good news is that right now pgmonitor gets all its information from\n'ps', and only shows the query when the user asks for it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 09:55:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "* Philip Warner <pjw@rhyme.com.au> [010313 06:42] wrote:\n> >\n> >This ought to always give a consistant snapshot of the file to\n> >whomever opens it.\n> >\n> \n> I think Tom has previously stated that there are technical reasons not to\n> do IO in signal handlers, and I have philosophical problems with\n> performance monitors that ask 50 backends to do file IO. 
I really do think\n> shared memory is TWTG.\n\nI wasn't really suggesting any of those courses of action, all I\nsuggested was using rename(2) to give a separate application a\nconsistent snapshot of the stats.\n\nActually, what makes the most sense (although it may be a performance\nkiller) is to have the backends update a system table that the external\napp can query.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! http://magazine.daemonnews.org/\n\n", "msg_date": "Tue, 13 Mar 2001 06:59:48 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "> > I think Tom has previously stated that there are technical reasons not to\n> > do IO in signal handlers, and I have philosophical problems with\n> > performance monitors that ask 50 backends to do file IO. I really do think\n> > shared memory is TWTG.\n> \n> I wasn't really suggesting any of those courses of action, all I\n> suggested was using rename(2) to give a separate application a\n> consistent snapshot of the stats.\n> \n> Actually, what makes the most sense (although it may be a performance\n> killer) is to have the backends update a system table that the external\n> app can query.\n\nYes, it seems storing info in shared memory and having a system table to\naccess it is the way to go.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 13 Mar 2001 10:03:18 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "\n>On receipt of the info signal, the backends collaborate to piece\n>together a status file. 
The status file is given a temporary name.\n>When complete the status file is rename(2)'d over a well known\n>file.\n\nReporting to files, particularly well known ones, could lead to race \nconditions.\n\nAll in all, I think you're better off passing messages through pipes or a \nsimilar communication method.\n\nI really liked the idea of a \"server\" that could parse/analyze data from \nmultiple backends.\n\nMy 2/100 worth...\n\n\n\n", "msg_date": "Tue, 13 Mar 2001 15:36:59 -0600", "msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "* Thomas Swan <tswan-lst@ics.olemiss.edu> [010313 13:37] wrote:\n> \n> >On receipt of the info signal, the backends collaborate to piece\n> >together a status file. The status file is given a temporary name.\n> >When complete the status file is rename(2)'d over a well known\n> >file.\n> \n> Reporting to files, particularly well known ones, could lead to race \n> conditions.\n> \n> All in all, I think you're better off passing messages through pipes or a \n> similar communication method.\n> \n> I really liked the idea of a \"server\" that could parse/analyze data from \n> multiple backends.\n> \n> My 2/100 worth...\n\nTake a few moments to think about the semantics of rename(2).\n\nYes, you would still need synchronization between the backend\nprocesses to do this correctly, but not any external app.\n\nThe external app can just open the file, assuming it exists it\nwill always have a complete and consistent snapshot of whatever\nthe backends agreed on.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\nDaemon News Magazine in your snail-mail! 
http://magazine.daemonnews.org/\n\n", "msg_date": "Tue, 13 Mar 2001 13:46:25 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "Bruce Momjian wrote:\n>\n> Yes, it seems storing info in shared memory and having a system table to\n> access it is the way to go.\n\n Depends,\n\n first of all we need to know WHAT we want to collect. If we\n talk about block read/write statistics and such on a per\n table basis, which is IMHO the most accurate thing for tuning\n purposes, then we're talking about an infinite size of shared\n memory perhaps.\n\n And shared memory has all the interlocking problems we want\n to avoid.\n\n What about a collector daemon, fired up by the postmaster and\n receiving UDP packets from the backends. Under heavy load, it\n might miss some statistic messages, well, but that's not as\n bad as having locks causing backends to lose performance.\n\n The postmaster could already provide the UDP socket for the\n backends, so the collector can know the peer address from\n which to accept statistics messages only. Any message from\n another peer address is simply ignored. For getting the\n statistics out of it, the collector has his own server\n socket, using TCP and providing some lookup protocol.\n\n Now whatever the backend has to tell the collector, it simply\n throws a UDP packet into his direction. If the collector can\n catch it or not, not the backend's problem.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Thu, 15 Mar 2001 06:57:49 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "Jan Wieck <janwieck@Yahoo.com> writes:\n> What about a collector daemon, fired up by the postmaster and\n> receiving UDP packets from the backends. Under heavy load, it\n> might miss some statistic messages, well, but that's not as\n> bad as having locks causing backends to lose performance.\n\nInteresting thought, but we don't want UDP I think; that just opens\nup a whole can of worms about checking access permissions and so forth.\nWhy not a simple pipe? The postmaster creates the pipe and the\ncollector daemon inherits one end, while all the backends inherit the\nother end.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 10:47:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler " }, { "msg_contents": "At 06:57 15/03/01 -0500, Jan Wieck wrote:\n>\n> And shared memory has all the interlocking problems we want\n> to avoid.\n\nI suspect that if we keep per-backend data in a separate area, then we\ndon't need locking since there is only one writer. It does not matter if a\nreader gets an inconsistent view, the same as if you drop a few UDP packets.\n\n\n> What about a collector daemon, fired up by the postmaster and\n> receiving UDP packets from the backends. \n\nThis does sound appealing; it means that individual backend data (IO etc)\nwill survive past the termination of the backend. 
I'd like to see the stats\nsurvive the death of the collector if possible, possibly even survive a\nstop/start of the postmaster.\n\n\n> Now whatever the backend has to tell the collector, it simply\n> throws a UDP packet into his direction. If the collector can\n> catch it or not, not the backend's problem.\n\nIf we get the backends to keep the stats they are sending in local counters\nas well, then they can send the counter value (not delta) each time, which\nwould mean that the collector would not 'miss' anything - just its\noperations/sec might see a hiccough. This could have a side benefit that (if\nwe wanted to?) we could allow a client to query their own counters to get an\nidea of the costs of their queries.\n\nWhen we need to reset the counters that should be done explicitly, I think.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 16 Mar 2001 11:13:40 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "* Philip Warner <pjw@rhyme.com.au> [010315 16:14] wrote:\n> At 06:57 15/03/01 -0500, Jan Wieck wrote:\n> >\n> > And shared memory has all the interlocking problems we want\n> > to avoid.\n> \n> I suspect that if we keep per-backend data in a separate area, then we\n> don't need locking since there is only one writer. It does not matter if a\n> reader gets an inconsistent view, the same as if you drop a few UDP packets.\n\nNo, this is completely different.\n\nLost data is probably better than incorrect data. Either use locks\nor a copying mechanism. 
People will depend on the data returned\nmaking sense.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Thu, 15 Mar 2001 16:17:10 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "At 16:17 15/03/01 -0800, Alfred Perlstein wrote:\n>\n>Lost data is probably better than incorrect data. Either use locks\n>or a copying mechanism. People will depend on the data returned\n>making sense.\n>\n\nBut with per-backend data, there is only ever *one* writer to a given set\nof counters. Everyone else is a reader.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 16 Mar 2001 11:28:21 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "* Philip Warner <pjw@rhyme.com.au> [010315 16:46] wrote:\n> At 16:17 15/03/01 -0800, Alfred Perlstein wrote:\n> >\n> >Lost data is probably better than incorrect data. Either use locks\n> >or a copying mechanism. People will depend on the data returned\n> >making sense.\n> >\n> \n> But with per-backend data, there is only ever *one* writer to a given set\n> of counters. Everyone else is a reader.\n\nThis doesn't prevent a reader from getting an inconsistent view.\n\nThink about a 64bit counter on a 32bit machine. 
If you charged per\nmegabyte, wouldn't it upset you to have a small chance of losing\n4 billion units of sale?\n\n(ie, doing a read after an addition that wraps the low 32 bits\nbut before the carry is done to the top most significant 32bits?)\n\nOk, what if everything can be read atomically by itself?\n\nYou're still busted the minute you need to export any sort of\ncompound stat.\n\nIf A, B and C need to add up to 100 you have a read race.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Thu, 15 Mar 2001 16:55:19 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "At 16:55 15/03/01 -0800, Alfred Perlstein wrote:\n>* Philip Warner <pjw@rhyme.com.au> [010315 16:46] wrote:\n>> At 16:17 15/03/01 -0800, Alfred Perlstein wrote:\n>> >\n>> >Lost data is probably better than incorrect data. Either use locks\n>> >or a copying mechanism. People will depend on the data returned\n>> >making sense.\n>> >\n>> \n>> But with per-backend data, there is only ever *one* writer to a given set\n>> of counters. Everyone else is a reader.\n>\n>This doesn't prevent a reader from getting an inconsistent view.\n>\n>Think about a 64bit counter on a 32bit machine. If you charged per\n>megabyte, wouldn't it upset you to have a small chance of losing\n>4 billion units of sale?\n>\n>(ie, doing a read after an addition that wraps the low 32 bits\n>but before the carry is done to the top most significant 32bits?)\n\nI assume this means we can not rely on the existence of any kind of\ninterlocked add on 64 bit machines?\n\n\n>Ok, what if everything can be read atomically by itself?\n>\n>You're still busted the minute you need to export any sort of\n>compound stat.\n\nWhich is why the backends should not do anything other than maintain the\nraw data. 
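The compound-stat read race described above (A, B and C must add up to 100) can be made concrete with a small deterministic sketch; this is Python for brevity, with hypothetical counter names rather than anything from a real backend:

```python
# An invariant-preserving writer still breaks the invariant *during*
# an update: move 10 units from "a" to "c" in two steps, and let a
# reader sample between the steps. Counters here are hypothetical.

stats = {"a": 50, "b": 30, "c": 20}   # invariant: sum == 100

def writer_step1(s):
    s["a"] -= 10                      # first half of the transfer

def writer_step2(s):
    s["c"] += 10                      # second half restores invariant

def reader_snapshot(s):
    return dict(s)                    # unsynchronized read

writer_step1(stats)
torn = reader_snapshot(stats)         # sampled mid-update
writer_step2(stats)
clean = reader_snapshot(stats)        # sampled between updates

print(sum(torn.values()))             # 90  -- invariant briefly broken
print(sum(clean.values()))            # 100
```

A lock, or the single-message-copy approach discussed later in the thread, closes exactly this window.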
If there is atomic data that can cause inconsistency, then a\ndropped UDP packet will do the same.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 16 Mar 2001 12:08:01 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "* Philip Warner <pjw@rhyme.com.au> [010315 17:08] wrote:\n> At 16:55 15/03/01 -0800, Alfred Perlstein wrote:\n> >* Philip Warner <pjw@rhyme.com.au> [010315 16:46] wrote:\n> >> At 16:17 15/03/01 -0800, Alfred Perlstein wrote:\n> >> >\n> >> >Lost data is probably better than incorrect data. Either use locks\n> >> >or a copying mechanism. People will depend on the data returned\n> >> >making sense.\n> >> >\n> >> \n> >> But with per-backend data, there is only ever *one* writer to a given set\n> >> of counters. Everyone else is a reader.\n> >\n> >This doesn't prevent a reader from getting an inconsistent view.\n> >\n> >Think about a 64bit counter on a 32bit machine. If you charged per\n> >megabyte, wouldn't it upset you to have a small chance of losing\n> >4 billion units of sale?\n> >\n> >(ie, doing a read after an addition that wraps the low 32 bits\n> >but before the carry is done to the top most significant 32bits?)\n> \n> I assume this means we can not rely on the existence of any kind of\n> interlocked add on 64 bit machines?\n> \n> \n> >Ok, what if everything can be read atomically by itself?\n> >\n> >You're still busted the minute you need to export any sort of\n> >compound stat.\n> \n> Which is why the backends should not do anything other than maintain the\n> raw data. 
If there is atomic data that can cause inconsistency, then a\n> dropped UDP packet will do the same.\n\nThe UDP packet (a COPY) can contain a consistent snapshot of the data.\nIf you have dependencies, you fit a consistent snapshot into a single\npacket.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Thu, 15 Mar 2001 17:10:47 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "At 17:10 15/03/01 -0800, Alfred Perlstein wrote:\n>> \n>> Which is why the backends should not do anything other than maintain the\n>> raw data. If there is atomic data that can cause inconsistency, then a\n>> dropped UDP packet will do the same.\n>\n>The UDP packet (a COPY) can contain a consistent snapshot of the data.\n>If you have dependencies, you fit a consistent snapshot into a single\n>packet.\n\nIf we were going to go the shared memory way, then yes, as soon as we start\ncollecting dependent data we would need locking, but IOs, locking stats,\nflushes, cache hits/misses are not really in this category.\n\nBut I prefer the UDP/Collector model anyway; it gives us greater\nflexibility + the ability to keep stats past backend termination, and, as\nyou say, removes any possible locking requirements from the backends.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Fri, 16 Mar 2001 12:20:04 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "Philip Warner wrote:\n>\n> But I prefer the UDP/Collector model anyway; it gives us greater\n> flexibility + the ability to keep stats past backend termination, and, as\n> you say, removes any possible locking requirements from the backends.\n\n OK, did some tests...\n\n The postmaster can create a SOCK_DGRAM socket at startup and\n bind(2) it to \"127.0.0.1:0\", which causes the kernel to assign\n a non-privileged port number that then can be read with\n getsockname(2). No other process can have a socket with the\n same port number for the lifetime of the postmaster.\n\n If the socket gets ready, it'll read one backend message\n from it with recvfrom(2). The fromaddr must be\n \"127.0.0.1:xxx\" where xxx is the port number the kernel\n assigned to the above socket. Yes, this is his own one,\n shared with postmaster and all backends. 
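The setup just described -- bind(2) to 127.0.0.1 port 0, getsockname(2) to learn the kernel-assigned port, then recvfrom(2) with a check of the source address -- can be sketched as follows. Python stands in here for the C-level calls (the real thing would live in the postmaster), and the payload is made up for the demo:

```python
# One shared datagram socket: the sender and receiver are the same
# process in this demo, mimicking postmaster/backends sharing the
# inherited socket.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 0))        # port 0: kernel assigns a free port
host, port = sock.getsockname()    # learn which port we got

# A "backend" sends a stats message through the very same socket.
sock.sendto(b"stats: 1 query", ("127.0.0.1", port))

data, fromaddr = sock.recvfrom(1024)
# Accept only datagrams whose source is our own address:port.
accepted = fromaddr == ("127.0.0.1", port)
print(accepted, data)
```

Because the socket serves as both send and receive end, the source-address check is what lets the collector ignore datagrams from anyone else.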
Message losses start if the collector does a\n per message idle loop like this:\n\n for (i=0,sum=0;i<250000;i++,sum+=1);\n\n Uh - not much time to spend if the statistics should at least\n be half accurate. And it would become worse in SMP systems.\n So that was a nifty idea, but I think it'd cause much more\n statistic losses than I assumed at first.\n\n Back to drawing board. Maybe a SYS-V message queue can serve?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 16 Mar 2001 11:12:47 -0500 (EST)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "* Jan Wieck <JanWieck@yahoo.com> [010316 08:08] wrote:\n> Philip Warner wrote:\n> >\n> > But I prefer the UDP/Collector model anyway; it gives use greater\n> > flexibility + the ability to keep stats past backend termination, and,as\n> > you say, removes any possible locking requirements from the backends.\n> \n> OK, did some tests...\n> \n> The postmaster can create a SOCK_DGRAM socket at startup and\n> bind(2) it to \"127.0.0.1:0\", what causes the kernel to assign\n> a non-privileged port number that then can be read with\n> getsockname(2). No other process can have a socket with the\n> same port number for the lifetime of the postmaster.\n> \n> If the socket get's ready, it'll read one backend message\n> from it with recvfrom(2). The fromaddr must be\n> \"127.0.0.1:xxx\" where xxx is the port number the kernel\n> assigned to the above socket. Yes, this is his own one,\n> shared with postmaster and all backends. 
So both, the\n> postmaster and the backends can use this one UDP socket,\n> which the backends inherit on fork(2), to send messages to\n> the collector. If such a UDP packet really came from a\n> process other than the postmaster or a backend, well then the\n> sysadmin has a more severe problem than manipulated DB\n> runtime statistics :-)\n\nDoing this is a bad idea:\n\na) it allows any program to start spamming localhost:randport with\nmessages and screw with the postmaster.\n\nb) it may even allow remote people to mess with it, (see recent\nbugtraq articles about this)\n\nYou should use a unix domain socket (at least when possible).\n\n> Running a 500MHz P-III, 192MB, RedHat 6.1 Linux 2.2.17 here,\n> I've been able to lose not a single message during the parallel\n> regression test, if each backend sends one 1K sized message\n> per query executed, and the collector simply sucks them out\n> of the socket. Message losses start if the collector does a\n> per message idle loop like this:\n> \n> for (i=0,sum=0;i<250000;i++,sum+=1);\n> \n> Uh - not much time to spend if the statistics should at least\n> be half accurate. And it would become worse in SMP systems.\n> So that was a nifty idea, but I think it'd cause much more\n> statistic losses than I assumed at first.\n> \n> Back to drawing board. Maybe a SYS-V message queue can serve?\n\nI wouldn't say back to the drawing board, I would say two steps back.\n\nWhat about instead of sending deltas, you send totals? This would\nallow you to lose messages and still maintain accurate stats.\n\nYou can also enable SIGIO on the socket, then have a signal handler\nbuffer packets that arrive when not actively select()ing on the\nUDP socket. 
You can then use sigsetmask(2) to provide mutual\nexclusion with your SIGIO handler and general select()ing on the\nsocket.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Fri, 16 Mar 2001 08:31:49 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> Uh - not much time to spend if the statistics should at least\n> be half accurate. And it would become worse in SMP systems.\n> So that was a nifty idea, but I think it'd cause much more\n> statistic losses than I assumed at first.\n\n> Back to drawing board. Maybe a SYS-V message queue can serve?\n\nThat would be the same as a pipe: backends would block if the collector\nstopped accepting data. I do like the \"auto discard\" aspect of this\nUDP-socket approach.\n\nI think Philip had the right idea: each backend should send totals,\nnot deltas, in its messages. Then, it doesn't matter (much) if the\ncollector loses some messages --- that just means that sometimes it\nhas a slightly out-of-date idea about how much work some backends have\ndone. 
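The totals-versus-deltas point can be sketched with a toy simulation (hypothetical counts, not real backend code): with totals, a dropped message only delays the collector's view, because any later message that gets through carries the full running count; with deltas, the dropped increments would be gone for good.

```python
# Backend keeps an always-accurate local total and reports it after
# every query; the collector just remembers the latest total it saw.
backend_total = 0
collector_view = 0

for query in range(1, 11):           # backend executes 10 queries
    backend_total += 1               # local counter never loses anything
    message_lost = query in (4, 7)   # pretend two reports are dropped
    if not message_lost:
        collector_view = backend_total   # totals overwrite; deltas would add

print(backend_total, collector_view)     # 10 10 -- the losses were transient
```

The error is only visible while a dropped report is the most recent one; the next delivered report repairs it.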
It should be easy to design the software so that that just makes\na small, transient error in the currently displayed statistics.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Mar 2001 11:53:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler " }, { "msg_contents": "Alfred Perlstein wrote:\n> * Jan Wieck <JanWieck@yahoo.com> [010316 08:08] wrote:\n> > Philip Warner wrote:\n> > >\n> > > But I prefer the UDP/Collector model anyway; it gives use greater\n> > > flexibility + the ability to keep stats past backend termination, and,as\n> > > you say, removes any possible locking requirements from the backends.\n> >\n> > OK, did some tests...\n> >\n> > The postmaster can create a SOCK_DGRAM socket at startup and\n> > bind(2) it to \"127.0.0.1:0\", what causes the kernel to assign\n> > a non-privileged port number that then can be read with\n> > getsockname(2). No other process can have a socket with the\n> > same port number for the lifetime of the postmaster.\n> >\n> > If the socket get's ready, it'll read one backend message\n> > from it with recvfrom(2). The fromaddr must be\n> > \"127.0.0.1:xxx\" where xxx is the port number the kernel\n> > assigned to the above socket. Yes, this is his own one,\n> > shared with postmaster and all backends. So both, the\n> > postmaster and the backends can use this one UDP socket,\n> > which the backends inherit on fork(2), to send messages to\n> > the collector. 
If such a UDP packet really came from a\n> > process other than the postmaster or a backend, well then the\n> > sysadmin has a more severe problem than manipulated DB\n> > runtime statistics :-)\n>\n> Doing this is a bad idea:\n>\n> a) it allows any program to start spamming localhost:randport with\n> messages and screw with the postmaster.\n>\n> b) it may even allow remote people to mess with it, (see recent\n> bugtraq articles about this)\n\n So it's possible for a UDP socket to recvfrom(2) and get\n packets with a fromaddr localhost:my_own_non_SO_REUSE_port\n that really came from somewhere else?\n\n If that's possible, the packets must be coming over the\n network. Otherwise it's the local superuser sending them, and\n in that case it's not worth any more discussion because root\n on your system has more powerful possibilities to muck around\n with your database. And if someone outside the local system\n is doing it, it's time for some filter rules, isn't it?\n\n> You should use a unix domain socket (at least when possible).\n\n Unix domain UDP?\n\n>\n> > Running a 500MHz P-III, 192MB, RedHat 6.1 Linux 2.2.17 here,\n> > I've been able to lose not a single message during the parallel\n> > regression test, if each backend sends one 1K sized message\n> > per query executed, and the collector simply sucks them out\n> > of the socket. Message losses start if the collector does a\n> > per message idle loop like this:\n> >\n> > for (i=0,sum=0;i<250000;i++,sum+=1);\n> >\n> > Uh - not much time to spend if the statistics should at least\n> > be half accurate. And it would become worse in SMP systems.\n> > So that was a nifty idea, but I think it'd cause much more\n> > statistic losses than I assumed at first.\n> >\n> > Back to drawing board. Maybe a SYS-V message queue can serve?\n>\n> I wouldn't say back to the drawing board, I would say two steps back.\n>\n> What about instead of sending deltas, you send totals? 
This would\n> allow you to lose messages and still maintain accurate stats.\n\n Similar problem as with shared memory - size. If a long\n running backend of a multithousand table database needs to\n send access stats per table - and had accessed them all up to\n now - it'll be a lot of wasted bandwidth.\n\n>\n> You can also enable SIGIO on the socket, then have a signal handler\n> buffer packets that arrive when not actively select()ing on the\n> UDP socket. You can then use sigsetmask(2) to provide mutual\n> exclusion with your SIGIO handler and general select()ing on the\n> socket.\n\n I already thought of prioritizing the socket-drain this way:\n there is a fairly big receive buffer. If the buffer is empty,\n it does a blocking select(2). If it's not, it does a non-\n blocking (0-timeout) one and only if the non-blocking tells\n that there aren't new messages waiting, it'll process one\n buffered message and try to receive again.\n\n Will give it a shot.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 16 Mar 2001 13:49:41 -0500 (EST)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010316 10:06] wrote:\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > Uh - not much time to spend if the statistics should at least\n> > be half accurate. And it would become worse in SMP systems.\n> > So that was a nifty idea, but I think it'd cause much more\n> > statistic losses than I assumed at first.\n> \n> > Back to drawing board. 
Maybe a SYS-V message queue can serve?\n> \n> That would be the same as a pipe: backends would block if the collector\n> stopped accepting data. I do like the \"auto discard\" aspect of this\n> UDP-socket approach.\n> \n> I think Philip had the right idea: each backend should send totals,\n> not deltas, in its messages. Then, it doesn't matter (much) if the\n> collector loses some messages --- that just means that sometimes it\n> has a slightly out-of-date idea about how much work some backends have\n> done. It should be easy to design the software so that that just makes\n> a small, transient error in the currently displayed statistics.\n\nMSGSND(3) FreeBSD Library Functions Manual MSGSND(3)\n\n\nERRORS\n msgsnd() will fail if:\n\n [EAGAIN] There was no space for this message either on the\n queue, or in the whole system, and IPC_NOWAIT was set\n in msgflg.\n\n-- \n-Alfred Perlstein - [bright@wintelcom.net|alfred@freebsd.org]\n\n", "msg_date": "Fri, 16 Mar 2001 12:18:26 -0800", "msg_from": "Alfred Perlstein <bright@wintelcom.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "At 13:49 16/03/01 -0500, Jan Wieck wrote:\n>\n> Similar problem as with shared memory - size. If a long\n> running backend of a multithousand table database needs to\n> send access stats per table - and had accessed them all up to\n> now - it'll be a lot of wasted bandwidth.\n\nNot if you only send totals for individual counters when they change; some\nstats may never be resynced, but for the most part it will work. Also, does\nUnix allow interrupts to occur as a result of data arriving in a pipe? 
If\nso, how about:\n\n- All backends to do *blocking* IO to collector.\n\n- Collector to receive an interrupt when a message arrives; while in the\ninterrupt it reads the buffer into a local queue, and returns from the\ninterrupt.\n\n- Main line code processes the queue and writes it to a memory mapped file\nfor durability.\n\n- If collector dies, postmaster starts another immediately, which clears\nthe backlog of data in the pipe and then remaps the file.\n\n- Each backend has its own local copy of its counters which *possibly* the\ncollector can ask for when it restarts.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 17 Mar 2001 20:49:44 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "Philip Warner wrote:\n> At 13:49 16/03/01 -0500, Jan Wieck wrote:\n> >\n> > Similar problem as with shared memory - size. If a long\n> > running backend of a multithousand table database needs to\n> > send access stats per table - and had accessed them all up to\n> > now - it'll be a lot of wasted bandwidth.\n>\n> Not if you only send totals for individual counters when they change; some\n> stats may never be resynced, but for the most part it will work. Also, does\n> Unix allow interrupts to occur as a result of data arriving in a pipe? If\n> so, how about:\n>\n> - All backends to do *blocking* IO to collector.\n\n The general problem remains. We only have one central\n collector with a limited receive capacity. The more load is\n on the machine, the smaller its capacity gets. 
The more\n complex the DB schemas get and the more load is on the\n system, the more interesting accurate statistics get. Both\n factors are contraproductive. More complex schema means more\n tables and thus bigger messages. More load means more\n messages. Having good statistics on a toy system while they\n get worse for a web backend server that's really under\n pressure is braindead from the start.\n\n We don't want the backends to block, so that they can do\n THEIR work. That's to process queries, nothing else.\n\n Pipes seem to be inappropriate because their buffer is\n limited to 4K on Linux and most BSD flavours. Message queues\n are too because they are limited to 2K on most BSD's. So only\n sockets remain.\n\n If we have multiple processes that try to receive from the\n UDP socket, condense the received packets into summary\n messages and send them to the central collector, this might\n solve the problem.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Sat, 17 Mar 2001 09:33:03 -0500 (EST)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "On Sat, Mar 17, 2001 at 09:33:03AM -0500, Jan Wieck wrote:\n> \n> The general problem remains. We only have one central\n> collector with a limited receive capacity. The more load is\n> on the machine, the smaller it's capacity gets. The more\n> complex the DB schemas get and the more load is on the\n> system, the more interesting accurate statistics get. Both\n> factors are contraproductive. 
More complex schema means more\n> tables and thus bigger messages. More load means more\n> messages. Having good statistics on a toy system while they\n> get worse for a web backend server that's really under\n> pressure is braindead from the start.\n> \nJust as another suggestion, what about sending the data to a different\ncomputer, so instead of tying up the database server with processing the\nstatistics, you have another computer that has some free time to do the\nprocessing.\n\nSome drawbacks are that you can't automatically start/restart it from the\npostmaster and it will put a little more load on the network, but it seems\nto mostly solve the issues of blocked pipes and using too much cpu time\non the database server.\n\n", "msg_date": "Sat, 17 Mar 2001 08:21:13 -0800", "msg_from": "Samuel Sieb <samuel@sieb.net>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "Samuel Sieb <samuel@sieb.net> writes:\n> Just as another suggestion, what about sending the data to a different\n> computer, so instead of tying up the database server with processing the\n> statistics, you have another computer that has some free time to do the\n> processing.\n\n> Some drawbacks are that you can't automatically start/restart it from the\n> postmaster and it will put a little more load on the network,\n\n... and a lot more load on the CPU. Same-machine \"network\" connections\nare much cheaper (on most kernels, anyway) than real network\nconnections.\n\nI think all of this discussion is vast overkill. No one has yet\ndemonstrated that it's not sufficient to have *one* collector process\nand a lossy transmission method. 
Let's try that first, and if it really\nproves to be unworkable then we can get out the lily-gilding equipment.\nBut there is tons more stuff to do before we have useful stats at all,\nand I don't think that this aspect is the most critical part of the\nproblem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Mar 2001 11:48:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler " }, { "msg_contents": "> ... and a lot more load on the CPU. Same-machine \"network\" connections\n> are much cheaper (on most kernels, anyway) than real network\n> connections.\n> \n> I think all of this discussion is vast overkill. No one has yet\n> demonstrated that it's not sufficient to have *one* collector process\n> and a lossy transmission method. Let's try that first, and if it really\n> proves to be unworkable then we can get out the lily-gilding equipment.\n> But there is tons more stuff to do before we have useful stats at all,\n> and I don't think that this aspect is the most critical part of the\n> problem.\n\nAgreed. Sounds like overkill.\n\nHow about a per-backend shared memory area for stats, plus a global\nshared memory area that each backend can add to when it exists. That\nmeets most of our problem.\n\nThe only open issue is per-table stuff, and I would like to see some\ncircular buffer implemented to handle that, with a collection process\nthat has access to shared memory. Even better, have an SQL table\nupdated with the per-table stats periodically. How about a collector\nprocess that periodically reads though the shared memory and UPDATE's\nSQL tables with the information.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Mar 2001 12:10:37 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The only open issue is per-table stuff, and I would like to see some\n> circular buffer implemented to handle that, with a collection process\n> that has access to shared memory.\n\nThat will get us into locking/contention issues. OTOH, frequent trips\nto the kernel to send stats messages --- regardless of the transport\nmechanism chosen --- don't seem all that cheap either.\n\n> Even better, have an SQL table updated with the per-table stats\n> periodically.\n\nThat will be horribly expensive, if it's a real table.\n\nI think you missed the point that somebody made a little while ago\nabout waiting for functions that can return tuple sets. Once we have\nthat, the stats tables can be *virtual* tables, ie tables that are\ncomputed on-demand by some function. That will be a lot less overhead\nthan physically updating an actual table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Mar 2001 12:38:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The only open issue is per-table stuff, and I would like to see some\n> > circular buffer implemented to handle that, with a collection process\n> > that has access to shared memory.\n> \n> That will get us into locking/contention issues. OTOH, frequent trips\n> to the kernel to send stats messages --- regardless of the transport\n> mechanism chosen --- don't seem all that cheap either.\n\nI am confused. 
Reading/writing shared memory is not a kernel call,\nright?\n\nI agree on the locking contention problems of a circular buffer.\n\n> \n> > Even better, have an SQL table updated with the per-table stats\n> > periodically.\n> \n> That will be horribly expensive, if it's a real table.\n\nBut per-table stats aren't something that people will look at often,\nright? They can sit in the collector's memory for quite a while. See\npeople wanting to look at per-backend stuff frequently, and that is why\nI thought share memory should be good, and a global area for aggregate\nstats for all backends.\n\n> I think you missed the point that somebody made a little while ago\n> about waiting for functions that can return tuple sets. Once we have\n> that, the stats tables can be *virtual* tables, ie tables that are\n> computed on-demand by some function. That will be a lot less overhead\n> than physically updating an actual table.\n\nYes, but do we want to keep these stats between postmaster restarts? \nAnd what about writing them to tables when our storage of table stats\ngets too big?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Mar 2001 12:43:25 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Even better, have an SQL table updated with the per-table stats\n> periodically.\n>> \n>> That will be horribly expensive, if it's a real table.\n\n> But per-table stats aren't something that people will look at often,\n> right? They can sit in the collector's memory for quite a while. 
See\n> people wanting to look at per-backend stuff frequently, and that is why\n> I thought share memory should be good, and a global area for aggregate\n> stats for all backends.\n\n>> I think you missed the point that somebody made a little while ago\n>> about waiting for functions that can return tuple sets. Once we have\n>> that, the stats tables can be *virtual* tables, ie tables that are\n>> computed on-demand by some function. That will be a lot less overhead\n>> than physically updating an actual table.\n\n> Yes, but do we want to keep these stats between postmaster restarts? \n> And what about writing them to tables when our storage of table stats\n> gets too big?\n\nAll those points seem to me to be arguments in *favor* of a virtual-\ntable approach, not arguments against it.\n\nOr are you confusing the method of collecting stats with the method\nof making the collected stats available for use?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Mar 2001 14:11:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler " }, { "msg_contents": "> > But per-table stats aren't something that people will look at often,\n> > right? They can sit in the collector's memory for quite a while. See\n> > people wanting to look at per-backend stuff frequently, and that is why\n> > I thought share memory should be good, and a global area for aggregate\n> > stats for all backends.\n> \n> >> I think you missed the point that somebody made a little while ago\n> >> about waiting for functions that can return tuple sets. Once we have\n> >> that, the stats tables can be *virtual* tables, ie tables that are\n> >> computed on-demand by some function. That will be a lot less overhead\n> >> than physically updating an actual table.\n> \n> > Yes, but do we want to keep these stats between postmaster restarts? 
\n> > And what about writing them to tables when our storage of table stats\n> > gets too big?\n> \n> All those points seem to me to be arguments in *favor* of a virtual-\n> table approach, not arguments against it.\n> \n> Or are you confusing the method of collecting stats with the method\n> of making the collected stats available for use?\n\nMaybe I am confusing them. I didn't see a distinction in the\ndiscussion.\n\nI assumed the UDP/message passing of information to the collector was\nthe way statistics were collected, and I don't understand why a\nper-backend area and global area, with some kind of cicular buffer for\nper-table stuff isn't the cheapest, cleanest solution.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Mar 2001 15:10:05 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance monitor signal handler" }, { "msg_contents": "Tom Lane wrote:\n> Samuel Sieb <samuel@sieb.net> writes:\n> > Just as another suggestion, what about sending the data to a different\n> > computer, so instead of tying up the database server with processing the\n> > statistics, you have another computer that has some free time to do the\n> > processing.\n>\n> > Some drawbacks are that you can't automatically start/restart it from the\n> > postmaster and it will put a little more load on the network,\n>\n> ... and a lot more load on the CPU. Same-machine \"network\" connections\n> are much cheaper (on most kernels, anyway) than real network\n> connections.\n>\n> I think all of this discussion is vast overkill. No one has yet\n> demonstrated that it's not sufficient to have *one* collector process\n> and a lossy transmission method. 
Let's try that first, and if it really\n> proves to be unworkable then we can get out the lily-gilding equipment.\n> But there is tons more stuff to do before we have useful stats at all,\n> and I don't think that this aspect is the most critical part of the\n> problem.\n\n Well,\n\n back to my initial approach with the UDP socket collector. I\n now have a collector simply reading all messages from the\n socket. It doesn't do anything useful except for counting\n their number.\n\n Every backend sends a couple of 1K junk messages at the\n beginning of the main loop. Up to 16 messages, there is no\n time(1) measurable delay in the execution of the \"make\n runcheck\".\n\n The dummy collector can keep up during the parallel\n regression test until the backends send 64 messages each\n time, at that number he lost 1.25% of the messages. That is\n an amount of statistics data of >256MB to be collected. Most\n of the test queries will never generate 1K of message, so\n that there should be some space here.\n\n My plan now is to add some real functionality to the\n collector and the backend, to see if that has an impact.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Sat, 17 Mar 2001 21:18:39 -0500 (EST)", "msg_from": "Jan Wieck <JanWieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Performance monitor signal handler" } ]
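The fire-and-forget UDP channel Jan benchmarks in the thread above can be sketched in a few lines. This is a toy model, not PostgreSQL code: the function names and the 2 KB receive size are invented for illustration. The point is only the property under discussion, namely that the sending backend never blocks, and the kernel silently discards whatever the collector cannot keep up with.

```python
import socket

def make_collector(port=0):
    """Bind a non-blocking UDP socket for the (hypothetical) collector."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", port))   # port 0: let the kernel pick one
    s.setblocking(False)
    return s

def backend_send(addr, payload):
    """Fire one stats datagram and never block; if the kernel buffer is
    full the message is simply lost (the 'auto discard' behavior)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setblocking(False)
    try:
        s.sendto(payload, addr)
        return True
    except OSError:
        return False              # message dropped; the backend keeps working
    finally:
        s.close()

def drain(collector):
    """One pass of the collector loop: read everything that has arrived."""
    received = []
    while True:
        try:
            data, _ = collector.recvfrom(2048)
            received.append(data)
        except BlockingIOError:
            break
    return received
```

Because the messages carry totals rather than deltas (as Philip and Tom argue above), a dropped datagram only makes the collector's picture momentarily stale, not permanently wrong.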
[ { "msg_contents": "Hi:\n\nWhen trying to pg_dump a database as the postgres user or as any other user I got this error:\n\ndumpProcLangs(): handler procedure for language plpgsql not found\n\nwhat's going on ?\n\nthe command to dump this is: pg_dump -x alyma > alyma.dat.sql\n\nThank you in advance.\n\n\n--\nIng. Luis Magaña.\nGnovus Networks & Software\nwww.gnovus.com\n\n", "msg_date": "Wed, 07 Mar 2001 18:55:43 -0600", "msg_from": "Luis Magaña <joe666@gnovus.com>", "msg_from_op": true, "msg_subject": "pg_dump error" } ]
[ { "msg_contents": "I like the idea of updating shared memory with the performance statistics, \ncurrent query execution information, etc., providing a function to fetch \nthose statistics, and perhaps providing a system view (i.e. pg_performance) \nbased upon such functions which can be queried by the administrator.\n\nFWIW,\n\nMike Mascari\nmascarm@mascari.com\n\n-----Original Message-----\nFrom:\tPhilip Warner [SMTP:pjw@rhyme.com.au]\nSent:\tWednesday, March 07, 2001 7:42 PM\nTo:\tJustin Clift\nCc:\tBruce Momjian; Tom Lane; The Hermit Hacker; PostgreSQL-development\nSubject:\tRe: [HACKERS] Performance monitor\n\nAt 11:33 8/03/01 +1100, Justin Clift wrote:\n>Hi all,\n>\n>Wouldn't another approach be to write a C function that does the\n>necessary work, then just call it like any other C function?\n>\n>i.e. Connect to the database and issue a \"select\n>perf_stats('/tmp/stats-2001-03-08-01.txt')\" ?\n>\n\nI think Bruce wants per-backend data, and this approach would seem to only\nget the data for the current backend.\n\nAlso, I really don't like the proposal to write files to /tmp. If we want a\nperf tool, then we need to have something like 'top', which will\ncontinuously update. With 40 backends, the idea of writing 40 file to /tmp\nevery second seems a little excessive to me.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n", "msg_date": "Wed, 7 Mar 2001 19:59:20 -0500", "msg_from": "Mike Mascari <mascarm@mascari.com>", "msg_from_op": true, "msg_subject": "RE: Performance monitor" }, { "msg_contents": "At 19:59 7/03/01 -0500, Mike Mascari wrote:\n>I like the idea of updating shared memory with the performance statistics, \n>current query execution information, etc., providing a function to fetch \n>those statistics, and perhaps providing a system view (i.e. pg_performance) \n>based upon such functions which can be queried by the administrator.\n\nThis sounds like The Way to me. Although I worry that using a view (or\nstandard libpq methods) might be too expensive in high load situations\n(this is not based on any knowledge of the likely costs, however!).\n\nWe do need to make this as cheap as possible since we don't want to distort\nthe stats, and it will often be used to diagnose perormance problems, and\nwe don't want to contribute to those problems.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Thu, 08 Mar 2001 12:25:59 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "RE: Performance monitor" } ]
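Bruce's per-backend shared-memory idea from the exchange above can be illustrated with a flat array of fixed-size slots. The slot layout and counter names below are invented for the sketch; a real implementation would use a SysV shared-memory segment visible to all backends, and a monitor would read slots without taking locks, accepting slightly stale values.

```python
import mmap
import struct

SLOT_FMT = "=QQQ"        # queries, tuples_returned, blocks_read (illustrative)
SLOT_SIZE = struct.calcsize(SLOT_FMT)

class StatsArea:
    """Sketch of a per-backend stats area: one fixed-size slot per backend,
    updated in place by the owning backend and read lock-free by a monitor,
    so readers may see slightly out-of-date numbers."""

    def __init__(self, max_backends):
        # mmap(-1, ...) gives an anonymous mapping, standing in for the
        # shared memory segment a real implementation would attach to.
        self.mem = mmap.mmap(-1, max_backends * SLOT_SIZE)

    def bump(self, backend_id, queries=0, tuples=0, blocks=0):
        off = backend_id * SLOT_SIZE
        q, t, b = struct.unpack_from(SLOT_FMT, self.mem, off)
        struct.pack_into(SLOT_FMT, self.mem, off,
                         q + queries, t + tuples, b + blocks)

    def snapshot(self, backend_id):
        return struct.unpack_from(SLOT_FMT, self.mem, backend_id * SLOT_SIZE)
```

A `pg_performance` view of the kind Mike suggests would then be a set-returning function that walks the slots and emits one row per backend.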
[ { "msg_contents": "> > But what can be done if fsync returns before pages flushed?\n> \n> When you write out critical information, you keep earlier versions of\n> it. On startup, if the critical information is corrupt, you use the\n> earlier versions of it. This helps protect against the scenario I\n> mentioned: a few disk blocks may not have been written when the power\n> goes out.\n> \n> My impression is that that is what Tom is doing with his patches.\n\nIf fsync may return before data *actually* flushed then you may have\nunlogged data page changes which breakes WAL rule and means corrupted\n(inconsistent) database without ANY ABILITY TO RECOVERY TO CONSISTENT\nSTATE. Now please explain me how saving positions of two checkpoints\n(what Tom is doing) can help here?\n\nVadim\n", "msg_date": "Wed, 7 Mar 2001 17:49:34 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Proposed WAL changes" }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> If fsync may return before data *actually* flushed then you may have\n> unlogged data page changes which breakes WAL rule and means corrupted\n> (inconsistent) database without ANY ABILITY TO RECOVERY TO CONSISTENT\n> STATE. Now please explain me how saving positions of two checkpoints\n> (what Tom is doing) can help here?\n\nIf the WAL log is broken then there is (AFAICT) no way to *guarantee*\nrecovery to a consistent state. There might be changes in heap or index\npages that you cannot find a trace of in the readable part of the log,\nand hence don't know you need to fix.\n\nHowever, I think that Vadim is focusing too narrowly on what he can\nguarantee, and neglecting the problem of providing reasonable behavior\nwhen things are too broken to meet the guarantee. In particular,\ncomplete refusal to start up when the WAL log is corrupted is not\nreasonable nor acceptable. 
We have to be able to recover from that\nand provide access to as-close-to-accurate data as we can manage.\nI believe that backing up to the prior checkpoint and replaying as much\nof the log as we can read is a useful fallback capability if the most\nrecent checkpoint record is unreadable.\n\nIn any case, \"guarantees\" that are based on the assumption that WAL\nwrites hit the disk before data writes do are going to fail anyway\nif we are using disk drives that reorder writes. This is the real\nreason why we can't think only about guarantees, but also about how\nwe will respond to \"guaranteed impossible\" failures.\n\nIf you can improve on this recovery idea, by all means do so. But\ndon't tell me that we don't need to worry about it, because we do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 21:24:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes " }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n\n> If fsync may return before data *actually* flushed then you may have\n> unlogged data page changes which breakes WAL rule and means corrupted\n> (inconsistent) database without ANY ABILITY TO RECOVERY TO CONSISTENT\n> STATE. Now please explain me how saving positions of two checkpoints\n> (what Tom is doing) can help here?\n\nMaybe it can't. I don't know.\n\nI do know that a system which can only function if all disk writes\ncomplete in order is doomed when run on a Unix system with modern IDE\ndisks. It won't work correctly in the case of a sudden power outage.\n\nSince many people run Postgres in just that sort of environment, it's\nnot a good idea to completely fail to handle it. Clearly a sudden\npower outage can cause disk data to be corrupted. People can accept\nthat. They need to know what to do in that case.\n\nFor example, for typical Unix filesystems, after a sudden power\noutage, people run fsck. fsck puts the filesystem back in shape. 
In\nthe process, it may find some files it can not fix; they are put in\n/lost+found. In the process, it may lose some files entirely; too\nbad. The point is, after fsck is run, the filesystem is known to be\ngood, and in most cases all the data will still be present.\n\nImagine if every time the power went out, people were required to\nrestore their disks from the last backup.\n\nSo, I think that what Postgres needs to do when it sees corruption is\njust what fsck does: warn the user, put the database in a usable\nstate, and provide whatever information can be provided about lost or\ninconsistent data. Users who need full bore gold plated consistency\nwill restore from their last good backup, and invest in a larger UPS.\nMany users will simply keep going.\n\nIan\n\n---------------------------(end of broadcast)---------------------------\nTIP 43: The human animal differs from the lesser primates in his passion for\nlists of \"Ten Best\".\n\t\t-- H. Allen Smith\n", "msg_date": "07 Mar 2001 21:52:00 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes" }, { "msg_contents": "What do we debate?\nI never told that we shouldn't worry about current WAL disability to restart.\nAnd WAL already correctly works in situation of \"failing to write a couple of disk\nblocks when the system crashes\".\nMy statement at first place that \"WAL can't help in the event of disk errors\"\nwas to remind that we should think over how much are we going to guarantee\nand by what means in the event our *base* requirements were not answered\n(guaranteed -:)). 
My POV is that two checkpoints increase disk space\nrequirements for *everyday usage* while buying nearly nothing, because data\nconsistency cannot be guaranteed anyway.\nOn the other hand, this is the fastest way to implement WAL restart-ability\n- *which is the real problem we have to fix*.\n\nVadim\n\n\n", "msg_date": "Thu, 8 Mar 2001 04:24:23 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes" } ]
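Tom's two-checkpoint fallback from the thread above (keep the locations of the last two checkpoints and use the older one if the newer record fails validation) can be modeled with a toy record format. The framing below, length then CRC32 then payload, is illustrative only and is not WAL's actual on-disk layout.

```python
import binascii
import struct

def wrap(payload):
    """Frame a record as <length><crc32><payload>.  A stand-in for WAL's
    real record header, used only to make corruption detectable."""
    return struct.pack("=II", len(payload), binascii.crc32(payload)) + payload

def read_record(log, off):
    """Return the payload at byte offset `off`, or None if unreadable."""
    try:
        ln, crc = struct.unpack_from("=II", log, off)
    except struct.error:
        return None
    payload = log[off + 8:off + 8 + ln]
    if len(payload) != ln or binascii.crc32(payload) != crc:
        return None
    return payload

def choose_checkpoint(log, newest_off, prior_off):
    """Startup rule from the proposal: try the newest checkpoint record,
    fall back to the one before it if the newest fails validation."""
    for off in (newest_off, prior_off):
        rec = read_record(log, off)
        if rec is not None:
            return off, rec
    raise RuntimeError("both checkpoint records unreadable")
```

This is exactly why log segments older than the checkpoint-before-latest must be retained: the fallback is useless if the segments it points into have already been recycled.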
[ { "msg_contents": "> >> * Store two past checkpoint locations, not just one, in pg_control.\n> >> On startup, we fall back to the older checkpoint if the newer one\n> >> is unreadable. Also, a physical copy of the newest \n> >> checkpoint record\n> \n> > And what to do if older one is unreadable too?\n> > (Isn't it like using 2 x CRC32 instead of CRC64 ? -:))\n> \n> Then you lose --- but two checkpoints gives you twice the chance of\n> recovery (probably more, actually, since it's much more likely that\n> the previous checkpoint will have reached disk safely).\n\nThis is not correct. If log is corrupted somehow (checkpoint wasn't\nflushed as promised) then you have no chance to *recover* because of\nDB will be (most likely) in inconsistent state (data pages flushed\nbefore corresponding log records etc). So, second checkpoint gives us\ntwice the chance to *restart* in normal way - read checkpoint and\nrollforward from redo record, - not to *recover*. But this approach\ntwice increases on-line log size requirements and doesn't help to\nhandle cases when pg_control was corrupted. Note, I agreed that\ndisaster *restart* must be implemented, I just think that\n\"two checkpoints\" approach is not the best way to follow.\n>From my POV, scanning logs is much better - it doesn't require\ndoubling size of on-line logs and allows to *restart* if pg_control\nwas lost/corrupted: \n\nIf there is no pg_control or it's corrupted or points to\nunexistent/corrupted checkpoint record then scan logs from\nnewest to oldest one till we find last valid checkpoint record\nor oldest valid log record and than redo from there.\n\n> See later discussion --- Andreas convinced me that flushing NEXTXID\n> records to disk isn't really needed after all. (I didn't \n> take the flush out of my patch yet, but will do so.) 
I still want\n> to leave the NEXTXID records in there, though, because I think that\n> XID and OID assignment ought to work as nearly alike as possible.\n\nAs I explained in short already: with UNDO we'll be able to reuse\nXIDs after restart - ie there will be no point to have NEXTXID\nrecords at all. And there is no point to add it now.\nDoes it fix anything? Isn't \"fixing\" all what we must do in beta?\n\nVadim\n", "msg_date": "Wed, 7 Mar 2001 18:37:07 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Proposed WAL changes " }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n> \"two checkpoints\" approach is not the best way to follow.\n> From my POV, scanning logs is much better - it doesn't require\n> doubling size of on-line logs and allows to *restart* if pg_control\n> was lost/corrupted: \n\n> If there is no pg_control or it's corrupted or points to\n> unexistent/corrupted checkpoint record then scan logs from\n> newest to oldest one till we find last valid checkpoint record\n> or oldest valid log record and than redo from there.\n\nAnd how well will that approach work if the last checkpoint record\ngot written near the start of a log segment file, and then the\ncheckpointer discarded all your prior log segments because \"you don't\nneed those anymore\"? 
If the checkpoint record gets corrupted,\nyou have no readable log at all.\n\nEven if you prefer to scan the logs to find the latest checkpoint\n(to which I have no objection), it's still useful for the system\nto keep track of the checkpoint-before-latest and only delete log\nsegments that are older than that.\n\nOf course checkpoint-before-latest is a somewhat arbitrary way of\ndeciding how far back we need to keep logs, but I think it's more\ndefensible than anything else we can implement with small work.\n\n> As I explained in short already: with UNDO we'll be able to reuse\n> XIDs after restart - ie there will be no point to have NEXTXID\n> records at all. And there is no point to add it now.\n> Does it fix anything? Isn't \"fixing\" all what we must do in beta?\n\nIf you really insist, I'll take it out again, but I'm unconvinced\nthat that's necessary.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 07 Mar 2001 21:46:37 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes " } ]
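Vadim's alternative restart rule, scanning the logs from newest to oldest until a valid checkpoint record is found, reduces to a backward scan with CRC validation. The `(kind, payload, crc)` tuples below are a toy stand-in for real WAL segments; the shape of the rule is what matters.

```python
import binascii

def record(kind, payload):
    """Build a (kind, payload, crc) tuple, a toy WAL record."""
    return (kind, payload, binascii.crc32(payload))

def last_valid_checkpoint(records):
    """Sketch of the scan-based restart rule: walk the log from newest to
    oldest and return the first checkpoint whose CRC still validates.
    None means no readable checkpoint was found, so redo would have to
    start from the oldest valid record instead."""
    for kind, payload, crc in reversed(records):
        if kind == "checkpoint" and binascii.crc32(payload) == crc:
            return payload
    return None
```

Note that this scheme needs no pg_control at all to restart, which is its advantage over the two-pointer approach; its weakness, as Tom points out, is that it finds nothing if the prior segments were already discarded.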
[ { "msg_contents": "> > If there is no pg_control or it's corrupted or points to\n> > nonexistent/corrupted checkpoint record then scan logs from\n> > newest to oldest one till we find last valid checkpoint record\n> > or oldest valid log record and then redo from there.\n> \n> And how well will that approach work if the last checkpoint record\n> got written near the start of a log segment file, and then the\n> checkpointer discarded all your prior log segments because \"you don't\n> need those anymore\"? If the checkpoint record gets corrupted,\n> you have no readable log at all.\n\nThe question - why should we have it? It is assumed that data files\nare flushed before checkpoint appears in log. If this assumption\nis wrong due to *bogus* fsync/disk/whatever why should we increase\ndisk space requirements which will affect *good* systems too?\nWhat will we buy with extra logs? Just some data we can't\nguarantee consistency anyway?\nIt seems that you want to guarantee more than me, Tom -:)\n\nBTW, in some of my tests size of on-line logs was ~ 200Mb with default\ncheckpoint interval. So, it's worth caring about on-line log size.\n\n> > As I explained in short already: with UNDO we'll be able to reuse\n> > XIDs after restart - ie there will be no point to have NEXTXID\n> > records at all. And there is no point to add it now.\n> > Does it fix anything? Isn't \"fixing\" all we must do in beta?\n> \n> If you really insist, I'll take it out again, but I'm unconvinced\n> that that's necessary.\n\nPlease convince me that NEXTXID is necessary.\nWhy add anything that is not useful?\n\nVadim\n", "msg_date": "Wed, 7 Mar 2001 19:27:13 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: Proposed WAL changes " }, { "msg_contents": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM> writes:\n>> And how well will that approach work if the last checkpoint record\n>> got written near the start of a log segment file, and then the\n>> checkpointer discarded all your prior log segments because \"you don't\n>> need those anymore\"? If the checkpoint record gets corrupted,\n>> you have no readable log at all.\n\n> The question - why should we have it? It is assumed that data files\n> are flushed before checkpoint appears in log. If this assumption\n> is wrong due to *bogus* fsync/disk/whatever why should we increase\n> disk space requirements which will affect *good* systems too?\n> What will we buy with extra logs? Just some data we can't\n> guarantee consistency anyway?\n> It seems that you want to guarantee more than me, Tom -:)\n\nNo, but I want a system that's not brittle. You seem to be content to\ndesign a system that is reliable as long as the WAL log is OK but loses\nthe entire database unrecoverably as soon as one bit goes bad in the\nlog. I'd like a slightly softer failure mode. WAL logs *will* go bad\n(even without system crashes; what of unrecoverable disk read errors?)\nand we ought to be able to deal with that with some degree of grace.\nYes, we lost our guarantee of consistency. That doesn't mean we should\nnot do the best we can with what we've got left.\n\n> BTW, in some of my tests size of on-line logs was ~ 200Mb with default\n> checkpoint interval. So, it's worth caring about on-line log size.\n\nOkay, but to me that suggests we need a smarter log management strategy,\nnot a management strategy that throws away data we might wish we still\nhad (for manual analysis if nothing else). Perhaps the checkpoint\ncreation rule should be \"every M seconds *or* every N megabytes of log,\nwhichever comes first\". It'd be fairly easy to signal the postmaster to\nstart up a new checkpoint process when XLogWrite rolls over to a new log\nsegment, if the last checkpoint was further back than some number of\nsegments. Comments?\n\n> Please convince me that NEXTXID is necessary.\n> Why add anything that is not useful?\n\nI'm not convinced that it's not necessary. In particular, consider the\ncase where we are trying to recover from a crash using an on-line\ncheckpoint as our last readable WAL entry. In the pre-NEXTXID code,\nthis checkpoint would contain the current XID counter and an\nadvanced-beyond-current OID counter. I think both of those numbers\nshould be advanced beyond current, so that there's some safety margin\nagainst reusing XIDs/OIDs that were allocated by now-lost XLOG entries.\nThe OID code is doing this right, but the XID code wasn't.\n\nAgain, it's a question of brittleness. Yes, as long as everything\noperates as designed and the WAL log never drops a bit, we don't need\nit. But I want a safety margin for when things aren't perfect.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 12:03:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Proposed WAL changes " } ]
[ { "msg_contents": "> 1) WAL\n> We have buffer manager, ok. So why not to use WAL as part of \n> it and don't log INSERT/UPDATE/DELETE xlog records but directly\n> changes into buffer pages ? When someone dirties a page it has to\n> inform bmgr about dirty region and bmgr would formulate xlog record.\n> The record could be for example fixed bitmap where each bit corresponds\n> to part of page (of size pgsize/no-of-bits) which was changed. These\n> changed regions follow. Multiple writes (by multiple backends) can be\n> coalesced together as long as their transactions overlap and there is\n> enough memory to keep changed buffer pages in memory.\n> \n> Pros: upper layers can think that buffers are always safe/logged and\n> there is no special handling for indices; very simple/fast redo\n> Cons: can't implement undo - but in non-overwriting is not needed (?)\n\nBut needed if we want to get rid of vacuum and have savepoints.\n\nVadim\n", "msg_date": "Wed, 7 Mar 2001 19:34:03 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "RE: WAL & SHM principles" }, { "msg_contents": "> > Pros: upper layers can think that buffers are always safe/logged and\n> > there is no special handling for indices; very simple/fast redo\n> > Cons: can't implement undo - but in non-overwriting is not needed (?)\n> \n> But needed if we want to get rid of vacuum and have savepoints.\n\nHmm. How do you implement savepoints ? When there is rollback to savepoint\ndo you use xlog to undo all changes which the particular transaction has\ndone ? Hmmm it seems nice ... these records are locked by such transaction\nso that it can safely undo them :-)\nAm I right ?\n\nBut how can you use xlog to get rid of vacuum ? Do you treat all delete\nlog records as candidates for free space ?\n\nregards, devik\n\n", "msg_date": "Fri, 9 Mar 2001 16:03:39 +0100 (CET)", "msg_from": "Martin Devera <devik@cdi.cz>", "msg_from_op": false, "msg_subject": "RE: WAL & SHM principles" } ]
[ { "msg_contents": "Hi!\n\nI am a student of systems engineering and I carried out a work with\nPostgreSQL in which I should implement inclusion dependencies using\nrules. \n\nDuring the development of this work, I ran into a problem that I solved by\nadding to Postgres the possibility to disable (deactivate) a rule\ntemporarily. In what follows, I detail the problem and define the\ndeactivation. If you are interested in incorporating this concept into\nthe official version, I can send you a diff. \n\nMoreover, I suppose this concept may be useful in other contexts since\nSTARBURST, for instance, has a similar statement to activate/deactivate\nthe rules.\n\nWith the following tables:\n\ntest=# create table table_a(a_cod int primary key,a_desc char);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\n'table_a_pkey' for table 'table_a'\nCREATE\n\ntest=# create table table_b(b_cod int primary key,b_desc char,a_cod\nint);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index\n'table_b_pkey' for table 'table_b'\nCREATE\n\ntest=# insert into table_a values (1,'O');\nINSERT 138026 1\n\ntest=# insert into table_a values (2,'T');\nINSERT 138027 1\n\ntest=# insert into table_b values (100,'O',1);\nINSERT 138028 1\n\nAnd the inclusion dependency defined as follows: \n\ntable_b(a_cod) => table_a(a_cod)\n\nThe UPDATE with \"restricted\" option can be implemented with the rule: \n\nCREATE RULE u_table_b_res\nAS ON\nUPDATE TO table_b\nWHERE\n (OLD.a_cod <> NEW.a_cod\n OR\n OLD.a_cod is NULL\n )\n AND\n NEW.a_cod is not NULL\n AND NOT EXISTS \n\t\t (SELECT table_a.a_cod\n FROM table_a\n WHERE table_a.a_cod = new.a_cod\n\t\t)\nDO INSTEAD\n select pg_abort_with_msg(new.a_cod||' NOT EXIST IN TABLE\ntable_a'); \n-- pg_abort_with_msg(msg) is a function that calls elog(ERROR,msg)\n\nThis rule works as expected but if I define a \"cascade\" action for\ntable_b when table_a is updated:\n\nCREATE RULE u_table_a_cas\nAS ON\nUPDATE TO table_a\nDO update table_b set a_cod=New.a_cod\n where\n table_b.a_cod=OLD.a_cod;\n\n\nAnd I execute: \n\ntest=# update table_a set a_cod=100 where a_cod=1;\nERROR: 100 NOT EXIST IN TABLE table_a\n\nThis result is not the expected one. This happens because of a\ncharacteristic of the rewriting system: the queryTree of the rule\nu_table_b_res is executed first and therefore the execution fails. \n\nTo solve this problem I added to the grammar the statements DEACTIVATE\nRULE rulename and REACTIVATE RULE rulename. The DEACTIVATE RULE statement\nallows me to disable the rule u_table_b_res and thus avoid the\ninterference. The REACTIVATE RULE statement puts the rule back in the\nactive state. \nThese new statements don't reach the executor. DEACTIVATE only prevents the\ntriggering of this rule during the rewriting process (i.e. the action is\nnot included in the queryTree to be executed) and it only affects the\ncurrent session. \nThe rule remains disabled only during the rewriting phase of the\noriginal statement (the UPDATE to table_a in this example), from the\npoint where the DEACTIVATE is detected (in fireRules, in rewriteHandler.c)\nuntil a REACTIVATE is found or the end of the rule is reached. \nWith the new statement the rule would be: \n\nCREATE RULE u_table_a_cas\nAS ON\nUPDATE TO table_a\nDO (\t\n\tdeactivate rule u_table_b_res;\n\n\tupdate table_b set a_cod=New.a_cod\n \twhere\n \ttable_b.a_cod=OLD.a_cod;\n)\n\nIt is necessary to keep in mind that the rule should only be disabled\nwhen its action is no longer necessary, as in this case. \nA rule cannot be disabled indiscriminately, thus it is only possible to\ndisable a rule if the user that creates the rule (in whose action the\nDEACTIVATE is executed) has \"permission RULE\" on the table owning the\nrule to be disabled. For the previous case, if 'userA' is the user that\ncreates the rule 'u_table_a_cas', 'userA' should have \"permission RULE\"\non 'table_a' and also on 'table_b' (which is the owner of the rule\n'u_table_b_res'). \n\nThat is all.\n\nThanks\n\nSergio.\n", "msg_date": "Thu, 08 Mar 2001 08:14:38 -0300", "msg_from": "Sergio Pili <sergiop@sinectis.com.ar>", "msg_from_op": true, "msg_subject": "Deactivate Rules" } ]
[ { "msg_contents": "Hi guys,\n\nI've been looking through the memory management system today.\n\nWhen a request is made for a memory chunk larger than\nALLOC_CHUNK_LIMIT, AllocSetAlloc() uses malloc() to give the request its\nown block. The result is tested by AllocSetAlloc() to see if the memory\nwas allocated.\n\nIrrespective of this, a chunk can be returned which has not had memory\nallocated to it. There is no testing of the return status of\npalloc() throughout the code. \n\nWas/has this been addressed?\n\nThanks\n\nGavin\n\n", "msg_date": "Thu, 8 Mar 2001 22:28:50 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Memory management, palloc" }, { "msg_contents": "On Thu, Mar 08, 2001 at 10:28:50PM +1100, Gavin Sherry wrote:\n> Hi guys,\n> \n> I've been looking through the memory management system today.\n> \n> When a request is made for a memory chunk larger than\n> ALLOC_CHUNK_LIMIT, AllocSetAlloc() uses malloc() to give the request its\n> own block. The result is tested by AllocSetAlloc() to see if the memory\n> was allocated.\n> \n> Irrespective of this, a chunk can be returned which has not had memory\n> allocated to it. There is no testing of the return status of\n> palloc() throughout the code. \n\n I don't understand. If some memory is not obtain in AllocSetAlloc()\nall finish with elog(ERROR). Not exists way how return insufficient \nspace. Or not?\n\n\t\tKarel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Thu, 8 Mar 2001 15:54:29 +0100", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Memory management, palloc" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> I've been looking through the memory management system today.\n\n> When a request is made for a memory chunk larger than\n> ALLOC_CHUNK_LIMIT, AllocSetAlloc() uses malloc() to give the request its\n> own block. The result is tested by AllocSetAlloc() to see if the memory\n> was allocated.\n\n> Irrespective of this, a chunk can be returned which has not had memory\n> allocated to it. There is no testing of the return status of\n> palloc() throughout the code. \n\nWhat's your point?\n\npalloc() does not have the same specification as malloc. It guarantees\nto return allocated memory, or elog trying.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 10:14:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Memory management, palloc " }, { "msg_contents": "Karel,\n\nOn Thu, 8 Mar 2001, Karel Zak wrote:\n\n> On Thu, Mar 08, 2001 at 10:28:50PM +1100, Gavin Sherry wrote:\n> > Hi guys,\n> > \n> > I've been looking through the memory management system today.\n> > \n> > When a request is made for a memory chunk larger than\n> > ALLOC_CHUNK_LIMIT, AllocSetAlloc() uses malloc() to give the request its\n> > own block. The result is tested by AllocSetAlloc() to see if the memory\n> > was allocated.\n> > \n> > Irrespective of this, a chunk can be returned which has not had memory\n> > allocated to it. There is no testing of the return status of\n> > palloc() throughout the code. \n> \n> I don't understand. If some memory is not obtain in AllocSetAlloc()\n> all finish with elog(ERROR). Not exists way how return insufficient \n> space. Or not?\n\nAhh. Of course. My mistake =)\n\nGavin\n\n", "msg_date": "Fri, 9 Mar 2001 02:19:14 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "Re: Memory management, palloc" } ]
[ { "msg_contents": "I am looking to beef up a PostgreSQL database by moving it to a Sun\nEnterprise or an Alpha ES-40 or some other multi-CPU platform. My\nquestions are:\n\n - What suggestions do people have for a good PostgreSQL platform.\n - How well does PostgreSQL take advantage of multiple CPUs?\n\nThanks.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 8 Mar 2001 07:50:15 -0500 (EST)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "PostgreSQL on multi-CPU systems" }, { "msg_contents": "On Thu, Mar 08, 2001 at 07:50:15AM -0500, D'Arcy J.M. Cain wrote:\n> I am looking to beef up a PostgreSQL database by moving it to a Sun\n> Enterprise or an Alpha ES-40 or some other multi-CPU platform. My\n> questions are:\n> \n> - What suggestions do people have for a good PostgreSQL platform.\n> - How well does PostgreSQL take advantage of multiple CPUs?\n\ni have postgres running on a couple dual-processor intel boxes running\nfreebsd.\n\nunder freebsd, the win is that the various backend processes will flit\nbetween the CPU's, thus increasing the CPU utilization.\n\nthe win was not overly huge in our case, but it was in fact a win.\n\nin one case, the system had numerous volatile multi-million record tables,\nand was not performing adequately on a single cpu.\n\nthe system was migrated to a dual cpu solution (mind you the two CPU's were\neach twice as fast as the previous single CPU) and things have been quite\nmanageable since.\n\nanother factor is to get your data on fast/striped disk.\n\n-- \n[ Jim Mercer jim@pneumonoultramicroscopicsilicovolcanoconiosis.ca ]\n[ Reptilian Research -- Longer Life through Colder Blood ]\n[ aka jim@reptiles.org +1 416 410-5633 ]\n", "msg_date": "Mon, 12 Mar 2001 15:10:06 -0500", "msg_from": "Jim Mercer <jim@reptiles.org>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on multi-CPU systems" }, { "msg_contents": "> I am looking to beef up a PostgreSQL database by moving it to a Sun\n> Enterprise or an Alpha ES-40 or some other multi-CPU platform. My\n> questions are:\n> \n> - What suggestions do people have for a good PostgreSQL platform.\n> - How well does PostgreSQL take advantage of multiple CPUs?\n\nI have tested PostgreSQL with 2-4 CPU linux boxes. In summary, 2 CPU\nwas a big win, but 4 was not. I'm not sure where the bottleneck is\nthough.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 13 Mar 2001 09:40:01 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on multi-CPU systems" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have tested PostgreSQL with 2-4 CPU linux boxes. In summary, 2 CPU\n> was a big win, but 4 was not. I'm not sure where the bottleneck is\n> though.\n\nOur not-very-good implementation of spin locking (using select() to\nwait) might have something to do with this. Sometime soon I'd like to\nlook at using POSIX semaphores where available, instead of spinlocks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 12 Mar 2001 19:45:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on multi-CPU systems " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> I have tested PostgreSQL with 2-4 CPU linux boxes. In summary, 2 CPU\n>> was a big win, but 4 was not. I'm not sure where the bottleneck is\n>> though.\n\n> Our not-very-good implementation of spin locking (using select() to\n> wait) might have something to do with this. Sometime soon I'd like to\n> look at using POSIX semaphores where available, instead of spinlocks.\n\ndid anyone from here follow the discussion about postgresql on\nsmp machines on the linux kernel mailing list in the last days?\n(just as an info)\n\nt\n\n-- \nthomas.graichen@innominate.com\n innominate AG\n the linux architects\ntel: +49-30-308806-13 fax: -77 http://www.innominate.com\n", "msg_date": "13 Mar 2001 07:24:20 GMT", "msg_from": "Thomas Graichen <news-list.pgsql.hackers@innominate.de>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on multi-CPU systems" }, { "msg_contents": "> did anyone from here follow the discussion about postgresql on\n> smp machines on the linux kernel mailing list in the last days?\n> (just as an info)\n\nI didn't. Do you have a synopsis or references?\n\n - Thomas\n", "msg_date": "Tue, 13 Mar 2001 15:31:02 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on multi-CPU systems" }, { "msg_contents": "On Tue, 13 Mar 2001, Thomas Lockhart wrote:\n\n> > did anyone from here follow the discussion about postgresql on\n> > smp machines on the linux kernel mailing list in the last days?\n> > (just as an info)\n> \n> I didn't. Do you have a synopsis or references?\n\nThe thread starts here:\n\nhttp://www.mail-archive.com/linux-kernel%40vger.kernel.org/msg29798.html\n\n\nMatthew.\n\n", "msg_date": "Tue, 13 Mar 2001 17:09:03 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: Re: PostgreSQL on multi-CPU systems" }, { "msg_contents": "Thus spake Tom Lane\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I have tested PostgreSQL with 2-4 CPU linux boxes. In summary, 2 CPU\n> > was a big win, but 4 was not. I'm not sure where the bottleneck is\n> > though.\n> \n> Our not-very-good implementation of spin locking (using select() to\n> wait) might have something to do with this. Sometime soon I'd like to\n> look at using POSIX semaphores where available, instead of spinlocks.\n\nOne thing I notice is that a single query can seem to block other queries,\nat least to some extent. It makes me wonder if we effectively have a\nsingle threaded system. In fact, I have some simple queries that if\nI send a bunch at once, the first one can take 15 seconds while the\nothers zip through. Is this related to what you are talking about?\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n", "msg_date": "Thu, 15 Mar 2001 07:53:17 -0500 (EST)", "msg_from": "darcy@druid.net (D'Arcy J.M. Cain)", "msg_from_op": true, "msg_subject": "Re: PostgreSQL on multi-CPU systems" }, { "msg_contents": "darcy@druid.net (D'Arcy J.M. Cain) writes:\n> One thing I notice is that a single query can seem to block other queries,\n> at least to some extent.\n\nIt's not supposed to, except with certain specific features (for\nexample, I don't think any of the index types other than btree allow\nconcurrent insertions). Can you give a concrete example?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Mar 2001 09:58:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL on multi-CPU systems " } ]