[ { "msg_contents": "After a system crash on a RH 7.2 box (2.4.7-10 kernel), I found that\nPostgres would not restart, complaining that it\t\"found a pre-existing\nshared memory block (ID so-and-so) still in use.\"\n\nThis is coming from code that attempts to defend against the scenario\nwhere the postmaster crashed but one or more backends are still alive.\nIf we start a new postmaster and create a new shmem segment, the\nconsequences will be absolutely disastrous, because the old and new\nbackends will be modifying the same data files with no coordination.\nSo we look to see if the old shmem segment (whose ID is recorded in\nthe data directory lockfile) is still present and if so whether there\nare any processes attached to it. See SharedMemoryIsInUse() in\nsrc/backend/storage/ipc/ipc.c.\n\nThe problem is that SharedMemoryIsInUse() expects shmctl to return\nerrno == EINVAL if the presented shmem segment ID is invalid. What\nLinux 2.4.7 is actually returning is EIDRM (identifier removed).\n\nThe easy \"fix\" of taking EIDRM to be an allowable return code scares\nme. At least on HPUX, the documented implication of this return code\nis that the shmem segment is marked for deletion but is not yet gone\nbecause there are still processes attached to it. That would be\nexactly the scenario after a postmaster crash and manual \"ipcrm\" if\nthere were any old backends still alive. So, it seems to me that\naccepting EIDRM would defeat the entire point of this test, at least\non some platforms.\n\nComments? Is 2.4.7 simply broken and returning the wrong errno?\nIf not, what should we do?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Jan 2002 20:47:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "shmctl portability problem" }, { "msg_contents": "Tom Lane wrote:\n> Comments? 
Is 2.4.7 simply broken and returning the wrong errno?\n> If not, what should we do?\n\nYou will most surely experiment memory management (Virtual Memory) problems with\nany Linux Kernel v. 2.4.x with x<15 or so. Try again using 2.4.15 or higher.\n\nRegards,\nHaroldo.\n", "msg_date": "Fri, 04 Jan 2002 00:05:46 -0300", "msg_from": "Haroldo Stenger <hstenger@adinet.com.uy>", "msg_from_op": false, "msg_subject": "Re: shmctl portability problem" }, { "msg_contents": "> The easy \"fix\" of taking EIDRM to be an allowable return code scares\n> me. At least on HPUX, the documented implication of this return code\n> is that the shmem segment is marked for deletion but is not yet gone\n> because there are still processes attached to it. That would be\n> exactly the scenario after a postmaster crash and manual \"ipcrm\" if\n> there were any old backends still alive. So, it seems to me that\n> accepting EIDRM would defeat the entire point of this test, at least\n> on some platforms.\n> \n> Comments? Is 2.4.7 simply broken and returning the wrong errno?\n> If not, what should we do?\n\nSeems we have to contact linux kernel guys or dig into the kernel\nourselves to see why that is being returned. I do have EIDRM in BSD/OS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 3 Jan 2002 23:59:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: shmctl portability problem" } ]
[ { "msg_contents": "Did time become a reserved word in 7.2? I tried create a database with \na script that works in 7.1.3, but it fails on the word time as a column \nname.\n\n\n\n", "msg_date": "Thu, 03 Jan 2002 22:06:50 -0500", "msg_from": "Dwayne Miller <dwayne-miller@home.com>", "msg_from_op": true, "msg_subject": "Time keyword" } ]
[ { "msg_contents": "Did time become a keyword in 7.2? 7.1.3 allowed it as a column name... \n7.2 rejects it.\n\nTks\n\n", "msg_date": "Thu, 03 Jan 2002 22:20:21 -0500", "msg_from": "Dwayne Miller <dwayne-miller@home.com>", "msg_from_op": true, "msg_subject": "Time as keyword" }, { "msg_contents": "Dwayne Miller <dwayne-miller@home.com> writes:\n> Did time become a keyword in 7.2? 7.1.3 allowed it as a column name... \n> 7.2 rejects it.\n\nIt's always been a keyword, but it is \"more reserved\" than it used to\nbe. See\n\nhttp://developer.postgresql.org/docs/postgres/sql-keywords-appendix.html\n\nHowever, according to that list TIME is still allowed as a column name,\nand indeed I get:\n\nregression=# create table foo (f1 time, time time);\nCREATE\n\nSo I'm not sure what you did.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 09:44:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Time as keyword " }, { "msg_contents": "On Thu, Jan 03, 2002 at 10:20:21PM -0500, Dwayne Miller wrote:\n> Did time become a keyword in 7.2?\n\n$ grep TIME src/backend/parser/keywords.c \n\t{\"current_time\", CURRENT_TIME},\n\t{\"current_timestamp\", CURRENT_TIMESTAMP},\n\t{\"time\", TIME},\n\t{\"timestamp\", TIMESTAMP},\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Tue, 8 Jan 2002 16:13:22 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": false, "msg_subject": "Re: Time as keyword" }, { "msg_contents": "> Did time become a keyword in 7.2? 7.1.3 allowed it as a column name...\n> 7.2 rejects it.\n\nYes. We now support SQL99 time and timestamp precision, which require\nthat TIME(p) be a type specification. So there are parts of the grammar\nwhich cannot easily fit \"time\" anymore.\n\nYou could/should use the SQL99 list of reserved words as a guide for\nwhich keywords to *not* use, even though some of them are currently\naccepted as, for example, column names. 
In the meantime, you can\ndouble-quote the column name if you really need it to stay \"time\".\n\n - Thomas\n", "msg_date": "Tue, 08 Jan 2002 17:00:03 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Time as keyword" }, { "msg_contents": "I tried to create table foo (t time, time time); and received an error \nsomething like\n'Error parsing near time'\n\nI'm on 7.2.b2\n\nI'll upgrade and try again.\n\n\nTom Lane wrote:\n\n>Dwayne Miller <dwayne-miller@home.com> writes:\n>\n>>Did time become a keyword in 7.2? 7.1.3 allowed it as a column name... \n>>7.2 rejects it.\n>>\n>\n>It's always been a keyword, but it is \"more reserved\" than it used to\n>be. See\n>\n>http://developer.postgresql.org/docs/postgres/sql-keywords-appendix.html\n>\n>However, according to that list TIME is still allowed as a column name,\n>and indeed I get:\n>\n>regression=# create table foo (f1 time, time time);\n>CREATE\n>\n>So I'm not sure what you did.\n>\n>\t\t\tregards, tom lane\n>\n\n\n", "msg_date": "Wed, 09 Jan 2002 09:52:43 -0500", "msg_from": "Dwayne Miller <dwayne-miller@home.com>", "msg_from_op": true, "msg_subject": "Re: Time as keyword" } ]
[ { "msg_contents": "Hi!\n\nI am preparing the update of the FreeBSD port of PostgreSQL with the \nupcoming 7.2, and I'm just wondering: is there any performance penalty \nintoduced by including --with-ssl in the default configure args? Of course, \nif SSL is actually *used*, I know what'll happen ;-) Just wondering \nwhether there is any reason not to include it by default if it exists on \nthe system; will it decrease performance for those who don't use it?\n\nRegards,\nPalle\n\n", "msg_date": "Fri, 04 Jan 2002 04:43:22 +0100", "msg_from": "Palle Girgensohn <girgen@partitur.se>", "msg_from_op": true, "msg_subject": "Is there any performance penalty using --with-ssl?" }, { "msg_contents": "Palle Girgensohn <girgen@partitur.se> writes:\n> I am preparing the update of the FreeBSD port of PostgreSQL with the \n> upcoming 7.2, and I'm just wondering: is there any performance penalty \n> intoduced by including --with-ssl in the default configure args?\n\nFailure to build/run if SSL libraries are not available?\n\nAFAIK there is no run-time penalty, especially not if the server is\nstarted without the enable-ssl switch. But there had better be an\nSSL library to link with.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 03 Jan 2002 23:03:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is there any performance penalty using --with-ssl? 
" }, { "msg_contents": "> I am preparing the update of the FreeBSD port of PostgreSQL with the \n> upcoming 7.2, and I'm just wondering: is there any performance penalty \n> intoduced by including --with-ssl in the default configure args?\n\nNo, the only reason that the switch exists is that some hosts may not have\nOpenSSL\ninstalled (including related legal reasons).\n\n\n-- \nGMX - Die Kommunikationsplattform im Internet.\nhttp://www.gmx.net", "msg_date": "Fri, 4 Jan 2002 05:08:16 +0100 (MET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Is there any performance penalty using --with-ssl?" }, { "msg_contents": "> AFAIK there is no run-time penalty, especially not if the server is\n> started without the enable-ssl switch. But there had better be an\n> SSL library to link with.\n\nWell, FreeBSD has come with OpenSSL in the base system by default for a long\ntime now.\n\nWhat about the memory size overhead it adds to every postgres process?\n\nChris\n\n", "msg_date": "Fri, 4 Jan 2002 12:25:23 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Is there any performance penalty using --with-ssl? " }, { "msg_contents": "--On Thursday, January 03, 2002 23:03:11 -0500 Tom Lane <tgl@sss.pgh.pa.us> \nwrote:\n\n> Palle Girgensohn <girgen@partitur.se> writes:\n>> I am preparing the update of the FreeBSD port of PostgreSQL with the\n>> upcoming 7.2, and I'm just wondering: is there any performance penalty\n>> intoduced by including --with-ssl in the default configure args?\n>\n> Failure to build/run if SSL libraries are not available?\n\nThe main problem, of course, but this is can be handled in the port.\n\n> AFAIK there is no run-time penalty, especially not if the server is\n> started without the enable-ssl switch. But there had better be an\n> SSL library to link with.\n\nTrue. 
Thanks for the input.\n\nCheers,\nPalle\n\n", "msg_date": "Fri, 04 Jan 2002 05:27:12 +0100", "msg_from": "Palle Girgensohn <girgen@partitur.se>", "msg_from_op": true, "msg_subject": "Re: Is there any performance penalty using --with-ssl? " }, { "msg_contents": "> Failure to build/run if SSL libraries are not available?\n>\n> AFAIK there is no run-time penalty, especially not if the server is\n> started without the enable-ssl switch. But there had better be an\n> SSL library to link with.\n\nPalle - the current Postgres Port uses the 'dialog' command to present a\nmenu of what people can optionally compile in. Why not just leave it in\nthat menu?\n\nChris\n\n", "msg_date": "Fri, 4 Jan 2002 12:32:26 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Is there any performance penalty using --with-ssl? " }, { "msg_contents": "On Thu, 3 Jan 2002, Tom Lane wrote:\n\n> Palle Girgensohn <girgen@partitur.se> writes:\n> > I am preparing the update of the FreeBSD port of PostgreSQL with the\n> > upcoming 7.2, and I'm just wondering: is there any performance penalty\n> > intoduced by including --with-ssl in the default configure args?\n>\n> Failure to build/run if SSL libraries are not available?\n>\n> AFAIK there is no run-time penalty, especially not if the server is\n> started without the enable-ssl switch. But there had better be an\n> SSL library to link with.\n\nSSL libraries are default with a FreeBSD install, as its required by SSH\n...\n\n\n", "msg_date": "Fri, 4 Jan 2002 00:50:08 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Is there any performance penalty using --with-ssl? 
" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> What about the memory size overhead it adds to every postgres process?\n\nAFAIK, on all modern OSes there's no significant performance penalty\nfor code that's nominally part of your address space but is never\nactually swapped in/executed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Jan 2002 01:22:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Is there any performance penalty using --with-ssl? " }, { "msg_contents": "--On Friday, January 04, 2002 12:32:26 +0800 Christopher Kings-Lynne \n<chriskl@familyhealth.com.au> wrote:\n\n>> Failure to build/run if SSL libraries are not available?\n>>\n>> AFAIK there is no run-time penalty, especially not if the server is\n>> started without the enable-ssl switch. But there had better be an\n>> SSL library to link with.\n>\n> Palle - the current Postgres Port uses the 'dialog' command to present a\n> menu of what people can optionally compile in. Why not just leave it in\n> that menu?\n\nReason is, I am invesigating the possibility of totally removing the dialog \nand split all interfaces into separate ports. There are pros and cons to \nthis idea, but IMO the pros win.\n\n/Palle\n\n\n\n", "msg_date": "Fri, 04 Jan 2002 14:14:20 +0100", "msg_from": "Palle Girgensohn <girgen@partitur.se>", "msg_from_op": true, "msg_subject": "Re: Is there any performance penalty using --with-ssl? " }, { "msg_contents": "--On Friday, January 04, 2002 00:50:08 -0500 \"Marc G. 
Fournier\" \n<scrappy@hub.org> wrote:\n\n> On Thu, 3 Jan 2002, Tom Lane wrote:\n>\n>> Palle Girgensohn <girgen@partitur.se> writes:\n>> > I am preparing the update of the FreeBSD port of PostgreSQL with the\n>> > upcoming 7.2, and I'm just wondering: is there any performance penalty\n>> > intoduced by including --with-ssl in the default configure args?\n>>\n>> Failure to build/run if SSL libraries are not available?\n>>\n>> AFAIK there is no run-time penalty, especially not if the server is\n>> started without the enable-ssl switch. But there had better be an\n>> SSL library to link with.\n>\n> SSL libraries are default with a FreeBSD install, as its required by SSH\n\nTrue. I was thinking of the obscure cases where\n#NO_OPENSSL= true # do not build OpenSSL (implies NO_OPENSSH)\nis uncommented in make.conf... The port can handle that, no problem, but a \npackage would fail at runtime. Those freebsd'ers can probably live with \nthis, I guess?\n\n/Palle\n\n\n\n", "msg_date": "Fri, 04 Jan 2002 14:20:44 +0100", "msg_from": "Palle Girgensohn <girgen@partitur.se>", "msg_from_op": true, "msg_subject": "Re: Is there any performance penalty using --with-ssl? " } ]
[ { "msg_contents": "Looking at my mailbox, I see _no_ open items for 7.2. Is this a good\ntime for RC1? Tom, can you apply that lwlock patch you are holding?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 00:53:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "RC1 time?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Looking at my mailbox, I see _no_ open items for 7.2. Is this a good\n> time for RC1? Tom, can you apply that lwlock patch you are holding?\n\nAside from the lwlock business, Karel seems to be seeing some problem\nin to_timestamp/to_date.\n\nI agree we're close though. Anyone object to RC1 this weekend?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Jan 2002 01:32:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1 time? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Looking at my mailbox, I see _no_ open items for 7.2. Is this a good\n> > time for RC1? Tom, can you apply that lwlock patch you are holding?\n> \n> Aside from the lwlock business, Karel seems to be seeing some problem\n> in to_timestamp/to_date.\n\nI thought Karel sent in a to_date patch yesterday that you applied. Was\nthere another issue?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 01:39:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: RC1 time?" 
}, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Aside from the lwlock business, Karel seems to be seeing some problem\n>> in to_timestamp/to_date.\n\n> I thought Karel sent in a to_date patch yesterday that you applied. Was\n> there another issue?\n\nYes. He reported something that looked a lot like a DST boundary\nproblem, except it wasn't on a DST boundary date. Thomas thought it\nmight be a consequence of the timestamp-vs-timestamptz change from\n7.1 to 7.2. See http://fts.postgresql.org/db/mw/msg.html?mid=1345390\n\n(BTW, is anyone else noticing that fts.postgresql.org is missing an\nawful lot of traffic? For example, I can't get it to show Thomas'\ncomment on the above-mentioned thread; and that is *VERY* far from\nbeing its only omission lately.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Jan 2002 01:50:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1 time? " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Aside from the lwlock business, Karel seems to be seeing some problem\n> >> in to_timestamp/to_date.\n> \n> > I thought Karel sent in a to_date patch yesterday that you applied. Was\n> > there another issue?\n> \n> Yes. He reported something that looked a lot like a DST boundary\n> problem, except it wasn't on a DST boundary date. Thomas thought it\n> might be a consequence of the timestamp-vs-timestamptz change from\n> 7.1 to 7.2. See http://fts.postgresql.org/db/mw/msg.html?mid=1345390\n\nOh, I didn't realize that was a valid issue that needed attention.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 01:52:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: RC1 time?" 
}, { "msg_contents": "On Fri, 4 Jan 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Aside from the lwlock business, Karel seems to be seeing some problem\n> >> in to_timestamp/to_date.\n>\n> > I thought Karel sent in a to_date patch yesterday that you applied. Was\n> > there another issue?\n>\n> Yes. He reported something that looked a lot like a DST boundary\n> problem, except it wasn't on a DST boundary date. Thomas thought it\n> might be a consequence of the timestamp-vs-timestamptz change from\n> 7.1 to 7.2. See http://fts.postgresql.org/db/mw/msg.html?mid=1345390\n>\n> (BTW, is anyone else noticing that fts.postgresql.org is missing an\n> awful lot of traffic? For example, I can't get it to show Thomas'\n> comment on the above-mentioned thread; and that is *VERY* far from\n> being its only omission lately.)\n\nthere were a *lot of troubles* with fts.postgresql.org connected with\nmoving to new server, which is far from my dream computer :-)\nWe hope to restore all messages we lost in transition period\n(we have to take into account references between postings must be persistent !).\nbtw, if somebody could donate a server dedicated for rapidly growing\nmailing list archive (already > 300,000 messages) ? fts.postgresql.org\nid currently awfull slow !\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Fri, 4 Jan 2002 10:48:04 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: RC1 time? 
" }, { "msg_contents": "On Fri, 4 Jan 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Aside from the lwlock business, Karel seems to be seeing some problem\n> >> in to_timestamp/to_date.\n>\n> > I thought Karel sent in a to_date patch yesterday that you applied. Was\n> > there another issue?\n>\n> Yes. He reported something that looked a lot like a DST boundary\n> problem, except it wasn't on a DST boundary date. Thomas thought it\n> might be a consequence of the timestamp-vs-timestamptz change from\n> 7.1 to 7.2. See http://fts.postgresql.org/db/mw/msg.html?mid=1345390\n>\n> (BTW, is anyone else noticing that fts.postgresql.org is missing an\n> awful lot of traffic? For example, I can't get it to show Thomas'\n> comment on the above-mentioned thread; and that is *VERY* far from\n> being its only omission lately.)\n\nWe just moved it from the old server (that I have to shut down) to the new\none at Rackspace ... the one thing I have to do over the next short period\nof time is to spring for a memory upgrade on that machine though, as\n512Meg just doesn't cut it :(\n\n", "msg_date": "Sat, 5 Jan 2002 01:47:50 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: RC1 time? " }, { "msg_contents": "On Fri, 4 Jan 2002, Oleg Bartunov wrote:\n\n> btw, if somebody could donate a server dedicated for rapidly growing\n> mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> id currently awfull slow !\n\nOr wants to spring for the memory upgrade? The server is better then we\nhad before, but memory is half of what it was ...\n\n\n", "msg_date": "Sat, 5 Jan 2002 01:49:49 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: RC1 time? " }, { "msg_contents": "On Sat, 5 Jan 2002, Marc G. 
Fournier wrote:\n\n> On Fri, 4 Jan 2002, Tom Lane wrote:\n>\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >> Aside from the lwlock business, Karel seems to be seeing some problem\n> > >> in to_timestamp/to_date.\n> >\n> > > I thought Karel sent in a to_date patch yesterday that you applied. Was\n> > > there another issue?\n> >\n> > Yes. He reported something that looked a lot like a DST boundary\n> > problem, except it wasn't on a DST boundary date. Thomas thought it\n> > might be a consequence of the timestamp-vs-timestamptz change from\n> > 7.1 to 7.2. See http://fts.postgresql.org/db/mw/msg.html?mid=1345390\n> >\n> > (BTW, is anyone else noticing that fts.postgresql.org is missing an\n> > awful lot of traffic? For example, I can't get it to show Thomas'\n> > comment on the above-mentioned thread; and that is *VERY* far from\n> > being its only omission lately.)\n>\n> We just moved it from the old server (that I have to shut down) to the new\n> one at Rackspace ... the one thing I have to do over the next short period\n> of time is to spring for a memory upgrade on that machine though, as\n> 512Meg just doesn't cut it :(\n\nI see on db.postgresql.org\n\n> vmstat -w 5\n procs memory page disks faults cpu\n r b w avm fre flt re pi po fr sr da0 da1 in sy cs us sy id\n 0 17 0 471224 28184 369 3 4 2 325 334 0 0 331 401 182 29 2 69\n\n 0 19 0 414556 19272 644 1 1 0 546 0 0 172 461 823 290 1 2 97\n 1 19 0 414788 23940 459 4 4 1 474 615 1 170 454 734 286 0 2 98\n 1 20 0 428592 26912 372 3 14 0 433 592 6 182 480 790 296 1 2 97\n 2 19 0 458688 30164 318 3 9 0 423 592 3 177 463 787 289 1 2 97\n 1 17 0 446848 24196 303 2 4 0 454 0 2 177 463 878 294 1 2 97\n 0 18 0 452432 29404 228 1 3 2 324 633 2 184 472 842 305 2 4 94\n 0 19 0 449724 21860 200 14 6 0 508 0 1 188 473 702 283 0 2 98\n\ndisk activity is very bad, probably not balanced. 
I catch a moment\nwhen fts.postgresql.org was slow.\n\n\n\n\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sat, 5 Jan 2002 21:11:13 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: RC1 time? " }, { "msg_contents": "On Sat, 5 Jan 2002, Oleg Bartunov wrote:\n\n> On Sat, 5 Jan 2002, Marc G. Fournier wrote:\n>\n> > On Fri, 4 Jan 2002, Tom Lane wrote:\n> >\n> > > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > >> Aside from the lwlock business, Karel seems to be seeing some problem\n> > > >> in to_timestamp/to_date.\n> > >\n> > > > I thought Karel sent in a to_date patch yesterday that you applied. Was\n> > > > there another issue?\n> > >\n> > > Yes. He reported something that looked a lot like a DST boundary\n> > > problem, except it wasn't on a DST boundary date. Thomas thought it\n> > > might be a consequence of the timestamp-vs-timestamptz change from\n> > > 7.1 to 7.2. See http://fts.postgresql.org/db/mw/msg.html?mid=1345390\n> > >\n> > > (BTW, is anyone else noticing that fts.postgresql.org is missing an\n> > > awful lot of traffic? For example, I can't get it to show Thomas'\n> > > comment on the above-mentioned thread; and that is *VERY* far from\n> > > being its only omission lately.)\n> >\n> > We just moved it from the old server (that I have to shut down) to the new\n> > one at Rackspace ... 
the one thing I have to do over the next short period\n> > of time is to spring for a memory upgrade on that machine though, as\n> > 512Meg just doesn't cut it :(\n>\n> I see on db.postgresql.org\n>\n> > vmstat -w 5\n> procs memory page disks faults cpu\n> r b w avm fre flt re pi po fr sr da0 da1 in sy cs us sy id\n> 0 17 0 471224 28184 369 3 4 2 325 334 0 0 331 401 182 29 2 69\n>\n> 0 19 0 414556 19272 644 1 1 0 546 0 0 172 461 823 290 1 2 97\n> 1 19 0 414788 23940 459 4 4 1 474 615 1 170 454 734 286 0 2 98\n> 1 20 0 428592 26912 372 3 14 0 433 592 6 182 480 790 296 1 2 97\n> 2 19 0 458688 30164 318 3 9 0 423 592 3 177 463 787 289 1 2 97\n> 1 17 0 446848 24196 303 2 4 0 454 0 2 177 463 878 294 1 2 97\n> 0 18 0 452432 29404 228 1 3 2 324 633 2 184 472 842 305 2 4 94\n> 0 19 0 449724 21860 200 14 6 0 508 0 1 188 473 702 283 0 2 98\n>\n> disk activity is very bad, probably not balanced. I catch a moment\n> when fts.postgresql.org was slow.\n\nMost of it is due to the high swap being used .. I've had two offers so\nfar to help upgrade the RAM, and am looking into the costs of doing so ...\n\n\n", "msg_date": "Sat, 5 Jan 2002 14:12:09 -0500 (EST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: RC1 time? " }, { "msg_contents": "> > btw, if somebody could donate a server dedicated for rapidly growing\n> > mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> > id currently awfull slow !\n> Or wants to spring for the memory upgrade? The server is better then we\n> had before, but memory is half of what it was ...\n\nWhere is this server located? What would a memory upgrade cost??\n\n - Thomas\n", "msg_date": "Mon, 07 Jan 2002 22:31:35 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: RC1 time?" 
}, { "msg_contents": "On Mon, 7 Jan 2002, Thomas Lockhart wrote:\n\n> > > btw, if somebody could donate a server dedicated for rapidly growing\n> > > mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> > > id currently awfull slow !\n> > Or wants to spring for the memory upgrade? The server is better then we\n> > had before, but memory is half of what it was ...\n>\n> Where is this server located? What would a memory upgrade cost??\n\nOnly Marc knows. I think server is overloaded - it hosts several\nrather big projects+database server. More memory will helps but\nI'd add several hard drives to separate disk activity.\n\n\n>\n> - Thomas\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Tue, 8 Jan 2002 02:49:14 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: RC1 time?" }, { "msg_contents": "Thomas Lockhart wrote:\n> > > btw, if somebody could donate a server dedicated for rapidly growing\n> > > mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> > > id currently awfull slow !\n> > Or wants to spring for the memory upgrade? The server is better then we\n> > had before, but memory is half of what it was ...\n>\n> Where is this server located? What would a memory upgrade cost??\n\n Count me in.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 7 Jan 2002 23:50:36 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: RC1 time?" }, { "msg_contents": "\nserver is at rackspace in San Antonio, Tx ... and am looking into it ...\n\n\nOn Mon, 7 Jan 2002, Thomas Lockhart wrote:\n\n> > > btw, if somebody could donate a server dedicated for rapidly growing\n> > > mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> > > id currently awfull slow !\n> > Or wants to spring for the memory upgrade? The server is better then we\n> > had before, but memory is half of what it was ...\n>\n> Where is this server located? What would a memory upgrade cost??\n>\n> - Thomas\n>\n\n", "msg_date": "Tue, 8 Jan 2002 09:37:06 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: RC1 time?" }, { "msg_contents": "On Tue, 8 Jan 2002, Oleg Bartunov wrote:\n\n> On Mon, 7 Jan 2002, Thomas Lockhart wrote:\n>\n> > > > btw, if somebody could donate a server dedicated for rapidly growing\n> > > > mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> > > > id currently awfull slow !\n> > > Or wants to spring for the memory upgrade? The server is better then we\n> > > had before, but memory is half of what it was ...\n> >\n> > Where is this server located? What would a memory upgrade cost??\n>\n> Only Marc knows. I think server is overloaded - it hosts several\n> rather big projects+database server. More memory will helps but\n> I'd add several hard drives to separate disk activity.\n\n\nthe only thing that server hosts is the PostgreSQL Project ...\n\n", "msg_date": "Tue, 8 Jan 2002 09:37:55 -0400 (AST)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: RC1 time?" }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > > btw, if somebody could donate a server dedicated for rapidly growing\n> > > mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> > > id currently awfull slow !\n> > Or wants to spring for the memory upgrade? The server is better then we\n> > had before, but memory is half of what it was ...\n> \n> Where is this server located? What would a memory upgrade cost??\n\nWhat kind of server? What kind of RAM? If it is an x86 BOX I have some\nPC100 256M simms.\n\nIf you are looking for a server, I might be able to convince my company\n(www.dmn.com) to donate an dual PIII 650.\n", "msg_date": "Tue, 08 Jan 2002 09:46:52 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: RC1 time?" }, { "msg_contents": "On Tue, 8 Jan 2002, Marc G. Fournier wrote:\n\n> On Tue, 8 Jan 2002, Oleg Bartunov wrote:\n>\n> > On Mon, 7 Jan 2002, Thomas Lockhart wrote:\n> >\n> > > > > btw, if somebody could donate a server dedicated for rapidly growing\n> > > > > mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> > > > > id currently awfull slow !\n> > > > Or wants to spring for the memory upgrade? The server is better then we\n> > > > had before, but memory is half of what it was ...\n> > >\n> > > Where is this server located? What would a memory upgrade cost??\n> >\n> > Only Marc knows. I think server is overloaded - it hosts several\n> > rather big projects+database server. More memory will helps but\n> > I'd add several hard drives to separate disk activity.\n>\n>\n> the only thing that server hosts is the PostgreSQL Project ...\n\nAgain, I'd prefer to have a separate machine dedicated for fts project.\nI don't like to work in 'jail' bsdish environment :-)\nCurrently I see, for example, ftpd process eats about 10hours ! 
of CPU,\nkeep in mind the server rebooted only 2 days ago !\nDisk activity is very high ! There are only 2 disks ..\nsimple select from table with 10 records takes about 10 seconds !\nDamn. I think it's time to think seriously about supporting of\npostgresql.org. It's sort of marketing things, but it's very important.\nIf, for example, somebody interest in database with full text search\nsupport and tries fts.postgresql.org, he'll form a very bad opinion\nabout search engine, about database. He'll not interested that\nhardware is very limited and this is temporal problem. He will go\nto mysql website :-(\n\nIn my opinion, simple PIII server: 1Gb ram, 3 HD (SCSI) ( system, db, web )\nwould be enough for fts.postgresql.org.\n\n\n\n\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 9 Jan 2002 13:52:54 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: RC1 time?" }, { "msg_contents": "On Wed, 9 Jan 2002, Oleg Bartunov wrote:\n\n> On Tue, 8 Jan 2002, Marc G. Fournier wrote:\n> \n> > On Tue, 8 Jan 2002, Oleg Bartunov wrote:\n> >\n> > > On Mon, 7 Jan 2002, Thomas Lockhart wrote:\n> > >\n> > > > > > btw, if somebody could donate a server dedicated for rapidly growing\n> > > > > > mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> > > > > > id currently awfull slow !\n> > > > > Or wants to spring for the memory upgrade? 
The server is better then we\n> > > > > had before, but memory is half of what it was ...\n> > > >\n\n[snip]\n\n> Damn. I think it's time to think seriously about supporting of\n> postgresql.org. It's sort of marketing things, but it's very important.\n> If, for example, somebody interest in database with full text search\n> support and tries fts.postgresql.org, he'll form a very bad opinion\n> about search engine, about database. He'll not interested that\n> hardware is very limited and this is temporal problem. He will go\n> to mysql website :-(\n\nI agree. It is bad form. \n\nPerhaps Red Hat or SRA would be able to help out?\n\nGavin\n\n", "msg_date": "Wed, 9 Jan 2002 23:11:39 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: RC1 time?" }, { "msg_contents": "mlw wrote:\n> \n> Thomas Lockhart wrote:\n> >\n> > > > btw, if somebody could donate a server dedicated for rapidly growing\n> > > > mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> > > > id currently awfull slow !\n> > > Or wants to spring for the memory upgrade? The server is better then we\n> > > had before, but memory is half of what it was ...\n> >\n> > Where is this server located? What would a memory upgrade cost??\n> \n> What kind of server? What kind of RAM? If it is an x86 BOX I have some\n> PC100 256M simms.\n> \n> If you are looking for a server, I might be able to convince my company\n> (www.dmn.com) to donate an dual PIII 650\n\nWe have an Intel Motherboard Dual PIII 650, 2U rack mount server. 512MRAM, but\nI'm sure I can scrounge 1G. It has 1 18G IBM SCSI Hard disk, but two built in\nSCSI controllers. One LVD one SE. Built in Intel nic. (I wouldn't trust the\ndisk because it has had a year of service.)\n\nDoes postgresql.org need such a box? If so, let me know.\n", "msg_date": "Wed, 09 Jan 2002 07:50:34 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: RC1 time? 
(Server time)" }, { "msg_contents": "On Wed, 9 Jan 2002, mlw wrote:\n\n>\n> We have an Intel Motherboard Dual PIII 650, 2U rack mount server. 512MRAM, but\n> I'm sure I can scrounge 1G. It has 1 18G IBM SCSI Hard disk, but two built in\n> SCSI controllers. One LVD one SE. Built in Intel nic. (I wouldn't trust the\n> disk because it has had a year of service.)\n>\n> Does postgresql.org need such a box? If so, let me know.\n>\n\nThanks,\n\nIt's Marc's decision but for dedicated server it'd be ok even with\n512 Mb RAM (of course more memory would be nice). I wrote about\n3 hard drives, but in minimal configuration we need separate HD\nfor database + HD for system and web stuff (2*18Gb SCSI look fine)\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Wed, 9 Jan 2002 16:27:58 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: RC1 time? (Server time)" }, { "msg_contents": "\nJust as an FYI, the server is to be upgraded from 512Meg to 4GB over the\nnext few days ... thanks to everyone that offered to help spring for this,\nbut Rackspace is only charging us a very small amount to perform the\nupgrade itself, with the RAM not costing a thing ...\n\nAs for a seperate machine for fts ... I'm in the process of trying to get\na second machine for Hub, since our first machine is just about at\ncapacity ... it will have 4GB of RAM on her and 7x18Gig RAID5 SCSI ... at\nthat time, I will move the db.postgresql.org server onto it, so that the\ndatabase is on a seperate machine from the web server itself ... I assume\nthat should help, just a little?\n\nOn Wed, 9 Jan 2002, Oleg Bartunov wrote:\n\n> On Tue, 8 Jan 2002, Marc G. 
Fournier wrote:\n>\n> > On Tue, 8 Jan 2002, Oleg Bartunov wrote:\n> >\n> > > On Mon, 7 Jan 2002, Thomas Lockhart wrote:\n> > >\n> > > > > > btw, if somebody could donate a server dedicated for rapidly growing\n> > > > > > mailing list archive (already > 300,000 messages) ? fts.postgresql.org\n> > > > > > id currently awfull slow !\n> > > > > Or wants to spring for the memory upgrade? The server is better then we\n> > > > > had before, but memory is half of what it was ...\n> > > >\n> > > > Where is this server located? What would a memory upgrade cost??\n> > >\n> > > Only Marc knows. I think server is overloaded - it hosts several\n> > > rather big projects+database server. More memory will helps but\n> > > I'd add several hard drives to separate disk activity.\n> >\n> >\n> > the only thing that server hosts is the PostgreSQL Project ...\n>\n> Again, I'd prefer to have a separate machine dedicated for fts project.\n> I don't like to work in 'jail' bsdish environment :-)\n> Currently I see, for example, ftpd process eats about 10hours ! of CPU,\n> keep in mind the server rebooted only 2 days ago !\n> Disk activity is very high ! There are only 2 disks ..\n> simple select from table with 10 records takes about 10 seconds !\n> Damn. I think it's time to think seriously about supporting of\n> postgresql.org. It's sort of marketing things, but it's very important.\n> If, for example, somebody interest in database with full text search\n> support and tries fts.postgresql.org, he'll form a very bad opinion\n> about search engine, about database. He'll not interested that\n> hardware is very limited and this is temporal problem. 
He will go\n> to mysql website :-(\n>\n> In my opinion, simple PIII server: 1Gb ram, 3 HD (SCSI) ( system, db, web )\n> would be enough for fts.postgresql.org.\n>\n>\n>\n>\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\n", "msg_date": "Wed, 9 Jan 2002 14:02:19 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: RC1 time?" }, { "msg_contents": "As long as we're talking about overloaded resources ...\n\nThe mailing list servers seem to have been horribly overloaded for a\nlong time. In the past couple days it's been particularly bad (three-\nto four-hour turnaround for postings). Will these planned upgrades\nhelp that situation at all?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 15:07:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RC1 time? " }, { "msg_contents": "\nWe're going from 512Meg -> 4GB ... most of the issues right now are swap\nrelated ... that machine is just swapping like crazy ...\n\nOn Wed, 9 Jan 2002, Tom Lane wrote:\n\n> As long as we're talking about overloaded resources ...\n>\n> The mailing list servers seem to have been horribly overloaded for a\n> long time. In the past couple days it's been particularly bad (three-\n> to four-hour turnaround for postings). 
Will these planned upgrades\n> help that situation at all?\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Wed, 9 Jan 2002 16:21:04 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: RC1 time? " }, { "msg_contents": "On Wed, 9 Jan 2002, Tom Lane wrote:\n\n> As long as we're talking about overloaded resources ...\n>\n> The mailing list servers seem to have been horribly overloaded for a\n> long time. In the past couple days it's been particularly bad (three-\n> to four-hour turnaround for postings). Will these planned upgrades\n> help that situation at all?\n\nYes.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 9 Jan 2002 17:09:38 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: RC1 time? " }, { "msg_contents": "On Wed, 9 Jan 2002, Marc G. Fournier wrote:\n\n>\n> We're going from 512Meg -> 4GB ... most of the issues right now are swap\n> related ... that machine is just swapping like crazy ...\n\nnot so much right now. I think Marc just has no time to administrate\nthis machine. 
I wrote already about ftpd process which eats all CPU and\nCPU time.\n\nlast pid: 26390; load averages: 1.36, 1.46, 1.37 up 2+14:36:51 00:27:27\n268 processes: 5 running, 261 sleeping, 2 zombie\nCPU states: 50.3% user, 0.0% nice, 1.7% system, 0.0% interrupt, 48.0% idle\nMem: 258M Active, 93M Inact, 80M Wired, 27M Cache, 61M Buf, 42M Free\nSwap: 1024M Total, 137M Used, 887M Free, 13% Inuse\n\n PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU CPU COMMAND\n 2986 root 56 0 812K 328K CPU1 0 31.4H 99.02% 99.02% ftpd\n26376 robot 30 0 2232K 1316K CPU0 1 0:00 1.88% 0.34% top\n\nAwfull.\n> w\n12:28AM up 2 days, 14:37, 2 users, load averages: 1.33, 1.44, 1.37\nUSER TTY FROM LOGIN@ IDLE WHAT\n\n From 2 days online 'ftpd' eats 31 hours of CPU !\n\nAlso there are many issues I usually expect from system+db administrators.\n\nSo , I don't think adding a lot of RAM will help until such things like\ncrazy process eating CPU, everything live in one HD\n\n\n\n>\n> On Wed, 9 Jan 2002, Tom Lane wrote:\n>\n> > As long as we're talking about overloaded resources ...\n> >\n> > The mailing list servers seem to have been horribly overloaded for a\n> > long time. In the past couple days it's been particularly bad (three-\n> > to four-hour turnaround for postings). Will these planned upgrades\n> > help that situation at all?\n> >\n> > \t\t\tregards, tom lane\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Thu, 10 Jan 2002 09:32:37 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: RC1 time? 
" }, { "msg_contents": "On Wed, 2002-01-09 at 22:07, Tom Lane wrote:\n\n> As long as we're talking about overloaded resources ...\n> \n> The mailing list servers seem to have been horribly overloaded for a\n> long time. In the past couple days it's been particularly bad (three-\n> to four-hour turnaround for postings). Will these planned upgrades\n> help that situation at all?\n\nThere are also problems, IMHO, with the mail->Usenet gateway: I've just\nresubscribed to the list since there are only an handful of messages on\nnews.postgresql.org, and that's been a constant since mid-December.\n\n-- \nAlessio F. Bragadini\t\talessio@albourne.com\nAPL Financial Services\t\thttp://village.albourne.com\nNicosia, Cyprus\t\t \tphone: +357-22-755750\n\n\"It is more complicated than you think\"\n\t\t-- The Eighth Networking Truth from RFC 1925\n\n", "msg_date": "10 Jan 2002 16:49:26 +0200", "msg_from": "Alessio Bragadini <alessio@albourne.com>", "msg_from_op": false, "msg_subject": "Re: Usenet service (was: RC1 time?)" } ]
[ { "msg_contents": "I just tried building postgres from CVS and when I 'gmake check' I get this:\n\n---------------------\n\nAll of PostgreSQL successfully made. Ready to install.\ngmake -C src/test check\ngmake[1]: Entering directory `/home/chriskl/pgsql/src/test'\ngmake -C regress check\ngmake[2]: Entering directory `/home/chriskl/pgsql/src/test/regress'\ngmake -C ../../../contrib/spi REFINT_VERBOSE=1 refint.so autoinc.so\ngmake[3]: Entering directory `/home/chriskl/pgsql/contrib/spi'\ngmake[3]: `refint.so' is up to date.\ngmake[3]: `autoinc.so' is up to date.\ngmake[3]: Leaving directory `/home/chriskl/pgsql/contrib/spi'\n/bin/sh\n./pg_regress --temp-install --top-builddir=../../.. --schedule=./parallel_sc\nhedule --multibyte=\n============== removing existing temp installation ==============\n============== creating temporary installation ==============\n============== initializing database system ==============\npid 31952 (postgres): unaligned access: va=0x1202ac024 pc=0x120139398\nra=0x120139374 op=stq\npid 31952 (postgres): unaligned access: va=0x1202ac034 pc=0x1201393ac\nra=0x120139374 op=stq\npid 31952 (postgres): unaligned access: va=0x1202ac03c pc=0x1201393b0\nra=0x120139374 op=stq\npid 31952 (postgres): unaligned access: va=0x1202ac044 pc=0x120139398\nra=0x120139374 op=stq\npid 31952 (postgres): unaligned access: va=0x1202ac054 pc=0x1201393ac\nra=0x120139374 op=stq\n\n... 
stacks of similar lines ...\n\npid 31952 (postgres): unaligned access: va=0x1202ad2b4 pc=0x1201393ac\nra=0x120139374 op=stq\npid 31952 (postgres): unaligned access: va=0x1202ad2bc pc=0x1201393b0\nra=0x120139374 op=stq\npid 31952 (postgres): unaligned access: va=0x1202ac0a4 pc=0x1201394d4\nra=0x1201324e4 op=ldq\npid 31952 (postgres): unaligned access: va=0x1202ac0a4 pc=0x1201394dc\nra=0x1201324e4 op=ldq_l\n\npg_regress: initdb failed\nExamine ./log/initdb.log for the reason.\n\ngmake[2]: Leaving directory `/home/chriskl/pgsql/src/test/regress'\ngmake[1]: Leaving directory `/home/chriskl/pgsql/src/test'\n\n--------------------\n\nOK, let's look at the log:\n\n--------------------\n\nchriskl@database:~/pgsql$ tail src/test/regress/log/initdb.log\n\ncreating directory /home/chriskl/pgsql/src/test/regress/./tmp_check/data...\nok\ncreating directory\n/home/chriskl/pgsql/src/test/regress/./tmp_check/data/base... ok\ncreating directory\n/home/chriskl/pgsql/src/test/regress/./tmp_check/data/global... ok\ncreating directory\n/home/chriskl/pgsql/src/test/regress/./tmp_check/data/pg_xlog... ok\ncreating directory\n/home/chriskl/pgsql/src/test/regress/./tmp_check/data/pg_clog... ok\ncreating template1 database in\n/home/chriskl/pgsql/src/test/regress/./tmp_check/data/base/1... Bus error -\ncore d\numped\n\ninitdb failed.\nData directory /home/chriskl/pgsql/src/test/regress/./tmp_check/data will\nnot be removed at user's request.\n\n---------------------\n\nAny ideas?\n\n", "msg_date": "Fri, 4 Jan 2002 14:46:53 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "freebsd/alpha probs" }, { "msg_contents": "Backtrace from the core file, please?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Jan 2002 02:13:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: freebsd/alpha probs " } ]
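The trap messages in the thread above are characteristic of Alpha's strict alignment rules: `op=stq`/`op=ldq` are quadword stores/loads that fault when the effective address is misaligned, where x86 would merely be slow. A standalone C sketch of the bug class and the usual portable fix (copy through `memcpy` instead of dereferencing a cast pointer) — illustrative only, not the PostgreSQL code that actually faulted here:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* On strict-alignment CPUs such as Alpha, dereferencing a cast pointer
 * like *(int32_t *)(buf + off) traps when the effective address is not
 * suitably aligned (the "unaligned access ... op=stq/ldq" messages in
 * the thread above).  Copying through memcpy is the portable fix: the
 * compiler emits accesses that are safe for any alignment. */
static int32_t read_int32_any(const unsigned char *buf, size_t off)
{
    int32_t v;
    memcpy(&v, buf + off, sizeof v);
    return v;
}

static void write_int32_any(unsigned char *buf, size_t off, int32_t v)
{
    memcpy(buf + off, &v, sizeof v);
}
```

On a strict-alignment machine, replacing the `memcpy` calls with direct `*(int32_t *)(buf + off)` accesses reproduces exactly this kind of unaligned-access trap (or, for initdb above, a bus error).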
[ { "msg_contents": "I have a couple functions which form the basis of an aggregate. The purpose of\nthe aggregate function is to be able to perform a GROUP BY on a one to many\ntable and produce a summary able where all the \"many\" integers will be packed\nin a single array. If this were a text search query, rather than searching\nhundreds of entries in the table, one fetch and possibly a detoast is used. It\nis MUCH faster for my purpose.\n\nIt is used like this:\n\ncreate table array_lookup as select id1, int_array_aggregate(id2) from lookup\ngroup by (id1) ;\n\nI have written a good number of functions in PGSQL, I'm not a newbe. Could\nsomeone take a look at it? I don't think I am doing anything that would kill\nthe back end, so it may be a bug in RC3, I am just pulling my hair out. (FYI,\nthe one to many table may have thousands of rows for an entry.) One more thing:\nI'm not getting any elog messages, so it should not be a memory issue.\n\n\n>>>>>>>>>>>>>>>>\n-- Internal function for the aggregate\n-- Is called for each item in an aggregation\ncreate function int_agg_state (int4, int4)\n returns int4\n as 'MODULE_FILENAME','int_agg_state'\n language 'c';\n\n-- Internal function for the aggregate\n-- Is called at the end of the aggregation, and returns an array.\ncreate function int_agg_final_array (int4)\n returns int4[]\n as 'MODULE_FILENAME','int_agg_final_array'\n language 'c';\n\n-- The aggration funcion.\n-- uses the above functions to create an array of integers from an aggregation.\ncreate aggregate int_array_aggregate\n(\n BASETYPE = int4,\n SFUNC = int_agg_state,\n STYPE = int4,\n FINALFUNC = int_agg_final_array,\n INITCOND = 0\n);\n\n>>>>>>>>>>>>>>>>\n/* This is actually a postgres version of a one dimentional array */\ntypedef struct agg\n{\n ArrayType a;\n int items; /* Number of items in array */\n int lower; /* Lower bounds of array, used as max during aggregation\n*/\n int4 array[1];\n}PGARRAY;\n\n#define TOASTED 1\n#define START_NUM 
8\n#define PGARRAY_SIZE(n) (sizeof(PGARRAY) + ((n-1)*sizeof(int4)))\n\nPGARRAY * GetPGArray(int4 state, int fAdd);\nDatum int_agg_state(PG_FUNCTION_ARGS);\nDatum int_agg_final_array(PG_FUNCTION_ARGS);\n\nPG_FUNCTION_INFO_V1(int_agg_state);\nPG_FUNCTION_INFO_V1(int_agg_final_array);\n\n/* Manage the aggregation state of the array */\nPGARRAY * GetPGArray(int4 state, int fAdd)\n{\n PGARRAY *p = (PGARRAY *) state;\n if(!state)\n {\n /* New array */\n int cb = PGARRAY_SIZE(START_NUM);\n\n p = (PGARRAY *) palloc(cb);\n\n if(!p)\n {\n elog(ERROR,\"Integer aggregator, cant allocate\nmemory\\n\");\n return 0;\n }\n\n p->a.size = cb;\n p->a.ndim= 0;\n p->a.flags = 0;\n p->items = 0;\n p->lower= START_NUM;\n return p;\n }\n else if(fAdd)\n {\n /* Ensure array has space */\n if(p->items >= p->lower)\n {\n PGARRAY *pn;\n int n = p->lower + p->lower;\n int cbNew = PGARRAY_SIZE(n);\n pn = (PGARRAY *) palloc(cbNew);\n\n if(!pn)\n {\n elog(ERROR,\"Integer aggregator, cant allocate\nmemory\\n\");\n }\n else\n {\n memcpy(pn, p, p->a.size);\n pn->a.size = cbNew;\n pn->lower = n;\n pfree(p);\n return pn;\n }\n }\n }\n return p;\n}\n/* Called for each iteration during an aggregate function */\nDatum int_agg_state(PG_FUNCTION_ARGS)\n{\n int4 state = PG_GETARG_INT32(0);\n int4 value = PG_GETARG_INT32(1);\n\n PGARRAY *p = GetPGArray(state, 1);\n if(!p)\n {\n elog(ERROR,\"No aggregate storage\\n\");\n }\n else if(p->items >= p->lower)\n {\n elog(ERROR,\"aggregate storage too small\\n\");\n }\n else\n {\n p->array[p->items++]= value;\n }\n PG_RETURN_INT32(p);\n}\n\n/* This is the final function used for the integer aggregator. 
It returns all\nthe integers\n * collected as a one dimentional integer array */\nDatum int_agg_final_array(PG_FUNCTION_ARGS)\n{\n PGARRAY *p = GetPGArray(PG_GETARG_INT32(0),0);\n\n if(p)\n {\n /* Fix up the fields in the structure, so Postgres understands\n*/\n p->a.size = PGARRAY_SIZE(p->items);\n p->a.ndim=1;\n p->a.flags = 0;\n p->lower = 0;\n PG_RETURN_POINTER(p);\n }\n PG_RETURN_NULL();\n}\n", "msg_date": "Fri, 04 Jan 2002 08:52:25 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "sig 11 in RC3" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I have a couple functions which form the basis of an aggregate.\n\nYou can't lie to the system by claiming your state value is an integer\nwhen it's really a pointer to palloc'd memory. The memory will get\nrecycled out from under you.\n\nTry declaring the aggregate as using int4[] as the transition type,\nand make sure that the intermediate states are valid at least to the\npoint of having a correct varlena length word. This will allow the\nsystem to copy the values around when it needs to.\n\nAlternatively, keep the data structure in a longer-lived context\n(TransactionCommandContext should work) instead of the per-tuple\ncontext. 
That's uglier but would avoid a lot of copying.\n\nSee src/backend/executor/nodeAgg.c if you are wondering why the state\nvalues need to be copied around.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Jan 2002 11:02:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: sig 11 in RC3 " }, { "msg_contents": "On Fri, Jan 04, 2002 at 11:02:04AM -0500, Tom Lane wrote:\n> mlw <markw@mohawksoft.com> writes:\n\n> Alternatively, keep the data structure in a longer-lived context\n> (TransactionCommandContext should work) instead of the per-tuple\n> context.\n\nIt depends, I had to use `TopTransactionContect' in a similar case.\n\nIf you really want to return a pointer, I would recommend to\nintroduce a new type `pointer', where the input and output functions\nsimply call `elog(ERROR,..)'. This way you can avoid to forget that\nyour pointer cannot be externalized.\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Fri, 4 Jan 2002 17:42:14 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": false, "msg_subject": "Re: sig 11 in RC3" } ]
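Tom's advice hinges on the transition value carrying a correct varlena length word, so that nodeAgg.c can copy the state between calls instead of the caller smuggling a palloc'd pointer through an `int4`. A minimal standalone sketch of that bookkeeping — `malloc` standing in for `palloc`, and the struct a simplified stand-in for the backend's `ArrayType`, so the names and layout here are illustrative, not PostgreSQL's:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-in for a varlena value: the first field must always
 * hold the total byte size, because anything copying the value between
 * aggregate iterations only has the length word to go by. */
typedef struct IntState {
    int vl_len;     /* total size in bytes -- must stay correct */
    int nitems;     /* number of ints actually stored           */
    int capacity;   /* number of ints that fit                  */
    int items[1];   /* flexible tail, C89-style                 */
} IntState;

#define STATE_SIZE(cap) ((int) (sizeof(IntState) + ((cap) - 1) * sizeof(int)))

static IntState *state_new(void)
{
    IntState *p = malloc(STATE_SIZE(8));    /* error handling elided */
    p->vl_len = STATE_SIZE(8);
    p->nitems = 0;
    p->capacity = 8;
    return p;
}

/* Append one value, growing by allocate-and-copy when full, with the
 * length word updated so a length-based copy sees the whole value. */
static IntState *state_append(IntState *p, int value)
{
    if (p->nitems >= p->capacity) {
        int newcap = p->capacity * 2;
        IntState *pn = malloc(STATE_SIZE(newcap));
        memcpy(pn, p, p->vl_len);
        pn->vl_len = STATE_SIZE(newcap);
        pn->capacity = newcap;
        free(p);
        p = pn;
    }
    p->items[p->nitems++] = value;
    return p;
}
```

The invariant the original function broke is exactly this: the state was handed back disguised as an `int4`, so nothing downstream could see its real extent, and the memory behind the raw pointer was recycled between calls.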
[ { "msg_contents": "Are syntax changes from 7.1.x to 7.2 documented anywhere? I just \nnoticed that 'time' as a column name does not work the same in 7.2 as 7.1.x.\n\nSorry if this shows up twice... I posted last night but it had not \nappeared this morning.\n\nTks\nDwayne\n\n", "msg_date": "Fri, 04 Jan 2002 09:17:00 -0500", "msg_from": "\"Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Syntax changes in 7.2" }, { "msg_contents": "Dwayne Miller wrote:\n> Are syntax changes from 7.1.x to 7.2 documented anywhere? I just \n> noticed that 'time' as a column name does not work the same in 7.2 as 7.1.x.\n> \n> Sorry if this shows up twice... I posted last night but it had not \n> appeared this morning.\n\nSyntax changes are documented at the top of the HISTORY file and in the\nrelease notes at:\n\n\thttp://developer.postgresql.org/docs/postgres/release.html#RELEASE-7-2\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 13:22:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Syntax changes in 7.2" }, { "msg_contents": "Well, if it matters... I did not see a change posted that would account \nfor the difference in operation. time has been a datatype in both, yet \nthe 7.1.x versions would allow a column named time, with no quotes \nrequired. 7.2 does not. Guess I'll change the name of that column.\n\nTks\nDwayne\n\nBruce Momjian wrote:\n\n>Dwayne Miller wrote:\n>\n>>Are syntax changes from 7.1.x to 7.2 documented anywhere? I just \n>>noticed that 'time' as a column name does not work the same in 7.2 as 7.1.x.\n>>\n>>Sorry if this shows up twice... 
I posted last night but it had not \n>>appeared this morning.\n>>\n>\n>Syntax changes are documented at the top of the HISTORY file and in the\n>release notes at:\n>\n>\thttp://developer.postgresql.org/docs/postgres/release.html#RELEASE-7-2\n>\n>\n\n\n", "msg_date": "Fri, 04 Jan 2002 14:32:09 -0500", "msg_from": "\"Dwayne Miller\" <dmiller@espgroup.net>", "msg_from_op": true, "msg_subject": "Re: Syntax changes in 7.2" }, { "msg_contents": "Here some hacky code to generate diffs of the grammar rules. Maybe it\nappears to be usefull:\n\n#! /bin/sh\n\n# shell script to diff grammars\n# usage: $0 grammar-old.y grammar-new.y\n\nTMPDIR=\"/tmp/diffgrammar.$$\"\n\nPWD=`pwd`\n\nAWKCODE='\nBEGIN { RULE=\"\"; } \n/^rule/ { \n if ( RULE != $3 ) { print $3\" ->\"; RULE = $3 }; \n match($0,/^rule[\\ \\t]*[0-9]+[\\ \\t]*[@A-Za-z_0-9]+[\\ \\t]*->[\\ \\t]*/); \n print \"\\t\\t\"substr($0,RLENGTH);\n}\n'\nmkdir -p ${TMPDIR}/old ${TMPDIR}/new\n\ncp $1 ${TMPDIR}/old\ncp $2 ${TMPDIR}/new\n\ncd ${TMPDIR}/old\nbison -v `basename $1`\n\nawk \"${AWKCODE}\" *.output > grammar.rules\n\ncd ${TMPDIR}/new\nbison -v `basename $2`\n\nawk \"${AWKCODE}\" *.output > grammar.rules\n\ncd ${TMPDIR}\ndiff -u old/grammar.rules new/grammar.rules\n\ncd ${PWD}\nrm -rf ${TMPDIR}\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Mon, 7 Jan 2002 08:05:35 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": false, "msg_subject": "Re: Syntax changes in 7.2" } ]
[ { "msg_contents": "For an application I have to code I currently implement ON ERROR\nTRIGGERS which shall be called after UNIQUE, CHECK, NOT NULL and REFERENCES\nviolations.\n\nThe implementation plan is as follows:\n\n1) Make `CurrentTransactionState' static in `xact.c' (done, could\n be posted for 7.2, because this could be seen as a bug)\n2) Allow a transaction to be marked for rollback, in which case\n it proceeds but rolls back at commit time. It is not possible\n to remove the mark, hence database integrity is assured. (done)\n3) Add an ON ERROR UNIQUE trigger OID to pg_index. If the uniqueness\n constraint is violated and such a trigger exists, the transaction is\n marked for rollback (but not actually rolled back) and the error\n trigger is called (getting the conflicting tuple as OLD and the\n tuple to be inserted as NEW). (what I'm currently doing)\n4) Add ON ERROR CHECK, ON ERROR NOT NULL and ON ERROR REFERENCES triggers\n in a similar way. (to do)\n\nThis supersedes what I discussed some days ago with Tom Lane on this list.\n\nMy questions are:\n\nA) Are the hackers interested to integrate those changes, if reasonable\n coded, into the PostgreSQL sources, e.g. for 7.3 ?\nB) What are the hackers' proposals for the syntax at the query string level.\n I think about something like:\n UNIQUE [ ON ERROR trigger ( arguments ) ]\n CHECK ( expression ) [ ON ERROR trigger ( arguments ) ]\n NOT NULL [ ON ERROR trigger ( arguments ) ]\n REFERENCES reftable [ ( refcolumn [, ... ] ) ]\n [ MATCH FULL | MATCH PARTIAL ] [ ON DELETE action ] [ ON UPDATE action ]\n [ ON ERROR trigger ( arguments ) ]\nC) Most of the existing triggers would become error-prone, because the\n checks made at trigger start do not comprise the new possibilities to\n call a trigger as error handler. Hence if a trigger, which is\n conceived to be a e.g. BEFORE INSERT trigger is used as a e.g.\n ON ERROR CHECK trigger, it would not get informed about this. 
The\n results would be unpredictable.\n Is this seen to be a problem ?\n Don't forget: Nobody is forced to use a BEFORE INSERT trigger as a\n ON ERROR CHECK trigger.\n\nGood luck for 7.2 !\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Fri, 4 Jan 2002 17:36:57 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": true, "msg_subject": "ON ERROR triggers" }, { "msg_contents": "Holger Krug wrote:\n> For an application I have to code I currently implement ON ERROR\n> TRIGGERS which shall be called after UNIQUE, CHECK, NOT NULL and REFERENCES\n> violations.\n>\n> The implementation plan is as follows:\n>\n> 1) Make `CurrentTransactionState' static in `xact.c' (done, could\n> be posted for 7.2, because this could be seen as a bug)\n> 2) Allow a transaction to be marked for rollback, in which case\n> it proceeds but rolls back at commit time. It is not possible\n> to remove the mark, hence database integrity is assured. (done)\n> 3) Add an ON ERROR UNIQUE trigger OID to pg_index. If the uniqueness\n> constraint is violated and such a trigger exists, the transaction is\n> marked for rollback (but not actually rolled back) and the error\n> trigger is called (getting the conflicting tuple as OLD and the\n> tuple to be inserted as NEW). (what I'm currently doing)\n> 4) Add ON ERROR CHECK, ON ERROR NOT NULL and ON ERROR REFERENCES triggers\n> in a similar way. (to do)\n\n 1. PostgreSQL doesn't know anything about ROLLBACK. It\n simply discards transaction ID's. Each row\n (oversimplified but sufficient here) has a transaction ID\n that created it and one for the Xact that destroyed it.\n By discarding an XID, rows that where created by it are\n ignored later, while rows destroyed by it survive.\n\n 2. When inserting a new row, first the data row in stored in\n the table, then (one by one) the index entries are built\n and stored in the indexes.\n\n Now you do an INSERT ... 
SELECT ...\n\n Anything goes well, still well, you work and work and at the\n 25th row the 3rd index reports DUPKEY. Since there are BEFORE\n INSERT triggers (I make this up, but that's allowed here), 3\n other tables received inserts and updates as well. BEFORE\n triggers are invoked before storage of the row, so the ones\n for this DUP row are executed by now already, the row is in\n the table and 2 out of 5 indexes are updated.\n\n Here now please explain to me in detail what exactly your ON\n ERROR UNIQUE trigger does, because with the ATOMIC\n requirement on statement level, I don't clearly see what it\n could do. Will it allow to break atomicity? Will it allow to\n treat this UNIQUE violation as, \"yeah, such key is there, but\n this is different, really\"?\n\n What am I missing here?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Fri, 4 Jan 2002 13:56:51 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: ON ERROR triggers" }, { "msg_contents": "On Fri, Jan 04, 2002 at 01:56:51PM -0500, Jan Wieck wrote:\n> Holger Krug wrote:\n> > For an application I have to code I currently implement ON ERROR\n> > TRIGGERS which shall be called after UNIQUE, CHECK, NOT NULL and REFERENCES\n> > violations.\n> \n> 1. PostgreSQL doesn't know anything about ROLLBACK. It\n> simply discards transaction ID's. 
Each row\n> (oversimplified but sufficient here) has a transaction ID\n> that created it and one for the Xact that destroyed it.\n> By discarding an XID, rows that were created by it are\n> ignored later, while rows destroyed by it survive.\n\nI know this. \"Marking a transaction for rollback\" has the following\nconsequences:\n\nCommitTransaction(void)\n{\n--snip--\n\t/*\n\t * check if the transaction is marked for rollback\n\t */\n\tif (s->markedForRollback)\n\t{\n\t\telog(DEBUG, \"CommitTransaction: marked for rollback\");\n\t\tAbortTransaction();\n\t\tCleanupTransaction();\n\t\treturn;\n\t}\n--snip--\n}\n \n> 2. When inserting a new row, first the data row is stored in\n> the table, then (one by one) the index entries are built\n> and stored in the indexes.\n\nI know this executor code, too. The code is quite readable.\n \n> Now you do an INSERT ... SELECT ...\n> \n> Anything goes well, still well, you work and work and at the\n> 25th row the 3rd index reports DUPKEY. Since there are BEFORE\n> INSERT triggers (I make this up, but that's allowed here), 3\n> other tables received inserts and updates as well. BEFORE\n> triggers are invoked before storage of the row, so the ones\n> for this DUP row are executed by now already, the row is in\n> the table and 2 out of 5 indexes are updated.\n> \n> Here now please explain to me in detail what exactly your ON\n> ERROR UNIQUE trigger does, because with the ATOMIC\n> requirement on statement level, I don't clearly see what it\n> could do. Will it allow to break atomicity? Will it allow to\n> treat this UNIQUE violation as, \"yeah, such key is there, but\n> this is different, really\"?\n\nIt will do the following:\n\nAs a preparation I have to make some small changes to the interfaces\nof the AM index insertion methods, which allow information about\nthe error handler to be passed to the index insertion method. 
This done, after\ndetection of the DUPKEY constraint violation the code will execute\nin the following way:\n\n1) Mark the transaction for rollback. As a consequence the transaction\n will never commit, hence database integrity is assured in spite of\n what follows. (See the code snippet above.)\n2) Insert the DUPKEY into the index. This makes it possible to collect\n more comprehensive error reports, which is the main purpose of my proposal.\n3) Execute the error handler which, in most cases, will write an\n error report into some TEMP table or do something similar.\n4) Proceed with the 4th index and so on in the normal way.\n\n*Why* this should be done is explained in more detail in my answer to\nVadim's mail which I'm now going to write.\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Mon, 7 Jan 2002 08:31:58 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": true, "msg_subject": "Re: ON ERROR triggers" } ]
[ { "msg_contents": "> * Make it easier to create a database owned by someone who can't createdb,\n> perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\nShouldn't that be\n\nCREATE DATABASE dbname WITH OWNER = \"user\"\n\n?\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 �ben 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg L�rdag 11.00-17.00 Email: kar@kakidata.dk\n", "msg_date": "Fri, 4 Jan 2002 18:07:39 +0100", "msg_from": "Kaare Rasmussen <kar@kakidata.dk>", "msg_from_op": true, "msg_subject": "Re: Updated TODO item" }, { "msg_contents": "Kaare Rasmussen wrote:\n> > * Make it easier to create a database owned by someone who can't createdb,\n> > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Shouldn't that be\n> \n> CREATE DATABASE dbname WITH OWNER = \"user\"\n> \n> ?\n\nYes! OWNER is much better. TODO updated and I will save this message\nfor changes in 7.3:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 13:43:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Updated TODO item" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> > Shouldn't that be\n> > \n> > CREATE DATABASE dbname WITH OWNER = \"user\"\n> > \n> > ?\n> \n> Yes! OWNER is much better. TODO updated and I will save this message\n> for changes in 7.3:\n\nWill this make OWNER into a new keyword, and break schemas that have\n\"owner\" as a table or column name? 
USER is at least already a\nkeyword...\n\n-Doug (not personally affected, but thought I'd raise it)\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "04 Jan 2002 13:58:09 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Updated TODO item" }, { "msg_contents": "Doug McNaught wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > > Shouldn't that be\n> > > \n> > > CREATE DATABASE dbname WITH OWNER = \"user\"\n> > > \n> > > ?\n> > \n> > Yes! OWNER is much better. TODO updated and I will save this message\n> > for changes in 7.3:\n> \n> Will this make OWNER into a new keyword, and break schemas that have\n> \"owner\" as a table or column name? USER is at least already a\n> keyword...\n\nWe already use OWNER in ALTER TABLE so it is not really a new keyword:\n\n | ALTER TABLE relation_name OWNER TO UserId\n\nAlso, its usage is limited to CREATE TABLE so I believe a column named\nowner is still possible, as it is now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 14:04:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Updated TODO item" }, { "msg_contents": "On Fri, 4 Jan 2002, Kaare Rasmussen wrote:\n\n> > * Make it easier to create a database owned by someone who can't createdb,\n> > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Shouldn't that be\n> \n> CREATE DATABASE dbname WITH OWNER = \"user\"\n> \n> ?\n> \n\nA much better idea. 
There is no conflict in using OWNER here.\n\nRevised patch attached.\n\nGavin", "msg_date": "Sat, 5 Jan 2002 12:59:32 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "\nSaved for 7.3.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> On Fri, 4 Jan 2002, Kaare Rasmussen wrote:\n> \n> > > * Make it easier to create a database owned by someone who can't createdb,\n> > > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> > \n> > Shouldn't that be\n> > \n> > CREATE DATABASE dbname WITH OWNER = \"user\"\n> > \n> > ?\n> > \n> \n> A much better idea. There is no conflict in using OWNER here.\n> \n> Revised patch attached.\n> \n> Gavin\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 21:28:24 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "> > > * Make it easier to create a database owned by someone who can't createdb,\n> > > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > CREATE DATABASE dbname WITH OWNER = \"user\"\n> A much better idea. There is no conflict in using OWNER here.\n\nDoes this have the multiple \"WITH xxx\" clauses which were discussed\nearlier? That is a nonstarter for syntax. 
There are other places in the\ngrammar having \"with clauses\" and multiple arguments or subclauses, and\nhaving the shift/reduce issues resolved...\n\n - Thomas\n", "msg_date": "Tue, 08 Jan 2002 03:47:20 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "> Does this have the multiple \"WITH xxx\" clauses which were discussed\n> earlier? That is a nonstarter for syntax. There are other places in the\n> grammar having \"with clauses\" and multiple arguments or subclauses, and\n> having the shift/reduce issues resolved...\n\nI might be thicker than a whale sandwich (10 points if you can pick the\nquote :) ), but can someone please tell me what a shift/reduce issue is,\nexactly...\n\nThanks,\n\nChris\n\n", "msg_date": "Tue, 8 Jan 2002 11:58:11 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "Thomas Lockhart wrote:\n> > > > * Make it easier to create a database owned by someone who can't createdb,\n> > > > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > > CREATE DATABASE dbname WITH OWNER = \"user\"\n> > A much better idea. There is no conflict in using OWNER here.\n> \n> Does this have the multiple \"WITH xxx\" clauses which were discussed\n> earlier? That is a nonstarter for syntax. There are other places in the\n> grammar having \"with clauses\" and multiple arguments or subclauses, and\n> having the shift/reduce issues resolved...\n\nNot sure. Patch is at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\nAre you asking if it has \"WITH ARG val, ARG val\" or \"WITH ARG val WITH\nARG val?\"\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 Jan 2002 23:01:51 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "> I might be thicker than a whale sandwich (10 points if you can pick the\n> quote :) ), but can someone please tell me what a shift/reduce issue is,\n> exactly...\n\nIt is what you will come to know and love if you get involved with\ngrammars written in yacc. yacc (and some related parsers) look ahead one\ntoken to decide what parsing path to take. So if it takes more than one\ntoken to figure that out, you will get a shift/reduce or reduce/reduce\nerror, and the parser will end up chosing *one* of the possibilities\nevery time.\n\nYou can make these errors go away by restructuring the language or by\nrestructuring the grammar specification to allow multiple \"threads\" of\nparsing to be carried forward until possible conflicts are resolved. We\nuse every and all technique to shoehorn SQL and extensions into a\nyacc/bison tool ;)\n\n - Thomas\n", "msg_date": "Tue, 08 Jan 2002 04:03:42 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "On Tue, 8 Jan 2002, Thomas Lockhart wrote:\n\n> > > > * Make it easier to create a database owned by someone who can't createdb,\n> > > > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > > CREATE DATABASE dbname WITH OWNER = \"user\"\n> > A much better idea. There is no conflict in using OWNER here.\n> \n> Does this have the multiple \"WITH xxx\" clauses which were discussed\n> earlier? That is a nonstarter for syntax. There are other places in the\n\nWhen was it discussed so that I can have a read? I cannot recall it.\n\nAnd yes, it is not pleasant to implement. 
Luckily, the design of the\nCREATE DATABASE rule had already incorporated the possibility of\n\n...\n\n\tWITH LOCATION = ...\n\tWITH TEMPLATE = ...\n\netc.\n\nI'm not sure, however, if this is really what you were asking.\n\nGavin\n\n", "msg_date": "Tue, 8 Jan 2002 15:26:07 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "On Tue, 8 Jan 2002, Christopher Kings-Lynne wrote:\n\n> > Does this have the multiple \"WITH xxx\" clauses which were discussed\n> > earlier? That is a nonstarter for syntax. There are other places in the\n> > grammar having \"with clauses\" and multiple arguments or subclauses, and\n> > having the shift/reduce issues resolved...\n> \n> I might be thicker than a whale sandwich (10 points if you can pick the\n> quote :) ), but can someone please tell me what a shift/reduce issue is,\n> exactly...\n> \n\nA Yacc parser does two things in order to parse input:\n\n1) Reduce: attempt to reduce the stack by simplifying it to a rule\n2) Shift: obtain the next token from input so that a reduction may be able\nto take place.\n\nShift/reduce conflicts are pretty ugly. Basically, what happens is that\nthe parser finds itself in a state where it is valid to reduce OR shift at\nsome point in the grammar. 
What I believe Thomas was referring to was this\ncondition:\n\nTake a rule:\n\nrule a: CREATE DATABASE <name> WITH LOCATION = <name>\nrule b: CREATE DATABASE <name> WITH LOCATION = <name> WITH OWNER = <name>\n\nnow if the input is:\n\n\tCREATE DATABASE test WITH LOCATION = '/var/test' WITH OWNER = swm\n\t\t\t\t\t\t\t^\n\nThen the parser can reach the point marked by the circumflex and\nfind it valid to reduce the stack (CREATE DATABASE test WITH LOCATION =\n'/var/test') to rule a OR shift and put the WITH (after the circumflex) on\nthe stack given that this should match rule b.\n\nNaturally, if this conflict is ignored for the grammar above, you could\nend up with wild results in your parsed node tree. Realistically, Bison\nand other yacc compilers will generate parsers unaffected by this\nsituation because they always opt to shift when there is a shift/reduce\nconflict -- a pretty safe bet. But if it should have been valid to reduce\nthe input to rule a once it reached the circumflex, you'd be in trouble.\n\nGavin\n\n", "msg_date": "Tue, 8 Jan 2002 15:42:13 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "> > > Does this have the multiple \"WITH xxx\" clauses which were discussed\n> > > earlier? That is a nonstarter for syntax. There are other places in the\n> > > grammar having \"with clauses\" and multiple arguments or subclauses, and\n> > > having the shift/reduce issues resolved...\n...\n> CREATE DATABASE <name> WITH LOCATION = <name> WITH OWNER = <name>\n\nIt was this syntax I was wondering about. Multiple \"WITH\"s should not be\nnecessary. 
Are they actually required in the patch?\n\n - Thomas\n", "msg_date": "Tue, 08 Jan 2002 15:15:22 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "On Tue, 8 Jan 2002, Thomas Lockhart wrote:\n\n> > > > Does this have the multiple \"WITH xxx\" clauses which were discussed\n> > > > earlier? That is a nonstarter for syntax. There are other places in the\n> > > > grammar having \"with clauses\" and multiple arguments or subclauses, and\n> > > > having the shift/reduce issues resolved...\n> ...\n> > CREATE DATABASE <name> WITH LOCATION = <name> WITH OWNER = <name>\n> \n> It was this syntax I was wondering about. Multiple \"WITH\"s should not be\n> necessary. Are they actually required in the patch?\n\nArgh. My bad. The syntax is what you had in mind:\n\nCREATE DATABASE <name> [WITH [LOCATION <name>] [OWNER <name>] ...]\n\nGavin\n\n", "msg_date": "Wed, 9 Jan 2002 12:52:27 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "Kaare Rasmussen wrote:\n> > * Make it easier to create a database owned by someone who can't createdb,\n> > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n> \n> Shouldn't that be\n> \n> CREATE DATABASE dbname WITH OWNER = \"user\"\n\nYes. I will make that change.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 21:00:38 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Updated TODO item" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> On Fri, 4 Jan 2002, Kaare Rasmussen wrote:\n>> CREATE DATABASE dbname WITH OWNER = \"user\"\n\n> A much better idea. There is no conflict in using OWNER here.\n> Revised patch attached.\n\nI have applied this patch, with a couple of editorial tweaks, and one\nnot-so-minor change: superuser privilege is required to create a\ndatabase on behalf of another user. Seems to me that CREATEDB\nprivilege should not be sufficient to allow such an operation.\n\nStill to do: teach createdb script about it, and revise pg_dumpall\nto use the facility instead of assuming that database owners have\nCREATEDB privilege. Working on those now ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Feb 2002 15:23:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item " }, { "msg_contents": "Thomas Lockhart wrote:\n> \n> > > > * Make it easier to create a database owned by someone who can't createdb,\n> > > > perhaps CREATE DATABASE dbname WITH USER = \"user\"\n> > > CREATE DATABASE dbname WITH OWNER = \"user\"\n> > A much better idea. There is no conflict in using OWNER here.\n> \n> Does this have the multiple \"WITH xxx\" clauses which were discussed\n> earlier? That is a nonstarter for syntax. There are other places in the\n> grammar having \"with clauses\" and multiple arguments or subclauses, and\n> having the shift/reduce issues resolved...\n> \n\nThe syntax of the CREATE SCHEMA SQL standard command is\n\nCREATE SCHEMA AUTHORIZATION userid\n\nShouldn't we be using\n\nCREATE DATABASE AUTHORIZATION userid\n\nto be consistent?\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. 
E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n", "msg_date": "Sun, 24 Feb 2002 20:11:43 -0500", "msg_from": "Fernando Nasser <fnasser@redhat.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> The syntax of the CREATE SCHEMA SQL standard command is\n> CREATE SCHEMA AUTHORIZATION userid\n> Shouldn't we be using\n> CREATE DATABASE AUTHORIZATION userid\n> to be consistent?\n\nSeems like a very weak analogy; there's no other similarities between\nthe two command syntaxes, so why argue that this should be the same?\nAlso, the semantics aren't the same --- for example, there's no a-priori\nassumption that a database owner owns everything within the database.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Feb 2002 20:24:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item " }, { "msg_contents": "Tom Lane wrote:\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > On Fri, 4 Jan 2002, Kaare Rasmussen wrote:\n> >> CREATE DATABASE dbname WITH OWNER = \"user\"\n> \n> > A much better idea. There is no conflict in using OWNER here.\n> > Revised patch attached.\n> \n> I have applied this patch, with a couple of editorial tweaks, and one\n> not-so-minor change: superuser privilege is required to create a\n> database on behalf of another user. Seems to me that CREATEDB\n> privilege should not be sufficient to allow such an operation.\n> \n> Still to do: teach createdb script about it, and revise pg_dumpall\n> to use the facility instead of assuming that database owners have\n> CREATEDB privilege. Working on those now ...\n\nSeems you are already on that too. I will wait for you to finish, when\nthere will be nothing left for me to do. 
:-(\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Feb 2002 21:00:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Gavin Sherry <swm@linuxworld.com.au> writes:\n> > > On Fri, 4 Jan 2002, Kaare Rasmussen wrote:\n> > >> CREATE DATABASE dbname WITH OWNER = \"user\"\n> > \n> > > A much better idea. There is no conflict in using OWNER here.\n> > > Revised patch attached.\n> > \n> > I have applied this patch, with a couple of editorial tweaks, and one\n> > not-so-minor change: superuser privilege is required to create a\n> > database on behalf of another user. Seems to me that CREATEDB\n> > privilege should not be sufficient to allow such an operation.\n> > \n> > Still to do: teach createdb script about it, and revise pg_dumpall\n> > to use the facility instead of assuming that database owners have\n> > CREATEDB privilege. Working on those now ...\n> \n> Seems you are already on that too. I will wait for you to finish, when\n> there will be nothing left for me to do. :-(\n\nI have applied this minor patch. I don't want to document the ability\nto use equals in this case because some day we may remove it. The\nequals doesn't fit with any of our other WITH clauses. I cleaned up a\nparagraph on OWNER, and mentioned we may want to remove the equals\nbackward compatibility hack someday.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: doc/src/sgml/ref/create_database.sgml\n===================================================================\nRCS file: /cvsroot/pgsql/doc/src/sgml/ref/create_database.sgml,v\nretrieving revision 1.24\ndiff -c -r1.24 create_database.sgml\n*** doc/src/sgml/ref/create_database.sgml\t24 Feb 2002 20:20:18 -0000\t1.24\n--- doc/src/sgml/ref/create_database.sgml\t25 Feb 2002 02:48:15 -0000\n***************\n*** 24,33 ****\n </refsynopsisdivinfo>\n <synopsis>\n CREATE DATABASE <replaceable class=\"PARAMETER\">name</replaceable>\n! [ WITH [ OWNER [ = ] <replaceable class=\"parameter\">dbowner</replaceable> ]\n! [ LOCATION [ = ] '<replaceable class=\"parameter\">dbpath</replaceable>' ]\n! [ TEMPLATE [ = ] <replaceable class=\"parameter\">template</replaceable> ]\n! [ ENCODING [ = ] <replaceable class=\"parameter\">encoding</replaceable> ] ]\n </synopsis>\n \n <refsect2 id=\"R2-SQL-CREATEDATABASE-1\">\n--- 24,33 ----\n </refsynopsisdivinfo>\n <synopsis>\n CREATE DATABASE <replaceable class=\"PARAMETER\">name</replaceable>\n! [ WITH [ OWNER <replaceable class=\"parameter\">dbowner</replaceable> ]\n! [ LOCATION '<replaceable class=\"parameter\">dbpath</replaceable>' ]\n! [ TEMPLATE <replaceable class=\"parameter\">template</replaceable> ]\n! [ ENCODING <replaceable class=\"parameter\">encoding</replaceable> ] ]\n </synopsis>\n \n <refsect2 id=\"R2-SQL-CREATEDATABASE-1\">\n***************\n*** 186,196 ****\n \n <para>\n Normally, the creator becomes the owner of the new database.\n! A different owner may be specified by using the <option>OWNER</>\n! clause (but only superusers may create databases on behalf of other users).\n! To create a database owned by oneself, either superuser privilege\n! or CREATEDB privilege is required. A superuser may create a database\n! 
for another user, even if that user has no special privileges himself.\n </para>\n \n <para>\n--- 186,195 ----\n \n <para>\n Normally, the creator becomes the owner of the new database.\n! Superusers can create databases owned by other users using the\n! <option>OWNER</> clause. They can even create databases owned by\n! users with no special privileges. Non-superusers with CREATEDB\n! privilege can only create databases owned by themselves.\n </para>\n \n <para>\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.279\ndiff -c -r2.279 gram.y\n*** src/backend/parser/gram.y\t24 Feb 2002 20:20:20 -0000\t2.279\n--- src/backend/parser/gram.y\t25 Feb 2002 02:48:26 -0000\n***************\n*** 3155,3160 ****\n--- 3155,3164 ----\n \t\t\t\t}\n \t\t;\n \n+ /*\n+ *\tOptional equals is here only for backward compatibility.\n+ *\tShould be removed someday. bjm 2002-02-24\n+ */\n opt_equal: '='\t\t\t\t\t\t\t\t{ $$ = TRUE; }\n \t\t| /*EMPTY*/\t\t\t\t\t\t\t{ $$ = FALSE; }\n \t\t;", "msg_date": "Sun, 24 Feb 2002 21:52:04 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have applied this minor patch. I don't want to document the ability\n> to use equals in this case because some day we may remove it. The\n> equals doesn't fit with any of our other WITH clauses.\n\nOne could argue at least as plausibly that we should allow optional '='\nin all the other WITH clauses. 
(That thought was why I renamed the\nnonterminal to not refer to createdb.)\n\nI'm also quite unimpressed with the notion of trying to suppress\nknowledge of a syntax that is in active use by pg_dump, and perhaps\nother tools too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 24 Feb 2002 21:58:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item " }, { "msg_contents": "Bruce Momjian writes:\n\n> I have applied this minor patch. I don't want to document the ability\n> to use equals in this case because some day we may remove it.\n\nDocument what the code does, not what you think it may do in the future.\nUsers that are used to the current syntax may get surprised when they see\nthe equal signs gone.\n\n> The equals doesn't fit with any of our other WITH clauses.\n\nAnd the WITH keyword doesn't fit any other SQL command. So when we make\nup new commands in the future, please drop it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 24 Feb 2002 22:06:20 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have applied this minor patch. I don't want to document the ability\n> > to use equals in this case because some day we may remove it. The\n> > equals doesn't fit with any of our other WITH clauses.\n> \n> One could argue at least as plausibly that we should allow optional '='\n> in all the other WITH clauses. 
(That thought was why I renamed the\n> nonterminal to not refer to createdb.)\n\nWell we have on TODO:\n\n\t* Make equals sign optional in CREATE DATABASE WITH param = 'val'\n\n> I'm also quite unimpressed with the notion of trying to suppress\n> knowledge of a syntax that is in active use by pg_dump, and perhaps\n> other tools too.\n\nWell, my assumption is that we don't want to document it because we want\nto discourage its use, unless we want to add equals to all the WITH\nclauses, which I didn't think we wanted to do.\n\nThere are other cases of syntax we don't document because it makes\nlittle sense, and I thought this was one of them.\n\nYou have a good point with pg_dump. Can I remove the use of the equals\nin there? Seems safe to me. However, it does prevent us from loading\nnewer pgdumps into older database, which seems like a pain.\n\nWow, this is tricky. I guess it is not worth fixing this to make it\nconsistent. I will put back the [=] and remove the comment unless\nsomeone else has a better idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 24 Feb 2002 22:34:52 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "On Sun, 2002-02-24 at 22:34, Bruce Momjian wrote:\n> You have a good point with pg_dump. Can I remove the use of the equals\n> in there? Seems safe to me. However, it does prevent us from loading\n> newer pgdumps into older database, which seems like a pain.\n\nAFAIK, no effort is made to ensure that dump files produced by a current\nversion of pg_dump will restore properly into older databases. 
From what\nI can tell, the odds of this happening successfully are slim to none...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n", "msg_date": "25 Feb 2002 00:10:03 -0500", "msg_from": "Neil Conway <nconway@klamath.dyndns.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" }, { "msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> AFAIK, no effort is made to ensure that dump files produced by a current\n> version of pg_dump will restore properly into older databases.\n\nNo, but gratuitous breakage should be avoided --- especially when it's\nin pursuit of a goal that no one except Bruce subscribes to in the\nfirst place ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 25 Feb 2002 00:38:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item " }, { "msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@redhat.com> writes:\n> > The syntax of the CREATE SCHEMA SQL standard command is\n> > CREATE SCHEMA AUTHORIZATION userid\n> > Shouldn't we be using\n> > CREATE DATABASE AUTHORIZATION userid\n> > to be consistent?\n> \n> Seems like a very weak analogy; there's no other similarities between\n> the two command syntaxes, so why argue that this should be the same?\n\n\nThe analogy is not with the command -- it is with the token\n'userid'.\nThe key word prefix tells what that token is supposed to be, and that\nis an <authorization-id>. The key word AUTHORIZATION works like a sort\nof 'adjective'.\n\n\n> Also, the semantics aren't the same --- for example, there's no a-priori\n> assumption that a database owner owns everything within the database.\n> \n\nI thought you were arguing that neither would a schema (i.e., you wanted\nobjects in a schema to have different owners).\n\nAnyway, that is not the point here. 
We have two commands that\ncreate groups of database objects (our \"database\" is the SQL catalog)\nand both specify who will own it. The CREATE DATABASE is implementation\ndefined and we can do whatever we want with it, but as we have a\nstandard\ncommand that uses a syntax to specify the owner I think we should follow\nit.\n\n\nWith the additional advantage that the '=' problem goes away and we\navoid\nfuture shift/reduce problems in the parser as 'WITH' is already too\noverloaded.\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n", "msg_date": "Mon, 25 Feb 2002 09:46:05 -0500", "msg_from": "Fernando Nasser <fnasser@redhat.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Updated TODO item" } ]
[ { "msg_contents": "> 2) Allow a transaction to be marked for rollback, in which case\n> it proceeds but rolls back at commit time. It is not possible\n\nSorry, can you explain one more time what's the point of continuing to\nmake changes in a transaction which will be rolled back?\n\nHow about savepoints?\n\nVadim\n", "msg_date": "Fri, 4 Jan 2002 11:48:26 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: ON ERROR triggers" }, { "msg_contents": "On Fri, Jan 04, 2002 at 11:48:26AM -0800, Mikheev, Vadim wrote:\n> > 2) Allow a transaction to be marked for rollback, in which case\n> > it proceeds but rolls back at commit time. It is not possible\n> \n> Sorry, can you explain one more time what's the point of continuing to\n> make changes in a transaction which will be rolled back?\n\nI think I can.\n\nThe point is to collect comprehensive error reports, mainly about\nfailed modifications of complex structured data which is\ncreated/modified concurrently by several workers in an optimistic\nlocking fashion. Because the data is so complex it won't help anybody\nif you print out a message like \"index xy violated by tuple ab\". Hence I\nwant to collect all the errors to give the application/the user the\npossibility to make an overall assessment about what has to be done to\navoid the error.\n\nThis is also the reason why I will insert a DUPKEY into an index\nafter having marked the transaction for rollback (see my answer to\nJan's mail). I deem this will give more informative error reports. I\nsimply execute everything the user wants to be done, and inform the\nuser about all the errors occurring, not only the first one.\n\nImagine CVS would inform you about only one conflict each time you ask to\nbe informed about potential conflicts. Wouldn't it be annoying? For\nsure, it would.
Now think about databases.\n \n> How about savepoints?\n\nThis would be my question to you: How about savepoints ?\nDo they help to achieve what I want to achieve ?\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Mon, 7 Jan 2002 08:48:51 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": false, "msg_subject": "Re: ON ERROR triggers" } ]
[ { "msg_contents": "> > 1. I prefer Oracle' (and others, I believe) way - put\n> statement(s) in PL block and define for what exceptions\n> (errors) what actions should be taken (ie IGNORE for\n> > NON_UNIQ_KEY error, etc).\n> \n> Some people prefer 'pure' SQL. Anyway, it can be argued which\n> is worse - the usage of non-SQL language, or usage of extended\n> SQL language. I guess the SQL standard does not provide for such\n> functionality?\n\nYes, there is no such syntax in standard. And imho when some\nfeature is not in standard then it's better to implement it\nhow others do (for as much compatibility as possible/significant).\n\n> > 2. For INSERT ... SELECT statement one can put DISTINCT in\n> select' target list.\n> \n> With this construct, you are effectively copying rows from\n> one table to another - or constructing rows from various\n> sources (constants, other tables etc) and inserting these\n> in the table. If the target table has unique indexes \n> (or constraints), and some of the rows returned by SELECT violate\n\nSorry, I didn't consider this case, you're right.\n\n> I believe all this functionality will have to consider the \n> syntax firts.\n\nAll this functionality will have to consider savepoints\nimplementation first. As for syntax - we could implement both.\n\nVadim\n", "msg_date": "Fri, 4 Jan 2002 12:14:43 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: Bulkloading using COPY - ignore duplicates? " } ]
[ { "msg_contents": "I have added this item to TODO:\n\n\t* Consider use of open/fcntl(O_DIRECT) to minimize OS caching\n\nA web search shows it minimizes file system caching, perhaps for sequential\nscans:\n\n\thttp://archives2.us.postgresql.org/pgsql-hackers/2001-09/msg00713.php\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 16:08:18 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "O_DIRECT use" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have added this item to TODO:\n> \t* Consider use of open/fcntl(O_DIRECT) to minimize OS caching\n\nWhy exactly would we wish to minimize OS caching?\n\nIn my mind, Postgres has always relied heavily on the existence of a\nlayer of kernel caching. Disabling that will hurt far more than help.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Jan 2002 16:29:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: O_DIRECT use " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have added this item to TODO:\n> > \t* Consider use of open/fcntl(O_DIRECT) to minimize OS caching\n> \n> Why exactly would we wish to minimize OS caching?\n> \n> In my mind, Postgres has always relied heavily on the existence of a\n> layer of kernel caching. Disabling that will hurt far more than help.\n\nNot sure. Someone on IRC brought it up. If we are sequential scanning a\nlarge table, caching may be bad because we are pushing out stuff already\nin the cache that may be useful.
It is related to this TODO item:\n\n\t* Add free-behind capability for large sequential scans (Bruce)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 16:31:40 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: O_DIRECT use" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Why exactly would we wish to minimize OS caching?\n\n> Not sure. Someone on IRC brought it up. If we are sequential scanning a\n> large table, caching may be bad because we are pushing out stuff already\n> in the cache that may be useful.\n\nYeah, but people normally try to set things up to avoid doing large\nsequential scans, at least in all the contexts where they need high\nperformance.
For index searches you definitely want all the caching\n> you can get.\n> \n> For that matter, I would expect that O_DIRECT also defeats readahead,\n> so I'd fully expect it to be a loser for seqscans too.\n\nI am told on FreeBSD it does not disable read-ahead, just caching;\nsomething that needs more research.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 16:48:50 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: O_DIRECT use" }, { "msg_contents": "[2002-01-04 16:31] Bruce Momjian said:\n\n| Not sure. Someone on IRC brought it up. \n\nIs there a pg IRC channel? What is the server?\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Fri, 4 Jan 2002 16:53:34 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: O_DIRECT use" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> For that matter, I would expect that O_DIRECT also defeats readahead,\n>> so I'd fully expect it to be a loser for seqscans too.\n\n> I am told on FreeBSD it does not disable read-ahead, just caching;\n> something that needs more research.\n\nHmm. I always thought of read-ahead as preloading buffer cache entries.\n\nIt'd be interesting to get a description of *exactly* what this flag\ndoes, rather than handwavy approximations. 
Time to start reading the\nkernel code, I suppose.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Jan 2002 16:57:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: O_DIRECT use " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> For that matter, I would expect that O_DIRECT also defeats readahead,\n> >> so I'd fully expect it to be a loser for seqscans too.\n> \n> > I am told on FreeBSD it does not disable read-ahead, just caching;\n> > something that needs more research.\n> \n> Hmm. I always thought of read-ahead as preloading buffer cache entries.\n> \n> It'd be interesting to get a description of *exactly* what this flag\n> does, rather than handwavy approximations. Time to start reading the\n> kernel code, I suppose.\n\nI found this before adding the item:\n\n\thttp://www.pairlist.net/pipermail/flow-tools/2001-October/000058.html\n\nAnd this for FreeBSD 4.4:\n\n2.1 Kernel Changes\n\n The O_DIRECT flag has been added to open(2) and fcntl(2). Specifying this\n flag for open files will attempt to minimize the cache effects of reading\n and writing.\n\n\nI also found:\n\n\thttp://www.ukuug.org/events/linux2001/papers/html/AArcangeli-o_direct.html\n\nThese later ones seem to indicate there isn't read-ahead, meaning we\nwould have to do our own prefetches. Eck. I am unclear if that is true\non all OS's.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 18:12:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: O_DIRECT use" }, { "msg_contents": "Brent Verner wrote:\n> [2002-01-04 16:31] Bruce Momjian said:\n> \n> | Not sure. Someone on IRC brought it up. \n> \n> Is there a pg IRC channel? 
What is the server?\n\nSee FAQ item 1.6.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 18:13:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: O_DIRECT use" }, { "msg_contents": "Brent Verner wrote:\n> [2002-01-04 16:31] Bruce Momjian said:\n> \n> | Not sure. Someone on IRC brought it up. \n> \n> Is there a pg IRC channel? What is the server?\n\nFAQ item text is:\n\n <P>There is also an IRC channel on EFNet, channel\n <I>#PostgreSQL.</I> I use the unix command <CODE>irc -c\n '#PostgreSQL' \"$USER\" irc.phoenix.net.</CODE></P>\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 18:13:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: O_DIRECT use" }, { "msg_contents": "On Fri, 4 Jan 2002, Bruce Momjian wrote:\n\n> > >> For that matter, I would expect that O_DIRECT also defeats readahead,\n> > >> so I'd fully expect it to be a loser for seqscans too.\n\n> And this for FreeBSD 4.4:\n\n> The O_DIRECT flag has been added to open(2) and fcntl(2). Specifying this\n> flag for open files will attempt to minimize the cache effects of reading\n> and writing.\n\nThis seems rather vague. Can any FreeBSD person here say\nwhether the semantics are any stronger?\n\n> \thttp://www.ukuug.org/events/linux2001/papers/html/AArcangeli-o_direct.html\n>\n> These later ones seem to indicate there isn't read-ahead, meaning we\n> would have to do our own prefetches. Eck. 
I am unclear if that is\n> true on all OS's.\n\nThe Linux O_DIRECT semantics are intended to be harder.\nIn essence, the kernel _will not cache_ data read from\nor written to such a file or device.\n\nThe point of this, incidentally, was to be able to run\nthings like Oracle Parallel Server and other shared-\ndisk setups. It's use as an \"I don't need this cached\"\nmechanism is secondary, and rather sub-optimal, as seen\nhere; you disable software read-ahead and introduce\ncoherence issues with non-O_DIRECT openers of the file.\n(I'm not sure of the precise Linux semantics of this,\nbut it's probably fair to say that you may as well\nconsider them undefined.)\n\nLinux 2.4 has \"madvise\", but unfortunately no matching\n\"fadvise\". A quick Google implied that FreeBSD is in\nthe same boat.\n\nMatthew.\n\n", "msg_date": "Sat, 5 Jan 2002 00:27:50 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: O_DIRECT use" } ]
[ { "msg_contents": "I have been experimenting with altering the SPINS_PER_DELAY number in\nsrc/backend/storage/lmgr/s_lock.c. My results suggest that the current\nsetting of 100 may be too small.\n\nThe attached graph shows pgbench results on the same 4-way Linux box\nI described in my last message. (The numbers are not exactly comparable\nto the previous graph, because I recompiled with --enable-cassert off\nfor this set of runs.) All runs use current CVS plus the second LWLock\npatch under discussion.\n\nEvidently, on this hardware and test case the optimal SPINS_PER_DELAY\nvalue is somewhere in the low thousands, not 100. I find this rather\nsurprising given that spinlocks are never held for more than a few\ndozen instructions, but the results seem quite stable.\n\nOn the other hand, increasing SPINS_PER_DELAY could hardly fail to be\na loser on a single-CPU machine.\n\nWould it be worth making this value a GUC parameter, so that it could\nbe tuned conveniently on a per-installation basis?\n\n\t\t\tregards, tom lane", "msg_date": "Fri, 04 Jan 2002 22:53:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Some interesting results from tweaking spinlocks" }, { "msg_contents": "Tom Lane wrote:\n> I have been experimenting with altering the SPINS_PER_DELAY number in\n> src/backend/storage/lmgr/s_lock.c. My results suggest that the current\n> setting of 100 may be too small.\n> \n> The attached graph shows pgbench results on the same 4-way Linux box\n> I described in my last message. (The numbers are not exactly comparable\n> to the previous graph, because I recompiled with --enable-cassert off\n> for this set of runs.) All runs use current CVS plus the second LWLock\n> patch under discussion.\n> \n> Evidently, on this hardware and test case the optimal SPINS_PER_DELAY\n> value is somewhere in the low thousands, not 100.
I find this rather\n> surprising given that spinlocks are never held for more than a few\n> dozen instructions, but the results seem quite stable.\n> \n> On the other hand, increasing SPINS_PER_DELAY could hardly fail to be\n> a loser on a single-CPU machine.\n> \n> Would it be worth making this value a GUC parameter, so that it could\n> be tuned conveniently on a per-installation basis?\n\nThe difference is small, perhaps 15%. My feeling is that we may want to\nstart configuring whether we are on a multi-cpu machine and handle thing\ndifferently. Are there other SMP issues that could be affected by a\nsingle boolean setting? Is there a way to detect this on postmaster\nstartup?\n\nMy offhand opinion is that we should keep what we have now and start to\nthink of a more comprehensive solution for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 23:34:43 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The difference is small, perhaps 15%.\n\nThe thing that gets my attention is not that it's so small, it's that\nit is so large. My expectation was that that code would hardly ever\nbe executed at all, and even less seldom (on a multiprocessor) need to\nblock via select(). How is it that *increasing* the delay interval\n(which one might reasonably expect to simply waste cycles) can achieve\na 15% improvement in total throughput? That shouldn't be happening.\n\n> My feeling is that we may want to start configuring whether we are on\n> a multi-cpu machine and handle thing differently.\n\nThat would be more palatable if there were some portable way of\ndetecting it. 
But maybe we'll be forced into an \"is_smp\" GUC switch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 04 Jan 2002 23:49:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some interesting results from tweaking spinlocks " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The difference is small, perhaps 15%.\n> \n> The thing that gets my attention is not that it's so small, it's that\n> it is so large. My expectation was that that code would hardly ever\n> be executed at all, and even less seldom (on a multiprocessor) need to\n> block via select(). How is it that *increasing* the delay interval\n> (which one might reasonably expect to simply waste cycles) can achieve\n> a 15% improvement in total throughput? That shouldn't be happening.\n\nOK, I am a little confused now. I thought the spinlock was only done a\nfew times if we couldn't get a lock, and if we don't we go to sleep, and\nthe count determines how many times we try. Isn't that expected to\naffect SMP machines?\n\n> \n> > My feeling is that we may want to start configuring whether we are on\n> > a multi-cpu machine and handle thing differently.\n> \n> That would be more palatable if there were some portable way of\n> detecting it. But maybe we'll be forced into an \"is_smp\" GUC switch.\n\nYes, that is what I was thinking, but frankly, I am not going to give up\non SMP auto-detection until I am convinced it can't be done portably.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 4 Jan 2002 23:52:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, I am a little confused now. I thought the spinlock was only done a\n> few times if we couldn't get a lock, and if we don't we go to sleep, and\n> the count determines how many times we try. Isn't that expected to\n> affect SMP machines?\n\nYeah, but if the spinlock is only held for a few dozen instructions,\none would think that the max useful delay is also a few dozen\ninstructions (or maybe a few times that, allowing for the possibility\nthat other processors might claim the lock before we can get it).\nIf we spin for longer than that, the obvious conclusion is that the\nspinlock is held by a process that's lost the CPU, and we should\nourselves yield the CPU so that it can run again. Further spinning\njust wastes CPU time that might be used elsewhere.\n\nThese measurements seem to say there's a flaw in that reasoning.\nWhat is the flaw?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Jan 2002 00:00:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some interesting results from tweaking spinlocks " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > OK, I am a little confused now. I thought the spinlock was only done a\n> > few times if we couldn't get a lock, and if we don't we go to sleep, and\n> > the count determines how many times we try. 
Isn't that expected to\n> > affect SMP machines?\n> \n> Yeah, but if the spinlock is only held for a few dozen instructions,\n> one would think that the max useful delay is also a few dozen\n> instructions (or maybe a few times that, allowing for the possibility\n> that other processors might claim the lock before we can get it).\n> If we spin for longer than that, the obvious conclusion is that the\n> spinlock is held by a process that's lost the CPU, and we should\n> ourselves yield the CPU so that it can run again. Further spinning\n> just wastes CPU time that might be used elsewhere.\n> \n> These measurements seem to say there's a flaw in that reasoning.\n> What is the flaw?\n\nMy guess is that the lock is held for more than a few instructions, at\nleast in some cases. Spin/increment is a pretty fast operation with no\naccess of RAM. Could the overhead of the few instructions be more than\nthe spin time, or perhaps there is a stall in the cpu cache, requiring\nslower RAM access while the spin counter is incrementing rapidly?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 5 Jan 2002 00:13:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "The number of CPUs on a system should be fairly straight forward to\nfind out. 
Distributed.net source code has some good examples.\n\nWhat I'm not sure of is how well this stuff reacts to CPUs being\nsoftware disabled (Solaris has such a feature).\n\nftp://ftp.distributed.net/pub/dcti/source/pub-20010416.tgz\n\nfirst function of client/common/cpucheck.cpp\n\nEach OS gets its own implementation, but they've got all the ones\nPostgresql uses covered off.\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, January 04, 2002 11:49 PM\nSubject: Re: [HACKERS] Some interesting results from tweaking\nspinlocks\n\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The difference is small, perhaps 15%.\n>\n> The thing that gets my attention is not that it's so small, it's\nthat\n> it is so large. My expectation was that that code would hardly ever\n> be executed at all, and even less seldom (on a multiprocessor) need\nto\n> block via select(). How is it that *increasing* the delay interval\n> (which one might reasonably expect to simply waste cycles) can\nachieve\n> a 15% improvement in total throughput? That shouldn't be happening.\n>\n> > My feeling is that we may want to start configuring whether we are\non\n> > a multi-cpu machine and handle thing differently.\n>\n> That would be more palatable if there were some portable way of\n> detecting it. But maybe we'll be forced into an \"is_smp\" GUC\nswitch.\n>\n> regards, tom lane\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to\nmajordomo@postgresql.org\n>\n\n", "msg_date": "Sat, 5 Jan 2002 00:23:36 -0500", "msg_from": "\"Rod Taylor\" <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks " }, { "msg_contents": "\nThanks. 
Looks good to me.\n\n---------------------------------------------------------------------------\n\nRod Taylor wrote:\n> The number of CPUs on a system should be fairly straight forward to\n> find out. Distributed.net source code has some good examples.\n> \n> What I'm not sure of is how well this stuff reacts to CPUs being\n> software disabled (Solaris has such a feature).\n> \n> ftp://ftp.distributed.net/pub/dcti/source/pub-20010416.tgz\n> \n> first function of client/common/cpucheck.cpp\n> \n> Each OS gets its own implementation, but they've got all the ones\n> Postgresql uses covered off.\n> --\n> Rod Taylor\n> \n> This message represents the official view of the voices in my head\n> \n> ----- Original Message -----\n> From: \"Tom Lane\" <tgl@sss.pgh.pa.us>\n> To: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> Cc: <pgsql-hackers@postgresql.org>\n> Sent: Friday, January 04, 2002 11:49 PM\n> Subject: Re: [HACKERS] Some interesting results from tweaking\n> spinlocks\n> \n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > The difference is small, perhaps 15%.\n> >\n> > The thing that gets my attention is not that it's so small, it's\n> that\n> > it is so large. My expectation was that that code would hardly ever\n> > be executed at all, and even less seldom (on a multiprocessor) need\n> to\n> > block via select(). How is it that *increasing* the delay interval\n> > (which one might reasonably expect to simply waste cycles) can\n> achieve\n> > a 15% improvement in total throughput? That shouldn't be happening.\n> >\n> > > My feeling is that we may want to start configuring whether we are\n> on\n> > > a multi-cpu machine and handle thing differently.\n> >\n> > That would be more palatable if there were some portable way of\n> > detecting it. 
But maybe we'll be forced into an \"is_smp\" GUC\n> switch.\n> >\n> > regards, tom lane\n> >\n> > ---------------------------(end of\n> broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to\n> majordomo@postgresql.org\n> >\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 5 Jan 2002 00:30:09 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> My guess is that the lock is held for more than a few instructions, at\n> least in some cases.\n\nIt is not. LWLock and a couple of other places are the only direct uses\nof spinlocks, and none of them execute more than a few lines of C code\nwhile holding a spinlock. Nor do they touch any wide range of memory\nwhile doing so; your thought about cache stalls is a good one, but I\ndon't buy it.\n\nI've performed some profiling on that 4-way SMP machine, and it might\nbe useful to look at the call patterns for LWLock, which is certainly\nthe main use of spinlocks. 
This is an extract from gprof for one\nbackend process in a 25-client pgbench run, using CVS + second version\nof LWLock patch:\n\n-----------------------------------------------\n 0.00 0.00 1/420232 ExtendCLOG [475]\n 0.00 0.00 1/420232 InitBufferPool [517]\n 0.00 0.00 1/420232 InitBackendSharedInvalidationState [539]\n 0.00 0.00 1/420232 CleanupInvalidationState [547]\n 0.00 0.00 1/420232 LockMethodTableInit [511]\n 0.00 0.00 4/420232 GetPageWithFreeSpace [516]\n 0.00 0.00 8/420232 WaitIO [523]\n 0.00 0.00 8/420232 RecordAndGetPageWithFreeSpace [501]\n 0.00 0.00 10/420232 ReleaseAndReadBuffer [513]\n 0.00 0.00 11/420232 XLogWrite [266]\n 0.00 0.00 12/420232 ShmemInitStruct [494]\n 0.00 0.00 14/420232 SetBufferCommitInfoNeedsSave [509]\n 0.00 0.00 128/420232 ProcSleep [450]\n 0.00 0.00 289/420232 BufferReplace [304]\n 0.00 0.00 400/420232 TransactionIdSetStatus [263]\n 0.00 0.00 400/420232 GetNewObjectId [449]\n 0.00 0.00 400/420232 XLogFlush [215]\n 0.00 0.00 401/420232 GetNewTransactionId [448]\n 0.00 0.00 401/420232 CommitTransaction [47]\n 0.00 0.00 403/420232 LockReleaseAll [345]\n 0.00 0.00 762/420232 StartBufferIO [439]\n 0.00 0.00 1460/420232 TransactionIdGetStatus [192]\n 0.00 0.00 2000/420232 ReadNewTransactionId [388]\n 0.00 0.00 2000/420232 GetSnapshotData [334]\n 0.00 0.00 2870/420232 WriteBuffer [346]\n 0.00 0.00 3204/420232 XLogInsert [43]\n 0.00 0.00 9499/420232 LockRelease [107]\n 0.01 0.00 18827/420232 LockAcquire [66]\n 0.01 0.00 30871/420232 ReceiveSharedInvalidMessages [196]\n 0.03 0.01 76888/420232 ReleaseBuffer [80]\n 0.04 0.01 110970/420232 ReadBufferInternal [31]\n 0.06 0.01 157987/420232 LockBuffer [55]\n[44] 5.4 0.15 0.04 420232 LWLockAcquire [44]\n 0.04 0.00 29912/30040 IpcSemaphoreLock [144]\n 0.00 0.00 4376/7985 s_lock [596]\n\n\n-----------------------------------------------\n 0.00 0.00 1/420708 InitBufferPool [517]\n 0.00 0.00 1/420708 shmem_exit [554]\n 0.00 0.00 1/420708 InitShmemIndex [524]\n 0.00 0.00 1/420708 
InitBackendSharedInvalidationState [539]\n 0.00 0.00 1/420708 LockMethodTableInit [511]\n 0.00 0.00 4/420708 GetPageWithFreeSpace [516]\n 0.00 0.00 8/420708 WaitIO [523]\n 0.00 0.00 8/420708 RecordAndGetPageWithFreeSpace [501]\n 0.00 0.00 11/420708 ShmemInitStruct [494]\n 0.00 0.00 11/420708 XLogWrite [266]\n 0.00 0.00 14/420708 SetBufferCommitInfoNeedsSave [509]\n 0.00 0.00 128/420708 ProcSleep [450]\n 0.00 0.00 289/420708 BufferReplace [304]\n 0.00 0.00 400/420708 TransactionLogUpdate [260]\n 0.00 0.00 400/420708 GetNewObjectId [449]\n 0.00 0.00 401/420708 CommitTransaction [47]\n 0.00 0.00 402/420708 GetNewTransactionId [448]\n 0.00 0.00 403/420708 LockReleaseAll [345]\n 0.00 0.00 762/420708 ReadBufferInternal [31]\n 0.00 0.00 762/420708 TerminateBufferIO [455]\n 0.00 0.00 800/420708 XLogFlush [215]\n 0.00 0.00 1460/420708 TransactionIdGetStatus [192]\n 0.00 0.00 2000/420708 ReadNewTransactionId [388]\n 0.00 0.00 2000/420708 GetSnapshotData [334]\n 0.00 0.00 2870/420708 WriteBuffer [346]\n 0.00 0.00 3280/420708 XLogInsert [43]\n 0.00 0.00 9499/420708 LockRelease [107]\n 0.00 0.00 18827/420708 LockAcquire [66]\n 0.01 0.00 30871/420708 ReceiveSharedInvalidMessages [196]\n 0.02 0.00 76888/420708 ReleaseBuffer [80]\n 0.02 0.00 110218/420708 BufferAlloc [42]\n 0.03 0.00 157987/420708 LockBuffer [55]\n[70] 2.6 0.09 0.00 420708 LWLockRelease [70]\n 0.00 0.00 29982/30112 IpcSemaphoreUnlock [571]\n 0.00 0.00 3604/7985 s_lock [596]\n\n\nWhat I draw from this is:\n\n1. The BufMgrLock is the principal source of LWLock contention, since it\nis locked more than anything else. (The ReleaseBuffer,\nReadBufferInternal, and BufferAlloc calls are all to acquire/release\nBufMgrLock. Although LockBuffer appears to numerically exceed these\ncalls, the LockBuffer operations are spread out over all the per-buffer\ncontext locks, so it's unlikely that there's much contention for any one\nbuffer context lock.) 
It's too late in the 7.2 cycle to think about\nredesigning bufmgr's interlocking but this ought to be high priority for\nfuture work.\n\n2. In this example, almost one in ten LWLockAcquire calls results in\nblocking (calling IpcSemaphoreLock). That seems like a lot. I was\nseeing much better results on a uniprocessor under essentially the\nsame test: one in a thousand LWLockAcquire calls blocked, not one in\nten. What's causing that discrepancy?\n\n3. The amount of spinlock-level contention seems too high too. We\nare calling s_lock about one out of every hundred LWLockAcquire or\nLWLockRelease calls; the equivalent figure from a uniprocessor profile\nis one in five thousand. Given the narrow window in which the spinlock\nis held, how can the contention rate be so high?\n\nAnyone see an explanation for these last two observations?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Jan 2002 00:34:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some interesting results from tweaking spinlocks " }, { "msg_contents": "> 2. In this example, almost one in ten LWLockAcquire calls results in\n> blocking (calling IpcSemaphoreLock). That seems like a lot. I was\n> seeing much better results on a uniprocessor under essentially the\n> same test: one in a thousand LWLockAcquire calls blocked, not one in\n> ten. What's causing that discrepancy?\n> \n> 3. The amount of spinlock-level contention seems too high too. We\n> are calling s_lock about one out of every hundred LWLockAcquire or\n> LWLockRelease calls; the equivalent figure from a uniprocessor profile\n> is one in five thousand. Given the narrow window in which the spinlock\n> is held, how can the contention rate be so high?\n> \n> Anyone see an explanation for these last two observations?\n\nIsn't there tons more lock contention on an SMP machine?
I don't see\nthe surprise.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 5 Jan 2002 00:41:48 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Isn't there tons more lock contention on an SMP machine.\n\nNo, one would hope not. If you can't get the various processes to\nrun without much interference, you're wasting your time dealing\nwith multiple CPUs.\n\nIn a uniprocessor, we'll suffer from lock contention if one process\nhappens to lose the CPU while holding a lock, and one of the other\nprocesses that gets to run meanwhile tries to acquire that same lock.\nIn SMP this gets folded down: the lock holder might not lose its CPU\nat all, but some other CPU could be running a process that tries to\nacquire the lock meanwhile. It's not apparent to me why that should\nincrease the chance of lock contention, however. The percentage of\na process' runtime in which it is holding a lock should be the same\neither way, so the probability that another process fails to acquire\nthe lock when it wants shouldn't change either. Where is the flaw\nin this analysis?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Jan 2002 00:52:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some interesting results from tweaking spinlocks " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Isn't there tons more lock contention on an SMP machine.\n> \n> No, one would hope not. If you can't get the various processes to\n> run without much interference, you're wasting your time dealing\n> with multiple CPUs.\n\nThere is hope and reality. 
:-)\n\n> In a uniprocessor, we'll suffer from lock contention if one process\n> happens to lose the CPU while holding a lock, and one of the other\n> processes that gets to run meanwhile tries to acquire that same lock.\n> In SMP this gets folded down: the lock holder might not lose its CPU\n> at all, but some other CPU could be running a process that tries to\n> acquire the lock meanwhile. It's not apparent to me why that should\n> increase the chance of lock contention, however. The percentage of\n> a process' runtime in which it is holding a lock should be the same\n> either way, so the probability that another process fails to acquire\n> the lock when it wants shouldn't change either. Where is the flaw\n> in this analysis?\n\nAt the risk of sounding stupid because I am missing something: On a\nsingle CPU system, one process is grabbing-releasing the lock while it\nhas the CPU, and sometimes it loses the CPU while it has the lock. On\nan SMP machine, all the backends are contending for the lock at the\n_same_ time. That is why SMP kernel coding is so hard, and they usually\nget around it by having one master kernel lock, which seems to be\nexactly what our mega-lock is doing; not a pretty picture.\n\nOn a single CPU machine, you fail to get the lock only if another\nprocess has gone to sleep while holding the lock. With a multi-cpu\nmachine, especially a 4-way, you can have up to three processes\n(excluding your own) holding that lock, and if that happens, you can't\nget it.\n\nThink of it this way, on a single-cpu machine, only one process can go\nto sleep waiting on the lock. Any others will fail to get the lock and\ngo back to sleep. 
On a 4-way (which is what I think you said you were\non), you have three possible processes holding that lock, plus\nprocesses that have gone to sleep holding the lock.\n\nDoes that make any sense?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 5 Jan 2002 01:08:39 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "[2002-01-05 00:00] Tom Lane said:\n| Bruce Momjian <pgman@candle.pha.pa.us> writes:\n| > OK, I am a little confused now. I thought the spinlock was only done a\n| > few times if we couldn't get a lock, and if we don't we go to sleep, and\n| > the count determines how many times we try. Isn't that expected to\n| > affect SMP machines?\n| \n| Yeah, but if the spinlock is only held for a few dozen instructions,\n| one would think that the max useful delay is also a few dozen\n| instructions (or maybe a few times that, allowing for the possibility\n| that other processors might claim the lock before we can get it).\n| If we spin for longer than that, the obvious conclusion is that the\n| spinlock is held by a process that's lost the CPU, and we should\n| ourselves yield the CPU so that it can run again. Further spinning\n| just wastes CPU time that might be used elsewhere.\n| \n| These measurements seem to say there's a flaw in that reasoning.\n| What is the flaw?\n\nKnowing very little of SMP, it looks like the spinning is parallelizing\nas expected, getting to select() faster, then serializing on the \nselect() call. I suspect using usleep() instead of select() might \nrelieve the serialization.
I'm aware that usleep(10) will actually \nyield between 10 and 20us due to the kernel's scheduler.\n\n b\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 5 Jan 2002 02:54:33 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "Your observation that spinning instead of sleeping being faster on SMP makes\nsense.\n\nOn a single processor system, if you don't have the lock, you should call\nselect() as soon as possible (never spin). This will allow the OS (presumably)\nto switch to the process who does. You will never get the lock unless your\nprocess loses the CPU because some other process MUST get CPU time in order to\nrelease the lock.\n\nOn an SMP machine, this is different, other processes can run truly\nsimultaneously to the process spinning. Then you have the trade-off of wasting\nCPU cycles vs sleeping.\n\nA better lock system could know how many CPUs are in a system, and how many\nprocesses are waiting for the lock. Use this information to manage who sleeps\nand who spins.\n\nFor instance, if you have a 2 CPU SMP box, the first process to get the lock\ngets it. The next process to try for the lock should spin. The third process\nwaiting should sleep.\n\nATOMIC_INC(lock->waiters);\n\nwhile(TAS(lock))\n{\n\tif (++delays > (TIMEOUT_MSEC / DELAY_MSEC))\n\t\ts_lock_stuck(lock, file, line);\n\tif(lock->waiters >= num_cpus)\n\t{\n\t\tdelay.tv_sec = 0;\n\t\tdelay.tv_usec = DELAY_MSEC * 1000;\n\t\t(void) select(0, NULL, NULL, NULL, &delay);\n\t}\n}\n\nATOMIC_DEC(lock->waiters);\n\n\nThe above code is probably wrong, but something like it may improve performance\non SMP and uniprocessor boxes. 
On a uniprocessor box, the CPU is released right\naway on contention. On an SMP box light contention allows some spinning, but on\nheavy contention the CPUs aren't wasting a lot of time spinning.\n", "msg_date": "Sat, 05 Jan 2002 08:12:42 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "mlw wrote:\n[snip]\n#define SPINS_PER_DELAY 2000\n#define DELAY_MSEC 10\n#define TIMEOUT_MSEC (60 * 1000)\n\n ATOMIC_INC(lock->waiters);\n\n while (TAS(lock))\n {\n if ( (++spins > SPINS_PER_DELAY) || (lock->waiters >= CPUS) )\n {\n if (++delays > (TIMEOUT_MSEC / DELAY_MSEC))\n s_lock_stuck(lock, file, line);\n\n delay.tv_sec = 0;\n delay.tv_usec = DELAY_MSEC * 1000;\n (void) select(0, NULL, NULL, NULL, &delay);\n\n spins = 0;\n }\n }\n ATOMIC_DEC(lock->waiters);\n\n\nThis is better function, the one in my previous post was non-sense, I should\nhave coffee BEFORE I post.\n", "msg_date": "Sat, 05 Jan 2002 10:11:29 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> A better lock system could know how many CPUs are in a system, and how many\n> processes are waiting for the lock. Use this information to manage who sleeps\n> and who spins.\n> For instance, if you have a 2 CPU SMP box, the first process to get the lock\n> gets it. The next process to try for the lock should spin. The third process\n> waiting should sleep.\n\nActually, the thing you want to know before deciding whether to spin is\nwhether the current lock holder is running (presumably on some other\nCPU) or is waiting to run. If he is waiting then it makes sense to\nyield your CPU so he can run. 
If he is running then you should just\nspin for the presumably short time before he frees the spinlock.\nOn a single-CPU system this decision rule obviously reduces to \"always\nyield\".\n\nUnfortunately, while we could store the PID of the current lock holder\nin the data structure, I can't think of any adequately portable way to\ndo anything with the information :-(. AFAIK there's no portable kernel\ncall that asks \"is this PID currently running on another CPU?\"\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Jan 2002 12:46:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some interesting results from tweaking spinlocks " }, { "msg_contents": "Tom Lane wrote:\n\n>Unfortunately, while we could store the PID of the current lock holder\n>in the data structure, I can't think of any adequately portable way to\n>do anything with the information :-(. AFAIK there's no portable kernel\n>call that asks \"is this PID currently running on another CPU?\"\n>\nBut do all performance tweaks need to be portable ?\n\n>regards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 6: Have you searched our list archives?\n>\n>http://archives.postgresql.org\n>\n\n\n", "msg_date": "Sat, 05 Jan 2002 23:13:16 +0500", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> I suspect using usleep() instead of select() might \n> relieve the serialization.\n\nA number of people have suggested that reducing the sleep interval would\nimprove matters. I tried that just now, again on RedHat's 4-way box,\nand was mildly astonished to find that it makes things worse. The graph\nbelow shows pgbench results for both the current code (10 millisec delay\nusing select()) and a 10-microsec delay using usleep(), with several\ndifferent SPINS_PER_DELAY values. 
Test conditions are otherwise the\nsame as in my last message (in particular, LWLock patch version 2).\n\nAt any given SPINS_PER_DELAY, the 10msec sleep beats the 10usec sleep\nhandily. I wonder if this indicates a problem with Linux'\nimplementation of usleep?\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 05 Jan 2002 14:01:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some interesting results from tweaking spinlocks " }, { "msg_contents": "[2002-01-05 14:01] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > I suspect using usleep() instead of select() might \n| > relieve the serialization.\n| \n| A number of people have suggested that reducing the sleep interval would\n| improve matters. \n\nUsing a single-processor machine, we're not going to get any lower \nsleep times than ~10ms from either usleep or select on linux, and \nusleep is always longer.\n\nbrent$ ./s_lock 1 0\nusleep = 0.007130 s\nselect = 0.000007 s\nnanosleep = 0.013286 s\nbrent$ ./s_lock 1 10\nusleep = 0.013465 s\nselect = 0.009879 s\nnanosleep = 0.019924 s\n\nOn FBSD, the shortest sleep is ~20ms, but is the same for usleep and\nselect.\n\n| I tried that just now, again on RedHat's 4-way box,\n| and was mildly astonished to find that it makes things worse. The graph\n| below shows pgbench results for both the current code (10 millisec delay\n| using select()) and a 10-microsec delay using usleep(), with several\n| different SPINS_PER_DELAY values. Test conditions are otherwise the\n| same as in my last message (in particular, LWLock patch version 2).\n\nAh, now this is very interesting. Looks like increasing spins allows\nthe process to get the lock before the usleep/select is run -- based \non the fact the that \"usleep 10 spins 100\" is markedly lower than the \nselect version. 
This is in keeping with observation mentioned above \nwhere usleep sleeps longer than select() on linux.\n\nIt would be interesting to count the number of times this select() is\ncalled on the SMP machines at various spin counts.\n\n| At any given SPINS_PER_DELAY, the 10msec sleep beats the 10usec sleep\n| handily. I wonder if this indicates a problem with Linux'\n| implementation of usleep?\n\nI don't think so, but it does disprove my original suspicion. Given\nthe significant performance gap, I'd vote to add a configurable \nparameter for the spin counter.\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 5 Jan 2002 16:41:01 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> Using a single-processor machine, we're not going to get any lower \n> sleep times than ~10ms from either usleep or select on linux, and \n> usleep is always longer.\n\nAh, so usleep is just being stricter about rounding up the requested\ndelay? That would explain the results all right.\n\n> Looks like increasing spins allows\n> the process to get the lock before the usleep/select is run \n\nRight. Up to a point, increasing spins improves the odds of acquiring\nthe lock without having to release the processor.\n\nWhat I should've thought of is to try sched_yield() as well, which is\nthe operation we *really* want here, and it is available on this version\nof Linux. 
Off to run another batch of tests ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Jan 2002 17:04:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some interesting results from tweaking spinlocks " }, { "msg_contents": "[2002-01-05 17:04] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > Using a single-processor machine, we're not going to get any lower \n| > sleep times than ~10ms from either usleep or select on linux, and \n| > usleep is always longer.\n| \n| Ah, so usleep is just being stricter about rounding up the requested\n| delay? That would explain the results all right.\n\nThe only difference I see is that sys_nanosleep gets its actual timeout\nvalue using timespec_to_jiffies(), and do_select leaves the specified\ndelay untouched.\n\n| > Looks like increasing spins allows\n| > the process to get the lock before the usleep/select is run \n| \n| Right. Up to a point, increasing spins improves the odds of acquiring\n| the lock without having to release the processor.\n| \n| What I should've thought of is to try sched_yield() as well, which is\n| the operation we *really* want here, and it is available on this version\n| of Linux. Off to run another batch of tests ...\n\nyes. using just sched_yield() inside the TAS loop appears to give\nbetter performance on both freebsd and linux (single-proc); in\nparticular, it _looks_ like there is a 8-10% performance gain at\n32 clients.\n\nbtw, what are y'all using to generate these nifty graphs?\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sat, 5 Jan 2002 20:48:45 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "> btw, what are y'all using to generate these nifty graphs?\n> \n\ngnuplot.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 5 Jan 2002 20:53:43 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some interesting results from tweaking spinlocks" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> btw, what are y'all using to generate these nifty graphs?\n\n> gnuplot.\n\nTatsuo previously posted a script to extract a gnuplot-ready data file\nfrom a transcript of a set of pgbench runs. 
I've been using that, plus\ngnuplot scripts like the following (slightly tweaked from Tatsuo's\nexample):\n\n\nset xlabel \"concurrent users\"\nset ylabel \"TPS\"\nset yrange [150:330]\nset logscale x\nset key width 5\nset key right \n\nplot \\\n'bench.try2-noassert.data' title 'select spins 100' with linespoint lw 4 pt 1 ps 4, \\\n'bench.try2-na-s1000.data' title 'select spins 1000' with linespoint lw 4 pt 2 ps 4, \\\n'bench.try2-na-s10000-2.data' title 'select spins 10000' with linespoint lw 4 pt 3 ps 4, \\\n'bench.yield-s100-2.data' title 'yield spins 100' with linespoint lw 4 pt 4 ps 4, \\\n'bench.yield-s1000-2.data' title 'yield spins 1000' with linespoint lw 4 pt 5 ps 4\n\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Jan 2002 21:30:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some interesting results from tweaking spinlocks " }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> | What I should've thought of is to try sched_yield() as well, which is\n> | the operation we *really* want here, and it is available on this version\n> | of Linux. Off to run another batch of tests ...\n\n> yes. using just sched_yield() inside the TAS loop appears to give\n> better performance on both freebsd and linux (single-proc); in\n> particular, it _looks_ like there is a 8-10% performance gain at\n> 32 clients.\n\nI'm noticing more variability in the results today than I got yesterday;\nthis is odd, since the only change in the system environment is that we\ncleaned off some more free space on the disk drive array in preparation\nfor running larger benchmarks. An example of the variability can be\nseen by comparing the two \"yield spins 100\" curves below, which should\nbe identical circumstances. Still, it's clear that using sched_yield\nis a win.\n\nAlso note that spins=1000 seems to be a loser compared to spins=100 when\nusing sched_yield, while it is not with either select or usleep. 
This\nmakes sense, since the reason for not wanting to yield the processor\nis the large delay till we can run again. With sched_yield that penalty\nis eliminated.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 05 Jan 2002 22:05:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Some interesting results from tweaking spinlocks " } ]
[ { "msg_contents": "Just started using the pgcrypt 0.4.2 (very cool stuff) and am having\nsome strange errors (not so cool). Can someone make sense of the SQL\nbelow? I'm not 100% sure what's going on or not going on...\n\n\nhost=# SELECT DIGEST('asdf', 'md5') FROM users_shadow;\n digest \n------------------------------------------------\n \\221.\\310\\003\\262\\316I\\344\\245A\\006\\215IZ\\265p\n(1 row)\n\nhost=# SELECT DIGEST(password, 'md5') FROM users_shadow;\nERROR: Function 'digest(varchar, unknown)' does not exist\n Unable to identify a function that satisfies the given argument types\n You may need to add explicit typecasts\nhost=# SELECT DIGEST(CAST(password AS bytea), CAST('md5' AS TEXT)) FROM users_shadow;\nERROR: Cannot cast type 'varchar' to 'bytea'\n\n\n\tAny ideas as to how I can do this? -sc\n\n-- \nSean Chittenden\n", "msg_date": "Sat, 5 Jan 2002 00:46:40 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "pgcryto strangeness..." }, { "msg_contents": "> host=# SELECT DIGEST('asdf', 'md5') FROM users_shadow;\n> digest \n> ------------------------------------------------\n> \\221.\\310\\003\\262\\316I\\344\\245A\\006\\215IZ\\265p\n> (1 row)\n\nYou must encode() the results.\n\n(For the record, I consider that a serious design flaw.\nIt may not be possible to safely dump and restore tables\ncontaining unencoded 8-bit data.)\n\n> host=# SELECT DIGEST(password, 'md5') FROM users_shadow;\n> ERROR: Function 'digest(varchar, unknown)' does not exist\n> Unable to identify a function that satisfies the given argument types\n> You may need to add explicit typecasts\n> host=# SELECT DIGEST(CAST(password AS bytea), CAST('md5' AS TEXT)) FROM users_shadow;\n> ERROR: Cannot cast type 'varchar' to 'bytea'\n \nTry dropping the first cast.\n\n", "msg_date": "Sat, 5 Jan 2002 08:46:38 -0700 (MST)", "msg_from": "Bear Giles <bear@coyotesong.com>", "msg_from_op": false, "msg_subject": "Re: pgcryto strangeness..." 
}, { "msg_contents": "> > host=# SELECT DIGEST('asdf', 'md5') FROM users_shadow;\n> > digest \n> > ------------------------------------------------\n> > \\221.\\310\\003\\262\\316I\\344\\245A\\006\\215IZ\\265p\n> > (1 row)\n> \n> You must encode() the results.\n\nSorry for not being more clear, this isn't the problem: just proof\nthat things are working on this end.\n\n> (For the record, I consider that a serious design flaw.\n> It may not be possible to safely dump and restore tables\n> containing unencoded 8-bit data.)\n\nHow about a digest_hex() method?\n\n> > host=# SELECT DIGEST(password, 'md5') FROM users_shadow;\n> > ERROR: Function 'digest(varchar, unknown)' does not exist\n> > Unable to identify a function that satisfies the given argument types\n> > You may need to add explicit typecasts\n> > host=# SELECT DIGEST(CAST(password AS bytea), CAST('md5' AS TEXT)) FROM users_shadow;\n> > ERROR: Cannot cast type 'varchar' to 'bytea'\n> \n> Try dropping the first cast.\n\nAlready have. I've cast it to text too. I've even tried having it\noperate on char and text column types, it's looking for a bytea data\ntype, but I don't know how to cast to that correctly and that's the\nproblem (with the module?). Sorry I wasn't more explicitly earlier. 
-sc\n\nhost=# SELECT DIGEST(CAST(enabled AS bytea), CAST('md5' AS TEXT)) FROM users_shadow;\nERROR: Cannot cast type 'bpchar' to 'bytea'\nhost=# SELECT DIGEST(CAST(enabled AS text), CAST('md5' AS TEXT)) FROM users_shadow;\nERROR: Function 'digest(text, text)' does not exist\n Unable to identify a function that satisfies the given argument types\n You may need to add explicit typecasts\nhost=# SELECT DIGEST(CAST(password AS text), CAST('md5' AS TEXT)) FROM users_shadow;\nERROR: Function 'digest(text, text)' does not exist\n Unable to identify a function that satisfies the given argument types\n You may need to add explicit typecasts\n\n\n-- \nSean Chittenden\n", "msg_date": "Sat, 5 Jan 2002 11:11:10 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "Re: pgcryto strangeness..." }, { "msg_contents": "Sean Chittenden wrote:\n\n> \n> Just started using the pgcrypt 0.4.2 (very cool stuff) and am having\n> some strange errors (not so cool). Can someone make sense of the SQL\n> below? I'm not 100% sure what's going on or not going on...\n> \n> \n> host=# SELECT DIGEST('asdf', 'md5') FROM users_shadow;\n> digest \n> ------------------------------------------------\n> \\221.\\310\\003\\262\\316I\\344\\245A\\006\\215IZ\\265p\n> (1 row)\n> \n> host=# SELECT DIGEST(password, 'md5') FROM users_shadow;\n> ERROR: Function 'digest(varchar, unknown)' does not exist\n> Unable to identify a function that satisfies the given argument types\n> You may need to add explicit typecasts\n> host=# SELECT DIGEST(CAST(password AS bytea), CAST('md5' AS TEXT)) FROM users_shadow;\n> ERROR: Cannot cast type 'varchar' to 'bytea'\n> \n> \n> \tAny ideas as to how I can do this? 
-sc\n> \n\nYou can't directly cast varchar to bytea, but you can use decode(in 7.2):\n\ntest=# select version();\n version\n-------------------------------------------------------------\n PostgreSQL 7.2b3 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\ntest=# create table users_shadow(password varchar(20));\nCREATE\ntest=# insert into users_shadow values('secret');\nINSERT 1492547 1\ntest=# SELECT DIGEST(decode(password,'escape'), 'md5') FROM users_shadow;\n digest\n------------------------------------------------------\n ^\\276\"\\224\\354\\320\\340\\360\\216\\253v\\220\\322\\246\\356i\n(1 row)\n\n \nHTH,\n\n\n-- Joe\n\n\n\n\n", "msg_date": "Sat, 05 Jan 2002 11:56:43 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: pgcryto strangeness..." }, { "msg_contents": "> You can't directly cast varchar to bytea, but you can use decode(in 7.2):\n> \n> test=# select version();\n> version\n> -------------------------------------------------------------\n> PostgreSQL 7.2b3 on i686-pc-linux-gnu, compiled by GCC 2.96\n> (1 row)\n> \n> test=# create table users_shadow(password varchar(20));\n> CREATE\n> test=# insert into users_shadow values('secret');\n> INSERT 1492547 1\n> test=# SELECT DIGEST(decode(password,'escape'), 'md5') FROM users_shadow;\n> digest\n> ------------------------------------------------------\n> ^\\276\"\\224\\354\\320\\340\\360\\216\\253v\\220\\322\\246\\356i\n> (1 row)\n> \n> \n> HTH,\n\nYeah, it does... but it also tells me I'm SOL for 7.1.3 even though\npgcrypto comes with a DECODE() function (only supports 'hex' and\n'base64'). Any other ideas? <:~) -sc\n\n-- \nSean Chittenden\n", "msg_date": "Sat, 5 Jan 2002 12:09:39 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "Re: pgcryto strangeness..." }, { "msg_contents": "Sean Chittenden wrote:\n\n> Yeah, it does... 
but it also tells me I'm SOL for 7.1.3 even though\n> pgcrypto comes with a DECODE() function (only supports 'hex' and\n> 'base64'). Any other ideas? <:~) -sc\n> \n\n\nNot sure if you are in a position to do this, but why not make your \npassword field bytea instead of varchar? This won't work if you need to \nsupport multibyte passwords, but I think it should be fine otherwise.\n\n\ntest=# create table users_shadow_2(password bytea);\nCREATE\ntest=# insert into users_shadow_2 values('secret');\nINSERT 1492553 1\ntest=# SELECT DIGEST(password, 'md5') FROM users_shadow_2;\n digest\n------------------------------------------------------\n ^\\276\"\\224\\354\\320\\340\\360\\216\\253v\\220\\322\\246\\356i\n \n\nJoe\n\n", "msg_date": "Sat, 05 Jan 2002 13:12:01 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: pgcryto strangeness..." }, { "msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> Yeah, it does... but it also tells me I'm SOL for 7.1.3 even though\n> pgcrypto comes with a DECODE() function (only supports 'hex' and\n> 'base64'). Any other ideas? <:~) -sc\n\nSo, create yourself another function. In pgcrypto.sql.in I see\n\nCREATE FUNCTION digest(bytea, text) RETURNS bytea\n AS 'MODULE_PATHNAME',\n 'pg_digest' LANGUAGE 'C';\n\nYou could add\n\nCREATE FUNCTION digest(text, text) RETURNS bytea\n AS 'MODULE_PATHNAME',\n 'pg_digest' LANGUAGE 'C';\n\nwhich should work fine since the internal representation of text isn't\nreally different from that of bytea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 05 Jan 2002 16:39:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgcryto strangeness... " }, { "msg_contents": "> So, create yourself another function. 
In pgcrypto.sql.in I see\n> \n> CREATE FUNCTION digest(bytea, text) RETURNS bytea\n> AS 'MODULE_PATHNAME',\n> 'pg_digest' LANGUAGE 'C';\n> \n> You could add\n> \n> CREATE FUNCTION digest(text, text) RETURNS bytea\n> AS 'MODULE_PATHNAME',\n> 'pg_digest' LANGUAGE 'C';\n> \n> which should work fine since the internal representation of text isn't\n> really different from that of bytea.\n\nTom, you're a regular postgres god. ;~) That works beautifully! I'm\ncooking along now, thanks 'all!\n\nReal quick, is anyone else is interested, I'm turning pgcrypto into a\nport for FreeBSD. I'm mostly done, so if anyone has any interest in\ntesting this (very strange port to make because you have to copy the\nbackend headers out of the postgres tarball and into the include path.\nI know this is changing with 7.2, but it's not out yet. ::grin::),\nplease let me know. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Sat, 5 Jan 2002 13:52:40 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "Re: pgcryto strangeness..." }, { "msg_contents": "> Real quick, is anyone else is interested, I'm turning pgcrypto into a\n> port for FreeBSD. I'm mostly done, so if anyone has any interest in\n> testing this (very strange port to make because you have to copy the\n> backend headers out of the postgres tarball and into the include path.\n> I know this is changing with 7.2, but it's not out yet. ::grin::),\n> please let me know. -sc\n\nAll I do to install contribs is this:\n\ncd /usr/ports/databases/postgresql7\nmake configure\ncd work/postgresql-7.1.3/contrib/pgcrypto\ngmake all && gmake install\ncd /usr/ports/databases/postgresql7\nmake clean\n\nChris\n\n", "msg_date": "Mon, 7 Jan 2002 10:02:47 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: pgcryto strangeness..." 
}, { "msg_contents": "> > Real quick, is anyone else is interested, I'm turning pgcrypto into a\n> > port for FreeBSD. I'm mostly done, so if anyone has any interest in\n> > testing this (very strange port to make because you have to copy the\n> > backend headers out of the postgres tarball and into the include path.\n> > I know this is changing with 7.2, but it's not out yet. ::grin::),\n> > please let me know. -sc\n> \n> All I do to install contribs is this:\n> \n> cd /usr/ports/databases/postgresql7\n> make configure\n> cd work/postgresql-7.1.3/contrib/pgcrypto\n> gmake all && gmake install\n> cd /usr/ports/databases/postgresql7\n> make clean\n\nAlright, nm. I'll change the patch to make pgcypto apart of default\nFreeBSD postgres installs. Thanks for the tip. -sc\n\n-- \nSean Chittenden\n", "msg_date": "Sun, 6 Jan 2002 18:11:02 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "Re: pgcryto strangeness..." }, { "msg_contents": "On Sat, Jan 05, 2002 at 04:39:31PM -0500, Tom Lane wrote:\n> You could add\n> \n> CREATE FUNCTION digest(text, text) RETURNS bytea\n> AS 'MODULE_PATHNAME',\n> 'pg_digest' LANGUAGE 'C';\n> \n> which should work fine since the internal representation of text isn't\n> really different from that of bytea.\n\nThis is so obvious that I would like to make it 'official'.\n\nSeems like the theology around bytea<>text casting kept me from\nseeing the simple :)\n\nAs this should go under 'polishing' I hope it gets into 7.2.\n\n-- \nmarko\n\n\n\nIndex: contrib/pgcrypto/pgcrypto.sql.in\n===================================================================\nRCS file: /opt/cvs/pgsql/pgsql/contrib/pgcrypto/pgcrypto.sql.in,v\nretrieving revision 1.6\ndiff -u -r1.6 pgcrypto.sql.in\n--- contrib/pgcrypto/pgcrypto.sql.in\t29 Sep 2001 03:11:58 -0000\t1.6\n+++ contrib/pgcrypto/pgcrypto.sql.in\t7 Jan 2002 04:11:00 -0000\n@@ -1,6 +1,8 @@\n \n+-- drop function digest(text, text);\n -- drop function digest(bytea, text);\n 
-- drop function digest_exists(text);\n+-- drop function hmac(text, text, text);\n -- drop function hmac(bytea, bytea, text);\n -- drop function hmac_exists(text);\n -- drop function crypt(text, text);\n@@ -14,6 +16,10 @@\n \n \n \n+CREATE FUNCTION digest(text, text) RETURNS bytea\n+ AS 'MODULE_PATHNAME',\n+ 'pg_digest' LANGUAGE 'C';\n+\n CREATE FUNCTION digest(bytea, text) RETURNS bytea\n AS 'MODULE_PATHNAME',\n 'pg_digest' LANGUAGE 'C';\n@@ -21,6 +27,10 @@\n CREATE FUNCTION digest_exists(text) RETURNS bool\n AS 'MODULE_PATHNAME',\n 'pg_digest_exists' LANGUAGE 'C';\n+\n+CREATE FUNCTION hmac(text, text, text) RETURNS bytea\n+ AS 'MODULE_PATHNAME',\n+ 'pg_hmac' LANGUAGE 'C';\n \n CREATE FUNCTION hmac(bytea, bytea, text) RETURNS bytea\n AS 'MODULE_PATHNAME',\n", "msg_date": "Mon, 7 Jan 2002 07:34:50 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgcryto strangeness..." }, { "msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n> This is so obvious that I would like to make it 'official'.\n> Seems like the theology around bytea<>text casting kept me from\n> seeing the simple :)\n> As this should go under 'polishing' I hope it gets into 7.2.\n\nDone.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Jan 2002 13:56:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgcryto strangeness... " } ]
[ { "msg_contents": "\ntesting something.\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sat, 5 Jan 2002 15:43:34 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "testing 123" } ]
[ { "msg_contents": "I normally don't make preannouncements, but this directly relates\nto Sean's question and several private inquiries.\n\nDue to popular demand, libpkixpq 0.3 will support crypto using the\nOpenSSL library. The roadmap at the current time is:\n\n\nPHASE 1: support for symmetrical encryption.\n\nThis will introduce three new data types:\n \n - HASH - cryptographic hashes, related to Sean's question\n\n - KEY - symmetrical encryption key\n\n - CIPHERTEXT - encrypted data\n\nThe last two types will be formatted per OpenPGP (RFC 2440) specs, \nwith an eye towards future interoperability with PGP and GPG.\nN.B., no external libraries will be required other than OpenSSL and \nthe compression library (zlib?).\n\nBesides the hash functions (computing hashes, displaying the\ncontents in various formats) three new crypto functions will be\ndefined:\n\n CREATE_KEY (TEXT passphrase, TEXT cipher) returns KEY;\n\n ENCRYPT (TEXT data, KEY key) returns CIPHERTEXT;\n\n DECRYPT (CIPHERTEXT ciphertext, KEY key) returns TEXT;\n\nIn this phase, only symmetrical encryption will be supported. However\nI plan to use \"Symmetric-Key Encrypted Session-Key\" packets so it will\nalways be possible to mix symmetrical and asymmetrical keys with all\nencrypted data.\n\nI hope to have the first cut out within the next few days.\n\n\nPHASE 2: support for asymmetrical encryption.\n\nThe ENCRYPT and DECRYPT functions will be overloaded to accept:\n\n ENCRYPT (TEXT data, X509 cert) returns CIPHERTEXT;\n\n DECRYPT (CIPHERTEXT ciphertext, PKCS8 key, TEXT passphrase) returns TEXT;\n\nAdditional types and functions will undoubtably be defined to facilitate\ninteroperation with PGP/GPG, and to provide a mechanism for allowing\ndecryption by multiple parties. 
E.g., something like \n\n ENCRYPT (TEXT data, X509 certs[]) return CIPHERTEXT;\n\nPHASE 3: full integration into the database as described previously.\n", "msg_date": "Sat, 5 Jan 2002 14:49:52 -0700 (MST)", "msg_from": "Bear Giles <bear@coyotesong.com>", "msg_from_op": true, "msg_subject": "preannouncement: libpkixpq 0.3 will have crypto" } ]
[ { "msg_contents": "dear sir\n\n\n\nmy database create -E UNICODE\n\ncan't run\n\ntimestamp(date '2000-12-25')\nerror message>> parse error at or near \"date\"\n\nlpad('hi',4,'??') >> not return '??hi'\nrpad('hi',4,'x') >> return error string\n\n \n\n\n\n\n\n\n\ndear sir\n \n \n \nmy database create -E UNICODEcan't runtimestamp(date \n'2000-12-25')error message>> parse error at or near \n\"date\"lpad('hi',4,'??')  >> not return \n'??hi'rpad('hi',4,'x')  >> return error string", "msg_date": "Sun, 6 Jan 2002 10:32:43 +0800", "msg_from": "\"guard\" <guard29@seed.net.tw>", "msg_from_op": true, "msg_subject": "postgresql 7.2b4 bug" }, { "msg_contents": "\"guard\" <guard29@seed.net.tw> writes:\n> timestamp(date '2000-12-25')\n> error message>> parse error at or near \"date\"\n\nThis unfortunately is not a bug, but a deliberate change: TIMESTAMP is\nnow a reserved word, or at least more reserved than it used to be.\nUse proper cast syntax (either CAST or :: style) to convert to timestamp.\n\n> lpad('hi',4,'??') >> not return '??hi'\n> rpad('hi',4,'x') >> return error string\n\nThese are bugs. Fixed --- thanks for the report!\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 12:05:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgresql 7.2b4 bug " } ]
[ { "msg_contents": "I have been following the spin locking discussion, and even have done some\nhacking on the spin lock mechanism to experiment.\n\nI can't get consistent results, not consistently enough to measure anything but\nthe most dramatic of changes.\n\nMy script looks like this (thanks Tom)\n\n#! /bin/sh\n\nHOST=snoopy\nDB=bench\ntotxacts=10000\n\nfor c in 10 25 50 100\ndo\n t=`expr $totxacts / $c`\n psql -h $HOST -c 'vacuum full' $DB\n psql -h $HOST -c 'checkpoint' $DB\n echo \"===== sync ======\" 1>&2\n sync;sync;sync;sleep 10\n echo $c concurrent users... 1>&2\n ./pgbench -n -t $t -h $HOST -c $c $DB\ndone\n\nMy results can vary as much as 10%, with a reliable +-2.5% from run to run. It\nis hard to make any good conclusions.\n\nMy system is RedHat 7.2 with the 2.4.17 kernel, compiled for SMP.\nMy database is on two EIDE drives connected to a promise ATA 66 controller\ncard. (each drive to its own channel, and one drive per channel) DMA is\nenabled.\nthe postgres directory is on one drive, and pg_xlog is symlinked to a directory\non the other.\n\nI have 1/2 gig RAM, 2CPU SMP\n\npg bench:\nscale: 50\nclients: 50\ntransactions: 200\ntps range averages ~71.7\nwith a high of 72.6\nand a low of 70.9\n", "msg_date": "Sat, 05 Jan 2002 23:32:28 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "pgbench, consistency?" } ]
[ { "msg_contents": "Could we remove lines 552-560 of pgbench.c? The behavior that guarded\nagainst is long gone, and forcing a checkpoint every few thousand tuples\nseems to be putting a huge crimp in the speed of pgbench -i ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Jan 2002 00:22:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "pgbench -i spends all its time doing CHECKPOINT" }, { "msg_contents": "Tom Lane wrote:\n> Could we remove lines 552-560 of pgbench.c? The behavior that guarded\n> against is long gone, and forcing a checkpoint every few thousand tuples\n> seems to be putting a huge crimp in the speed of pgbench -i ...\n\nI don't see any need for it. It isn't even needed for 7.1.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 6 Jan 2002 01:38:52 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench -i spends all its time doing CHECKPOINT" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Could we remove lines 552-560 of pgbench.c? The behavior that guarded\n>> against is long gone, and forcing a checkpoint every few thousand tuples\n>> seems to be putting a huge crimp in the speed of pgbench -i ...\n\n> I don't see any need for it.\n\nAu contraire. I'm nearly done with pgbench -i -s 500 after about an\nhour. 
The unmodified code ran for several hours and was less than\nhalf done.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Jan 2002 01:41:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgbench -i spends all its time doing CHECKPOINT " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Could we remove lines 552-560 of pgbench.c? The behavior that guarded\n> >> against is long gone, and forcing a checkpoint every few thousand tuples\n> >> seems to be putting a huge crimp in the speed of pgbench -i ...\n> \n> > I don't see any need for it.\n> \n> Au contraire. I'm nearly done with pgbench -i -s 500 after about an\n> hour. The unmodified code ran for several hours and was less than\n> half done.\n\nI meant I don't see any need for that checkpoint code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 6 Jan 2002 01:44:50 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench -i spends all its time doing CHECKPOINT" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I meant I don't see any need for that checkpoint code.\n\nOh, my mistake; I thought you meant you didn't want to make a change\nnow.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Jan 2002 01:50:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgbench -i spends all its time doing CHECKPOINT " }, { "msg_contents": "> Could we remove lines 552-560 of pgbench.c? The behavior that guarded\n> against is long gone, and forcing a checkpoint every few thousand tuples\n> seems to be putting a huge crimp in the speed of pgbench -i ...\n\nYup. 
Maybe we could ifdef'ed out until we implement true UNDO...\n--\nTatsuo Ishii\n", "msg_date": "Sun, 06 Jan 2002 17:10:36 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: pgbench -i spends all its time doing CHECKPOINT" }, { "msg_contents": "Tatsuo Ishii wrote:\n> > Could we remove lines 552-560 of pgbench.c? The behavior that guarded\n> > against is long gone, and forcing a checkpoint every few thousand tuples\n> > seems to be putting a huge crimp in the speed of pgbench -i ...\n> \n> Yup. Maybe we could ifdef'ed out until we implement true UNDO...\n\nI think we should just remove it. The idea that we are going to do UNDO\nwhich allows unlimited log file growth for long transactions seems like\na loser to me.\n\nActually, that brings up a question I had. In 7.1.0, we didn't recycle\nWAL segements that were used by open transactions during CHECKPOINT,\nwhile in 7.1.3 and later, we do recycle them after CHECKPOINT. My\nquestion is if we do a big transaction that needs 10 log segments, do we\nforce an early CHECKPOINT to clear out the WAL segments or do we just\nwait for the proper interval?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 6 Jan 2002 13:01:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench -i spends all its time doing CHECKPOINT" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> question is if we do a big transaction that needs 10 log segments, do we\n> force an early CHECKPOINT to clear out the WAL segments or do we just\n> wait for the proper interval?\n\nA checkpoint is forced after every CHECKPOINT_SEGMENTS log segments,\nregardless of longevity of transactions. 
See\nhttp://developer.postgresql.org/docs/postgres/wal-configuration.html\n\nSince segments before the checkpoint-before-last are deleted or recycled\nafter each checkpoint, the maximum number of back segments would\nnormally be 2 * CHECKPOINT_SEGMENTS. We also pre-create WAL_FILES\nfuture log segments. Counting the current segment gives a total of\nWAL_FILES + 2 * CHECKPOINT_SEGMENTS + 1 log segments.\n\nAFAICS, the only way to force the current code into creating more than\nWAL_FILES + 2 * CHECKPOINT_SEGMENTS + 1 log segments is to be generating\nWAL entries at such a high rate that more than WAL_FILES log segments\nare filled before a triggered checkpoint can be completed.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Jan 2002 13:37:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgbench -i spends all its time doing CHECKPOINT " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > question is if we do a big transaction that needs 10 log segments, do we\n> > force an early CHECKPOINT to clear out the WAL segments or do we just\n> > wait for the proper interval?\n> \n> A checkpoint is forced after every CHECKPOINT_SEGMENTS log segments,\n> regardless of longevity of transactions. See\n> http://developer.postgresql.org/docs/postgres/wal-configuration.html\n> \n> Since segments before the checkpoint-before-last are deleted or recycled\n> after each checkpoint, the maximum number of back segments would\n> normally be 2 * CHECKPOINT_SEGMENTS. We also pre-create WAL_FILES\n> future log segments. 
Counting the current segment gives a total of\n> WAL_FILES + 2 * CHECKPOINT_SEGMENTS + 1 log segments.\n> \n> AFAICS, the only way to force the current code into creating more than\n> WAL_FILES + 2 * CHECKPOINT_SEGMENTS + 1 log segments is to be generating\n> WAL entries at such a high rate that more than WAL_FILES log segments\n> are filled before a triggered checkpoint can be completed.\n> \n\nVery interesting. Thanks. Is there a reason someone would manually run\nthe CHECKPOINT command?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 6 Jan 2002 14:47:04 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgbench -i spends all its time doing CHECKPOINT" } ]
[ { "msg_contents": "In response to multiple requests, libpkixpq 0.3 includes limited\nOpenPGP (RFC2440) support. It is available for download at \nhttp://www.dimensional.com/~bgiles/. It should not be downloaded\ncontrary to US export or local law.\n\n(It's not like this RFC is difficult to implement - I've done this\nwork in less than a day while doing other things.)\n\nThe key changes are:\n\n1) Base64 encoding:\n\nTwo functions are defined which are wrappers to the OpenSSL\nBase64 routines:\n\n function base64_encode(text) returns text;\n\n function base64_decode(text) returns text;\n\n2) Cryptographic hashes:\n\nA new type is defined, HASH, which can hold a cryptographic\nhash value. The standard hash format is a colon delimited list\nof hexadecimal values, e.g, 01:23:45:67:89:ab:cd:ef.\n\nTo generate a hash, you should use the function\n\n function digest(text data, text digest) returns hash;\n\nYou can get a base-64 encoded hash with\n\n function base64_encode(hash) returns text;\n\nThis is different than the function described above since\nthe hash is against the underlying binary data, not the hexadecimal\nrepresentation.\n\nFinally, you can explicitly cast a hash to a text object with\n\n function text(hash) returns text;\n\n3) OpenPGP encryption:\n\nPRELIMINARY support for OpenPGP(RFC2440) encryption is\nprovided with one new data type, CIPHERTEXT, and two \nfunctions:\n\n function encrypt(text data, text passphrase) returns ciphertext;\n\n function decrypt(ciphertext data, text passphrase) returns text;\n\nNo encrypted keys are stored in the ciphertext object - at the\ncurrent time the only key supported is generated from the passphrase\nby computing the MD5 hash of the passphrase, then using it as the\nkey to the blowfish cipher. (The RFC specifies the IDEA cipher,\nbut it is problematic due to European patents.)\n\nCompression is not yet currently supported. 
(Should I use\nzlib, or is does the backend provide its own compression \nlibrary?)\n\nThe ciphertext is stored as binary data, but displayed in base64\nencoding instead of the full OpenPGP armor.\n\nIMPORTANT REMINDER: This is not production quality code, do\nNOT use it to store credit card information in your database!\n\nIMPORTANT REMINDER 2: This code is not interoperable with\nPGP or GPG, although that would be an obvious long-term goal.\n\n\nAnnouncement cc'd to crypt@bxa.doc.gov.\n", "msg_date": "Sun, 6 Jan 2002 01:24:57 -0700 (MST)", "msg_from": "Bear Giles <bear@coyotesong.com>", "msg_from_op": true, "msg_subject": "Announcement: libpkixpq 0.3 - with limited OpenPGP support" } ]
[ { "msg_contents": "I noticed a post from Tom saying that anoncvs was not working. This still\nseems to be the case.\n\n[swm@d swm]$ cvs -d\n:pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n(Logging in to anoncvs@anoncvs.postgresql.org)\nCVS password:\nFatal error, aborting.\nanoncvs: no such user\ncvs login: authorization failed: server anoncvs.postgresql.org rejected\naccess to /projects/cvsroot for user anoncvs\n[swm@d swm]$\n\n?\n\nGavin\n\n", "msg_date": "Sun, 6 Jan 2002 20:19:48 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": true, "msg_subject": "anoncvs" }, { "msg_contents": "me too, but slightly different message\n\nFatal error, aborting.\nanoncvs: no such user\ncvs update: authorization failed: server rs.postgresql.org rejected access to /projects/cvsroot for user anoncvs\ncvs update: used empty password; try \"cvs login\" with a real password\n\n\nOn Sun, 6 Jan 2002, Gavin Sherry wrote:\n\n> I noticed a post from Tom saying that anoncvs was not working. 
This still\n> seems to be the case.\n>\n> [swm@d swm]$ cvs -d\n> :pserver:anoncvs@anoncvs.postgresql.org:/projects/cvsroot login\n> (Logging in to anoncvs@anoncvs.postgresql.org)\n> CVS password:\n> Fatal error, aborting.\n> anoncvs: no such user\n> cvs login: authorization failed: server anoncvs.postgresql.org rejected\n> access to /projects/cvsroot for user anoncvs\n> [swm@d swm]$\n>\n> ?\n>\n> Gavin\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 6 Jan 2002 13:45:51 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: anoncvs" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> I noticed a post from Tom saying that anoncvs was not working. This still\n> seems to be the case.\n\nNot \"still\". It was working as recently as Friday morning. 
Definitely\n> seems to be broken now though.\n>\n> [tgl@toolbox pgsql]$ cvs status register.txt\n> cvs status: authorization failed: server anoncvs.postgresql.org rejected access\n> to /projects/cvsroot for user anoncvs\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Sun, 6 Jan 2002 15:40:51 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: anoncvs " } ]
[ { "msg_contents": "I was just asked if there are problems if a program forks() while connected\nto the backend. Of course both processes will try to access the DB after the\nfork. Is this possible at all? If so does it create timing problems? Or\nothers? I think I never tried.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sun, 6 Jan 2002 13:09:49 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "fork() while connected" }, { "msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n\n> I was just asked if there are problems if a program forks() while connected\n> to the backend. Of course both processes will try to access the DB after the\n> fork. Is this possible at all? If so does it create timing problems? Or\n> others? I think I never tried.\n\nThis will definitely not work. The two processes will stomp all over\neach other. They'll be sharing one socket and one backend and both\nwriting/reading from the socket at random times.\n\nYou can open a second connection in the child (don't close the first\none there, or it'll mess up the parent). It's probably best to try to\navoid fork()ing with an open connection altogether.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "06 Jan 2002 09:24:21 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: fork() while connected" }, { "msg_contents": "On Sun, Jan 06, 2002 at 09:24:21AM -0500, Doug McNaught wrote:\n> This will definitely not work. The two processes will stomp all over\n> each other. They'll be sharing one socket and one backend and both\n> writing/reading from the socket at random times.\n\nThat's exactly what I expected. Thanks.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! 
Use PostgreSQL!\n", "msg_date": "Mon, 7 Jan 2002 10:06:19 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: fork() while connected" } ]
[ { "msg_contents": "Let's think about this:\n\nWhat is the advantage of spinning?\n\nOn a uniprocessor box, there is \"never\" any advantage because you must release\nthe CPU in order to allow the process which owns the lock to run long enough to\nrelease it.\n\nOn an SMP box, this is a bit more complicated. If you have two CPUs, then\nmaybe, one process can spin, but obviously, more than one spinner is wasting\nCPU, one of the spinners must release its time slice in order for another\nprocess release the resource.\n\nIs there a global area where a single count of all the processes spinning can\nbe kept? That way, when a process fails to acquire a lock, and there are\nalready (num_cpus -1) processes spinning, they can call select() right away.\n\nDoes this sound like an interesting approach?\n", "msg_date": "Sun, 06 Jan 2002 13:38:05 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Spinning verses sleeping in s_lock" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> What is the advantage of spinning?\n\n> On a uniprocessor box, there is \"never\" any advantage because you must\n> release the CPU in order to allow the process which owns the lock to\n> run long enough to release it.\n\nActually, that analysis is faulty.\n\nSuppose that we didn't use a select(), yield(), or anything, but just\nspun until we got the lock. On a uniprocessor this would imply spinning\nuntil the end of our current timeslice, whereupon the CPU would be given\nto someone else. When control finally comes back to our process, more\nthan likely the holder of the spinlock has been allowed to run and has\nfinished his use of the spinlock, and we will be able to get it. 
Thus,\nin this scenario we waste the rest of our timeslice, but we are able\nto proceed as soon as the scheduler next wishes to give us control.\n\nIf \"the rest of our timeslice\" is not long, this might actually be\nbetter than an immediate delay via select() or usleep(), because those\nwon't allow us to run for X number of milliseconds. It's entirely\npossible that the CPU cycles we \"save\" by not spinning will end up being\nspent in the kernel's idle loop, unless the overall load is so high that\nthe CPU never runs out of things to do. I doubt this approach would be\na winner on average, but to claim that it never wins is wrong.\n\nNow if we have available a sched_yield() type of primitive, that lets\nus give up the processor without committing to any minimum delay before\nwe can be rescheduled, then I think it's usually true that on a\nuniprocessor we might as well do sched_yield after just a single failure\nof TAS(). (However, on some architectures such as Alpha even that much\nis wrong; TAS() can \"fail\" for reasons that do not mean the spinlock is\nlocked. So the most portable thing is to try TAS at least a few times,\neven on a uniprocessor.)\n\n> On an SMP box, this is a bit more complicated. If you have two CPUs, then\n> maybe, one process can spin, but obviously, more than one spinner is wasting\n> CPU, one of the spinners must release its time slice in order for another\n> process release the resource.\n\nI don't believe that either, not completely. A process that is spinning\nbut hasn't got a CPU is not wasting cycles; it's essentially done an\nimplicit sched_yield, and as we just saw that is the most efficient way\nof doing things. What you are really trying to say is that it's\ninefficient for *all* the currently executing processes to be spinning\non a lock that none of them hold. 
This is true but there's no direct\nway for us to determine (in userland code) that that condition holds.\nIf you try to maintain a userland counter of the number of actively\nspinning processes, you will fail because a process might lose the CPU\nwhile it has the counter incremented. (There are also a whole bunch\nof portability problems associated with trying to build an atomically\nincrementable/decrementable counter, bearing in mind that the counter\ncan't itself be protected by a spinlock.) Nor can you directly\ndetermine whether the current holder of the spinlock is actively\nrunning or not. The most effective indirect test of that condition\nis really to spin for awhile and see if the lock gets released.\n\nA final comment --- given that in 7.2 we only use spinlocks to protect\nvery short segments of code, I believe it's fairly improbable for more\nthan two processes to be contending for a spinlock anyway. So it's\nprobably sufficient to distinguish whether we have one or more than\none CPU, and statically select one of two spinning strategies on that\nbasis. Trying to dynamically adapt for more CPUs/contending processes\nwill reap only minimal returns.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Jan 2002 14:40:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Spinning verses sleeping in s_lock " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > What is the advantage of spinning?\n> \n> > On a uniprocessor box, there is \"never\" any advantage because you must\n> > release the CPU in order to allow the process which owns the lock to\n> > run long enough to release it.\n> \n> Actually, that analysis is faulty.\n> \n> Suppose that we didn't use a select(), yield(), or anything, but just\n> spun until we got the lock. On a uniprocessor this would imply spinning\n> until the end of our current timeslice, whereupon the CPU would be given\n> to someone else. 
When control finally comes back to our process, more\n> than likely the holder of the spinlock has been allowed to run and has\n> finished his use of the spinlock, and we will be able to get it. Thus,\n> in this scenario we waste the rest of our timeslice, but we are able\n> to proceed as soon as the scheduler next wishes to give us control.\n\nOn average, we would have to assume wasting half of a time slice. Yes, if we\nare at the end of our time slice, spinning may be a winner, but on average, it\nwill be half our time slice.\n\nThis depends on the OS and scheduler and the duration of the time slice.\n\n> \n> If \"the rest of our timeslice\" is not long, this might actually be\n> better than an immediate delay via select() or usleep(), because those\n> won't allow us to run for X number of milliseconds. It's entirely\n> possible that the CPU cycles we \"save\" by not spinning will end up being\n> spent in the kernel's idle loop, unless the overall load is so high that\n> the CPU never runs out of things to do. I doubt this approach would be\n> a winner on average, but to claim that it never wins is wrong.\n\nOh, I may have over stated my position, but on average, it would absolutely be\na loser.\n\n> \n> Now if we have available a sched_yield() type of primitive, that lets\n> us give up the processor without committing to any minimum delay before\n> we can be rescheduled, then I think it's usually true that on a\n> uniprocessor we might as well do sched_yield after just a single failure\n> of TAS(). (However, on some architectures such as Alpha even that much\n> is wrong; TAS() can \"fail\" for reasons that do not mean the spinlock is\n> locked. So the most portable thing is to try TAS at least a few times,\n> even on a uniprocessor.)\n\nTwo things: \n(1) Why doesn't Postgresql use sched_yield() instead of select() in s_lock()?\nThere must be a way wrap that stuff in a macro in a port/os.h file.\n\n(2) TAS fails when it should work on the alpha? Is it known why? 
\n\n> \n> > On an SMP box, this is a bit more complicated. If you have two CPUs, then\n> > maybe, one process can spin, but obviously, more than one spinner is wasting\n> > CPU, one of the spinners must release its time slice in order for another\n> > process release the resource.\n> \n> I don't believe that either, not completely. A process that is spinning\n> but hasn't got a CPU is not wasting cycles; it's essentially done an\n> implicit sched_yield, and as we just saw that is the most efficient way\n> of doing things. What you are really trying to say is that it's\n> inefficient for *all* the currently executing processes to be spinning\n> on a lock that none of them hold. This is true but there's no direct\n> way for us to determine (in userland code) that that condition holds.\n\nSMP has a lot of issues that are not completely obvious. I really don't like\nspinlocks in userland. In an active server, wasting CPU cycles is just wrong.\n\n> If you try to maintain a userland counter of the number of actively\n> spinning processes, you will fail because a process might lose the CPU\n> while it has the counter incremented. (There are also a whole bunch\n> of portability problems associated with trying to build an atomically\n> incrementable/decrementable counter, bearing in mind that the counter\n> can't itself be protected by a spinlock.) Nor can you directly\n> determine whether the current holder of the spinlock is actively\n> running or not. The most effective indirect test of that condition\n> is really to spin for awhile and see if the lock gets released.\n\nAny userland strategy to mitigate wasted CPU cycles has problems. The very\nnature of the problem is OS related. 
Should PostgreSQL have a more OS specific\nimplementation in a port section of code?\n\n> \n> A final comment --- given that in 7.2 we only use spinlocks to protect\n> very short segments of code, I believe it's fairly improbable for more\n> than two processes to be contending for a spinlock anyway. So it's\n> probably sufficient to distinguish whether we have one or more than\n> one CPU, and statically select one of two spinning strategies on that\n> basis. Trying to dynamically adapt for more CPUs/contending processes\n> will reap only minimal returns.\n\nThat is really one of those funny things about SMP stuff, you are probably\nright, but sometimes little things really surprise.\n", "msg_date": "Sun, 06 Jan 2002 15:25:20 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: Spinning verses sleeping in s_lock" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> (1) Why doesn't Postgresql use sched_yield() instead of select() in s_lock()?\n\nPortability concerns. It is something that I think we ought to look at\nin a future release. However, based on my latest comparisons I'm no\nlonger willing to think of it as a \"must fix\" for 7.2.\n\n> (2) TAS fails when it should work on the alpha? Is it known why? \n\nBecause the Alpha guys designed it to fail anytime it couldn't succeed\nimmediately, or something like that. Consult the archives around end of\nDec 2000. One thing I recall is that the first TAS attempt after a\nprocess gains the CPU is almost guaranteed to fail, because the\nnecessary page lookup table entries haven't been faulted in. The first\nversion of WAL support in beta 7.1 assumed it could TAS once and yield\nthe CPU on failure, and guess what: it spun forever on Alphas.\n\n> SMP has a lot of issues that are not completely obvious. I really don't like\n> spinlocks in userland. 
In an active server, wasting CPU cycles is just wrong.\n\nIf it takes more cycles to yield the CPU (and dispatch another process,\nand eventually redispatch your own process) than it does to spin until\nthe other guy lets go of the lock, then \"wasting\" CPU cycles by spinning\nis not wrong. In our present usage of spinlocks this condition is\nalmost guaranteed to be true, for a multiprocessor system. There are\nno OSes that can do two process dispatches in ~20 instructions.\n\nThe real issue here is to devise a solution that costs less than 20\ninstructions on average in the multiprocessor case, while not wasting\ncycles in the uniprocessor case. And is portable. That's a tough\nchallenge. I think we will want to look at developing code to determine\nwhether there's more than one CPU, for sure; one algorithm to do both\ncases optimally seems impossible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Jan 2002 22:11:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Spinning verses sleeping in s_lock " }, { "msg_contents": "> A final comment --- given that in 7.2 we only use spinlocks to protect\n> very short segments of code, I believe it's fairly improbable for more\n> than two processes to be contending for a spinlock anyway. So it's\n> probably sufficient to distinguish whether we have one or more than\n> one CPU, and statically select one of two spinning strategies on that\n> basis. Trying to dynamically adapt for more CPUs/contending processes\n> will reap only minimal returns.\n\nAdded to TODO:\n\n\t* Add code to detect an SMP machine and handle spinlocks\n\t accordingly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 22 Jan 2002 15:39:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Spinning verses sleeping in s_lock" } ]
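The spin-then-yield strategy debated in this thread can be sketched in C. This is an illustrative model only, not PostgreSQL's actual s_lock.c: `tas()` here uses GCC's `__sync_lock_test_and_set` builtin rather than the port-specific TAS() macro, and `spin_acquire`, `spin_release`, and the SPINS_PER_DELAY-style `max_spins` parameter are names invented for this sketch.

```c
#include <sched.h>

typedef int slock_t;

/* Test-and-set: returns nonzero if the lock was already held.
 * Stands in for PostgreSQL's port-specific TAS() macro; on Alpha a
 * "failure" here would not necessarily mean the lock is held. */
static int tas(slock_t *lock)
{
    return __sync_lock_test_and_set(lock, 1);
}

/* Acquire the lock, spinning up to max_spins times between yields of
 * the CPU; returns the number of failed TAS attempts before success. */
int spin_acquire(slock_t *lock, int max_spins)
{
    int failures = 0;

    while (tas(lock))
    {
        if (++failures % max_spins == 0)
            sched_yield();      /* or a select()/usleep() delay on
                                 * platforms lacking a cheap yield */
    }
    return failures;
}

void spin_release(slock_t *lock)
{
    __sync_lock_release(lock);  /* stores 0 with release semantics */
}
```

On a uniprocessor the thread's conclusion suggests a very small `max_spins` (modulo the Alpha caveat about trying TAS a few times); on SMP a larger value amortizes the cost of a process dispatch against the expected short hold time of the lock.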
[ { "msg_contents": "In the JDBC binding, why is PGobject a class instead of an interface?\n\nThis is a moot issue when creating a type from scratch, but Java\ncontains a large number of standard classes (not interfaces) for\nPKIX objects so I'm in a bind when trying to create my own JDBC\nextensions. \n\nSpecifically, some of the key mappings (no pun intended) are:\n\n hugeint <-> java.language.BigInteger\n principal <-> java.security.Principal\n x509 <-> java.security.cert.X509Certificate\n x509_crl <-> java.security.cert.X509CRL\n\nand some additional metamappings between pkcs8 and java.security.KeyStore.\n\nI can implement the mapping by casting between the objects and text,\nbut if a type extension mechanism is available it would be nice to be\nable to hide those details from the user.\n\n\n", "msg_date": "Sun, 6 Jan 2002 17:00:35 -0700 (MST)", "msg_from": "Bear Giles <bear@coyotesong.com>", "msg_from_op": true, "msg_subject": "JDBC: why is PGobject class instead of interface?" }, { "msg_contents": "On Sun, Jan 06, 2002 at 05:00:35PM -0700, Bear Giles wrote:\n> I can implement the mapping by casting between the objects and text,\n> but if a type extension mechanism is available it would be nice to be\n> able to hide those details from the user.\n\nThe type extension mechanism inherent in JDBC is provided by an\nimplementation of java.sql.Connection.setTypeMap(Map map) and related\nmethods. The PostgreSQL JDBC driver has not yet got this feature. I\nthink it would be fine if somebody would add this to PostgreSQL ;-)\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Tue, 8 Jan 2002 16:11:24 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": false, "msg_subject": "Re: JDBC: why is PGobject class instead of interface?" 
}, { "msg_contents": "Bear,\n\nThis question is probably better addressed to the pgsql-jdbc mail list \nthan the hackers mail list.\n\nI don't exactly understand what you are trying to do from your \ndescription, but let me try to explain what I think you are trying to do.\n\nYou want to extend the database and jdbc to support additional datatypes \n(java's BigInteger for example). In jdbc you then want to return from a \ngetObject() call an object that is compatible with the underlying java \nobject. Currently the way you add support in the jdbc driver to new \ndatatypes is to subclass PGobject (that is how money is supported for \nexample). The problem with this approach is that because java doesn't \nhave multiple inheritance, the object can't both extend PGobject and the \nreal underlying java object (BigInteger for example). If PGobject were \nan interface then you could create a PGBigInterger object that \nimplemented PGobject and extended BigInteger, such that the calling \nprogram could use the object as a BigInteger object.\n\nIs this a correct understanding of your problem with PGobject?\n\nI don't know of any reason PGobject needs to be a concrete class. I \nbeleive it could be an interface. But this isn't an area of the code I \nhave spent a lot of time looking at, so I may be wrong.\n\nthanks,\n--Barry\n\n\n\nBear Giles wrote:\n\n> In the JDBC binding, why is PGobject a class instead of an interface?\n> \n> This is a moot issue when creating a type from scratch, but Java\n> contains a large number of standard classes (not interfaces) for\n> PKIX objects so I'm in a bind when trying to create my own JDBC\n> extensions. 
\n> \n> Specifically, some of the key mappings (no pun intended) are:\n> \n> hugeint <-> java.language.BigInteger\n> principal <-> java.security.Principal\n> x509 <-> java.security.cert.X509Certificate\n> x509_crl <-> java.security.cert.X509CRL\n> \n> and some additional metamappings between pkcs8 and java.security.KeyStore.\n> \n> I can implement the mapping by casting between the objects and text,\n> but if a type extension mechanism is available it would be nice to be\n> able to hide those details from the user.\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n", "msg_date": "Tue, 08 Jan 2002 10:51:12 -0800", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] JDBC: why is PGobject class instead of interface?" } ]
[ { "msg_contents": "I think we have talked before about how pgbench is overly subject to\nrow update contention at small scale factors. Since every transaction\nwants to update one (randomly selected) row of the \"branches\" table,\nand since the number of branches rows is equal to the scale factor,\nthere is certain to be update contention when the number of clients\napproaches or exceeds the scale factor. Even worse, since each update\ncreates another dead row, small scale factors mean that there will be\nmany dead rows for each branch ID, slowing both updates and index\nuniqueness checks.\n\nI have now carried out a set of test runs that I think illustrate\nthis point. I used scale factor 500 (creating an 8Gb database)\nto compare to the scale-factor-50 results I got yesterday. Other\ntest conditions were the same as described in my recent messages\n(this is a Linux 4-way SMP machine). In the attached graph, the\nred line is the best performance I was able to get in the scale-\nfactor-50 case. The green line is the scale-factor-500 results\nfor exactly the same conditions. Although the speed is worse\nfor small numbers of clients (probably because of the larger\namount of work done to deal with a ten-times-larger database),\nthe scale-500 results are better for five or more clients.\n\nWhat's really interesting is that in the scale-500 regime, releasing the\nprocessor with sched_yield() is *not* visibly better than releasing it\nwith select(). Indeed, select() with SPINS_PER_DELAY=1000 seems the\nbest overall performance choice for this example. However the absolute\ndifference between the different spinlock algorithms is quite a bit less\nthan before. 
I believe this is because there are fewer spinlock\nacquisitions and less spinlock contention, primarily due to fewer\nheap_fetches for dead tuples (each branches row should have only about\n1/10th as many dead tuples in the larger database, due to fewer updates\nper branch with the total number of transactions remaining the same).\n\nThe last line on the chart (marked \"big\") was run with -N 500 -B 3000\ninstead of the -N 100 -B 3800 parameters I've used for the other lines.\n(I had to reduce -B to stay within shmmax 32Mb. Doesn't seem to have\nhurt any, though.) I think comparing this to the scale-50 line\ndemonstrates fairly conclusively that the tailoff in performance is\nassociated with number-of-clients approaching scale factor, and not to\nany inherent problem with lots of clients. It appears that a scale\nfactor less than five times the peak number of clients introduces enough\ndead-tuple and row-contention overhead to affect the results.\n\nBased on these results I think that the spinlock and LWLock performance\nissues we have been discussing are not really as significant for\nreal-world use as they appear when running pgbench with a small scale\nfactor. My inclination right now is to commit the second variant of\nmy LWLock patch, leave spinlock spinning where it is, and call it a\nday for 7.2. We can always revisit this stuff again in future\ndevelopment cycles.\n\n\t\t\tregards, tom lane", "msg_date": "Sun, 06 Jan 2002 20:04:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Effects of pgbench \"scale factor\"" }, { "msg_contents": "> Based on these results I think that the spinlock and LWLock performance\n> issues we have been discussing are not really as significant for\n> real-world use as they appear when running pgbench with a small scale\n> factor. My inclination right now is to commit the second variant of\n> my LWLock patch, leave spinlock spinning where it is, and call it a\n> day for 7.2. 
We can always revisit this stuff again in future\n> development cycles.\n\nI agree. 7.3 can bring more improvements like SMP detection, dead tuple\nindex markers, and lock granularity improvments. They are all on the\nTODO list or in the 7.3 open items list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 6 Jan 2002 20:59:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Effects of pgbench \"scale factor\"" } ]
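The claim that contention appears "when the number of clients approaches or exceeds the scale factor" follows from a birthday-style collision calculation, since each pgbench transaction picks one of `scale` branches rows uniformly at random. The model below is my own illustration, not code from pgbench, and it only estimates the chance of two concurrent transactions colliding on a row; it ignores dead-tuple accumulation:

```c
/* Probability that at least two of `clients` concurrently-running
 * transactions have picked the same branches row, when the branches
 * table holds `scale` rows (pgbench's scale factor). */
double contention_probability(int clients, int scale)
{
    double p_all_distinct = 1.0;
    int i;

    /* Multiply the chances that each successive client picks a row
     * no earlier client picked. */
    for (i = 0; i < clients; i++)
        p_all_distinct *= (double) (scale - i) / scale;

    if (p_all_distinct < 0.0)   /* clients > scale: collision certain */
        p_all_distinct = 0.0;

    return 1.0 - p_all_distinct;
}
```

The numbers show why the effect falls off with a larger scale factor: the collision probability for 10 clients is several times smaller at scale 500 than at scale 50, consistent with the measurements in the thread.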
[ { "msg_contents": "I added some code to the backend to count the number of LWLockAcquire\ncalls and the number of resultant process blocks (IpcSemaphoreLocks)\nacross all PG processes. Here are the results for a pgbench run with\nscale factor 500, number of clients 10, proposed LWLock patch installed\non that 4-way Linux box.\n\nLWLock\t\tacquires\tblocks\t\tfraction\n\nBufMgrLock\t1238892\t\t60273\t\t0.0486507298457008\nLockMgrLock\t690150\t\t83250\t\t0.120625950880243\nOidGenLock\t10004\t\t3\t\t0.000299880047980808\nXidGenLock\t60019\t\t29\t\t0.000483180326230027\nShmemIndexLock\t180\t\t0\t\t0\nSInvalLock\t817017\t\t1940\t\t0.00237449159564611\nFreeSpaceLock\t290\t\t0\t\t0\nWALInsertLock\t80039\t\t6139\t\t0.0767001086970102\nWALWriteLock\t10194\t\t1180\t\t0.115754365312929\nControlFileLock\t141\t\t0\t\t0\nCLogControlLock\t25366\t\t56\t\t0.0022076795710794\nbuf cxt locks\t930006\t\t172\t\t0.000184945043365312\nbuf io locks\t31859\t\t142\t\t0.00445713926990803\nclog buf locks\t1880\t\t2\t\t0.00106382978723404\n\nInteresting data, eh? In particular, it seems my previous opinion\nthat BufMgrLock was the main issue is all wet: the LockMgrLock accounts\nfor more blockages despite being locked fewer times. AFAICS this must\nmean that the average time of holding LockMgrLock is larger than the\naverage time of holding BufMgrLock, and that we ought to look at how\nto reduce that. The WAL locks also seem to have disproportionately\nlarge blocking percentages.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 06 Jan 2002 23:20:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Further info on LWLock behavior" } ]
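The "fraction" column in the instrumentation table above is simply blocks divided by acquires, and recomputing it is a quick sanity check on the numbers (the function name here is mine, not from the patch):

```c
/* Fraction of LWLockAcquire calls that resulted in a process block
 * (an IpcSemaphoreLock), as in the table above. */
double block_fraction(double acquires, double blocks)
{
    return acquires > 0.0 ? blocks / acquires : 0.0;
}
```

Recomputing confirms the point drawn from the table: LockMgrLock blocks on about 12% of acquisitions versus about 4.9% for BufMgrLock, despite roughly half as many acquisitions, implying a longer average hold time.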
[ { "msg_contents": "I got followings with current on AIX 5L using xlc (native AIX\ncompiler):\n\n:\n:\n:\n\"data.c\", line 357.81: 1506-068 (S) Operation between types \"void*\" and \"long\" is not allowed.\n\"data.c\", line 362.90: 1506-068 (S) Operation between types \"void*\" and \"long\" is not allowed.\n\nHere is the portion the code:\n\n *((long long int *) (ind + ind_offset*act_tuple)) = variable->len;\n\nActually the compiler complains about \"ind + ind_offset*act_tuple\", \nwhere :\n\n void *ind;\n long ind_offset;\n int act_tuple;\n\nSo the code tries to add a long value to a void *pointer, which is not\ncorrect since the storage unit size is unknown for void * I think. If\nthe code try to do a pointer calculation, \"ind\" should be casted to an\nappropreate type such as char * or long * etc, depending on the logic\nof the code.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 07 Jan 2002 14:27:43 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "ecpg compile error on AIX" }, { "msg_contents": "On Mon, Jan 07, 2002 at 02:27:43PM +0900, Tatsuo Ishii wrote:\n> I got followings with current on AIX 5L using xlc (native AIX\n> compiler):\n> ... \n> :\n> \"data.c\", line 357.81: 1506-068 (S) Operation between types \"void*\" and \"long\" is not allowed.\n> \"data.c\", line 362.90: 1506-068 (S) Operation between types \"void*\" and \"long\" is not allowed.\n\nArgh, I was afraid something like this would happen.\n\n> *((long long int *) (ind + ind_offset*act_tuple)) = variable->len;\n\nDoes it complain about all variable types?\n\n> So the code tries to add a long value to a void *pointer, which is not\n> correct since the storage unit size is unknown for void * I think. 
If\n> the code try to do a pointer calculation, \"ind\" should be casted to an\n> appropreate type such as char * or long * etc, depending on the logic\n> of the code.\n\nDoes it work with this?\n\n *((long long int *) ((long long int *)ind + ind_offset*act_tuple)) = variable->len;\n\nIf it does we have to check whether that does what I expect it to.\n\nThe idea was to skip the rest of a struct. Let's assume you have a\nstruct \n{\n\tint foo;\n\tfloat bar;\n}\n\nIf you now read a set of tuples into an array of structs of this type you\nhave to calculate the address of the struct[1].foo, struct[2].foo etc. and\nstruct[1].bar, struct[2].bar, etc.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 7 Jan 2002 17:29:05 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX" }, { "msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> Does it work with this?\n\n> *((long long int *) ((long long int *)ind + ind_offset*act_tuple)) = variable->len;\n\n> If it does we have to check whether that does what I expect it to.\n\nind_offset is already a sizeof() measure, isn't it?\nI would guess that what you want is\n\n> *((long long int *) ((char *)ind + ind_offset*act_tuple)) = variable->len;\n\nsince ind_offset*act_tuple is a number expressed in bytes, and should\nnot be scaled up by sizeof(long long int).\n\nAlso, if the code works for you at all, it's because GCC is (in\nviolation of the ANSI C standard) interpreting the construct as\naddition to char* rather than addition to void*. 
Casting to anything\nother than char* will change the behavior.\n\n(Might be a lot easier just to declare ind as char* instead of void*\nin the first place...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Jan 2002 12:40:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I got followings with current on AIX 5L using xlc (native AIX\n> compiler):\n\n> \"data.c\", line 357.81: 1506-068 (S) Operation between types \"void*\" and \"long\" is not allowed.\n> \"data.c\", line 362.90: 1506-068 (S) Operation between types \"void*\" and \"long\" is not allowed.\n\nWith HP's compiler I get lots of\n\ncc: \"data.c\", line 57: error 1539: Cannot do arithmetic with pointers to objects of unknown size.\n\nwhich is perhaps more to the point. The GCC manual explains why Michael\nwas able to get away with this (I assume he used gcc):\n\n In GNU C, addition and subtraction operations are supported on\n pointers to `void' and on pointers to functions. This is done by\n treating the size of a `void' or of a function as 1.\n\n A consequence of this is that `sizeof' is also allowed on `void' and\n on function types, and returns 1.\n\n The option `-Wpointer-arith' requests a warning if these extensions\n are used.\n\nIt occurs to me that we ought to add -Wpointer-arith to our standard\ngcc options, so that this sort of mistake will be caught sooner in\nfuture.\n\nI consider this compile failure a \"must fix\" before we can go to RC1 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Jan 2002 16:07:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX " }, { "msg_contents": "> In GNU C, addition and subtraction operations are supported on\n> pointers to `void' and on pointers to functions. 
This is done by\n> treating the size of a `void' or of a function as 1.\n> \n> A consequence of this is that `sizeof' is also allowed on `void' and\n> on function types, and returns 1.\n> \n> The option `-Wpointer-arith' requests a warning if these extensions\n> are used.\n> \n> It occurs to me that we ought to add -Wpointer-arith to our standard\n> gcc options, so that this sort of mistake will be caught sooner in\n> future.\n\nI added this to my Makefile.custom.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 Jan 2002 18:39:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX" }, { "msg_contents": "Bruce Momjian writes:\n\n> > It occurs to me that we ought to add -Wpointer-arith to our standard\n> > gcc options, so that this sort of mistake will be caught sooner in\n> > future.\n\nI agree with that. Actually, I could never imagine how one would use\nsizeof(void*) in the first place. I guess one can.\n\n> I added this to my Makefile.custom.\n\nI've had -Wpointer-arith and -Wcast-align in my Makefile.custom for a\nyear, but apparently just last week I didn't do any builds where I\nactually paid attention. :-/\n\nBtw., I've never seen any problems related to -Wcast-align? Is the TODO\nitem obsolete or is it platform-related?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 7 Jan 2002 19:05:03 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > > It occurs to me that we ought to add -Wpointer-arith to our standard\n> > > gcc options, so that this sort of mistake will be caught sooner in\n> > > future.\n> \n> I agree with that. 
Actually, I could never imagine how one would use\n> sizeof(void*) in the first place. I guess one can.\n> \n> > I added this to my Makefile.custom.\n> \n> I've had -Wpointer-arith and -Wcast-align in my Makefile.custom for a\n> year, but apparently just last week I didn't do any builds where I\n> actually paid attention. :-/\n\n\nHere is what I have now for my custom:\n\n CUSTOM_COPT=-g -Wall -Wmissing-prototypes \\\n\t\t\t-Wmissing-declarations -Wpointer-arith -Wcast-align\n\nIn Makefile.global.in, we have:\n\t\n\tifeq ($(GCC), yes)\n\t CFLAGS += -Wall -Wmissing-prototypes -Wmissing-declarations\n\tendif\n\nShould I add -Wpointer-arith -Wcast-align for 7.3?\n\n> Btw., I've never seen any problems related to -Wcast-align? Is the TODO\n> item obsolete or is it platform-related?\n\nThat is a Tom item. I think there is some casting that masks the\nproblem. Tom?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 Jan 2002 19:08:54 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX" }, { "msg_contents": ">> Btw., I've never seen any problems related to -Wcast-align? Is the TODO\n>> item obsolete or is it platform-related?\n\nYouse guys that run on Intel hardware will never see any problems from\nit, except possibly a lost cycle here or there due to unaligned fetches.\n\nBut a lot of non-Intel hardware (particularly RISC architectures) treats\nan unaligned access as a segfault.\n\nRight at the moment we can't usefully enable -Wcast-align because it\ngenerates an unreasonable number of complaints. Someday I'm going to\ntry to clean those all up. My personal todo list has:\n\nReduce, or eliminate entirely, warnings issued by -Wcast-align. 
gcc will\nwarn about char* to foo* but not about void* to foo*, so the bulk of the\nwarnings might be controllable by using void* in places where we now use\nchar*. Be careful not to introduce arithmetic on void* pointers though;\nuse -Wpointer-arith to catch those. Ideally we should add both of these\n(and maybe some other non-Wall flags) to standard gcc arguments.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Jan 2002 19:14:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX " }, { "msg_contents": "Tom Lane writes:\n\n> >> Btw., I've never seen any problems related to -Wcast-align? Is the TODO\n> >> item obsolete or is it platform-related?\n>\n> Youse guys that run on Intel hardware will never see any problems from\n> it, except possibly a lost cycle here or there due to unaligned fetches.\n\nI should have said, I've never seen any warnings generated by\n-Wcast-align. Shouldn't the set of warnings at least be the same on all\nplatforms (at least those with the same integer size) or does it just warn\nif there would actually be a problem on that platform?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 7 Jan 2002 19:41:33 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I should have said, I've never seen any warnings generated by\n> -Wcast-align.\n\nInteresting. On HP-PA with gcc 2.95.3 I see a ton of them. 
I'm too\nlazy to count 'em, but it's certainly in the thousands, eg from the\nfirst backend module to be compiled I get\n\nmake[4]: Entering directory `/home/postgres/pgsql/src/backend/access/common'\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -g -Wcast-align -I../../../../src/include -c -o heaptuple.o heaptuple.c\nheaptuple.c: In function `ComputeDataSize':\nheaptuple.c:52: warning: cast increases required alignment of target type\nheaptuple.c: In function `DataFill':\nheaptuple.c:113: warning: cast increases required alignment of target type\nheaptuple.c:113: warning: cast increases required alignment of target type\nheaptuple.c:123: warning: cast increases required alignment of target type\nheaptuple.c:134: warning: cast increases required alignment of target type\nheaptuple.c: In function `nocachegetattr':\nheaptuple.c:299: warning: cast increases required alignment of target type\nheaptuple.c:299: warning: cast increases required alignment of target type\nheaptuple.c:357: warning: cast increases required alignment of target type\nheaptuple.c:360: warning: cast increases required alignment of target type\nheaptuple.c:360: warning: cast increases required alignment of target type\nheaptuple.c:400: warning: cast increases required alignment of target type\nheaptuple.c:408: warning: cast increases required alignment of target type\nheaptuple.c:408: warning: cast increases required alignment of target type\nheaptuple.c: In function `heap_copytuple':\nheaptuple.c:486: warning: cast increases required alignment of target type\nheaptuple.c: In function `heap_formtuple':\nheaptuple.c:608: warning: cast increases required alignment of target type\nheaptuple.c:610: warning: cast increases required alignment of target type\nheaptuple.c:610: warning: cast increases required alignment of target type\nheaptuple.c: In function `heap_modifytuple':\nheaptuple.c:680: warning: cast increases required alignment of target type\nheaptuple.c:680: warning: cast 
increases required alignment of target type\nheaptuple.c: In function `heap_addheader':\nheaptuple.c:765: warning: cast increases required alignment of target type\nheaptuple.c:767: warning: cast increases required alignment of target type\nheaptuple.c:767: warning: cast increases required alignment of target type\n\n\nNeedless to say I'd be pretty much unable to spot any real warnings\nif we were to turn this on by default today.\n\n> Shouldn't the set of warnings at least be the same on all\n> platforms (at least those with the same integer size) or does it just warn\n> if there would actually be a problem on that platform?\n\nApparently the latter. Curious; you'd think the former would be more\nuseful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Jan 2002 19:57:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX " }, { "msg_contents": "> Shouldn't the set of warnings at least be the same on all\n> platforms (at least those with the same integer size) or does it just warn\n> if there would actually be a problem on that platform?\n\nIn fact, the GCC manual makes it pretty clear that the check is machine\ndependent:\n\n`-Wcast-align'\n Warn whenever a pointer is cast such that the required alignment\n of the target is increased. 
For example, warn if a `char *' is\n cast to an `int *' on machines where integers can only be accessed\n at two- or four-byte boundaries.\n\nI suppose this means that we'd better run test compilations on Alphas\n(or some other 8-byte-long environment) too, whenever we try to turn\non -Wcast-align.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Jan 2002 20:00:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX " }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > > It occurs to me that we ought to add -Wpointer-arith to our standard\n> > > gcc options, so that this sort of mistake will be caught sooner in\n> > > future.\n> \n> I agree with that. Actually, I could never imagine how one would use\n> sizeof(void*) in the first place. I guess one can.\n\nSo I should add -Wpointer-arith to standard gcc compiles, but not\n-Wcast-align because it generates too many warnings on some platforms. \nIs this right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 7 Jan 2002 22:48:04 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX" }, { "msg_contents": "On Mon, Jan 07, 2002 at 12:40:07PM -0500, Tom Lane wrote:\n> ind_offset is already a sizeof() measure, isn't it?\n> I would guess that what you want is\n> \n> > *((long long int *) ((char *)ind + ind_offset*act_tuple)) = variable->len;\n> \n> since ind_offset*act_tuple is a number expressed in bytes, and should\n> not be scaled up by sizeof(long long int).\n\nYes, you're right of course. 
I should have thought more before typing.\n\n> Also, if the code works for you at all, it's because GCC is (in\n> violation of the ANSI C standard) interpreting the construct as\n> addition to char* rather than addition to void*. Casting to anything\n> other than char* will change the behavior.\n\nThat's what I was afraid of and why I asked for some testing on other archs. \nRight now I only have access to Intel based Linux.\n\n> (Might be a lot easier just to declare ind as char* instead of void*\n> in the first place...)\n\nDid that. My test cases all work well. Please test on HP, AIX or whatever.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 8 Jan 2002 15:22:39 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX" }, { "msg_contents": "On Mon, Jan 07, 2002 at 04:07:40PM -0500, Tom Lane wrote:\n> I consider this compile failure a \"must fix\" before we can go to RC1 ...\n\nCommit is underways.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Tue, 8 Jan 2002 15:22:55 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ecpg compile error on AIX" } ]
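The fix this thread converges on — byte arithmetic through `char *` rather than `void *` — can be demonstrated in isolation. The sketch mirrors the expression from ecpg's data.c, but the wrapper function `store_len` and its test setup are hypothetical:

```c
/* Store a value into element act_tuple of an indicator array addressed
 * only by a void * base and a byte stride.  ind_offset is already a
 * sizeof()-based measure, so it must NOT be scaled again by the target
 * type's size -- hence the (char *) cast, which makes the addition
 * operate in bytes under ANSI C rules. */
void store_len(void *ind, long ind_offset, int act_tuple, long long int len)
{
    /* ANSI C forbids arithmetic on void *; GCC's treat-void-as-size-1
     * behavior is the extension that -Wpointer-arith flags. */
    *((long long int *) ((char *) ind + ind_offset * act_tuple)) = len;
}
```

With `ind_offset == sizeof(long long int)` this lands exactly on element `act_tuple` of a plain array, which is why the earlier `(long long int *)ind + ind_offset*act_tuple` variant would have over-scaled the offset.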
[ { "msg_contents": "Can anybody explain the following code detail ?\n\nA comment in execMain.c tells us:\n\n--start code snippet--\n * NB: the CurrentMemoryContext when this is called must be the context\n * to be used as the per-query context for the query plan. ExecutorRun()\n * and ExecutorEnd() must be called in this same memory context.\n * ----------------------------------------------------------------\n */\nTupleDesc\nExecutorStart(QueryDesc *queryDesc, EState *estate)\n--end code snippet--\n\nNevertheless in ExecRelCheck a context switch to per-query memory\ncontext is made:\n\n--start code snippet--\n /*\n * If first time through for this result relation, build expression\n * nodetrees for rel's constraint expressions. Keep them in the\n * per-query memory context so they'll survive throughout the query.\n */\n if (resultRelInfo->ri_ConstraintExprs == NULL)\n {\n oldContext = MemoryContextSwitchTo(estate->es_query_cxt);\n resultRelInfo->ri_ConstraintExprs =\n (List **) palloc(ncheck * sizeof(List *));\n for (i = 0; i < ncheck; i++)\n {\n qual = (List *) stringToNode(check[i].ccbin);\n resultRelInfo->ri_ConstraintExprs[i] = qual;\n }\n MemoryContextSwitchTo(oldContext);\n }\n--end code snippet--\n\nIs this a switch from per-query memory context to per-query memory\ncontext, hence not necessary, or do I miss something ?\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Mon, 7 Jan 2002 08:59:16 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": true, "msg_subject": "Why MemoryContextSwitch in ExecRelCheck ?" }, { "msg_contents": "Holger Krug <hkrug@rationalizer.com> writes:\n> Nevertheless in ExecRelCheck a context switch to per-query memory\n> context is made:\n> Is this a switch from per-query memory context to per-query memory\n> context, hence not necessary, or do I miss something ?\n\n[ thinks ... ] It might be unnecessary. 
I'm not convinced that the\nper-query context would always be the active one when ExecRelCheck is\ncalled, however. There are various per-tuple contexts that might be\nused as well.\n\nMemoryContextSwitchTo() is cheap enough that I prefer to call it when\nthere's any doubt, rather than build a routine that will fail silently\nif it's called in the wrong context. There are two typical scenarios\nfor routines that are building data structures that will outlive the\nroutine's execution:\n\n1. Data structure is to be returned to the caller. In this case the\ncaller is responsible for identifying the context to allocate the data\nstructure in, either explicitly or by passing it as the current context.\n\n2. Data structure is owned and managed by the routine, which must know\nwhich context it's supposed to live in. In these cases I think the\nroutine ought always to explicitly switch to that context, not assume\nthat it's being called in that context.\n\nI've been trying to migrate away from running with CurrentMemoryContext\nset to anything longer-lived than a per-tuple context, though the\nproject is by no means complete.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Jan 2002 12:28:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why MemoryContextSwitch in ExecRelCheck ? " }, { "msg_contents": "On Mon, Jan 07, 2002 at 12:28:12PM -0500, Tom Lane wrote:\n> Holger Krug <hkrug@rationalizer.com> writes:\n> > Nevertheless in ExecRelCheck a context switch to per-query memory\n> > context is made:\n> > Is this a switch from per-query memory context to per-query memory\n> > context, hence not necessary, or do I miss something ?\n> \n> [ thinks ... ] It might be unnecessary. I'm not convinced that the\n> per-query context would always be the active one when ExecRelCheck is\n> called, however. 
There are various per-tuple contexts that might be\n> used as well.\n\nI think, there aren't, but nevertheless, your principles stated\nbelow are convincing.\n \n> MemoryContextSwitchTo() is cheap enough that I prefer to call it when\n> there's any doubt, rather than build a routine that will fail silently\n> if it's called in the wrong context. There are two typical scenarios\n> for routines that are building data structures that will outlive the\n> routine's execution:\n> \n> 1. Data structure is to be returned to the caller. In this case the\n> caller is responsible for identifying the context to allocate the data\n> structure in, either explicitly or by passing it as the current context.\n> \n> 2. Data structure is owned and managed by the routine, which must know\n> which context it's supposed to live in. In these cases I think the\n> routine ought always to explicitly switch to that context, not assume\n> that it's being called in that context.\n\nOK. I wondered, because this is not done for the trigger related\ncache, but only for the check related cache. Now I understand, it's\nwork in progress. (I think, very good work, indeed, because the code\nis astonishingly well readable.)\n\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Mon, 7 Jan 2002 19:30:53 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": true, "msg_subject": "Re: Why MemoryContextSwitch in ExecRelCheck ?" }, { "msg_contents": "Holger Krug <hkrug@rationalizer.com> writes:\n>> [ thinks ... ] It might be unnecessary. I'm not convinced that the\n>> per-query context would always be the active one when ExecRelCheck is\n>> called, however. There are various per-tuple contexts that might be\n>> used as well.\n\n> I think, there aren't,\n\nRight now, it might well be the case that ExecRelCheck is always called\nin the per-query context. 
The point I was trying to make is that I'd\nlike to change the code so that we don't run so much code with current\ncontext set to per-query context; at which point ExecRelCheck will fail\nif it hasn't got that MemoryContextSwitchTo. So, yeah, it's work in\nprogress.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Jan 2002 13:38:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Why MemoryContextSwitch in ExecRelCheck ? " } ]
[ { "msg_contents": "Hello,\n\nI have a table with ca 1.7 million records, with the following structure:\n\n Table \"iplog_gate200112\"\n Attribute | Type | Modifier \n-----------+-----------+----------\n ipaddr | inet | \n input | bigint | \n output | bigint | \n router | text | \n ipdate | timestamp | \nIndices: iplog_gate200112_ipaddr_idx,\n iplog_gate200112_ipdate_idx\n\nthe following query \n\nexplain\nSELECT sum(input) FROM iplog_gate200112 \nWHERE \n '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2001-12-02 00:00:00+02' AND \n '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2002-01-01 00:00:00+02' AND \n network_subeq(ipaddr, '193.68.240.0/20') AND 'uni-gw' ~ router;\n\nresults in \n\nNOTICE: QUERY PLAN:\n\nAggregate (cost=51845.51..51845.51 rows=1 width=8)\n -> Seq Scan on iplog_gate200112 (cost=0.00..51845.04 rows=190 width=8)\n\n\nWhy would it not want to use index scan?\n\nStatistics for the table are as follows (from pg_statistic s, pg_attribute a, \npg_class c\nwhere starelid = c.oid and attrelid = c.oid and staattnum = attnum\nand relname = 'iplog_gate200112')\n\n attname | attdispersion | starelid | staattnum | staop | stanullfrac | \nstacommonfrac | stacommonval | staloval | \nstahival\n---------+---------------+-----------+-----------+-------+-------------+-------\n--------+------------------------+------------------------+--------------------\n----\n ipaddr | 8.85397e-05 | 190565949 | 1 | 1203 | 0 | \n0.000441917 | 192.92.129.1 | 192.92.129.0 | 212.72.197.154\n input | 0.0039343 | 190565949 | 2 | 412 | 0 | \n0.0183278 | 0 | 0 | 5929816798\n output | 0.724808 | 190565949 | 3 | 412 | 0 | \n0.835018 | 0 | 0 | 2639435033\n router | 0.222113 | 190565949 | 4 | 664 | 0 | \n0.416541 | sofia5 | bourgas1 | varna3\n ipdate | 0.014311 | 190565949 | 5 | 1322 | 0 | \n0.0580676 | 2001-12-04 00:00:00+02 | 2001-12-01 00:00:00+02 | 2001-12-31 \n00:00:00+02\n(5 rows)\n\nThe query \n\nexplain\nSELECT sum(input) FROM iplog_gate200112 \nWHERE \n ipdate < 
'2001-12-01 00:00:00+02' AND \n network_subeq(ipaddr, '193.68.240.0/20') AND 'uni-gw' ~ router;\n\nproduces \n\nAggregate (cost=4.91..4.91 rows=1 width=8)\n -> Index Scan using iplog_gate200112_ipdate_idx on iplog_gate200112 \n(cost=0.00..4.91 rows=1 width=8)\n\nNote there are no records with ipdate < '2001-12-01 00:00:00+02' in the table.\n\nCould anyone shed some light? This is on 7.1.3.\n\nDaniel\n\n", "msg_date": "Mon, 07 Jan 2002 20:41:07 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "again on index usage" }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> explain\n> SELECT sum(input) FROM iplog_gate200112 \n> WHERE \n> '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2001-12-02 00:00:00+02' AND \n> '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2002-01-01 00:00:00+02' AND \n> network_subeq(ipaddr, '193.68.240.0/20') AND 'uni-gw' ~ router;\n\n> results in \n\n> NOTICE: QUERY PLAN:\n\n> Aggregate (cost=51845.51..51845.51 rows=1 width=8)\n> -> Seq Scan on iplog_gate200112 (cost=0.00..51845.04 rows=190 width=8)\n\n> Why would it not want to use index scan?\n\nIt's difficult to tell from this what it thinks the selectivity of the\nipdate index would be, since the rows estimate includes the effect of\nthe ipaddr and router restrictions. What do you get from just\n\nexplain\nSELECT sum(input) FROM iplog_gate200112 \nWHERE \n '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2001-12-02 00:00:00+02' AND \n '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2002-01-01 00:00:00+02';\n\nIf you say \"set enable_seqscan to off\", does that change the plan?\n\nBTW, the planner does not associate function calls with indexes. 
If you\nwant to have the ipaddr index considered for this query, you need to write\nipaddr <<= '193.68.240.0/20' not network_subeq(ipaddr, '193.68.240.0/20').\n(But IIRC, that only works in 7.2 anyway, not earlier releases :-()\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 07 Jan 2002 14:41:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": "> It's difficult to tell from this what it thinks the selectivity of the\n> ipdate index would be, since the rows estimate includes the effect of\n> the ipaddr and router restrictions. What do you get from just\n> \n> explain\n> SELECT sum(input) FROM iplog_gate200112 \n> WHERE \n> '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2001-12-02 00:00:00+02' AND \n> '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2002-01-01 00:00:00+02';\n> \n\nI don't know if this'd help, but if you're summing the data and running\nthis query often, see if a function index would help:\n\nCREATE INDEX sum_input_fnc_idx ON iplog_gate200112 (sum(input));\n\n-sc\n\n-- \nSean Chittenden\n", "msg_date": "Mon, 7 Jan 2002 17:53:21 -0800", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": ">>>Tom Lane said:\n > It's difficult to tell from this what it thinks the selectivity of the\n > ipdate index would be, since the rows estimate includes the effect of\n > the ipaddr and router restrictions. 
What do you get from just\n > \n > explain\n > SELECT sum(input) FROM iplog_gate200112 \n > WHERE \n > '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2001-12-02 00:00:00+02' A\n ND \n > '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2002-01-01 00:00:00+02';\n\nSame result (sorry, should have included this originally):\n\n\nAggregate (cost=47721.72..47721.72 rows=1 width=8)\n -> Seq Scan on iplog_gate200112 (cost=0.00..47579.54 rows=56873 width=8)\n\n\n > If you say \"set enable_seqscan to off\", does that change the plan?\n\nYes. As expected (I no longer have the problem of NaN estimates :)\n\nAggregate (cost=100359.71..100359.71 rows=1 width=8)\n -> Index Scan using iplog_gate200112_ipdate_idx on iplog_gate200112 \n(cost=0.00..100217.52 rows=56873 width=8)\n\nMy belief is that the planner does not want to use index due to low value \ndispersion of the indexed attribute. When splitting the table into several \nsmaller tables, index is used.\n\nThis bites me, because each such query takes at least 3 minutes and the script \nthat generates these needs to execute a few thousand queries.\n\n > BTW, the planner does not associate function calls with indexes. If you\n > want to have the ipaddr index considered for this query, you need to write\n > ipaddr <<= '193.68.240.0/20' not network_subeq(ipaddr, '193.68.240.0/20').\n > (But IIRC, that only works in 7.2 anyway, not earlier releases :-()\n\nThis is what I thought too, but using the ipdate index will be sufficient.\n\nI understand my complaint is not a bug, but rather a question of proper planner \noptimization (it worked 'as expected' in 7.0). Perhaps the planner should \nconsider the total number of rows, as well as the dispersion factor. 
With the \ndispersion being around 1.5% and total rows 1.7 million this gives about 25k \nrows with the same value - large enough to trigger sequential scan, as far as \nI understand it, but the cost of scanning 1.7 million rows sequentially is \njust too high.\n\nBy the way, the same query takes approx 10 sec with set enable_seqscan to off.\n\nDaniel\n\n", "msg_date": "Tue, 08 Jan 2002 11:22:03 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: again on index usage " }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> Same result (sorry, should have included this originally):\n\n> Aggregate (cost=47721.72..47721.72 rows=1 width=8)\n> -> Seq Scan on iplog_gate200112 (cost=0.00..47579.54 rows=56873 width=8)\n\n>>> If you say \"set enable_seqscan to off\", does that change the plan?\n\n> Aggregate (cost=100359.71..100359.71 rows=1 width=8)\n> -> Index Scan using iplog_gate200112_ipdate_idx on iplog_gate200112 \n> (cost=0.00..100217.52 rows=56873 width=8)\n\nSo, what we've got here is a difference of opinion: the planner thinks\nthat the seqscan will be faster. How many rows are actually selected\nby this WHERE clause? 
How long does each plan actually take?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 09:37:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": ">>>Tom Lane said:\n > Daniel Kalchev <daniel@digsys.bg> writes:\n > > Same result (sorry, should have included this originally):\n > \n > > Aggregate (cost=47721.72..47721.72 rows=1 width=8)\n > > -> Seq Scan on iplog_gate200112 (cost=0.00..47579.54 rows=56873 width=\n 8)\n > \n > >>> If you say \"set enable_seqscan to off\", does that change the plan?\n > \n > > Aggregate (cost=100359.71..100359.71 rows=1 width=8)\n > > -> Index Scan using iplog_gate200112_ipdate_idx on iplog_gate200112 \n > > (cost=0.00..100217.52 rows=56873 width=8)\n > \n > So, what we've got here is a difference of opinion: the planner thinks\n > that the seqscan will be faster. How many rows are actually selected\n > by this WHERE clause? How long does each plan actually take?\n > \n > \t\t\tregards, tom lane\n\n3-5 minutes with sequential scan; 10-15 sec with index scan. The query returns \n4062 rows. Out of ca 1700000 rows.\n\nWith only the datetime constraints (which relates to the index), the number of \nrows is 51764.\n\nIn any case, sequential scan of millions of rows cannot be faster than index \nscan. The average number of records for each index key is around 25000 - \nperhaps the planner thinks because the number of tuples in this case is \nhigher, it should prefer sequential scan. 
I guess the planner will do better \nif there is some scaling of these values with respect to the total number of \nrows.\n\nDaniel\n\n\n", "msg_date": "Wed, 09 Jan 2002 09:45:51 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: again on index usage " }, { "msg_contents": "Daniel Kalchev wrote:\n> \n> >>>Tom Lane said:\n> > Daniel Kalchev <daniel@digsys.bg> writes:\n> > > Same result (sorry, should have included this originally):\n> >\n> > >>> If you say \"set enable_seqscan to off\", does that change the plan?\n> >\n> > > Aggregate (cost=100359.71..100359.71 rows=1 width=8)\n> > > -> Index Scan using iplog_gate200112_ipdate_idx on iplog_gate200112\n> > > (cost=0.00..100217.52 rows=56873 width=8)\n> >\n> > So, what we've got here is a difference of opinion: the planner thinks\n> > that the seqscan will be faster. How many rows are actually selected\n> > by this WHERE clause? How long does each plan actually take?\n> >\n> > regards, tom lane\n> \n> 3-5 minutes with sequential scan; 10-15 sec with index scan. The query returns\n> 4062 rows. Out of ca 1700000 rows.\n> \n> With only the datetime constraints (which relates to the index), the number of\n> rows is 51764.\n\nThe planner estimates 56873 rows. 
It seems not that bad.\nA plausible reason is your table is nearly clustered \nas to ipdate.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 09 Jan 2002 18:45:22 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> Aggregate (cost=47721.72..47721.72 rows=1 width=8)\n> -> Seq Scan on iplog_gate200112 (cost=0.00..47579.54 rows=56873 width=\n> 8)\n>>> \n> If you say \"set enable_seqscan to off\", does that change the plan?\n>>> \n> Aggregate (cost=100359.71..100359.71 rows=1 width=8)\n> -> Index Scan using iplog_gate200112_ipdate_idx on iplog_gate200112 \n> (cost=0.00..100217.52 rows=56873 width=8)\n>>> \n>>> So, what we've got here is a difference of opinion: the planner thinks\n>>> that the seqscan will be faster. How many rows are actually selected\n>>> by this WHERE clause? How long does each plan actually take?\n\n> 3-5 minutes with sequential scan; 10-15 sec with index scan. The query returns \n> 4062 rows. Out of ca 1700000 rows.\n\n> With only the datetime constraints (which relates to the index), the number of \n> rows is 51764.\n\nHm. Okay, so the number-of-rows estimate is not too far off. I concur\nwith Hiroshi's comment: the reason the indexscan is so fast must be that\nthe table is clustered (physical order largely agrees with index order).\nThis would obviously hold if the records were entered in order by\nipdate; is that true?\n\nThe 7.2 planner does try to estimate index order correlation, and would\nbe likely to price this indexscan much lower, so that it would make the\nright choice. I'd suggest updating to 7.2 as soon as we have RC1 out.\n(Don't do it yet, though, since we've got some timestamp bugs to fix\nthat are probably going to require another initdb.)\n\n> In any case, sequential scan of millions of rows cannot be faster than index \n> scan.\n\nSnort. 
If that were true, we could get rid of most of the complexity\nin the planner.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 10:14:05 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": ">>>Tom Lane said:\n > Hm. Okay, so the number-of-rows estimate is not too far off. I concur\n > with Hiroshi's comment: the reason the indexscan is so fast must be that\n > the table is clustered (physical order largely agrees with index order).\n > This would obviously hold if the records were entered in order by\n > ipdate; is that true?\n\nYes. But... do you want me to cluster it by ipaddr for example and try it \nagain? I understand the clustering might help with sequential scans, but why \nwould it help with index scans?\n\nDaniel\n\n", "msg_date": "Wed, 09 Jan 2002 18:37:49 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: again on index usage " }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> I understand the clustering might help with sequential scans, but why \n> would it help with index scans?\n\nNo, the other way around: it makes no difference for seq scans, but can\nspeed up index scans quite a lot. With a clustered table, successive\nindex-driven fetches tend to hit the same pages rather than hitting\nrandom pages throughout the table. That saves I/O.\n\nGiven the numbers you were quoting, if the table were in perfectly\nrandom order by ipdate then there would probably have been about three\nrows per page that the indexscan would've had to fetch. This would mean\ntouching each page three times in some random order. Unless the table\nis small enough to fit in Postgres' shared buffer cache, that's going to\nrepresent a lot of extra I/O --- a lot more than reading each page only\nonce, as a seqscan would do. 
At the other extreme, if the table is\nperfectly ordered by ipdate then the indexscan need only hit a small\nnumber of pages (all the rows we want are in a narrow range) and we\ntouch each page many times before moving on to the next. Very few I/O\nrequests in that case.\n\n7.1 does not have any statistics about table order, so it uses the\nconservative assumption that the ordering is random. 7.2 has more\nstatistical data and perhaps will make better estimates about the\ncost of indexscans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 14:48:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": ">>>Tom Lane said:\n > Daniel Kalchev <daniel@digsys.bg> writes:\n > > I understand the clustering might help with sequential scans, but why \n > > would it help with index scans?\n > \n > No, the other way around: it makes no difference for seq scans, but can\n > speed up index scans quite a lot. With a clustered table, successive\n > index-driven fetches tend to hit the same pages rather than hitting\n > random pages throughout the table. That saves I/O.\n\nOk, time to go home :-), but...\n\n > Given the numbers you were quoting, if the table were in perfectly\n > random order by ipdate then there would probably have been about three\n > rows per page that the indexscan would've had to fetch. This would mean\n > touching each page three times in some random order. Unless the table\n > is small enough to fit in Postgres' shared buffer cache, that's going to\n > represent a lot of extra I/O --- a lot more than reading each page only\n > once, as a seqscan would do. At the other extreme, if the table is\n > perfectly ordered by ipdate then the indexscan need only hit a small\n > number of pages (all the rows we want are in a narrow range) and we\n > touch each page many times before moving on to the next. 
Very few I/O\n > requests in that case.\n\nIn any case, if we need to hit 50k pages (assuming the indexed data is \nrandomly scattered in the file), and having to read these three times each, it \nwill be less I/O than having to read 1.7 million records. The table will never \nbe laid sequentially on the disk, at least not in this case (which adds to the \ntable day after day - and this is why data is almost ordered by ipdate).\n\nWhat I am arguing about is the scaling - is 50k random reads worse than 1.7 \nmillion sequential reads? Eventually considering the tuple size, disk block \nsize etc.\n\nI will wait patiently for 7.2 to release and see how this same table performs. \n:-)\n\nDaniel\n\n", "msg_date": "Wed, 09 Jan 2002 22:06:43 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: again on index usage " }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> In any case, if we need to hit 50k pages (assuming the indexed data is \n> randomly scattered in the file), and having to read these three times each, it \n> will be less I/O than having to read 1.7 million records.\n\nHow do you arrive at that? Assuming 100 records per page (probably the\nright order of magnitude), the seqscan alternative is 17k page reads.\nYes, you examine more tuples, but CPUs are lots faster than disks.\n\nThat doesn't even address the fact that Unix systems reward sequential\nreads and penalize random access. In a seqscan, we can expect that the\nkernel will schedule the next page read before we ask for it, so that\nour CPU time to examine a page is overlapped with I/O for the next page.\nIn an indexscan that advantage goes away, because neither we nor the\nkernel know which page will be touched next. 
On top of the loss of\nread-ahead, the filesystem is probably laid out in a way that rewards\nsequential access with fewer and shorter seeks.\n\nThe tests I've made suggest that the penalty involved is about a factor\nof four -- ie, a seqscan can scan four pages in the same amount of time\nthat it takes to bring in one randomly-accessed page.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 15:20:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": ">>>Tom Lane said:\n > Daniel Kalchev <daniel@digsys.bg> writes:\n > > In any case, if we need to hit 50k pages (assuming the indexed data is \n > > randomly scattered in the file), and having to read these three times each\n , it \n > > will be less I/O than having to read 1.7 million records.\n > \n > How do you arrive at that? Assuming 100 records per page (probably the\n > right order of magnitude), the seqscan alternative is 17k page reads.\n > Yes, you examine more tuples, but CPUs are lots faster than disks.\n\nI tried this:\n\ndb=# select * into iplog_test from iplog_gate200112;\nSELECT\ndb=# create index iplog_test_ipaddr_idx on iplog_test(ipaddr);\nCREATE\ndb=# cluster iplog_test_ipaddr_idx on iplog_test;\nCLUSTER\ndb=# create index iplog_test_ipdate_idx on iplog_test(ipdate);\nCREATE\ndb=# vacuum verbose analyze iplog_test;\nNOTICE: --Relation iplog_test--\nNOTICE: Pages 17722: Changed 0, reaped 0, Empty 0, New 0; Tup 1706202: Vac 0, \nKeep/VTL 0/0, Crash 0, UnUsed 0, MinLen 80, MaxLen 88; Re-using: Free/Avail. \nSpace 0/0; EndEmpty/Avail. Pages 0/0. CPU 1.48s/-1.86u sec.\nNOTICE: Index iplog_test_ipaddr_idx: Pages 5621; Tuples 1706202. CPU \n0.51s/1.80u sec.\nNOTICE: Index iplog_test_ipdate_idx: Pages 4681; Tuples 1706202. 
CPU \n0.36s/1.92u sec.\nNOTICE: --Relation pg_toast_253297758--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL \n0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; \nEndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_toast_253297758_idx: Pages 1; Tuples 0. CPU 0.00s/0.00u sec.\nNOTICE: Analyzing...\nVACUUM\ndb=# explain\ndb-# SELECT sum(input), sum(output) FROM iplog_test \ndb-# WHERE \ndb-# '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2001-12-02 00:00:00+02' \nAND\ndb-# '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2002-01-01 00:00:00+02' \nAND\ndb-# ipaddr <<= '193.68.240.0/20' AND 'uni-gw' ~ router;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=56112.97..56112.97 rows=1 width=16)\n -> Seq Scan on iplog_test (cost=0.00..56111.54 rows=284 width=16)\n\nEXPLAIN\n\nquery runs for ca 3.5 minutes.\n\ndb=# set enable_seqscan to off;\n\nthe query plan is\n\nAggregate (cost=100507.36..100507.36 rows=1 width=16)\n -> Index Scan using iplog_test_ipdate_idx on iplog_test \n(cost=0.00..100505.94 rows=284 width=16)\n\nquery runs for ca 2.2 minutes.\n\nMoves closer to your point :-)\n\nAnyway, the platform is a dual Pentium III 550 MHz (intel) machine with 512 \nMB RAM, with 15000 RPM Cheetah for the database, running BSD/OS 4.2. 
The \nmachine is reasonably loaded all the time, so this is very much a real-time test.\n\nI agree, that with the 'wrong' clustering the index scan is not so much faster \nthan the sequential scan.\n\nPerhaps I need to tune this machine's costs to prefer more disk intensive \noperations over CPU intensive operations?\n\nLet's see what 7.2 will result in.\n\nDaniel\n\n", "msg_date": "Thu, 10 Jan 2002 11:39:31 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: again on index usage " }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> I agree, that with the 'wrong' clustering the index scan is not so\n> much faster than the sequential scan.\n\nIt would be interesting to check whether there is any correlation\nbetween ipaddr and ipdate in your test data. Perhaps clustering\non ipaddr might not destroy the ordering on ipdate as much as you\nthought. A more clearly random-order test would go:\n\nselect * into iplog_test from iplog_gate200112 order by random();\ncreate index iplog_test_ipdate_idx on iplog_test(ipdate);\nvacuum verbose analyze iplog_test;\n<< run queries >>\n\n> Perhaps I need to tune this machine's costs to prefer more disk intensive \n> operations over CPU intensive operations?\n\nPossibly. I'm not sure there's much point in tuning the cost estimates\nuntil the underlying model is more nearly right (ie, knows something\nabout correlation). Do you care to try your dataset with 7.2 beta?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jan 2002 10:03:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": ">>>Tom Lane said:\n > It would be interesting to check whether there is any correlation\n > between ipaddr and ipdate in your test data. Perhaps clustering\n > on ipaddr might not destroy the ordering on ipdate as much as you\n > thought. 
A more clearly random-order test would go:\n > \n > select * into iplog_test from iplog_gate200112 order by random();\n > create index iplog_test_ipdate_idx on iplog_test(ipdate);\n > vacuum verbose analyze iplog_test;\n > << run queries >>\n\nNOTICE: --Relation iplog_test--\nNOTICE: Pages 17761: Changed 17761, reaped 0, Empty 0, New 0; Tup 1706202: \nVac 0, Keep/VTL 0/0, Crash 0, UnUsed 0, MinLen 80, MaxLen 88; Re-using: \nFree/Avail. Space 0/0; EndEmpty/Avail. Pages 0/0. CPU 2.36s/0.32u sec.\nNOTICE: Index iplog_test_ipdate_idx: Pages 4681; Tuples 1706202. CPU \n0.26s/1.98u sec.\nNOTICE: --Relation pg_toast_275335644--\nNOTICE: Pages 0: Changed 0, reaped 0, Empty 0, New 0; Tup 0: Vac 0, Keep/VTL \n0/0, Crash 0, UnUsed 0, MinLen 0, MaxLen 0; Re-using: Free/Avail. Space 0/0; \nEndEmpty/Avail. Pages 0/0. CPU 0.00s/0.00u sec.\nNOTICE: Index pg_toast_275335644_idx: Pages 1; Tuples 0. CPU 0.00s/0.00u sec.\nNOTICE: Analyzing...\n\n\nexplain (with enable_seqscan to on)\n\nAggregate (cost=56151.97..56151.97 rows=1 width=16)\n -> Seq Scan on iplog_test (cost=0.00..56150.54 rows=284 width=16)\n\naverage run time 4 minutes\n\n(iostat before seq run)\n\n tin tout sps tps msps sps tps msps usr nic sys int idl\n 5 44 94 6 5.9 6842 120 3.3 25 11 6 0 58\n 1 14 27 1 5.9 5968 138 3.6 22 11 5 0 62\n 5 44 58 2 6.3 5647 117 3.0 27 9 6 0 58\n 1 13 7 1 5.0 5723 125 3.4 24 10 5 0 61\n 0 15 50 2 4.5 5606 110 3.1 27 10 5 0 58\n 5 44 90 5 9.1 4702 87 2.5 28 12 4 0 55\n 1 15 52 1 6.3 5045 114 4.1 24 10 4 0 61\n\n(iostat during seq run)\n\n tin tout sps tps msps sps tps msps usr nic sys int idl\n 1 40 41 2 2.1 5847 123 3.6 25 11 4 0 60\n 1 16 164 13 6.2 7280 128 3.2 28 8 8 0 57\n 0 13 36 1 7.9 6059 147 3.9 25 8 5 0 62\n 0 13 48 3 5.3 6691 126 3.4 26 8 7 0 59\n 0 13 28 3 4.6 6473 103 2.3 28 11 7 0 54\n 0 13 138 11 7.6 10848 151 4.9 19 6 6 0 69\n 0 13 33 3 3.3 5568 100 3.6 21 16 3 0 59\n 0 13 24 2 1.1 6752 144 3.4 22 12 2 0 64\n\n(sometime at the end of query run)\n\n tin tout sps tps msps sps 
tps msps usr nic sys int idl\n 0 38 20 2 5.5 5621 57 1.2 23 23 3 0 51\n 0 13 89 7 7.7 8854 101 3.4 21 18 4 0 57\n 0 13 72 6 7.3 9929 88 2.2 18 21 4 0 57\n 0 13 129 6 9.6 4865 43 1.0 21 24 4 0 51\n 0 13 72 3 4.2 5421 46 0.5 24 22 4 0 50\n 0 13 52 2 3.5 5877 64 1.8 22 21 4 0 53\n 0 13 50 3 6.1 5561 54 1.7 19 26 3 0 52\n 0 13 42 1 3.8 5455 76 1.4 22 22 3 0 53\n\n(see lower msps - perhaps other queries slowed down? - but then again returned \nto 'normal')\n\n 0 13 244 20 6.6 6629 199 4.1 19 9 5 0 67\n 0 13 68 4 6.2 6080 191 4.3 14 14 3 0 70\n 0 13 75 3 5.9 6542 214 4.1 19 8 4 0 70\n 0 13 615 18 5.0 5454 129 4.1 20 18 3 0 59\n 0 13 88 2 5.7 3718 48 2.5 21 21 4 0 54\n 0 13 46 3 3.1 4533 75 2.9 20 19 5 0 56\n 0 13 143 7 5.1 4349 58 2.7 22 18 4 0 55\n 0 13 58 2 9.9 4038 45 2.0 20 24 4 0 52\n 0 13 111 4 5.8 4523 60 3.4 18 22 4 0 56\n\n(with enable_seqscan to off)\n\nAggregate (cost=85110.08..85110.08 rows=1 width=16)\n -> Index Scan using iplog_test_ipdate_idx on iplog_test \n(cost=0.00..85108.66 rows=284 width=16)\n\naverage run time 2 minutes (worst 5 minutes, but this happened only once)\n\n(iostat output)\n\n tin tout sps tps msps sps tps msps usr nic sys int idl\n 0 38 1 1 6.7 5110 224 3.2 7 16 3 0 73\n 0 13 21 2 3.6 5249 219 4.1 12 13 2 0 73\n 0 13 0 0 10.0 5341 216 3.9 12 11 4 0 73\n 0 13 6 0 9.9 5049 218 3.5 10 14 3 0 72\n 0 13 6 0 0.0 7654 216 3.8 10 10 2 0 78\n 0 13 2 1 4.0 8758 222 4.1 6 11 4 0 80\n 0 13 6 0 9.9 8594 219 4.4 6 10 3 0 81\n 0 13 4 1 0.0 7290 210 4.3 6 10 4 0 80\n 0 13 36 3 4.9 5912 196 4.7 9 10 4 0 76\n 0 13 7 1 13.3 4787 209 3.7 9 17 2 0 72\n 0 13 0 0 10.0 4691 209 3.8 9 17 2 0 72\n\nThis sort of proves that Tom is right about the need for random data. 
<grin>\n\nHaving the chance to play with it at such idle time in the morning, when \nnobody is supposed to be working with the database, I dared to shut down all \nother sessions\n\n(idle iostat)\n\niostat 5\n tty sd0 sd1 % cpu\n tin tout sps tps msps sps tps msps usr nic sys int idl\n 0 559 148 2 7.7 8 1 3.3 0 0 0 0 99\n 0 559 235 4 4.1 30 2 2.0 0 0 1 0 98\n 0 559 189 2 8.7 3 0 10.0 0 0 1 0 99\n\n\nseq scan (10 sec)\n\n 0 575 12103 101 9.2 18798 164 1.4 19 0 8 0 73\n 0 575 10118 82 11.2 30507 243 0.9 33 0 11 0 55\n 0 559 12704 101 9.3 7642 61 0.7 9 0 8 0 83\n\n\nindex scan 45 sec\n\n 0 38 8 1 4.0 6179 375 2.5 1 0 2 0 98\n 0 13 4 1 2.5 6208 378 2.4 1 0 2 0 97\n 0 13 7 1 3.3 6323 382 2.4 1 0 2 0 97\n 0 13 9 2 0.0 6310 389 2.4 1 0 2 0 97\n 0 13 0 0 0.0 3648 226 2.3 0 0 1 0 98\n\nThis proves to me 15000 RPM Cheetah drives are damn fast at sequential reads.\nWhy does the index scan win at 'normal' database load??? I would expect that if the \ndisk subsystem is overloaded, it will be more overloaded seek-wise, than \nthroughput-wise. 
Perhaps this penalizes the 'expected to be faster' sequential \nreads much more than the random page index reads.\n\nI will try 4.2 with the same data as soon as available and see the differences \n- on a separate machine with about the same configuration (same OS, same \npostgres install, same disks, only one 733 MHz CPU instead of two 550 MHz \nCPUs).\n\nDaniel\n\n", "msg_date": "Fri, 11 Jan 2002 07:32:25 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": true, "msg_subject": "Re: again on index usage " }, { "msg_contents": "> -----Original Message-----\n> From: Daniel Kalchev\n> \n> I tried this:\n> \n> db=# select * into iplog_test from iplog_gate200112;\n> SELECT\n> db=# create index iplog_test_ipaddr_idx on iplog_test(ipaddr);\n> CREATE\n> db=# cluster iplog_test_ipaddr_idx on iplog_test;\n> CLUSTER\n> db=# create index iplog_test_ipdate_idx on iplog_test(ipdate);\n> CREATE\n> db=# explain\n> db-# SELECT sum(input), sum(output) FROM iplog_test \n\n> db-# WHERE \n> db-# '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2001-12-02 \n> 00:00:00+02' \n\nIs there only one ipdate value which satisfies the above where clause ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 15 Jan 2002 23:10:55 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": "> -----Original Message-----\n> From: Hiroshi Inoue\n> \n> > -----Original Message-----\n> > From: Daniel Kalchev\n> > \n> > I tried this:\n> > \n> > db=# explain\n> > db-# SELECT sum(input), sum(output) FROM iplog_test \n> \n> > db-# WHERE \n> > db-# '2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2001-12-02 \n> > 00:00:00+02' \n> \n> Is there only one ipdate value which satisfies the above where clause ?\n\nIf '2001-12-01 00:00:00+02' is the unique ipdate value which satisfies\n'2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2001-12-02 00:00:00+02' \nand CREATE INDEX preserves the physical order of the same 
key,\nthe IndexScan would see physically ordered tuples. There's no strangeness\neven if the scan is faster than a sequential scan.\n\nregards,\nHiroshi Inoue \n", "msg_date": "Wed, 16 Jan 2002 05:50:04 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: again on index usage " } ]
[ { "msg_contents": "> The point is to collect comprehensive error reports, mainly about\n> failed modifications of complex structured data which is\n> created/modified concurrently by several workers in an optimistic\n> locking fashion. Because the data is so complex it won't help anybody\n> if you print out a message as \"index xy violated by tuple ab\". Hence I\n> want to collect all the errors to give the application/the user the\n> possibility to make an overall assessment about what has to be done to\n> avoid the error.\n...\n> > How about savepoints?\n> \n> This would be my question to you: How about savepoints ?\n> Do they help to achieve what I want to achieve ?\n\nOk, thanks. Yes, savepoints would not allow you to get comprehensive\nerror reports in all cases (when you need to insert record with duplicate\nkey to avoid errors caused by absence of such record etc).\nThough savepoints allow the application to fix an error immediately after this\nerror is encountered (without wasting time/resources) I will not argue with\nyou about how much such comprehensive reports are useful.\n\nI'd rather ask another question -:) How about constraints in DEFERRED mode?\nLooks like deferred mode allows you to do everything you need - ie make ALL\nrequired changes and then check everything when mode changed to immediate.\nAlso note that this would be more flexible than trigger approach - you can\nchange mode of individual constraint.\n\nTwo glitches though:\n1. I believe that currently transaction will be aborted on first error\n encountered, without checking all other changes for constraint\nviolations.\n I suppose this can be easily changed for your needs. And user would just\n point out what behaviour is required.\n2. Not sure about CHECK constraints but Uniq/PrimaryKey ones are not\n deferrable currently -:( And this is muuuuuch worse drawback than absence\n of comprehensive reports. 
It's a more complex thing to do than on error\n triggers but someday it will be implemented because this is a \"must\nhave\"\n kind of thing.\n\nVadim\n", "msg_date": "Mon, 7 Jan 2002 11:14:29 -0800 ", "msg_from": "\"Mikheev, Vadim\" <vmikheev@SECTORBASE.COM>", "msg_from_op": true, "msg_subject": "Re: ON ERROR triggers" }, { "msg_contents": "Mikheev, Vadim wrote:\n\n\n> 2. Not sure about CHECK constraints but Uniq/PrimaryKey ones are not\n> deferrable currently -:( And this is muuuuuch worse drawback than absence\n> of comprehensive reports. It's a more complex thing to do than on error\n> triggers but someday it will be implemented because this is a \"must\n> have\"\n> kind of thing.\n\n\nAt some point they need to be deferred to statement end so\n\nupdate t set foo = foo + 1;\n\nworks ...\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Mon, 07 Jan 2002 11:52:27 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: ON ERROR triggers" }, { "msg_contents": "On Mon, Jan 07, 2002 at 11:14:29AM -0800, Mikheev, Vadim wrote:\n> I'd rather ask another question -:) How about constraints in DEFERRED mode?\n> Looks like deferred mode allows you to do everything you need - ie make ALL\n> required changes and then check everything when mode changed to immediate.\n> Also note that this would be more flexible than trigger approach - you can\n> change mode of individual constraint.\n> \n> Two glitches though:\n> 1. I believe that currently transaction will be aborted on first error\n> encountered, without checking all other changes for constraint\nviolations.\n\nThat's the problem.\n\n> I suppose this can be easily changed for your needs. And user would just\n> point out what behaviour is required.\n\nI suppose changing this is what I'm doing with my proposed error\nhandlers ;-) For error reporting there is no difference between\nDEFERRED and IMMEDIATE. 
The only advantage DEFERRED provides and for\nwhat it was added to the SQL standard is that some pseudo-errors do not\narise.\n\n> 2. Not sure about CHECK constraints but Uniq/PrimaryKey ones are not\n> deferrable currently -:( And this is muuuuuch worse drawback than absence\n> of comprehensive reports. It's a more complex thing to do than on error\n> triggers but someday it will be implemented because this is a \"must\n> have\"\n> kind of thing.\n\nA simple implementation of deferred UNIQUE constraints could be very\neasily provided based on my error handlers. Imagine a deferred UNIQUE\nindex where a DUPKEY is about to be inserted. When the DUPKEY appears in\nDEFERRED mode my error handler will:\n\n1) not mark the transaction for rollback\n2) add a trigger to the deferred trigger queue to do checks on the DUPKEY\n in the given index\n3) that's all\n\nMaybe not the most efficient way, but a very clean implementation\nbased on error handlers. Maybe now a little bit convinced of error\nhandlers ? Would be glad.\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Tue, 8 Jan 2002 08:59:15 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": false, "msg_subject": "Re: ON ERROR triggers" }, { "msg_contents": "On Tue, 8 Jan 2002, Holger Krug wrote:\n\n> > 2. Not sure about CHECK constraints but Uniq/PrimaryKey ones are not\n> > deferrable currently -:( And this is muuuuuch worse drawback than absence\n> > of comprehensive reports. It's a more complex thing to do than on error\n> > triggers but someday it will be implemented because this is a \"must\n> > have\"\n> > kind of thing.\n>\n> A simple implementation of deferred UNIQUE constraints could be very\n> easily provided based on my error handlers. Imagine a deferred UNIQUE\n> index where a DUPKEY is about to be inserted. 
When the DUPKEY appears in\n> DEFERRED mode my error handler will:\n>\n> 1) not mark the transaction for rollback\n> 2) add a trigger to the deferred trigger queue to do checks on the DUPKEY\n> in the given index\n> 3) that's all\n\nISTM that the above seems to imply that you could make unique\nconstraints that don't actually necessarily constrain to uniqueness (an\nerror handler that say didn't mark for rollback and did nothing to\nenforce it later, or only enforced it in some cases, etc...). If so,\nI'd say that any unique constraint that had an error condition for example\ncouldn't be used as if it guaranteed uniqueness (for example as targets\nof fk constraints).\n\n", "msg_date": "Tue, 8 Jan 2002 01:06:42 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: ON ERROR triggers" }, { "msg_contents": "On Tue, Jan 08, 2002 at 01:06:42AM -0800, Stephan Szabo wrote:\n> On Tue, 8 Jan 2002, Holger Krug wrote:\n> > A simple implementation of deferred UNIQUE constraints could be very\n> > easily provided bases on my error handlers. Imagine a deferred UNIQUE\n> > index where a DUPKEY is up to be inserted. When the DUPKEY appears in\n> > DEFERRED mode my error handler will:\n> >\n> > 1) not mark the transaction for rollback\n> > 2) add a trigger to the deferred trigger queue to do checks on the DUPKEY\n> > in the given index\n> > 3) that's all\n> \n> ISTM that the above seems to imply that you could make unique\n> constraints that don't actually necessarily constrain to uniqueness (an\n> error handler that say didn't mark for rollback and did nothing to\n> enforce it later, or only enforced it in some cases, etc...). 
If so,\n> I'd say that any unique constraint that had an error condition for example\n> couldn't be used as if it guaranteed uniqueness (for example as targets\n> of fk constraints).\n\nWhat I said above was an extension of my original proposal, which consists of:\n1) marking the transaction for rollback\n2) ...\n\nI only wanted to show that the addition I'm going to make to\nPostgreSQL could be used to implement DEFERRED UNIQUE constraints\nin a very simple way. Of course, this special error handler for\nDEFERRED UNIQUE constraints, which puts a trigger with the DUPKEY into\nthat deferred trigger queue, could not be up to the user but must be\nsystem-enforced.\n\nBut - you're right. My previous mail didn't express this explicitly,\nhence your notice is correct. Thank you !\n \n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Tue, 8 Jan 2002 10:21:13 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": false, "msg_subject": "Re: ON ERROR triggers" } ]
[ { "msg_contents": "Currently, system catalogs (pg_*) are assumed to be readable by anyone if\nthe privileges are NULL, as opposed to ordinary tables, which assume only\nowner access if the privileges are NULL.\n\nI'm currently working on privileges for functions (see also Nov. 13\nmessage, which apparently stunned everyone into silence), which will need\nsome sort of similar arrangement, only there's no obvious way to find out\nif a function is a \"system function\".\n\nI think the best solution would be to drop the pg_* exception and\nexplicitly grant the right privileges to each table and function in\ninitdb.\n\nObjections?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 7 Jan 2002 17:35:43 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Default permissions of system catalogs" } ]
[ { "msg_contents": "Hello together,\n\ni would like to add the following changes to the code so that postgres\ndoesn't have any problems if we compile it on machines that have\ngettimeofday with 1 ARG and therefore don't need 'struct timezone xxx'\nI'm working on a System where timezone is defined in another way.\n\n1. nabstime.c\nDatum\ntimeofday(PG_FUNCTION_ARGS)\n{\n...\n#ifndef GETTIMEOFDAY_1ARG\n\tstruct timezone tpz;\n#endif\n...\n}\n\n2.postgres.c\nResetUsage(void)\n{\n#ifndef GETTIMEOFDAY_1ARG\n\tstruct timezone tz;\n#endif\n\n\tgetrusage(RUSAGE_SELF, &Save_r);\n\tgettimeofday(&Save_t, &tz);\n\tResetBufferUsage();\n/*\t ResetTupleCount(); */\n}\n\n\nvoid\nShowUsage(void)\n{\n...\n#ifndef GETTIMEOFDAY_1ARG\n\tstruct timezone tz;\n#endif\n...\n}\n\n3. postmaster.c\nstatic int\nServerLoop(void)\n{\n...\n#ifndef GETTIMEOFDAY_1ARG\n\tstruct timezone tz;\n#endif\n...\n}\n\n4. vacuum.c\nvoid\nvac_init_rusage(VacRUsage *ru0)\n{\n#ifndef GETTIMEOFDAY_1ARG\n\tstruct timezone tz;\n#endif\n\n\tgetrusage(RUSAGE_SELF, &ru0->ru);\n\tgettimeofday(&ru0->tv, &tz);\n}\n\n\nThanks\n\nUlrich Neumann\n\n", "msg_date": "Tue, 8 Jan 2002 10:42:00 +0200", "msg_from": "Ulrich Neumann<u_neumann@gne.de>", "msg_from_op": true, "msg_subject": "GETTIMEOFDAY_1ARG change" }, { "msg_contents": "> i would like to add the following changes to the code so that postgres\n> doesn't have any problems if we compile it on machines that have\n> gettimeofday with 1 ARG and therefore don't need 'struct timezone xxx'\n> I'm working on a System where timezone is defined in another way.\n\nWhat system? How is timezone defined for that system? Is it something\ncompletely new and different, or a variant which we already handle in\nother places but not for this case?\n\n...\n> 2.postgres.c\n> ResetUsage(void)\n> {\n> #ifndef GETTIMEOFDAY_1ARG\n> struct timezone tz;\n> #endif\n> \n> getrusage(RUSAGE_SELF, &Save_r);\n> gettimeofday(&Save_t, &tz);\n...\n\nSo what \"one argument\" does gettimeofday() have? 
Where does \"tz\" come\nfrom if it is not defined here? Does it become a global variable? Where\nis it declared?\n\nafaik the nabstime.c usage of gettimeofday() has been in the PostgreSQL\ncode for quite a while, so I'm surprised that this is a problem on the\nnew mystery platform ;)\n\n - Thomas\n", "msg_date": "Tue, 08 Jan 2002 15:00:11 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: GETTIMEOFDAY_1ARG change" }, { "msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> afaik the nabstime.c usage of gettimeofday() has been in the PostgreSQL\n> code for quite a while, so I'm surprised that this is a problem on the\n> new mystery platform ;)\n\nI imagine he's merely unhappy about seeing \"unused variable\" warnings.\n\nI'm unconvinced that's worth cleaning up, and certainly wouldn't hold\nup 7.2 release for it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 12:19:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GETTIMEOFDAY_1ARG change " } ]
[ { "msg_contents": "Hello together,\n\n\nIn ipc.c, function InternalIpcMemoryCreate there is the following line of code:\nmemAddress = shmat(shmid, 0, 0);\n\nthis line should be changed to:\nmemAddress = (void *) shmat(shmid, 0, 0);\n\n\nat function IpcMemoryCreate there is the following line of code:\nmemAddress = shmat(shmid, 0, 0);\n\nthis line should be changed to:\nmemAddress = (void *) shmat(shmid, 0, 0);\n\n\nThis will avoid problems with MetroWerks CodeWarrior compiler.\n\nThanks\n\nUlrich Neumann\n\n", "msg_date": "Tue, 8 Jan 2002 10:43:00 +0200", "msg_from": "Ulrich Neumann<u_neumann@gne.de>", "msg_from_op": true, "msg_subject": "(void *) with shmat" }, { "msg_contents": "\nWhy is this needed?\n\nshmat is defined as returning a void *. Is it not so with MetroWerks?\n\n\nUlrich Neumann wrote:\n> \n> Hello together,\n> \n> In ipc.c, function InternalIpcMemoryCreate there is the following line of code:\n> memAddress = shmat(shmid, 0, 0);\n> \n> this line should be changed to:\n> memAddress = (void *) shmat(shmid, 0, 0);\n> \n> at function IpcMemoryCreate there is the following line of code:\n> memAddress = shmat(shmid, 0, 0);\n> \n> this line should be changed to:\n> memAddress = (void *) shmat(shmid, 0, 0);\n> \n> This will avoid problems with MetroWerks CodeWarrior compiler.\n> \n> Thanks\n> \n> Ulrich Neumann\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n", "msg_date": "Tue, 08 Jan 2002 07:23:40 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: (void *) with shmat" }, { "msg_contents": "Ulrich Neumann<u_neumann@gne.de> writes:\n> This will avoid problems with MetroWerks CodeWarrior compiler.\n\nWhat, pray tell, does MetroWerks think shmat returns?\n\n\t\t\tregards, tom lane\n", 
"msg_date": "Tue, 08 Jan 2002 09:27:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: (void *) with shmat " } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Peter Eisentraut [mailto:peter_e@gmx.net] \n> Sent: 07 January 2002 22:36\n> To: PostgreSQL Development\n> Subject: Default permissions of system catalogs\n> \n> \n> Currently, system catalogs (pg_*) are assumed to be readable \n> by anyone if the privileges are NULL, as opposed to ordinary \n> tables, which assume only owner access if the privileges are NULL.\n> \n> I'm currently working on privileges for functions (see also \n> Nov. 13 message, which apparently stunned everyone into \n> silence), which will need some sort of similar arrangement, \n> only there's no obvious way to find out if a function is a \n> \"system function\".\n> \n> I think the best solution would be to drop the pg_* exception \n> and explicitly grant the right privileges to each table and \n> function in initdb.\n> \n> Objections?\n\nI assume you are proposing the same privileges that you describe for a user\ntable (i.e. by default only the owner (==superuser) has any access)?\n\nIf so, this would break pgAdmin for any users who are not the superuser on\ntheir system as the majority of its operation relies on examining the\nsystem catalogues. In this case I would *strongly* object.\n\n<thinks...> Surely this would also be the case for psql though - have I\nmisunderstood something?\n\nRegards, Dave.\n", "msg_date": "Tue, 8 Jan 2002 08:48:29 -0000 ", "msg_from": "Dave Page <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: Default permissions of system catalogs" }, { "msg_contents": "Dave Page writes:\n\n> I assume you are proposing the same privileges that you describe for a user\n> table (i.e. 
by default only the owner (==superuser) has any access)?\n\nNo, I'm not proposing to change any privileges, only the place they're\ngranted.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 8 Jan 2002 11:11:05 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Default permissions of system catalogs" }, { "msg_contents": "On Tue, Jan 08, 2002 at 08:48:29AM -0000,\n Dave Page <dpage@vale-housing.co.uk> wrote:\n> \n> If so, this would break pgAdmin for any users who are not the superuser on\n> their system as the majority of its operation relies on examining the\n> system catalogues. In this case I would *strongly* object.\n\nThe impression I got was that he was talking about changing to a consistent\ninterpretation for access rights data.\n\nIf this was done, it should be easy to change the initial security for\npg_* tables to include select access for public.\n", "msg_date": "Tue, 8 Jan 2002 10:23:38 -0600", "msg_from": "Bruno Wolff III <bruno@[66.92.219.49]>", "msg_from_op": false, "msg_subject": "Re: Default permissions of system catalogs" }, { "msg_contents": "> > Objections?\n> \n> I assume you are proposing the same privileges that you describe for a user\n> table (i.e. by default only the owner (==superuser) has any access)?\n> \n> If so, this would break pgAdmin for any users who are not the superuser on\n> their system as the majority of its operation relies on examining the\n> system catalogues. In this case I would *strongly* object.\n> \n> <thinks...> Surely this would also be the case for psql though - have I\n> misunderstood something?\n\nI assumed he was saying that the contents of pg_class permissions should\nbe interpreted the same whether it is a system table or not. 
He would\nset the proper system table permissions so they are visible to all users\nlike it is now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 Jan 2002 13:58:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Default permissions of system catalogs" } ]
[ { "msg_contents": "Hi,\n\nyou're right. The problem is a mistake in a Metrowerks header file.\nI've fixed the Metrowerks header and recompiled the library.\n\nThanks for your quick response.\n\nUlrich Neumann\n\n>>> mlw <markw@mohawksoft.com> 08.01.2002 13:23:40 >>>\n\nWhy is this needed?\n\nshmat is defined as returning a void *. Is it not so with MetroWerks?\n\n\nUlrich Neumann wrote:\n> \n> Hello together,\n> \n> In ipc.c, function InternalIpcMemoryCreate there is the following\nline of code:\n> memAddress = shmat(shmid, 0, 0);\n> \n> this line should be changed to:\n> memAddress = (void *) shmat(shmid, 0, 0);\n> \n> at function IpcMemoryCreate there is the following line of code:\n> memAddress = shmat(shmid, 0, 0);\n> \n> this line should be changed to:\n> memAddress = (void *) shmat(shmid, 0, 0);\n> \n> This will avoid problems with MetroWerks CodeWarrior compiler.\n> \n> Thanks\n> \n> Ulrich Neumann\n> \n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-------------------------------------------\n This mail is virus scanned\n Diese mail ist virusgeprueft\n\n CVP Server Solutions by GNE\n visit us at www.gne.de\n-------------------------------------------\n\n", "msg_date": "Tue, 08 Jan 2002 15:19:08 +0100", "msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>", "msg_from_op": true, "msg_subject": "Antw: Re: (void *) with shmat" } ]
[ { "msg_contents": "I confirm that ecpg now builds again on pickier compilers than gcc.\nThat was my only remaining \"must do for RC1\" item. Are we ready to\npush this puppy out the door?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 10:29:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Finally ready to go for RC1?" } ]
[ { "msg_contents": "Sorry, this was my fault.\n\nforget about this message, but thanks for your response.\n\nUlrich Neumann\n\n>>> Tom Lane <tgl@sss.pgh.pa.us> 08.01.2002 15:27:18 >>>\nUlrich Neumann<u_neumann@gne.de> writes:\n> This will avoid problems with MetroWerks CodeWarrior compiler.\n\nWhat, pray tell, does MetroWerks think shmat returns?\n\n\t\t\tregards, tom lane\n\n---------------------------(end of\nbroadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n", "msg_date": "Tue, 08 Jan 2002 19:39:05 +0100", "msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>", "msg_from_op": true, "msg_subject": "Antw: Re: (void *) with shmat" } ]
[ { "msg_contents": "Hi Thomas,\n\nI'm working on postgres for 2 new platforms, NetWare 6 and a future\n64 bit OS that is under development.\n\nOn NetWare timezone is defined as a global variable, seconds from GMT.\nBut wait, gettimeofday will be added to NetWare so there isn't any\nchange necessary anymore.\n\nOn the other hand the struct isn't used in the mentioned functions,\nso for\n\"correctness\" it could be added.\n\nThanks for your answers\n\nUlrich Neumann\n\n\nUlrich Neumann\n_______________________________________________________\n _ _\n| _ |\\ | |_ GNE GmbH - Brechhoferstr. 1 - 56316 Raubach \n|_| | \\| |_ http://www.gne.de, Tel.: +49 2684-9454-0, Fax -94, \nDavid: -71\n\nGNE Hostmaster (hostmaster@gne.de)\nCNA3, CNA4, CNA5, CNE3, CNE4, CNE5, Pending MCNE\nOfficial Novell SysOp for the Novell DeveloperNet\n\n>>> Thomas Lockhart <lockhart@fourpalms.org> 08.01.2002 16:00:11 >>>\n> i would like to add the following changes to the code so that\npostgres\n> doesn't have any problems if we compile it on machines that have\n> gettimeofday with 1 ARG and therefore don't need 'struct timezone\nxxx'\n> I'm working on a System where timezone is defined in another way.\n\nWhat system? How is timezone defined for that system? Is it something\ncompletely new and different, or a variant which we already handle in\nother places but not for this case?\n\n...\n> 2.postgres.c\n> ResetUsage(void)\n> {\n> #ifndef GETTIMEOFDAY_1ARG\n> struct timezone tz;\n> #endif\n> \n> getrusage(RUSAGE_SELF, &Save_r);\n> gettimeofday(&Save_t, &tz);\n...\n\nSo what \"one argument\" does gettimeofday() have? Where does \"tz\" come\nfrom if it is not defined here? 
Does it become a global variable?\nWhere\nis it declared?\n\nafaik the nabstime.c usage of gettimeofday() has been in the\nPostgreSQL\ncode for quite a while, so I'm surprised that this is a problem on the\nnew mystery platform ;)\n\n - Thomas\n", "msg_date": "Tue, 08 Jan 2002 19:46:36 +0100", "msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>", "msg_from_op": true, "msg_subject": "Antw: Re: GETTIMEOFDAY_1ARG change" } ]
[ { "msg_contents": "I am seeing strange results from date/interval computations involving\nmonths.\n\nI get the correct answers because I have a negative setting relative to\nGMT. Here are my results with TZ=EST5EDT:\n\t\n\ttest=> select '2001/3/1'::date - '1 month'::interval;\n\t ?column? \n\t------------------------\n\t 2001-02-01 00:00:00-05\n\t(1 row)\n\nWith GMT it is OK too:\n\n\ttest=> select '2001/3/1'::date - '1 month'::interval;\n\t ?column? \n\t------------------------\n\t 2001-02-01 00:00:00+00\n\t(1 row)\n\nHowever, with GMT+1 I see a big failure:\n\n\ttest=> select '2001/3/1'::date - '1 month'::interval;\n\t ?column? \n\t------------------------\n\t 2001-01-29 00:00:00+01\n\t(1 row)\n\nWhy does it say 2001-01-29?\n\nThis is interesting:\n\t\n\ttest=> select '2001/7/1'::date - '1 month'::interval;\n\t ?column? \n\t------------------------\n\t 2001-05-31 00:00:00+02\n\t(1 row)\n\t\n\ttest=> select '2001/8/1'::date - '1 month'::interval;\n\t ?column? \n\t------------------------\n\t 2001-07-01 00:00:00+02\n\t(1 row)\n\nBecause August and July have the same number of days, it worked. I am\ngoing to research this but someone may know the solution already.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 Jan 2002 14:10:50 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Strange results with date/interval arithmetic" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am seeing strange results from date/interval computations involving\n> months.\n\nAh, this is what we get for running the regression tests in only one\ntime zone :-(\n\nThe problem appears to be cut-and-paste errors in timestamp.c and\npg_proc.h: various things that should be timestamp are timestamptz\nor vice versa. 
See attached proposed patches.\n\nFixing this causes the horology regress tests to change, apparently\nwith good reason. I would say that\n\n\t'Wed Feb 28 17:32:01 1996 PST'::timestamptz + interval '1 year'\n\nis more nearly Fri Feb 28 17:32:01 1997 PST than\nThu Feb 27 17:32:01 1997 PST (currently enshrined in the expected\nresults). However I have not gone through all the diffs to verify each.\n\nThomas, you said you had additional horology tests to commit; since\nwe are going to have to fix and resync the horology files anyway,\ndo you want to go ahead and add them?\n\nAnother question: do we bump catversion and force an initdb for our\nlong-suffering beta testers, just to adjust two pg_proc entries?\nWe may not have much choice.\n\nSigh. RC1 is off again.\n\n\t\t\tregards, tom lane\n\n*** src/backend/utils/adt/timestamp.c~\tSat Dec 29 19:48:03 2001\n--- src/backend/utils/adt/timestamp.c\tTue Jan 8 16:55:50 2002\n***************\n*** 1290,1296 ****\n }\n \n \n! /* timestamp_pl_span()\n * Add a interval to a timestamp with time zone data type.\n * Note that interval has provisions for qualitative year/month\n *\tunits, so try to do the right thing with them.\n--- 1290,1296 ----\n }\n \n \n! /* timestamptz_pl_span()\n * Add a interval to a timestamp with time zone data type.\n * Note that interval has provisions for qualitative year/month\n *\tunits, so try to do the right thing with them.\n***************\n*** 1371,1377 ****\n \ttspan.month = -span->month;\n \ttspan.time = -span->time;\n \n! \treturn DirectFunctionCall2(timestamp_pl_span,\n \t\t\t\t\t\t\t TimestampGetDatum(timestamp),\n \t\t\t\t\t\t\t PointerGetDatum(&tspan));\n }\n--- 1371,1377 ----\n \ttspan.month = -span->month;\n \ttspan.time = -span->time;\n \n! 
\treturn DirectFunctionCall2(timestamptz_pl_span,\n \t\t\t\t\t\t\t TimestampGetDatum(timestamp),\n \t\t\t\t\t\t\t PointerGetDatum(&tspan));\n }\n*** src/include/catalog/pg_proc.h~\tMon Nov 5 14:44:24 2001\n--- src/include/catalog/pg_proc.h\tTue Jan 8 17:09:38 2002\n***************\n*** 1458,1466 ****\n \n DATA(insert OID = 1188 ( timestamptz_mi PGUID 12 f t t t 2 f 1186 \"1184 1184\" 100 0 0 100 timestamp_mi - ));\n DESCR(\"subtract\");\n! DATA(insert OID = 1189 ( timestamptz_pl_span PGUID 12 f t t t 2 f 1184 \"1184 1186\" 100 0 0 100 timestamp_pl_span - ));\n DESCR(\"plus\");\n! DATA(insert OID = 1190 ( timestamptz_mi_span PGUID 12 f t t t 2 f 1184 \"1184 1186\" 100 0 0 100 timestamp_mi_span - ));\n DESCR(\"minus\");\n DATA(insert OID = 1191 ( timestamptz\t\tPGUID 12 f t f t 1 f 1184 \"25\" 100 0 0 100\ttext_timestamptz - ));\n DESCR(\"convert text to timestamp with time zone\");\n--- 1458,1466 ----\n \n DATA(insert OID = 1188 ( timestamptz_mi PGUID 12 f t t t 2 f 1186 \"1184 1184\" 100 0 0 100 timestamp_mi - ));\n DESCR(\"subtract\");\n! DATA(insert OID = 1189 ( timestamptz_pl_span PGUID 12 f t t t 2 f 1184 \"1184 1186\" 100 0 0 100 timestamptz_pl_span - ));\n DESCR(\"plus\");\n! 
DATA(insert OID = 1190 ( timestamptz_mi_span PGUID 12 f t t t 2 f 1184 \"1184 1186\" 100 0 0 100 timestamptz_mi_span - ));\n DESCR(\"minus\");\n DATA(insert OID = 1191 ( timestamptz\t\tPGUID 12 f t f t 1 f 1184 \"25\" 100 0 0 100\ttext_timestamptz - ));\n DESCR(\"convert text to timestamp with time zone\");\n", "msg_date": "Tue, 08 Jan 2002 17:28:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange results with date/interval arithmetic " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am seeing strange results from date/interval computations involving\n> > months.\n> \n> Ah, this is what we get for running the regression tests in only one\n> time zone :-(\n\nYes, I realized the problem was that \"2001-03-01\" for +1 GMT timezone\ncomes out as \"2001-02-28 23:00 GMT\", and when you subtract a month from\nthat, you get \"2001-01-28 23:00\" and adding the +1 timezone gives you\n\"2001-01-29\" which is not what you expected.\n\n> The problem appears to be cut-and-paste errors in timestamp.c and\n> pg_proc.h: various things that should be timestamp are timestamptz\n> or vice versa. See attached proposed patches.\n\nOh, so that is why the difference between timestamp and timestamptz is\nso important.\n\n> long-suffering beta testers, just to adjust two pg_proc entries?\n> We may not have much choice.\n> \n> Sigh. RC1 is off again.\n\nOuch. :-) \n\nAt least it is before final. Can we give people on hackers an SQL\nscript to run in every database to fix this? That is how we have\nhandled this in the past. Perhaps we can update the catversion as part\nof the patch too. (We have never done that before.)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 Jan 2002 17:36:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Strange results with date/interval arithmetic" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> At least it is before final. Can we give people on hackers an SQL\n> script to run in every database to fix this? That is how we have\n> handled this in the past.\n\nWe have? I don't really know how one would adjust catversion without\nan initdb. (Bear in mind it's inside a binary, CRC-protected control\nfile; couldn't be done without a special-purpose C program AFAICS.)\n\nIf you wanted to *not* bump the catversion then we could let people run\na script to fix the two pg_proc entries, but I think that way is likely\nto do more harm than good in the long run. Too much chance of someone\ncarrying the wrong entries into production and not noticing their wrong\nanswers for a long time.\n\nThe ground rules for beta testers have always been \"you may have to\ninitdb before final\", and I think that's where we are now, annoying\nas it is.\n\n(Too bad we don't have a working pg_upgrade...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 17:45:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Strange results with date/interval arithmetic " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > At least it is before final. Can we give people on hackers an SQL\n> > script to run in every database to fix this? That is how we have\n> > handled this in the past.\n> \n> We have? I don't really know how one would adjust catversion without\n> an initdb. (Bear in mind it's inside a binary, CRC-protected control\n> file; couldn't be done without a special-purpose C program AFAICS.)\n>\n\nYes, that seems like a problem. 
Also, will this affect regression tests\nfor people who don't apply the patch? That makes it extra important that we\nmake sure the patch is applied, which gives more weight to the catversion\nbump.\n\n> If you wanted to *not* bump the catversion then we could let people run\n> a script to fix the two pg_proc entries, but I think that way is likely\n> to do more harm than good in the long run. Too much chance of someone\n> carrying the wrong entries into production and not noticing their wrong\n> answers for a long time.\n\nYes.\n\n> The ground rules for beta testers have always been \"you may have to\n> initdb before final\", and I think that's where we are now, annoying\n> as it is.\n> \n> (Too bad we don't have a working pg_upgrade...)\n\nI could probably get it working tomorrow if people want it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 Jan 2002 17:48:58 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Strange results with date/interval arithmetic" } ]
[ { "msg_contents": "I experimented today (for the first time in a long time) with building\nPostgres outside the source tree. Didn't work:\n\nmake[3]: Entering directory `/home/users/t/tg/tgl/builddir/src/backend/bootstrap'\ngcc -O1 -g -Wall -Wmissing-prototypes -Wmissing-declarations -g -I../../../src/include -I/home/users/t/tg/tgl/pgsql/src/include -c -o bootparse.o /home/users/t/tg/tgl/pgsql/src/backend/bootstrap/bootparse.c\ngcc -O1 -g -Wall -Wmissing-prototypes -Wmissing-declarations -g -I../../../src/include -I/home/users/t/tg/tgl/pgsql/src/include -c -o bootscanner.o /home/users/t/tg/tgl/pgsql/src/backend/bootstrap/bootscanner.c\nbootscanner.l:43: bootstrap_tokens.h: No such file or directory\nmake[3]: *** [bootscanner.o] Error 1\n\nand similarly in src/interfaces/ecpg/preproc.\n\nThe problem was that I already had bison output files built in the\nsource tree. If I remove those, the build goes through (with bison\noutput files built in the object tree). However, since our source\ndistribution tarballs come with prebuilt bison outputs, this means\nthat a VPATH build from a source tarball won't work.\n\nThe simplest fix is probably to add\n\toverride CPPFLAGS := -I$(srcdir) $(CPPFLAGS)\nto the Makefiles in these two directories. (I observe that plpgsql\nalready has this, which is why it fails to fail; backend/parser gets\naround the same problem by installing symlinks.) Any objections?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 14:23:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "VPATH builds fail" }, { "msg_contents": "I said:\n> The problem was that I already had bison output files built in the\n> source tree. If I remove those, the build goes through (with bison\n> output files built in the object tree).\n\nI have to take that back: the bison outputs are rebuilt in the source\ntree, as indeed they should be. 
I'm now fairly confused about why\nthe first build attempt failed and the second succeeded. The failure\noccurred on a machine running gmake 3.78.1 ... maybe this is some\nbug in that version of make? I can't reproduce any problem on a machine\nrunning gmake 3.79.1...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 14:47:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: VPATH builds fail " }, { "msg_contents": "Tom Lane writes:\n\n> I have to take that back: the bison outputs are rebuilt in the source\n> tree, as indeed they should be. I'm now fairly confused about why\n> the first build attempt failed and the second succeeded. The failure\n> occurred on a machine running gmake 3.78.1 ... maybe this is some\n> bug in that version of make? I can't reproduce any problem on a machine\n> running gmake 3.79.1...\n\nI can't reproduce this with either version of gmake, and the rules look\nfairly bullet-proof. Possibly you didn't distclean the source tree before\nconfiguring in the build tree. In that case it might have picked up the\nwrong Makefile.global for part of the build.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 8 Jan 2002 15:27:31 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: VPATH builds fail " }, { "msg_contents": "I said:\n> I have to take that back: the bison outputs are rebuilt in the source\n> tree, as indeed they should be. 
I'm now fairly confused about why\n> the first build attempt failed and the second succeeded.\n\nThe difference appears to be that when bison is run in the source dir,\nits output contains lines like\n\n\t#line 121 \"bootscanner.l\"\n\nHowever, when it's run during a VPATH build, its output contains lines\nlike\n\n\t#line 121 \"/home/postgres/pgsql/src/backend/bootstrap/bootscanner.l\"\n\nevidently because bison is invoked with a full path to the .y file in\nthis case.\n\nThere is *no* difference in the #include commands, but apparently the\n#line directives affect gcc's default search path for include files.\n\nNet result: I'm back to my original statement: VPATH builds will not\nwork with a source distribution tarball. Any objections to the\n-I$(srcdir) trick?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 15:28:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: VPATH builds fail " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I can't reproduce this with either version of gmake, and the rules look\n> fairly bullet-proof. Possibly you didn't distclean the source tree before\n> configuring in the build tree.\n\nYes, I certainly did distclean. The difference is in the bison output\nfiles, which are not removed by distclean; see followup message.\nAFAICT it is not really make's fault, but an obscure gcc behavior that\ncreates the problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 15:30:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: VPATH builds fail " }, { "msg_contents": "Tom Lane writes:\n\n> There is *no* difference in the #include commands, but apparently the\n> #line directives affect gcc's default search path for include files.\n\nI was kind of baffled myself why it worked at all.\n\n> Net result: I'm back to my original statement: VPATH builds will not\n> work with a source distribution tarball. 
Any objections to the\n> -I$(srcdir) trick?\n\nNope.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 8 Jan 2002 17:57:43 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: VPATH builds fail " } ]
[ { "msg_contents": "I am getting the following DEBUG message\n\n\tXLogWrite: new log file created - consider increasing WAL_FILES\n\nWhat would be a good number of files to shoot for? I am currently\nusing the default. Some of my tables are at the gigabyte level.\n\nThanks.\nGary\n", "msg_date": "Tue, 8 Jan 2002 12:03:09 -0800", "msg_from": "\"Gershon M. Wolfe\" <gary.wolfe@lsbc.com>", "msg_from_op": true, "msg_subject": "WAL FILES" }, { "msg_contents": "Gershon M. Wolfe writes:\n\n> I am getting the following DEBUG message\n>\n> \tXLogWrite: new log file created - consider increasing WAL_FILES\n>\n> What would be a good number of files to shoot for? I am currently\n> using the default. Some of my tables are at the gigabyte level.\n\nThis depends on the amount of data modification you do per transaction.\n\nIf the transaction that caused this message is a typical work load (and\nnot just a one-time bulk load, for instance) you can try to raise\nWAL_FILES until the message goes away. I don't have a good formula at\nhand, but basically WAL logs everything that you change during a\ntransaction.\n\nYou could perform a typical (long) transaction, hold off on the commit and\ncheck the size of the accumulated logs under $PGDATA/pg_xlog.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 8 Jan 2002 18:10:40 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: WAL FILES" } ]
[ { "msg_contents": "This could be useful for cleaning out a database (sequences, triggers, \nrules) without deleting it and without having to create it, thus \npreserving permissions/owner information.\n\n", "msg_date": "Tue, 08 Jan 2002 16:19:54 -0600", "msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>", "msg_from_op": true, "msg_subject": "Feature Request: DROP ALL FROM DATABASE database_name" }, { "msg_contents": "Thomas Swan wrote:\n> This could be useful for cleaning out a database (sequences, triggers, \n> rules) without deleting it and without having to create it, thus \n> preserving permissions/owner information.\n\nWhy delete just those? I don't see a compelling usefulness.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 8 Jan 2002 18:33:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Feature Request: DROP ALL FROM DATABASE database_name" }, { "msg_contents": "Thomas Swan <tswan-lst@ics.olemiss.edu> writes:\n> This could be useful for cleaning out a database (sequences, triggers, \n> rules) without deleting it and without having to create it, thus \n> preserving permissions/owner information.\n\nWhat permissions/owner information? 
There won't be any left if we\nremove everything in the database.\n\nAlso, given the possibility that the database has been created from\na nonempty template, it's less than clear exactly what should be\nremoved.\n\nI'd say DROP and CREATE DATABASE is a perfectly presentable way of\nhandling this.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 18:50:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Feature Request: DROP ALL FROM DATABASE database_name " }, { "msg_contents": "Bruce Momjian wrote:\n\n> Thomas Swan wrote:\n> > This could be useful for cleaning out a database (sequences, triggers,\n> > rules) without deleting it and without having to create it, thus\n> > preserving permissions/owner information.\n>\n> Why delete just those? I don't see a compelling usefulness.\n\nYou have to explicitly name all of them. Also, some of those are unnamed\nif they were auto-created.\n\nAlso, the SERIAL datatype creates an auto-named sequence that's not always\neasy to figure out. And, dropping a table leaves that sequence still intact.\n\nOn the owner permissions front, if someone is developing and wants to\nwipe their database and start fresh, the easiest way is to do a drop database\nand create database. But, that would require the createdatabase right.\nBut, in a situation where multiple developers are working on different\ndatabases, this is awkward. The proposed request allows a normal user\nto clean that database and start fresh, so to speak.\n", "msg_date": "Tue, 08 Jan 2002 17:57:34 -0600", "msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>", "msg_from_op": true, "msg_subject": "Re: Feature Request: DROP ALL FROM DATABASE database_name" }, { "msg_contents": "On Tue, 8 Jan 2002, Tom Lane wrote:\n\n> Thomas Swan <tswan-lst@ics.olemiss.edu> writes:\n> > This could be useful for cleaning out a database (sequences, triggers,\n> > rules) without deleting it and without having to create it, thus\n> > preserving permissions/owner information.\n>\n> What permissions/owner information? There won't be any left if we\n> remove everything in the database.\n>\n> Also, given the possibility that the database has been created from\n> a nonempty template, it's less than clear exactly what should be\n> removed.\n>\n> I'd say DROP and CREATE DATABASE is a perfectly presentable way of\n> handling this.\n\nFrom an ISP standpoint, this is something that I'd definitely love to see\n... would save getting requests from users that don't have access to\nanything but their database from having to ask me to DROP/CREATE their\ndatabase :)\n", "msg_date": "Wed, 9 Jan 2002 01:11:11 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Feature Request: DROP ALL FROM DATABASE database_name" }, { "msg_contents": "Marc G. Fournier wrote:\n\n> On Tue, 8 Jan 2002, Tom Lane wrote:\n> > Thomas Swan <tswan-lst@ics.olemiss.edu> writes:\n> > > This could be useful for cleaning out a database (sequences, triggers,\n> > > rules) without deleting it and without having to create it, thus\n> > > preserving permissions/owner information.\n> >\n> > What permissions/owner information? There won't be any left if we\n> > remove everything in the database.\n> >\n> > Also, given the possibility that the database has been created from\n> > a nonempty template, it's less than clear exactly what should be\n> > removed.\n> >\n> > I'd say DROP and CREATE DATABASE is a perfectly presentable way of\n> > handling this.\n>\n> From an ISP standpoint, this is something that I'd definitely love to see\n> ... would save getting requests from users that don't have access to\n> anything but their database from having to ask me to DROP/CREATE their\n> database :)\n\nMarc, I agree. One of the disadvantages of using PostgreSQL in a multiuser\nmultidatabase environment is the user and access controls that are currently\navailable in PostgreSQL.\n\nI have dreamed of the ability to go GRANT ALL ON DATABASE database_name TO\nUSER username and then add a table and then be able to have permissions to\naccess that table. It would almost be worth having a pg_access system table\nthat had the OID or database_name, uid/gid, and rights as an array that could\nbe a gateway to the database.\n\n pg_access\n+----------+--------+---------------------------+\n| database | userid | rights                    |\n+----------+--------+---------------------------+\n| foo      | 101    | select,update             |\n| foo      | 102    | select,update,create,drop |\n+----------+--------+---------------------------+\n\nI'm not sure of the actual information to store, i.e. OID or name, but I\nthink it would be useful from a DBA standpoint and from an ISP standpoint as\nwell. Currently a GRANT ALL ON only does current objects, but will not\ninclude future objects (an added table for example). I do think an overall\ndatabase level permission would be most advantageous.\n\nThe proposed drop command would be a step in making this more usable and\nhopefully increase the userbase and popularity of PostgreSQL.\n\nPersonally, I think the developer group has done an incredible and commendable\njob so far. And, there's no other database that I would prefer to use.\nHowever, the user and rights management is a bit awkward for me. This\nis why I mentioned the above ideas.\n", "msg_date": "Wed, 09 Jan 2002 00:06:02 -0600", "msg_from": "Thomas Swan <tswan-lst@ics.olemiss.edu>", "msg_from_op": true, "msg_subject": "Re: Feature Request: DROP ALL FROM DATABASE database_name" } ]
[ { "msg_contents": "I'm getting a warning:\n\ngcc -O2 -Wall -Wmissing-prototypes -Wmissing-declarations -Wcast-align\n-Wpointer-arith -fpic -I../../../../src/interfaces/ecpg/include\n-I../../../../src/interfaces/libpq -I../../../../src/include -c -o\ndata.o data.c\ndata.c: In function `ECPGget_data':\ndata.c:96: warning: `res' might be used uninitialized in this function\n\nThe code looks pretty suspicious, 'res' and 'ures' might be getting mixed\nup. Check please.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 8 Jan 2002 17:59:23 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "ECPG warning" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The code looks pretty suspicious, 'res' and 'ures' might be getting mixed\n> up. Check please.\n\nClearly a copy-and-paste bug. I committed a fix an hour or so ago.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 08 Jan 2002 19:27:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ECPG warning " } ]
[ { "msg_contents": "Hi,\n\nI have made a new version of pgbench which allows not to update\nbranches and tellers tables, which should significantly reduce the\ncontentions. (See attached patches against current. Note that the\npaches also includes changes removing CHECKPOINT command while\nrunning in initialization mode (pgbench -i)). With the patches you\ncould specify -N option not to update branches and tellers tables.\n\nWith the new pgbench, I ran a test with current and 7.1 and saw\nnot-so-small differences. Any idea to get better performance on 7.2\nand AIX 5L combo?\n\n7.2 with lwlock.patch rev.2\n7.1.3\nAIX 5L 4way with 4GB RAM\ntesting script is same as my previous postings (except -N for pgbench,\nof course).", "msg_date": "Wed, 09 Jan 2002 10:35:36 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "7.1 vs. 7.2 on AIX 5L " }, { "msg_contents": "Tatsuo Ishii wrote:\n> \n> Hi,\n> \n> I have made a new version of pgbench which allows not to update\n> branches and tellers tables, which should significantly reduce the\n> contentions. (See attached patches against current. Note that the\n> paches also includes changes removing CHECKPOINT command while\n> running in initialization mode (pgbench -i)). With the patches you\n> could specify -N option not to update branches and tellers tables.\n> \n> With the new pgbench, I ran a test with current and 7.1 and saw\n> not-so-small differences. 
Any idea to get better performance on 7.2\n> and AIX 5L combo?\n> \n> 7.2 with lwlock.patch rev.2\n> 7.1.3\n> AIX 5L 4way with 4GB RAM\n> testing script is same as my previous postings (except -N for pgbench,\n> of course).\n> \n> ------------------------------------------------------------------------\n> Name: pgbench.patch\n> pgbench.patch Type: Plain Text (Text/Plain)\n> Encoding: 7bit\n> \n> Name: result-Jan-09.png\n> result-Jan-09.png Type: PNG Image (image/png)\n> Encoding: base64\n\nCould you add some labels to lines as Tom did ?\n\nWe can only guess which line is which.\n\n--------------\nHannu\n", "msg_date": "Wed, 09 Jan 2002 12:03:06 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L" }, { "msg_contents": "> Could you add some labels to lines as Tom did ?\n> \n> We can only guess which line is which.\n\nI thought I already added labels. 7.1 is \"+\"(green one), and 7.2 is\n\"rhombus\"(red one).\n--\nTatsuo Ishii\n", "msg_date": "Wed, 09 Jan 2002 21:25:15 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have made a new version of pgbench which allows not to update\n> branches and tellers tables, which should significantly reduce the\n> contentions.\n\nI used this version of pgbench in some fresh runs on RedHat's 4-way SMP\nLinux box. I did several test runs under varying conditions (pgbench\nscale 500 or 50, checkpoint_segments/wal_files either default 3/0 or\n30/5, fsync on or off). I compared current CVS tip (including the\nnow-committed lwlock rev 2 patch) to 7.1.3. The results are attached.\nAs you can see, current beats 7.1 pretty much across the board on that\nhardware. 
The reason seems to be revealed by looking at vmstat output.\nTypical \"vmstat 5\" output for 7.1.3 (here in a 6-client pgbench -N\nrun) is\n\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 1 0 0 0 108444 8920 4917092 0 0 213 0 170 4814 0 1 99\n 1 0 0 0 103592 8948 4921912 0 0 234 357 230 4811 1 1 98\n 0 0 0 0 98776 8968 4926704 0 0 233 428 235 4854 1 1 97\n 0 0 0 0 94300 8980 4931168 0 0 216 423 229 4809 1 2 97\n 0 0 0 0 89960 8984 4935504 0 0 209 771 421 4723 2 2 96\n 0 0 0 0 69280 9016 4956140 0 0 205 842 457 4645 1 2 96\n\nThe system is capable of much greater I/O rates, so neither disks nor\nCPUs are exactly exerting themselves here. In contrast, 7.2 shows:\n\n procs memory swap io system cpu\n r b w swpd free buff cache si so bi bo in cs us sy id\n 2 0 0 0 2927344 9148 1969356 0 0 0 5772 102 13753 61 32 7\n 7 0 0 0 3042272 9148 1969716 0 0 0 2267 2400 14083 58 32 10\n 5 0 0 0 3042168 9148 1970100 0 0 0 2734 1028 12994 53 37 11\n\nI think that 7.1's poor showing here is undoubtedly due to the spinlock\nbackoff algorithm it used --- there is no other way to explain 99% idle\nCPU than that all of the backends are caught in 10-msec select() waits.\n\n> With the new pgbench, I ran a test with current and 7.1 and saw\n> not-so-small differences. Any idea to get better performance on 7.2\n> and AIX 5L combo?\n\nI'm thinking more and more that there must be something weird about the\ncs() routine that we use for spinlocks on AIX. Could someone dig into\nthat and find exactly what it does and whether it's got any performance\nissues?\n\n\t\t\tregards, tom lane", "msg_date": "Thu, 10 Jan 2002 12:43:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1 vs. 
7.2 on AIX 5L " }, { "msg_contents": "Tom Lane wrote:\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I have made a new version of pgbench which allows not to update\n> > branches and tellers tables, which should significantly reduce the\n> > contentions.\n> \n> I used this version of pgbench in some fresh runs on RedHat's 4-way SMP\n> Linux box. I did several test runs under varying conditions (pgbench\n> scale 500 or 50, checkpoint_segments/wal_files either default 3/0 or\n> 30/5, fsync on or off). I compared current CVS tip (including the\n> now-committed lwlock rev 2 patch) to 7.1.3. The results are attached.\n> As you can see, current beats 7.1 pretty much across the board on that\n> hardware. The reason seems to be revealed by looking at vmstat output.\n> Typical \"vmstat 5\" output for 7.1.3 (here in a 6-client pgbench -N\n> run) is\n\nThose are dramatic graphs. Is it the WAL increase that made 7.2 much\nfaster?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 Jan 2002 15:08:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L" }, { "msg_contents": "Tom,\n\nCan I use the fourth graph (scale=50, fsync=on) to show how 7.2 could\noutperform 7.1 on SMP boxes? I'm going to make a presentation at \nNet&Com 2002 (http://expo.nikkeibp.co.jp/netcom/web/e/index.html) the\nday after tomorrow.\n--\nTatsuo Ishii\n\n> I used this version of pgbench in some fresh runs on RedHat's 4-way SMP\n> Linux box. I did several test runs under varying conditions (pgbench\n> scale 500 or 50, checkpoint_segments/wal_files either default 3/0 or\n> 30/5, fsync on or off). I compared current CVS tip (including the\n> now-committed lwlock rev 2 patch) to 7.1.3. 
The results are attached.\n> As you can see, current beats 7.1 pretty much across the board on that\n> hardware. The reason seems to be revealed by looking at vmstat output.\n> Typical \"vmstat 5\" output for 7.1.3 (here in a 6-client pgbench -N\n> run) is\n> \n> procs memory swap io system cpu\n> r b w swpd free buff cache si so bi bo in cs us sy id\n> 1 0 0 0 108444 8920 4917092 0 0 213 0 170 4814 0 1 99\n> 1 0 0 0 103592 8948 4921912 0 0 234 357 230 4811 1 1 98\n> 0 0 0 0 98776 8968 4926704 0 0 233 428 235 4854 1 1 97\n> 0 0 0 0 94300 8980 4931168 0 0 216 423 229 4809 1 2 97\n> 0 0 0 0 89960 8984 4935504 0 0 209 771 421 4723 2 2 96\n> 0 0 0 0 69280 9016 4956140 0 0 205 842 457 4645 1 2 96\n> \n> The system is capable of much greater I/O rates, so neither disks nor\n> CPUs are exactly exerting themselves here. In contrast, 7.2 shows:\n> \n> procs memory swap io system cpu\n> r b w swpd free buff cache si so bi bo in cs us sy id\n> 2 0 0 0 2927344 9148 1969356 0 0 0 5772 102 13753 61 32 7\n> 7 0 0 0 3042272 9148 1969716 0 0 0 2267 2400 14083 58 32 10\n> 5 0 0 0 3042168 9148 1970100 0 0 0 2734 1028 12994 53 37 11\n> \n> I think that 7.1's poor showing here is undoubtedly due to the spinlock\n> backoff algorithm it used --- there is no other way to explain 99% idle\n> CPU than that all of the backends are caught in 10-msec select() waits.\n> \n> > With the new pgbench, I ran a test with current and 7.1 and saw\n> > not-so-small differences. Any idea to get better performance on 7.2\n> > and AIX 5L combo?\n> \n> I'm thinking more and more that there must be something weird about the\n> cs() routine that we use for spinlocks on AIX. Could someone dig into\n> that and find exactly what it does and whether it's got any performance\n> issues?\n> \n> \t\t\tregards, tom lane\n> \n", "msg_date": "Tue, 05 Feb 2002 14:01:00 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L " } ]
[ { "msg_contents": "What is the difference between the postmaster binary and the postgres\nbinary? Does the postmaster act as nothing more than a multiplexor for\npostgres processes or something?\n\nChris\n\n", "msg_date": "Wed, 9 Jan 2002 13:38:21 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "postmaster vs. postgres" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> What is the difference between the postmaster binary and the postgres\n> binary? Does the postmaster act as nothing more than a multiplexor for\n> postgres processes or something?\n\nNone, and yup. There is some stuff about this at\nhttp://developer.postgresql.org/osdn.php (I recommend the \"tour\"\nslides ;-)). I thought we had some graphics about this in the standard\ndocumentation, too, but am not finding it right now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 01:03:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: postmaster vs. postgres " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> What is the difference between the postmaster binary and the postgres\n> binary? Does the postmaster act as nothing more than a multiplexor for\n> postgres processes or something?\n\nThe binaries are the same -- one is a symlink to the other. This allows\nus to fork postgres children of the postmaster with no exec(). The\nbinary does read argv[0] and behaves differently depending on which\nbinary name it was called under.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 Jan 2002 01:07:34 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: postmaster vs. 
postgres" }, { "msg_contents": "On Wed, 9 Jan 2002, Christopher Kings-Lynne wrote:\n\n> What is the difference between the postmaster binary and the postgres\n> binary? Does the postmaster act as nothing more than a multiplexor for\n> postgres processes or something?\n\nThe postmaster 'binary' is a symbolic link to postgres. In reality, the\nnaming convention corresponds programmatically to what you have\ndescribed. If the invocation is done via 'postmaster', PostmasterMain() is\nexecuted; otherwise it is assumed that a standalone backend is executed.\n\nGavin\n\n\n", "msg_date": "Wed, 9 Jan 2002 17:13:04 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: postmaster vs. postgres" } ]
[ { "msg_contents": "Hello,\n I am Srinivas. I want to know whether there is\nany possibility of importing an Oracle database\ninto a PostgreSQL database, i.e., importing an Oracle data dump\nfile into a PostgreSQL database including all\ntables, views, constraints, etc. If possible, please send\nme the procedure for doing the Oracle to PostgreSQL\nconversion of data.\n", "msg_date": "Tue, 8 Jan 2002 22:42:06 -0800 (PST)", "msg_from": "erigeneni srinivasulu <srinuchowdary@yahoo.com>", "msg_from_op": true, "msg_subject": "Converting oracle data in to postgresql" }, { "msg_contents": "Hi,\n\nCheck out: http://techdocs.postgresql.org/\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of erigeneni\n> srinivasulu\n> Sent: Wednesday, 9 January 2002 2:42 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Converting oracle data in to postgresql\n> \n> \n> Hello,\n> I am Srinivas. I want to know whether there is\n> any possibility of importing an Oracle database\n> into a PostgreSQL database, i.e., importing an Oracle data dump\n> file into a PostgreSQL database including all\n> tables, views, constraints, etc. If possible, please send\n> me the procedure for doing the Oracle to PostgreSQL\n> conversion of data.\n> \n", "msg_date": "Wed, 9 Jan 2002 15:12:47 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Converting oracle data in to postgresql" } ]
[ { "msg_contents": "\n> I have dreamed of the ability to go GRANT ALL ON DATABASE database_name TO\n> USER username and then add a table and then\n> be able to have permissions to access that table.\n\nFunny you should mention this ... my roommate is sys admin at a colocation\ncompany that is in the process of switching over to using\npostgresql now. He was asking if there was a way to do exactly this sort of\nthing. He ended up doing \\dt, pasting into another file, and then doing\nGRANTs on all tables. He was mainly just looking for GRANT ALL ON DATABASE,\nnot thinking about future tables, just a way to GRANT on all tables currently\nin the database. It would be nice to extend the object syntax somehow to include all db\nobjects.\n\nBest Regards,\nCarl Garland\n", "msg_date": "Wed, 09 Jan 2002 01:46:38 -0500", "msg_from": "\"carl garland\" <carlhgarland@hotmail.com>", "msg_from_op": true, "msg_subject": "Re: Feature Request: DROP ALL FROM DATABASE database_name" } ]
[ { "msg_contents": "Hi,\n\nI just got a bug report that originated in the programmer not knowing he had\nto include sqlca to use whenever. When I told him he asked me why it isn't\nincluded automatically. Now that's a tricky question. :-)\n\nI do know that Oracle also asks the programmer to include sqlca, but how\nabout other DBs? Infromix? Sybase? DB2? Does anyone know that?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Wed, 9 Jan 2002 08:34:55 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "ECPG: include sqlca" } ]
[ { "msg_contents": "> > Did time become a keyword in 7.2? 7.1.3 allowed it as a column name...\n> > 7.2 rejects it.\n> \n> Yes. We now support SQL99 time and timestamp precision, which require\n> that TIME(p) be a type specification. So there are parts of the grammar\n> which cannot easily fit \"time\" anymore.\n\nIsn't the grammar explicit enough to distinguish a value (in this case a function \ncall) from a type name? It seems a type name will only appear in very specific \ncontexts.\nImho it would be nice if we could allow \"select timestamp(xxx);\",\nand this has been the umpteenth request in this regard, and 7.2 is not even \nreleased yet.\n\nAndreas\n", "msg_date": "Wed, 9 Jan 2002 09:45:09 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Time as keyword" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Imho it would be nice if we could allow \"select timestamp(xxx);\",\n> and this has been the umpteenth request in this regard, and 7.2 is not even \n> released yet.\n\nUnfortunately, there's just no way. If we tried, it would be ambiguous\nwhether, say, \"timestamp(6)\" is a function call or a type name.\n\nThis is not one of my favorite parts of SQL92 syntax :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 10:28:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Time as keyword " }, { "msg_contents": "...\n> Imho it would be nice if we could allow \"select timestamp(xxx);\",\n> and this has been the umpteenth request in this regard, and 7.2 is not even\n> released yet.\n\nafaicr one of the very sticky areas is the SQL99-specified syntax for\ndate/time literals:\n\n timestamp(6) '2001-01-08 04:05:06'\n\nwhich is difficult to reconcile with a function named timestamp:\n\n timestamp(something)\n\n - Thomas\n", "msg_date": "Wed, 09 Jan 2002 16:24:40 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Time as keyword" } ]
[ { "msg_contents": "Michael wrote:\n> I just got a bug report that originated in the programmer not knowing he had\n> to include sqlca to use whenever. When I told him he asked me why it isn't\n> included automatically. Now that's a tricky question. :-)\n> \n> I do know that Oracle also asks the programmer to include \n> sqlca, but how\n> about other DBs? Infromix? Sybase? DB2? Does anyone know that?\n\nInformix includes \n#include <sqlhdr.h>\n#include <sqliapi.h>\nautomatically. \n\nsqlhdr.h then includes sqlca, sqlda, locator.h and most of the other headers.\n\nAndreas\n", "msg_date": "Wed, 9 Jan 2002 10:14:36 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: ECPG: include sqlca" }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n> Michael wrote:\n> > I just got a bug report that originated in the programmer not knowing he had\n> > to include sqlca to use whenever. When I told him he asked me why it isn't\n> > included automatically. Now that's a tricky question. :-)\n> > \n> > I do know that Oracle also asks the programmer to include \n> > sqlca, but how\n> > about other DBs? Infromix? Sybase? DB2? Does anyone know that?\n> \n> Informix includes \n> #include <sqlhdr.h>\n> #include <sqliapi.h>\n> automatically. \n> \n> sqlhdr.h then includes sqlca, sqlda, locator.h and most of the other headers.\n\nAlso, we are allowed to be better than other databases. I recommend\nauto-include, perhaps with a message to the user.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 Jan 2002 10:45:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ECPG: include sqlca" }, { "msg_contents": "On Wed, Jan 09, 2002 at 10:45:44AM -0500, Bruce Momjian wrote:\n> > sqlhdr.h then includes sqlca, sqlda, locator.h and most of the other headers.\n> \n> Also, we are allowed to be better than other databases. I recommend\n> auto-include, perhaps with a message to the user.\n\nVery valid point. I will commit this in a few minutes.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Thu, 10 Jan 2002 11:37:23 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: ECPG: include sqlca" } ]
[ { "msg_contents": "I recently got a Debian bug report about 3 architectures where char is\nunsigned by default. There were 2 locations identified in the code\nwhere a char is compared with a negative value, and should therefore be\ndeclared as a \"signed char\". There may be others in 7.2, but I don't\nmyself have access to a suitable machine for testing.\n\nThe locations I am aware of are:\n\n\tsrc/backend/libpq/hba.c\tGetCharSetByHost(): if (c == EOF)\n\tsrc/backend/utils/init/miscinit.c SetCharSet(): if (c == EOF)\n\nThe architectures are Linux on: arm, powerpc and s390.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"And not only so, but we glory in tribulations also; \n knowing that tribulation worketh patience; And \n patience, experience; and experience, hope; And hope \n maketh not ashamed; because the love of God is shed \n abroad in our hearts by the Holy Ghost which is given \n unto us.\" Romans 5:3-5", "msg_date": "09 Jan 2002 11:43:04 +0000", "msg_from": "Oliver Elphick <olly@lfix.co.uk>", "msg_from_op": true, "msg_subject": "Some architectures need \"signed char\" declarations" }, { "msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n\n> I recently got a Debian bug report about 3 architectures where char is\n> unsigned by default. There were 2 locations identified in the code\n> where a char is compared with a negative value, and should therefore be\n> declared as a \"signed char\". 
There may be others in 7.2, but I don't\n> myself have access to a suitable machine for testing.\n> \n> The locations I am aware of are:\n> \n> \tsrc/backend/libpq/hba.c\tGetCharSetByHost(): if (c == EOF)\n> \tsrc/backend/utils/init/miscinit.c SetCharSet(): if (c == EOF)\n> \n> The architectures are Linux on: arm, powerpc and s390.\n\nHmmm, according to my knowledge of C, 'c' should be an int here, as\nEOF is guaranteed not to collide with any legal char value. With 'c'\na signed char, ASCII 255 and EOF would be indistinguishable on\ntwos-complement machines. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "09 Jan 2002 08:42:00 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Some architectures need \"signed char\" declarations" }, { "msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n> Hmmm, according to my knowledge of C, 'c' should be an int here, as\n> EOF is guaranteed not to collide with any legal char value.\n\nI agree with Doug: EOF is not supposed to be equal to any value of\n'char', therefore changing the variables to signed char will merely\nbreak something else. 
Probably the variables should be int; are their\n> values coming from getc() or some such? Will look at it.\n\nI deleted the original post, but I think the issue was signed\nversus unsigned comparisons. I think he was saying the\nvariable should be explicitly declared as 'signed int'\n(or signed char) and not 'int' (or char) because EOF is (-1).\n\n\tunsigned int foo;\n\n\tif (foo == -1) ...\tcauses a warning (or errors)\n\t\t\t\ton many compilers.\n\nAnd if the default for int or char is unsigned as it can\nbe on some systems, the code does exactly that.\n\nPerhaps he is just wanted to reduce the build time noise?\n\nApologies if this was not on point.", "msg_date": "Wed, 09 Jan 2002 10:37:38 -0700", "msg_from": "Doug Royer <Doug@royer.com>", "msg_from_op": false, "msg_subject": "Re: Some architectures need \"signed char\" declarations" }, { "msg_contents": "Oliver Elphick <olly@lfix.co.uk> writes:\n> I recently got a Debian bug report about 3 architectures where char is\n> unsigned by default. There were 2 locations identified in the code\n> where a char is compared with a negative value, and should therefore be\n> declared as a \"signed char\". There may be others in 7.2, but I don't\n> myself have access to a suitable machine for testing.\n\n> The locations I am aware of are:\n\n> \tsrc/backend/libpq/hba.c\tGetCharSetByHost(): if (c =3D=3D EOF)\n> \tsrc/backend/utils/init/miscinit.c SetCharSet(): if (c =3D=3D EOF)\n\nFix committed. I looked at every use of \"EOF\" in the distribution, and\nthose two are the only ones I could find that were wrong. 
I did also\nfind a place where the result of \"getopt\" was incorrectly stored in a\n\"char\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 14:15:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some architectures need \"signed char\" declarations " }, { "msg_contents": "Doug Royer <Doug@royer.com> writes:\n\n> I deleted the original post, but I think the issue was signed\n> versus unsigned comparisons. I think he was saying the\n> variable should be explicitly declared as 'signed int'\n> (or signed char) and not 'int' (or char) because EOF is (-1).\n> \n> \tunsigned int foo;\n> \n> \tif (foo == -1) ...\tcauses a warning (or errors)\n> \t\t\t\ton many compilers.\n> \n> And if the default for int or char is unsigned as it can\n> be on some systems, the code does exactly that.\n> \n> Perhaps he is just wanted to reduce the build time noise?\n> \n> Apologies if this was not on point.\n\nThe point is that this is potentially buggy code. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "09 Jan 2002 15:03:54 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Some architectures need \"signed char\" declarations" }, { "msg_contents": "Doug Royer <Doug@royer.com> writes:\n> And if the default for int or char is unsigned as it can\n> be on some systems, the code does exactly that.\n\nThere are no systems where \"int\" means \"unsigned int\". 
That would break\n(to a first approximation) every C program in existence, as well as\nviolate the ANSI C specification.\n\nThe variables in question do need to be \"int\" not any flavor of \"char\",\nbut we don't need to say \"signed int\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 15:30:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Some architectures need \"signed char\" declarations " } ]
[ { "msg_contents": "The current version of the interfaces/libpgtcl/Makefile fails to build a\nshared lib on platforms where the shlib needs to resolve symbols from \nthe loading executable (tclsh) at link time. (only affects AIX?)\n\nThe current version of pl/tcl/Makefile fails to build on some platforms,\ne.g. that use a packaged version of tcl that was built with a now\nunavailable compiler or that need an exports file to build shlibs. \nThe patch simplifies this Makefile considerably.\n\nMakefile.shlib fails to clean correctly on some platforms.\n\nI thus ask you again to consider applying this improved patch that fixes\nabove problems before RC1.\nElse please keep it, to apply after release.\n\nThanks\nAndreas", "msg_date": "Wed, 9 Jan 2002 12:58:36 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "--with-tcl build on AIX (and others) fails" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> The current version of pl/tcl/Makefile fails to build on some platforms,\n> e.g. that use a packaged version of tcl that was built with a now\n> unavailable compiler or that need an exports file to build shlibs.\n> The patch simplifies this Makefile considerably.\n\nI don't think we dare risk a wholesale change in the way pltcl is built\nfor 7.2. We're already through the ports testing process for 7.2, and\nI don't want to start it over again.\n\nThis does look like the direction to go in for 7.3, however. (I think\nPeter had already muttered something about getting rid of the existing\npltcl build process in favor of using Makefile.shlib.)\n\nAs for the libpgtcl part of the patch, I don't understand why the Tcl\nlibrary should be linked into libpgtcl. 
libpgtcl is supposed to be\nloaded into a Tcl interpreter, not carry its own interpreter along with\nit; but it sure looks like that's what will happen with this patch.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 11:05:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: --with-tcl build on AIX (and others) fails " }, { "msg_contents": "Saved for 7.3:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n---------------------------------------------------------------------------\n\nZeugswetter Andreas SB SD wrote:\n> \n> The current version of the interfaces/libpgtcl/Makefile fails to build a\n> shared lib on platforms where the shlib needs to resolve symbols from \n> the loading executable (tclsh) at link time. (only affects AIX?)\n> \n> The current version of pl/tcl/Makefile fails to build on some platforms,\n> e.g. that use a packaged version of tcl that was built with a now\n> unavailable compiler or that need an exports file to build shlibs. \n> The patch simplifies this Makefile considerably.\n> \n> Makefile.shlib fails to clean correctly on some platforms.\n> \n> I thus ask you again to consider applying this improved patch that fixes\n> above problems before RC1.\n> Else please keep it, to apply after release.\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 Jan 2002 12:19:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] --with-tcl build on AIX (and others) fails" }, { "msg_contents": "I tried out the patch that Andreas sent in a couple weeks ago, and find\nthat as-is it fails on HPUX. The attached version works okay on both\nHPUX 10.20 and RH Linux 7.2, however. Anyone want to try it on other\nplatforms? 
(Note this only covers Andreas' proposed changes for pltcl,\nnot for libpgtcl.)\n\n\t\t\tregards, tom lane\n\n\n*** src/Makefile.shlib.orig\tSun Nov 11 20:45:36 2001\n--- src/Makefile.shlib\tSun Jan 20 17:03:25 2002\n***************\n*** 400,406 ****\n clean-lib:\n \trm -f lib$(NAME).a\n ifeq ($(enable_shared), yes)\n! \trm -f $(shlib) lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION) lib$(NAME)$(DLSUFFIX)\n endif\n ifeq ($(PORTNAME), win)\n \trm -rf $(NAME).def\n--- 400,409 ----\n clean-lib:\n \trm -f lib$(NAME).a\n ifeq ($(enable_shared), yes)\n! \trm -f lib$(NAME)$(DLSUFFIX) lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION) lib$(NAME)$(DLSUFFIX).$(SO_MAJOR_VERSION).$(SO_MINOR_VERSION)\n! ifdef EXPSUFF\n! \trm -f lib$(NAME)$(EXPSUFF)\n! endif\n endif\n ifeq ($(PORTNAME), win)\n \trm -rf $(NAME).def\n*** src/pl/tcl/Makefile.orig\tSat Oct 13 00:23:50 2001\n--- src/pl/tcl/Makefile\tSun Jan 20 17:20:56 2002\n***************\n*** 26,97 ****\n endif\n endif\n \n! \n! # Change following to how shared library that contains references to\n! # libtcl must get built on your system. Since these definitions come\n! # from the tclConfig.sh script, they should work if the shared build\n! # of tcl was successful on this system. However, tclConfig.sh lies to\n! # us a little bit (at least in versions 7.6 through 8.0.4) --- it\n! # doesn't mention -lc in TCL_LIBS, but you still need it on systems\n! # that want to hear about dependent libraries...\n \n ifneq ($(TCL_SHLIB_LD_LIBS),)\n # link command for a shared lib must mention shared libs it uses\n! SHLIB_EXTRA_LIBS=$(TCL_LIBS) -lc\n else\n ifeq ($(PORTNAME), hpux)\n # link command for a shared lib must mention shared libs it uses,\n # even though Tcl doesn't think so...\n! SHLIB_EXTRA_LIBS=$(TCL_LIBS) -lc\n else\n # link command for a shared lib must NOT mention shared libs it uses\n! SHLIB_EXTRA_LIBS=\n! 
endif\n endif\n- \n- %$(TCL_SHLIB_SUFFIX): %.o\n- \t$(TCL_SHLIB_LD) -o $@ $< $(TCL_LIB_SPEC) $(SHLIB_EXTRA_LIBS)\n- \n- \n- CC = $(TCL_CC)\n- \n- # Since we are using Tcl's choice of C compiler, which might not be\n- # the same one selected for Postgres, do NOT use CFLAGS from\n- # Makefile.global. Instead use TCL's CFLAGS plus necessary -I\n- # directives.\n- \n- # Can choose either TCL_CFLAGS_OPTIMIZE or TCL_CFLAGS_DEBUG here, as\n- # needed\n- override CPPFLAGS += $(TCL_DEFS)\n- override CFLAGS = $(TCL_CFLAGS_OPTIMIZE) $(TCL_SHLIB_CFLAGS)\n- \n- \n- #\n- # DLOBJS is the dynamically-loaded object file.\n- #\n- DLOBJS= pltcl$(DLSUFFIX)\n- \n- INFILES= $(DLOBJS) \n- \n- #\n- # plus exports files\n- #\n- ifdef EXPSUFF\n- INFILES+= $(DLOBJS:.o=$(EXPSUFF))\n endif\n \n \n! # Provide dummy targets for the case where we can't build the shared library.\n \n ifeq ($(TCL_SHARED_BUILD), 1)\n \n! all: $(INFILES)\n \t$(MAKE) -C modules $@\n \n- pltcl$(DLSUFFIX): pltcl.o\n- \n install: all installdirs\n! \t$(INSTALL_SHLIB) $(DLOBJS) $(DESTDIR)$(pkglibdir)/$(DLOBJS)\n \t$(MAKE) -C modules $@\n \n installdirs:\n--- 26,64 ----\n endif\n endif\n \n! # Set up extra libs that must be mentioned in pltcl.so's link command.\n! # Aside from libtcl.so, on many platforms we must mention the shared\n! # libraries that libtcl.so depends on. Don't forget -lc, which the\n! # Tcl makefiles unaccountably exclude from $(TCL_LIBS).\n \n ifneq ($(TCL_SHLIB_LD_LIBS),)\n # link command for a shared lib must mention shared libs it uses\n! SHLIB_LINK=$(TCL_LIB_SPEC) $(TCL_LIBS) -lc\n else\n ifeq ($(PORTNAME), hpux)\n # link command for a shared lib must mention shared libs it uses,\n # even though Tcl doesn't think so...\n! SHLIB_LINK=$(TCL_LIB_SPEC) $(TCL_LIBS) -lc\n else\n # link command for a shared lib must NOT mention shared libs it uses\n! SHLIB_LINK=$(TCL_LIB_SPEC)\n endif\n endif\n \n+ NAME = pltcl\n+ SO_MAJOR_VERSION = 2\n+ SO_MINOR_VERSION = 0\n+ OBJS = pltcl.o\n \n! 
include $(top_srcdir)/src/Makefile.shlib\n \n ifeq ($(TCL_SHARED_BUILD), 1)\n \n! all: all-lib\n \t$(MAKE) -C modules $@\n \n install: all installdirs\n! \t$(INSTALL_SHLIB) $(shlib) $(DESTDIR)$(pkglibdir)/$(NAME)$(DLSUFFIX)\n \t$(MAKE) -C modules $@\n \n installdirs:\n***************\n*** 99,105 ****\n \t$(MAKE) -C modules $@\n \n uninstall:\n! \trm -f $(DESTDIR)$(pkglibdir)/$(DLOBJS)\n \t$(MAKE) -C modules $@\n \n else # TCL_SHARED_BUILD = 0\n--- 66,72 ----\n \t$(MAKE) -C modules $@\n \n uninstall:\n! \trm -f $(DESTDIR)$(pkglibdir)/$(NAME)$(DLSUFFIX)\n \t$(MAKE) -C modules $@\n \n else # TCL_SHARED_BUILD = 0\n***************\n*** 114,119 ****\n Makefile.tcldefs: mkMakefile.tcldefs.sh\n \t$(SHELL) $< '$(TCL_CONFIG_SH)' '$@'\n \n! clean distclean maintainer-clean:\n! \trm -f $(INFILES) pltcl.o Makefile.tcldefs\n \t$(MAKE) -C modules $@\n--- 81,86 ----\n Makefile.tcldefs: mkMakefile.tcldefs.sh\n \t$(SHELL) $< '$(TCL_CONFIG_SH)' '$@'\n \n! clean distclean maintainer-clean: clean-lib\n! \trm -f $(OBJS) Makefile.tcldefs\n \t$(MAKE) -C modules $@", "msg_date": "Sun, 20 Jan 2002 17:55:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: --with-tcl build on AIX (and others) fails " }, { "msg_contents": "Tom Lane writes:\n\n> I tried out the patch that Andreas sent in a couple weeks ago, and find\n> that as-is it fails on HPUX. The attached version works okay on both\n> HPUX 10.20 and RH Linux 7.2, however. Anyone want to try it on other\n> platforms? (Note this only covers Andreas' proposed changes for pltcl,\n> not for libpgtcl.)\n\nBack to this one...\n\nI think we're hung up on this part:\n\n> ! # Set up extra libs that must be mentioned in pltcl.so's link command.\n> ! # Aside from libtcl.so, on many platforms we must mention the shared\n> ! # libraries that libtcl.so depends on. Don't forget -lc, which the\n> ! 
# Tcl makefiles unaccountably exclude from $(TCL_LIBS).\n>\n> ifneq ($(TCL_SHLIB_LD_LIBS),)\n> # link command for a shared lib must mention shared libs it uses\n> ! SHLIB_LINK=$(TCL_LIB_SPEC) $(TCL_LIBS) -lc\n> else\n> ifeq ($(PORTNAME), hpux)\n> # link command for a shared lib must mention shared libs it uses,\n> # even though Tcl doesn't think so...\n> ! SHLIB_LINK=$(TCL_LIB_SPEC) $(TCL_LIBS) -lc\n> else\n> # link command for a shared lib must NOT mention shared libs it uses\n> ! SHLIB_LINK=$(TCL_LIB_SPEC)\n> endif\n> endif\n\nThis is still wrong because it depends on information that Tcl generated\nduring its build using its compiler and linker configuration.\n\nAlso, if you use GCC to link, specifying -lc is almost certainly wrong in\nany case.\n\nI think what should work is this: Assign\n\nSHLIB_LINK = $(TCL_LIB_SPEC) $(TCL_LIBS)\n\nunconditionally. If the port doesn't like mention of shared lib\ndependencies, it should ignore SHLIB_LINK in Makefile.shlib. If the port\nwants to have -lc, it should add it to SHLIB_LINK in Makefile.shlib.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 22 Feb 2002 23:42:05 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: --with-tcl build on AIX (and others) fails " } ]
[ { "msg_contents": "Tom asked about pg_upgrade as part of our initdb for timezone.\n\nI have made some improvements to pg_upgrade in CVS and have successfully\nmigrated a regression database from a 7.2 to another 7.2 database using\nit. (At least the tables show some data; very light testing.)\n\npg_upgrade is still disabled in CVS, it doesn't install, and there is no\nmanual page so it is still an unused command. I have made the commit so\npeople can review where I have gone and make comments.\n\nTo test it, you have to find the line that says 7.2 and remove the '#'\ncomment. This is for testing purposes only, so far.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 Jan 2002 11:04:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "pg_upgrade" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom asked about pg_upgrade as part of our initdb for timezone.\n> \n> I have made some improvements to pg_upgrade in CVS and have successfully\n> migrated a regression database from a 7.2 to another 7.2 database using\n> it. (At least the tables show some data; very light testing.)\n\nHere is a patch I need to /contrib/pg_resetxlog to support a new \"-x\nXID\" option to set the XID in pg_control. Patch attached. This is the\nlast feature I needed for a functioning pg_upgrade for 7.1->7.2 and\n7.2->7.2 databases.\n\nMany commercial distributions like this script, and with our\nnewly-needed initdb to fix our timezonetz problem, it seemed like a good\ntime. :-) It certainly reduces upgrade time.\n\n(BTW, where are we on that timezonetz patch anyway? 
Tom posted it two\ndays ago and I haven't seen any comments.)\n\npg_upgrade is still disabled.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: contrib/pg_resetxlog/README.pg_resetxlog\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/pg_resetxlog/README.pg_resetxlog,v\nretrieving revision 1.1\ndiff -c -r1.1 README.pg_resetxlog\n*** contrib/pg_resetxlog/README.pg_resetxlog\t2001/03/14 00:57:43\t1.1\n--- contrib/pg_resetxlog/README.pg_resetxlog\t2002/01/10 06:09:04\n***************\n*** 21,26 ****\n--- 21,29 ----\n Then run pg_resetxlog, and finally install and start the new version of\n the database software.\n \n+ A tertiary purpose it its use by pg_upgrade to set the next transaction\n+ id in pg_control.\n+ \n To run the program, make sure your postmaster is not running, then\n (as the Postgres admin user) do\n \nIndex: contrib/pg_resetxlog/pg_resetxlog.c\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/pg_resetxlog/pg_resetxlog.c,v\nretrieving revision 1.10\ndiff -c -r1.10 pg_resetxlog.c\n*** contrib/pg_resetxlog/pg_resetxlog.c\t2001/11/05 17:46:23\t1.10\n--- contrib/pg_resetxlog/pg_resetxlog.c\t2002/01/10 06:09:05\n***************\n*** 709,742 ****\n * Write out the new pg_control file.\n */\n static void\n! RewriteControlFile(void)\n {\n \tint\t\t\tfd;\n \tchar\t\tbuffer[BLCKSZ]; /* need not be aligned */\n \n! \t/*\n! \t * Adjust fields as needed to force an empty XLOG starting at the next\n! \t * available segment.\n! \t */\n! \tnewXlogId = ControlFile.logId;\n! \tnewXlogSeg = ControlFile.logSeg;\n! \t/* be sure we wrap around correctly at end of a logfile */\n! \tNextLogSeg(newXlogId, newXlogSeg);\n! \n! \tControlFile.checkPointCopy.redo.xlogid = newXlogId;\n! 
\tControlFile.checkPointCopy.redo.xrecoff =\n! \t\tnewXlogSeg * XLogSegSize + SizeOfXLogPHD;\n! \tControlFile.checkPointCopy.undo = ControlFile.checkPointCopy.redo;\n! \tControlFile.checkPointCopy.time = time(NULL);\n! \n! \tControlFile.state = DB_SHUTDOWNED;\n! \tControlFile.time = time(NULL);\n! \tControlFile.logId = newXlogId;\n! \tControlFile.logSeg = newXlogSeg + 1;\n! \tControlFile.checkPoint = ControlFile.checkPointCopy.redo;\n! \tControlFile.prevCheckPoint.xlogid = 0;\n! \tControlFile.prevCheckPoint.xrecoff = 0;\n! \n \t/* Contents are protected with a CRC */\n \tINIT_CRC64(ControlFile.crc);\n \tCOMP_CRC64(ControlFile.crc,\n--- 709,747 ----\n * Write out the new pg_control file.\n */\n static void\n! RewriteControlFile(TransactionId set_xid)\n {\n \tint\t\t\tfd;\n \tchar\t\tbuffer[BLCKSZ]; /* need not be aligned */\n \n! \tif (set_xid == 0)\n! \t{\n! \t\t/*\n! \t\t * Adjust fields as needed to force an empty XLOG starting at the next\n! \t\t * available segment.\n! \t\t */\n! \t\tnewXlogId = ControlFile.logId;\n! \t\tnewXlogSeg = ControlFile.logSeg;\n! \t\t/* be sure we wrap around correctly at end of a logfile */\n! \t\tNextLogSeg(newXlogId, newXlogSeg);\n! \t\n! \t\tControlFile.checkPointCopy.redo.xlogid = newXlogId;\n! \t\tControlFile.checkPointCopy.redo.xrecoff =\n! \t\t\tnewXlogSeg * XLogSegSize + SizeOfXLogPHD;\n! \t\tControlFile.checkPointCopy.undo = ControlFile.checkPointCopy.redo;\n! \t\tControlFile.checkPointCopy.time = time(NULL);\n! \t\n! \t\tControlFile.state = DB_SHUTDOWNED;\n! \t\tControlFile.time = time(NULL);\n! \t\tControlFile.logId = newXlogId;\n! \t\tControlFile.logSeg = newXlogSeg + 1;\n! \t\tControlFile.checkPoint = ControlFile.checkPointCopy.redo;\n! \t\tControlFile.prevCheckPoint.xlogid = 0;\n! \t\tControlFile.prevCheckPoint.xrecoff = 0;\n! \t}\n! \telse\n! \t\tControlFile.checkPointCopy.nextXid = set_xid;\n! 
\t\n \t/* Contents are protected with a CRC */\n \tINIT_CRC64(ControlFile.crc);\n \tCOMP_CRC64(ControlFile.crc,\n***************\n*** 926,934 ****\n static void\n usage(void)\n {\n! \tfprintf(stderr, \"Usage: pg_resetxlog [-f] [-n] PGDataDirectory\\n\\n\"\n! \t\t\t\" -f\\tforce update to be done\\n\"\n! \t\t\t\" -n\\tno update, just show extracted pg_control values (for testing)\\n\");\n \texit(1);\n }\n \n--- 931,940 ----\n static void\n usage(void)\n {\n! \tfprintf(stderr, \"Usage: pg_resetxlog [-f] [-n] [-x xid] PGDataDirectory\\n\"\n! \t\t\t\" -f\\tforce update to be done\\n\"\n! \t\t\t\" -n\\tno update, just show extracted pg_control values (for testing)\\n\"\n! \t\t\t\" -x XID\\tset XID in pg_control\\n\");\n \texit(1);\n }\n \n***************\n*** 939,944 ****\n--- 945,951 ----\n \tint\t\t\targn;\n \tbool\t\tforce = false;\n \tbool\t\tnoupdate = false;\n+ \tTransactionId set_xid = 0;\n \tint\t\t\tfd;\n \tchar\t\tpath[MAXPGPATH];\n \n***************\n*** 950,955 ****\n--- 957,974 ----\n \t\t\tforce = true;\n \t\telse if (strcmp(argv[argn], \"-n\") == 0)\n \t\t\tnoupdate = true;\n+ \t\telse if (strcmp(argv[argn], \"-x\") == 0)\n+ \t\t{\n+ \t\t\targn++;\n+ \t\t\tif (argn == argc)\n+ \t\t\t\tusage();\n+ \t\t\tset_xid = strtoul(argv[argn], NULL, 0);\n+ \t\t\tif (set_xid == 0)\n+ \t\t\t{\n+ \t\t\t\tfprintf(stderr, \"XID can not be 0.\");\n+ \t\t\t\texit(1);\n+ \t\t\t}\n+ \t\t}\n \t\telse\n \t\t\tusage();\n \t}\n***************\n*** 993,998 ****\n--- 1012,1031 ----\n \t\tGuessControlValues();\n \n \t/*\n+ \t * Set XID in pg_control and exit\n+ \t */\n+ \tif (set_xid)\n+ \t{\n+ \t\tif (guessed)\n+ \t\t{\n+ \t\t\tprintf(\"\\npg_control appears corrupt. Can not update XID.\\n\");\n+ \t\t\texit(1);\n+ \t\t}\n+ \t\tRewriteControlFile(set_xid);\n+ \t\texit(0);\n+ \t}\n+ \n+ \t/*\n \t * If we had to guess anything, and -f was not given, just print the\n \t * guessed values and exit. 
Also print if -n is given.\n \t */\n***************\n*** 1018,1024 ****\n \t/*\n \t * Else, do the dirty deed.\n \t */\n! \tRewriteControlFile();\n \tKillExistingXLOG();\n \tWriteEmptyXLOG();\n \n--- 1051,1057 ----\n \t/*\n \t * Else, do the dirty deed.\n \t */\n! \tRewriteControlFile(0);\n \tKillExistingXLOG();\n \tWriteEmptyXLOG();", "msg_date": "Thu, 10 Jan 2002 01:09:58 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Here is a patch I need to /contrib/pg_resetxlog to support a new \"-x\n> XID\" option to set the XID in pg_control.\n\nI don't like this patch. It seems weird to add -x as an independent\nfunction rather than just have pg_resetxlog do its normal thing and\nallow -x to override the xid value. -x defined that way makes sense\nin the context of pg_resetxlog's original mission (in particular, one\nshould be able to use it in the situation where the old pg_control is\nunrecoverable). Also, there's no good reason for pg_upgrade not to\nreset the xlog --- certainly we would not want the records therein to\nbe replayed against the pg_upgraded database!\n\nThere is a more serious problem, also. Pages transferred over from the\nold database will contain LSN values pointing into the old xlog. If\nthese are past the end of the new database's xlog (very probable) then\nyou have a strong risk of \"XLogFlush: request past end of xlog\" errors,\nwhich per Vadim's insistence we treat as a system-wide fatal condition.\n\nProbably the cleanest way to deal with that is to tweak pg_resetxlog\nfurther to have an optional switch with a minimum xlog position.\nIt already knows how to set up its cleared xlog with a position >=\nend of the removed log, so you could have an additional option switch\nthat forces the new position to be >= switch value. 
To issue the\nswitch, pg_upgrade would have to look at the old xlog files to determine\nthe endpoint of the old xlog. Seems messy but not impossible.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jan 2002 12:52:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Here is a patch I need to /contrib/pg_resetxlog to support a new \"-x\n> > XID\" option to set the XID in pg_control.\n> \n> I don't like this patch. It seems weird to add -x as an independent\n> function rather than just have pg_resetxlog do its normal thing and\n> allow -x to override the xid value. -x defined that way makes sense\n> in the context of pg_resetxlog's original mission (in particular, one\n> should be able to use it in the situation where the old pg_control is\n> unrecoverable). Also, there's no good reason for pg_upgrade not to\n> reset the xlog --- certainly we would not want the records therein to\n> be replayed against the pg_upgraded database!\n\nOK, if we want to reset WAL at the same time, which does make sense as\nyou say, here is the patch. This is even easier for me. It just\noptionally sets the XID as part of the normal operation. (I am going to\ncommit this patch because it is better for you and smaller than the one\nI just committed from last night.)\n\n> There is a more serious problem, also. Pages transferred over from the\n> old database will contain LSN values pointing into the old xlog. 
If\n> these are past the end of the new database's xlog (very probable) then\n> you have a strong risk of \"XLogFlush: request past end of xlog\" errors,\n> which per Vadim's insistence we treat as a system-wide fatal condition.\n> \n> Probably the cleanest way to deal with that is to tweak pg_resetxlog\n> further to have an optional switch with a minimum xlog position.\n> It already knows how to set up its cleared xlog with a position >=\n> end of the removed log, so you could have an additional option switch\n> that forces the new position to be >= switch value. To issue the\n> switch, pg_upgrade would have to look at the old xlog files to determine\n> the endpoint of the old xlog. Seems messy but not impossible.\n\nWow, that sounds hard. Can you give me some hints which pg_control\nfield that is in?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: pg_resetxlog.c\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/pg_resetxlog/pg_resetxlog.c,v\nretrieving revision 1.10\ndiff -c -r1.10 pg_resetxlog.c\n*** pg_resetxlog.c\t2001/11/05 17:46:23\t1.10\n--- pg_resetxlog.c\t2002/01/10 18:01:51\n***************\n*** 23,29 ****\n * Portions Copyright (c) 1996-2001, PostgreSQL Global Development Group\n * Portions Copyright (c) 1994, Regents of the University of California\n *\n! * $Header: /cvsroot/pgsql/contrib/pg_resetxlog/pg_resetxlog.c,v 1.10 2001/11/05 17:46:23 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n--- 23,29 ----\n * Portions Copyright (c) 1996-2001, PostgreSQL Global Development Group\n * Portions Copyright (c) 1994, Regents of the University of California\n *\n! 
* $Header: /cvsroot/pgsql/contrib/pg_resetxlog/pg_resetxlog.c,v 1.11 2002/01/10 17:51:52 momjian Exp $\n *\n *-------------------------------------------------------------------------\n */\n***************\n*** 709,715 ****\n * Write out the new pg_control file.\n */\n static void\n! RewriteControlFile(void)\n {\n \tint\t\t\tfd;\n \tchar\t\tbuffer[BLCKSZ]; /* need not be aligned */\n--- 709,715 ----\n * Write out the new pg_control file.\n */\n static void\n! RewriteControlFile(TransactionId set_xid)\n {\n \tint\t\t\tfd;\n \tchar\t\tbuffer[BLCKSZ]; /* need not be aligned */\n***************\n*** 737,742 ****\n--- 737,745 ----\n \tControlFile.prevCheckPoint.xlogid = 0;\n \tControlFile.prevCheckPoint.xrecoff = 0;\n \n+ \tif (set_xid != 0)\n+ \t\tControlFile.checkPointCopy.nextXid = set_xid;\n+ \t\n \t/* Contents are protected with a CRC */\n \tINIT_CRC64(ControlFile.crc);\n \tCOMP_CRC64(ControlFile.crc,\n***************\n*** 926,934 ****\n static void\n usage(void)\n {\n! \tfprintf(stderr, \"Usage: pg_resetxlog [-f] [-n] PGDataDirectory\\n\\n\"\n! \t\t\t\" -f\\tforce update to be done\\n\"\n! \t\t\t\" -n\\tno update, just show extracted pg_control values (for testing)\\n\");\n \texit(1);\n }\n \n--- 929,938 ----\n static void\n usage(void)\n {\n! \tfprintf(stderr, \"Usage: pg_resetxlog [-f] [-n] [-x xid] PGDataDirectory\\n\"\n! \t\t\t\" -f\\tforce update to be done\\n\"\n! \t\t\t\" -n\\tno update, just show extracted pg_control values (for testing)\\n\"\n! 
\t\t\t\" -x XID\\tset XID in pg_control\\n\");\n \texit(1);\n }\n \n***************\n*** 939,944 ****\n--- 943,949 ----\n \tint\t\t\targn;\n \tbool\t\tforce = false;\n \tbool\t\tnoupdate = false;\n+ \tTransactionId set_xid = 0;\n \tint\t\t\tfd;\n \tchar\t\tpath[MAXPGPATH];\n \n***************\n*** 950,955 ****\n--- 955,972 ----\n \t\t\tforce = true;\n \t\telse if (strcmp(argv[argn], \"-n\") == 0)\n \t\t\tnoupdate = true;\n+ \t\telse if (strcmp(argv[argn], \"-x\") == 0)\n+ \t\t{\n+ \t\t\targn++;\n+ \t\t\tif (argn == argc)\n+ \t\t\t\tusage();\n+ \t\t\tset_xid = strtoul(argv[argn], NULL, 0);\n+ \t\t\tif (set_xid == 0)\n+ \t\t\t{\n+ \t\t\t\tfprintf(stderr, \"XID can not be 0.\");\n+ \t\t\t\texit(1);\n+ \t\t\t}\n+ \t\t}\n \t\telse\n \t\t\tusage();\n \t}\n***************\n*** 1018,1024 ****\n \t/*\n \t * Else, do the dirty deed.\n \t */\n! \tRewriteControlFile();\n \tKillExistingXLOG();\n \tWriteEmptyXLOG();\n \n--- 1035,1041 ----\n \t/*\n \t * Else, do the dirty deed.\n \t */\n! \tRewriteControlFile(set_xid);\n \tKillExistingXLOG();\n \tWriteEmptyXLOG();", "msg_date": "Thu, 10 Jan 2002 13:08:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade" }, { "msg_contents": "> Probably the cleanest way to deal with that is to tweak pg_resetxlog\n> further to have an optional switch with a minimum xlog position.\n> It already knows how to set up its cleared xlog with a position >=\n> end of the removed log, so you could have an additional option switch\n> that forces the new position to be >= switch value. To issue the\n> switch, pg_upgrade would have to look at the old xlog files to determine\n> the endpoint of the old xlog. Seems messy but not impossible.\n\nAlso, how do I find the current xlog segment position. 
Is it one of\nthese fields shown by pg_controldata?\n\t\n\tpg_controldata\n\tpg_control version number: 71\n\tCatalog version number: 200110251\n\tDatabase state: SHUTDOWNED\n\tpg_control last modified: 01/10/02 13:00:04\n\tCurrent log file id: 0\n\tNext log file segment: 7\n\tLatest checkpoint location: 0/6000010\n\tPrior checkpoint location: 0/0\n\tLatest checkpoint's REDO location: 0/6000010\n\tLatest checkpoint's UNDO location: 0/6000010\n\tLatest checkpoint's StartUpID: 8\n\tLatest checkpoint's NextXID: 105\n\tLatest checkpoint's NextOID: 16557\n\tTime of latest checkpoint: 01/10/02 13:00:04\n\tDatabase block size: 8192\n\tBlocks per segment of large relation: 131072\n\tLC_COLLATE: C\n\tLC_CTYPE: C \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 Jan 2002 13:09:53 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Also, how do I find the current xlog segment position. Is it one of\n> these fields shown by pg_controldata?\n\n\"latest checkpoint location\" should do.\n\nBTW, if your script is relying on pg_resetxlog to be available, best to\nensure that it's there before you do anything irreversible ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jan 2002 13:14:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Also, how do I find the current xlog segment position. Is it one of\n> > these fields shown by pg_controldata?\n> \n> \"latest checkpoint location\" should do.\n\nOK. 
I will add a -l flag to specify that location.\n\n> BTW, if your script is relying on pg_resetxlog to be available, best to\n> ensure that it's there before you do anything irreversible ...\n\nOh yes, I will make sure it is available _and_ has the flags from the\nnew version.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 Jan 2002 13:16:46 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Also, how do I find the current xlog segment position. Is it one of\n> > these fields shown by pg_controldata?\n> \n> \"latest checkpoint location\" should do.\n> \n> BTW, if your script is relying on pg_resetxlog to be available, best to\n> ensure that it's there before you do anything irreversible ...\n\nDo we want to remove the 7.1beta WAL format code from\n/contrib/pg_resetxlog? Remember, this utility was originally written to\nallow for a WAL format change during 7.1beta testing. Seems like dead\ncode now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 Jan 2002 13:54:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Do we want to remove the 7.1beta WAL format code from\n> /contrib/pg_resetxlog?\n\nSure, I don't think there's any strong need for it anymore. 
Anyone who\ndid need that version could get it out of the 7.1 release, anyway.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jan 2002 14:04:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pg_upgrade " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Do we want to remove the 7.1beta WAL format code from\n> > /contrib/pg_resetxlog?\n> \n> Sure, I don't think there's any strong need for it anymore. Anyone who\n> did need that version could get it out of the 7.1 release, anyway.\n\nOK, I will do that as a separate patch later.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 Jan 2002 14:06:55 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Also, how do I find the current xlog segment position. Is it one of\n> > these fields shown by pg_controldata?\n> \n> \"latest checkpoint location\" should do.\n> \n> BTW, if your script is relying on pg_resetxlog to be available, best to\n> ensure that it's there before you do anything irreversible ...\n\nOK, here is code to set the checkpoint log id and offset using a new -l\nflag. It also now displays the checkpoint location with -n.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: contrib/pg_resetxlog/README.pg_resetxlog\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/pg_resetxlog/README.pg_resetxlog,v\nretrieving revision 1.2\ndiff -c -r1.2 README.pg_resetxlog\n*** contrib/pg_resetxlog/README.pg_resetxlog\t2002/01/10 17:51:52\t1.2\n--- contrib/pg_resetxlog/README.pg_resetxlog\t2002/01/10 19:49:23\n***************\n*** 22,28 ****\n the database software.\n \n A tertiary purpose is its use by pg_upgrade to set the next transaction\n! id in pg_control.\n \n To run the program, make sure your postmaster is not running, then\n (as the Postgres admin user) do\n--- 22,28 ----\n the database software.\n \n A tertiary purpose is its use by pg_upgrade to set the next transaction\n! id and checkpoint location in pg_control.\n \n To run the program, make sure your postmaster is not running, then\n (as the Postgres admin user) do\nIndex: contrib/pg_resetxlog/pg_resetxlog.c\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/pg_resetxlog/pg_resetxlog.c,v\nretrieving revision 1.12\ndiff -c -r1.12 pg_resetxlog.c\n*** contrib/pg_resetxlog/pg_resetxlog.c\t2002/01/10 18:08:29\t1.12\n--- contrib/pg_resetxlog/pg_resetxlog.c\t2002/01/10 19:49:24\n***************\n*** 683,688 ****\n--- 683,689 ----\n \t\t \"Catalog version number: %u\\n\"\n \t\t \"Current log file id: %u\\n\"\n \t\t \"Next log file segment: %u\\n\"\n+ \t\t \"Latest checkpoint location: %X/%X\\n\"\n \t\t \"Latest checkpoint's StartUpID: %u\\n\"\n \t\t \"Latest checkpoint's NextXID: %u\\n\"\n \t\t \"Latest checkpoint's NextOID: %u\\n\"\n***************\n*** 695,700 ****\n--- 696,703 ----\n \t\t ControlFile.catalog_version_no,\n \t\t ControlFile.logId,\n \t\t ControlFile.logSeg,\n+ \t\t ControlFile.checkPoint.xlogid,\n+ \t\t ControlFile.checkPoint.xrecoff,\n \t\t ControlFile.checkPointCopy.ThisStartUpID,\n \t\t ControlFile.checkPointCopy.nextXid,\n 
\t\t ControlFile.checkPointCopy.nextOid,\n***************\n*** 709,715 ****\n * Write out the new pg_control file.\n */\n static void\n! RewriteControlFile(TransactionId set_xid)\n {\n \tint\t\t\tfd;\n \tchar\t\tbuffer[BLCKSZ]; /* need not be aligned */\n--- 712,718 ----\n * Write out the new pg_control file.\n */\n static void\n! RewriteControlFile(TransactionId set_xid, XLogRecPtr set_checkpoint)\n {\n \tint\t\t\tfd;\n \tchar\t\tbuffer[BLCKSZ]; /* need not be aligned */\n***************\n*** 733,745 ****\n \tControlFile.time = time(NULL);\n \tControlFile.logId = newXlogId;\n \tControlFile.logSeg = newXlogSeg + 1;\n- \tControlFile.checkPoint = ControlFile.checkPointCopy.redo;\n \tControlFile.prevCheckPoint.xlogid = 0;\n \tControlFile.prevCheckPoint.xrecoff = 0;\n \n \tif (set_xid != 0)\n \t\tControlFile.checkPointCopy.nextXid = set_xid;\n! \t\n \t/* Contents are protected with a CRC */\n \tINIT_CRC64(ControlFile.crc);\n \tCOMP_CRC64(ControlFile.crc,\n--- 736,753 ----\n \tControlFile.time = time(NULL);\n \tControlFile.logId = newXlogId;\n \tControlFile.logSeg = newXlogSeg + 1;\n \tControlFile.prevCheckPoint.xlogid = 0;\n \tControlFile.prevCheckPoint.xrecoff = 0;\n \n \tif (set_xid != 0)\n \t\tControlFile.checkPointCopy.nextXid = set_xid;\n! \n! \tif (set_checkpoint.xlogid == 0 &&\n! \t\tset_checkpoint.xrecoff == 0)\n! \t\tControlFile.checkPoint = ControlFile.checkPointCopy.redo;\n! \telse\n! \t\tControlFile.checkPoint = set_checkpoint;\n! \n \t/* Contents are protected with a CRC */\n \tINIT_CRC64(ControlFile.crc);\n \tCOMP_CRC64(ControlFile.crc,\n***************\n*** 929,938 ****\n static void\n usage(void)\n {\n! \tfprintf(stderr, \"Usage: pg_resetxlog [-f] [-n] [-x xid] PGDataDirectory\\n\"\n! \t\t\t\" -f\\tforce update to be done\\n\"\n! \t\t\t\" -n\\tno update, just show extracted pg_control values (for testing)\\n\"\n! \t\t\t\" -x XID\\tset XID in pg_control\\n\");\n \texit(1);\n }\n \n--- 937,947 ----\n static void\n usage(void)\n {\n! 
\tfprintf(stderr, \"Usage: pg_resetxlog [-f] [-n] [-x xid] [ -l log_id offset ] PGDataDirectory\\n\"\n! \t\t\t\" -f\\t force update to be done\\n\"\n! \t\t\t\" -n\\t no update, just show extracted pg_control values (for testing)\\n\"\n! \t\t\t\" -x XID set XID in pg_control\\n\"\n! \t\t\t\" -l log_id offset set checkpoint location in pg_control\\n\");\n \texit(1);\n }\n \n***************\n*** 944,949 ****\n--- 953,959 ----\n \tbool\t\tforce = false;\n \tbool\t\tnoupdate = false;\n \tTransactionId set_xid = 0;\n+ \tXLogRecPtr\tset_checkpoint = {0,0};\n \tint\t\t\tfd;\n \tchar\t\tpath[MAXPGPATH];\n \n***************\n*** 967,972 ****\n--- 977,999 ----\n \t\t\t\texit(1);\n \t\t\t}\n \t\t}\n+ \t\telse if (strcmp(argv[argn], \"-l\") == 0)\n+ \t\t{\n+ \t\t\targn++;\n+ \t\t\tif (argn == argc)\n+ \t\t\t\tusage();\n+ \t\t\tset_checkpoint.xlogid = strtoul(argv[argn], NULL, 0);\n+ \t\t\targn++;\n+ \t\t\tif (argn == argc)\n+ \t\t\t\tusage();\n+ \t\t\tset_checkpoint.xrecoff = strtoul(argv[argn], NULL, 0);\n+ \t\t\tif (set_checkpoint.xlogid == 0 &&\n+ \t\t\t\tset_checkpoint.xrecoff == 0)\n+ \t\t\t{\n+ \t\t\t\tfprintf(stderr, \"Checkpoint can not be '0 0'.\");\n+ \t\t\t\texit(1);\n+ \t\t\t}\n+ \t\t}\n \t\telse\n \t\t\tusage();\n \t}\n***************\n*** 1035,1041 ****\n \t/*\n \t * Else, do the dirty deed.\n \t */\n! \tRewriteControlFile(set_xid);\n \tKillExistingXLOG();\n \tWriteEmptyXLOG();\n \n--- 1062,1068 ----\n \t/*\n \t * Else, do the dirty deed.\n \t */\n! \tRewriteControlFile(set_xid, set_checkpoint);\n \tKillExistingXLOG();\n \tWriteEmptyXLOG();", "msg_date": "Thu, 10 Jan 2002 14:52:13 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Do we want to remove the 7.1beta WAL format code from\n> > /contrib/pg_resetxlog?\n> \n> Sure, I don't think there's any strong need for it anymore. 
Anyone who\n> did need that version could get it out of the 7.1 release, anyway.\n\nThe following patch removes the V0 WAL handling that was there only for\n7.1beta.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: contrib/pg_resetxlog/pg_resetxlog.c\n===================================================================\nRCS file: /cvsroot/pgsql/contrib/pg_resetxlog/pg_resetxlog.c,v\nretrieving revision 1.13\ndiff -c -r1.13 pg_resetxlog.c\n*** contrib/pg_resetxlog/pg_resetxlog.c\t2002/01/10 20:09:06\t1.13\n--- contrib/pg_resetxlog/pg_resetxlog.c\t2002/01/10 22:37:33\n***************\n*** 59,106 ****\n \t\t\t(logSeg)++; \\\n \t} while (0)\n \n- /*\n- * Compute ID and segment from an XLogRecPtr.\n- *\n- * For XLByteToSeg, do the computation at face value. For XLByteToPrevSeg,\n- * a boundary byte is taken to be in the previous segment.\tThis is suitable\n- * for deciding which segment to write given a pointer to a record end,\n- * for example.\n- */\n- #define XLByteToSeg(xlrp, logId, logSeg)\t\\\n- \t( logId = (xlrp).xlogid, \\\n- \t logSeg = (xlrp).xrecoff / XLogSegSize \\\n- \t)\n- #define XLByteToPrevSeg(xlrp, logId, logSeg)\t\\\n- \t( logId = (xlrp).xlogid, \\\n- \t logSeg = ((xlrp).xrecoff - 1) / XLogSegSize \\\n- \t)\n- \n- /*\n- * Is an XLogRecPtr within a particular XLOG segment?\n- *\n- * For XLByteInSeg, do the computation at face value. 
For XLByteInPrevSeg,\n- * a boundary byte is taken to be in the previous segment.\n- */\n- #define XLByteInSeg(xlrp, logId, logSeg)\t\\\n- \t((xlrp).xlogid == (logId) && \\\n- \t (xlrp).xrecoff / XLogSegSize == (logSeg))\n- \n- #define XLByteInPrevSeg(xlrp, logId, logSeg)\t\\\n- \t((xlrp).xlogid == (logId) && \\\n- \t ((xlrp).xrecoff - 1) / XLogSegSize == (logSeg))\n- \n- \n #define XLogFileName(path, log, seg)\t\\\n \t\t\tsnprintf(path, MAXPGPATH, \"%s/%08X%08X\",\t\\\n \t\t\t\t\t XLogDir, log, seg)\n \n- /*\n- * _INTL_MAXLOGRECSZ: max space needed for a record including header and\n- * any backup-block data.\n- */\n- #define _INTL_MAXLOGRECSZ\t(SizeOfXLogRecord + MAXLOGRECSZ + \\\n- \t\t\t\t\t\t\t XLR_MAX_BKP_BLOCKS * (sizeof(BkpBlock) + BLCKSZ))\n \n /******************** end of stuff copied from xlog.c ********************/\n \n--- 59,68 ----\n***************\n*** 115,136 ****\n static bool guessed = false;\t/* T if we had to guess at any values */\n \n \n- static bool CheckControlVersion0(char *buffer, int len);\n- \n- \n- static int\n- XLogFileOpen(uint32 log, uint32 seg)\n- {\n- \tchar\t\tpath[MAXPGPATH];\n- \tint\t\t\tfd;\n- \n- \tXLogFileName(path, log, seg);\n- \n- \tfd = open(path, O_RDWR | PG_BINARY, S_IRUSR | S_IWUSR);\n- \treturn (fd);\n- }\n- \n- \n /*\n * Try to read the existing pg_control file.\n *\n--- 77,82 ----\n***************\n*** 174,180 ****\n \tif (len >= sizeof(ControlFileData) &&\n \t\t((ControlFileData *) buffer)->pg_control_version == PG_CONTROL_VERSION)\n \t{\n! \t\t/* Seems to be current version --- check the CRC. */\n \t\tINIT_CRC64(crc);\n \t\tCOMP_CRC64(crc,\n \t\t\t\t buffer + sizeof(crc64),\n--- 120,126 ----\n \tif (len >= sizeof(ControlFileData) &&\n \t\t((ControlFileData *) buffer)->pg_control_version == PG_CONTROL_VERSION)\n \t{\n! \t\t/* Check the CRC. */\n \t\tINIT_CRC64(crc);\n \t\tCOMP_CRC64(crc,\n \t\t\t\t buffer + sizeof(crc64),\n***************\n*** 188,612 ****\n \t\t\treturn true;\n \t\t}\n \n! 
\t\tfprintf(stderr, \"pg_control exists but has invalid CRC; proceed with caution.\\n\");\n \t\t/* We will use the data anyway, but treat it as guessed. */\n \t\tmemcpy(&ControlFile, buffer, sizeof(ControlFile));\n \t\tguessed = true;\n \t\treturn true;\n \t}\n \n- \t/*\n- \t * Maybe it's a 7.1beta pg_control.\n- \t */\n- \tif (CheckControlVersion0(buffer, len))\n- \t\treturn true;\n- \n \t/* Looks like it's a mess. */\n \tfprintf(stderr, \"pg_control exists but is broken or unknown version; ignoring it.\\n\");\n \treturn false;\n }\n \n \n- /******************* routines for old XLOG format *******************/\n- \n- \n- /*\n- * This format was in use in 7.1 beta releases through 7.1beta5. The\n- * pg_control layout was different, and so were the XLOG page headers.\n- * The XLOG record header format was physically the same as 7.1 release,\n- * but interpretation of the xl_len field was not.\n- */\n- \n- typedef struct crc64V0\n- {\n- \tuint32\t\tcrc1;\n- \tuint32\t\tcrc2;\n- }\tcrc64V0;\n- \n- static uint32 crc_tableV0[] = {\n- \t0x00000000, 0x77073096, 0xee0e612c, 0x990951ba, 0x076dc419, 0x706af48f,\n- \t0xe963a535, 0x9e6495a3, 0x0edb8832, 0x79dcb8a4, 0xe0d5e91e, 0x97d2d988,\n- \t0x09b64c2b, 0x7eb17cbd, 0xe7b82d07, 0x90bf1d91, 0x1db71064, 0x6ab020f2,\n- \t0xf3b97148, 0x84be41de, 0x1adad47d, 0x6ddde4eb, 0xf4d4b551, 0x83d385c7,\n- \t0x136c9856, 0x646ba8c0, 0xfd62f97a, 0x8a65c9ec, 0x14015c4f, 0x63066cd9,\n- \t0xfa0f3d63, 0x8d080df5, 0x3b6e20c8, 0x4c69105e, 0xd56041e4, 0xa2677172,\n- \t0x3c03e4d1, 0x4b04d447, 0xd20d85fd, 0xa50ab56b, 0x35b5a8fa, 0x42b2986c,\n- \t0xdbbbc9d6, 0xacbcf940, 0x32d86ce3, 0x45df5c75, 0xdcd60dcf, 0xabd13d59,\n- \t0x26d930ac, 0x51de003a, 0xc8d75180, 0xbfd06116, 0x21b4f4b5, 0x56b3c423,\n- \t0xcfba9599, 0xb8bda50f, 0x2802b89e, 0x5f058808, 0xc60cd9b2, 0xb10be924,\n- \t0x2f6f7c87, 0x58684c11, 0xc1611dab, 0xb6662d3d, 0x76dc4190, 0x01db7106,\n- \t0x98d220bc, 0xefd5102a, 0x71b18589, 0x06b6b51f, 0x9fbfe4a5, 0xe8b8d433,\n- \t0x7807c9a2, 0x0f00f934, 
0x9609a88e, 0xe10e9818, 0x7f6a0dbb, 0x086d3d2d,\n- \t0x91646c97, 0xe6635c01, 0x6b6b51f4, 0x1c6c6162, 0x856530d8, 0xf262004e,\n- \t0x6c0695ed, 0x1b01a57b, 0x8208f4c1, 0xf50fc457, 0x65b0d9c6, 0x12b7e950,\n- \t0x8bbeb8ea, 0xfcb9887c, 0x62dd1ddf, 0x15da2d49, 0x8cd37cf3, 0xfbd44c65,\n- \t0x4db26158, 0x3ab551ce, 0xa3bc0074, 0xd4bb30e2, 0x4adfa541, 0x3dd895d7,\n- \t0xa4d1c46d, 0xd3d6f4fb, 0x4369e96a, 0x346ed9fc, 0xad678846, 0xda60b8d0,\n- \t0x44042d73, 0x33031de5, 0xaa0a4c5f, 0xdd0d7cc9, 0x5005713c, 0x270241aa,\n- \t0xbe0b1010, 0xc90c2086, 0x5768b525, 0x206f85b3, 0xb966d409, 0xce61e49f,\n- \t0x5edef90e, 0x29d9c998, 0xb0d09822, 0xc7d7a8b4, 0x59b33d17, 0x2eb40d81,\n- \t0xb7bd5c3b, 0xc0ba6cad, 0xedb88320, 0x9abfb3b6, 0x03b6e20c, 0x74b1d29a,\n- \t0xead54739, 0x9dd277af, 0x04db2615, 0x73dc1683, 0xe3630b12, 0x94643b84,\n- \t0x0d6d6a3e, 0x7a6a5aa8, 0xe40ecf0b, 0x9309ff9d, 0x0a00ae27, 0x7d079eb1,\n- \t0xf00f9344, 0x8708a3d2, 0x1e01f268, 0x6906c2fe, 0xf762575d, 0x806567cb,\n- \t0x196c3671, 0x6e6b06e7, 0xfed41b76, 0x89d32be0, 0x10da7a5a, 0x67dd4acc,\n- \t0xf9b9df6f, 0x8ebeeff9, 0x17b7be43, 0x60b08ed5, 0xd6d6a3e8, 0xa1d1937e,\n- \t0x38d8c2c4, 0x4fdff252, 0xd1bb67f1, 0xa6bc5767, 0x3fb506dd, 0x48b2364b,\n- \t0xd80d2bda, 0xaf0a1b4c, 0x36034af6, 0x41047a60, 0xdf60efc3, 0xa867df55,\n- \t0x316e8eef, 0x4669be79, 0xcb61b38c, 0xbc66831a, 0x256fd2a0, 0x5268e236,\n- \t0xcc0c7795, 0xbb0b4703, 0x220216b9, 0x5505262f, 0xc5ba3bbe, 0xb2bd0b28,\n- \t0x2bb45a92, 0x5cb36a04, 0xc2d7ffa7, 0xb5d0cf31, 0x2cd99e8b, 0x5bdeae1d,\n- \t0x9b64c2b0, 0xec63f226, 0x756aa39c, 0x026d930a, 0x9c0906a9, 0xeb0e363f,\n- \t0x72076785, 0x05005713, 0x95bf4a82, 0xe2b87a14, 0x7bb12bae, 0x0cb61b38,\n- \t0x92d28e9b, 0xe5d5be0d, 0x7cdcefb7, 0x0bdbdf21, 0x86d3d2d4, 0xf1d4e242,\n- \t0x68ddb3f8, 0x1fda836e, 0x81be16cd, 0xf6b9265b, 0x6fb077e1, 0x18b74777,\n- \t0x88085ae6, 0xff0f6a70, 0x66063bca, 0x11010b5c, 0x8f659eff, 0xf862ae69,\n- \t0x616bffd3, 0x166ccf45, 0xa00ae278, 0xd70dd2ee, 0x4e048354, 0x3903b3c2,\n- \t0xa7672661, 
0xd06016f7, 0x4969474d, 0x3e6e77db, 0xaed16a4a, 0xd9d65adc,\n- \t0x40df0b66, 0x37d83bf0, 0xa9bcae53, 0xdebb9ec5, 0x47b2cf7f, 0x30b5ffe9,\n- \t0xbdbdf21c, 0xcabac28a, 0x53b39330, 0x24b4a3a6, 0xbad03605, 0xcdd70693,\n- \t0x54de5729, 0x23d967bf, 0xb3667a2e, 0xc4614ab8, 0x5d681b02, 0x2a6f2b94,\n- \t0xb40bbe37, 0xc30c8ea1, 0x5a05df1b, 0x2d02ef8d\n- };\n- \n- #define INIT_CRC64V0(crc)\t((crc).crc1 = 0xffffffff, (crc).crc2 = 0xffffffff)\n- #define FIN_CRC64V0(crc)\t((crc).crc1 ^= 0xffffffff, (crc).crc2 ^= 0xffffffff)\n- #define COMP_CRC64V0(crc, data, len)\t\\\n- do {\\\n- \t\tuint32\t\t __c1 = (crc).crc1;\\\n- \t\tuint32\t\t __c2 = (crc).crc2;\\\n- \t\tchar\t\t*__data = (char *) (data);\\\n- \t\tuint32\t\t __len = (len);\\\n- \\\n- \t\twhile (__len >= 2)\\\n- \t\t{\\\n- \t\t\t\t__c1 = crc_tableV0[(__c1 ^ *__data++) & 0xff] ^ (__c1 >> 8);\\\n- \t\t\t\t__c2 = crc_tableV0[(__c2 ^ *__data++) & 0xff] ^ (__c2 >> 8);\\\n- \t\t\t\t__len -= 2;\\\n- \t\t}\\\n- \t\tif (__len > 0)\\\n- \t\t\t\t__c1 = crc_tableV0[(__c1 ^ *__data++) & 0xff] ^ (__c1 >> 8);\\\n- \t\t(crc).crc1 = __c1;\\\n- \t\t(crc).crc2 = __c2;\\\n- } while (0)\n- \n- #define EQ_CRC64V0(c1,c2) ((c1).crc1 == (c2).crc1 && (c1).crc2 == (c2).crc2)\n- \n- \n- #define LOCALE_NAME_BUFLEN_V0 128\n- \n- typedef struct ControlFileDataV0\n- {\n- \tcrc64V0\t\tcrc;\n- \tuint32\t\tlogId;\t\t\t/* current log file id */\n- \tuint32\t\tlogSeg;\t\t\t/* current log file segment (1-based) */\n- \tXLogRecPtr\tcheckPoint;\t\t/* last check point record ptr */\n- \ttime_t\t\ttime;\t\t\t/* time stamp of last modification */\n- \tDBState\t\tstate;\t\t\t/* see enum above */\n- \tuint32\t\tblcksz;\t\t\t/* block size for this DB */\n- \tuint32\t\trelseg_size;\t/* blocks per segment of large relation */\n- \tuint32\t\tcatalog_version_no;\t\t/* internal version number */\n- \tchar\t\tlc_collate[LOCALE_NAME_BUFLEN_V0];\n- \tchar\t\tlc_ctype[LOCALE_NAME_BUFLEN_V0];\n- \tchar\t\tarchdir[MAXPGPATH];\t\t/* where to move offline log files */\n- 
}\tControlFileDataV0;\n- \n- typedef struct CheckPointV0\n- {\n- \tXLogRecPtr\tredo;\t\t\t/* next RecPtr available when we */\n- \t/* began to create CheckPoint */\n- \t/* (i.e. REDO start point) */\n- \tXLogRecPtr\tundo;\t\t\t/* first record of oldest in-progress */\n- \t/* transaction when we started */\n- \t/* (i.e. UNDO end point) */\n- \tStartUpID\tThisStartUpID;\n- \tTransactionId nextXid;\n- \tOid\t\t\tnextOid;\n- \tbool\t\tShutdown;\n- }\tCheckPointV0;\n- \n- typedef struct XLogRecordV0\n- {\n- \tcrc64V0\t\txl_crc;\n- \tXLogRecPtr\txl_prev;\t\t/* ptr to previous record in log */\n- \tXLogRecPtr\txl_xact_prev;\t/* ptr to previous record of this xact */\n- \tTransactionId xl_xid;\t\t/* xact id */\n- \tuint16\t\txl_len;\t\t\t/* total len of record *data* */\n- \tuint8\t\txl_info;\n- \tRmgrId\t\txl_rmid;\t\t/* resource manager inserted this record */\n- }\tXLogRecordV0;\n- \n- #define SizeOfXLogRecordV0\tDOUBLEALIGN(sizeof(XLogRecordV0))\n- \n- typedef struct XLogContRecordV0\n- {\n- \tuint16\t\txl_len;\t\t\t/* len of data left */\n- }\tXLogContRecordV0;\n- \n- #define SizeOfXLogContRecordV0\tDOUBLEALIGN(sizeof(XLogContRecordV0))\n- \n- #define XLOG_PAGE_MAGIC_V0 0x17345168\n- \n- typedef struct XLogPageHeaderDataV0\n- {\n- \tuint32\t\txlp_magic;\n- \tuint16\t\txlp_info;\n- }\tXLogPageHeaderDataV0;\n- \n- #define SizeOfXLogPHDV0 DOUBLEALIGN(sizeof(XLogPageHeaderDataV0))\n- \n- typedef XLogPageHeaderDataV0 *XLogPageHeaderV0;\n- \n- \n- static bool RecordIsValidV0(XLogRecordV0 * record);\n- static XLogRecordV0 *ReadRecordV0(XLogRecPtr *RecPtr, char *buffer);\n- static bool ValidXLOGHeaderV0(XLogPageHeaderV0 hdr);\n- \n- \n- /*\n- * Try to interpret pg_control contents as \"version 0\" format.\n- */\n- static bool\n- CheckControlVersion0(char *buffer, int len)\n- {\n- \tcrc64V0\t\tcrc;\n- \tControlFileDataV0 *oldfile;\n- \tXLogRecordV0 *record;\n- \tCheckPointV0 *oldchkpt;\n- \n- \tif (len < sizeof(ControlFileDataV0))\n- \t\treturn false;\n- \t/* Check CRC the 
version-0 way. */\n- \tINIT_CRC64V0(crc);\n- \tCOMP_CRC64V0(crc,\n- \t\t\t\t buffer + sizeof(crc64V0),\n- \t\t\t\t sizeof(ControlFileDataV0) - sizeof(crc64V0));\n- \tFIN_CRC64V0(crc);\n- \n- \tif (!EQ_CRC64V0(crc, ((ControlFileDataV0 *) buffer)->crc))\n- \t\treturn false;\n- \n- \t/* Valid data, convert useful fields to new-style pg_control format */\n- \toldfile = (ControlFileDataV0 *) buffer;\n- \n- \tmemset(&ControlFile, 0, sizeof(ControlFile));\n- \n- \tControlFile.pg_control_version = PG_CONTROL_VERSION;\n- \tControlFile.catalog_version_no = oldfile->catalog_version_no;\n- \n- \tControlFile.state = oldfile->state;\n- \tControlFile.logId = oldfile->logId;\n- \tControlFile.logSeg = oldfile->logSeg;\n- \n- \tControlFile.blcksz = oldfile->blcksz;\n- \tControlFile.relseg_size = oldfile->relseg_size;\n- \tstrcpy(ControlFile.lc_collate, oldfile->lc_collate);\n- \tstrcpy(ControlFile.lc_ctype, oldfile->lc_ctype);\n- \n- \t/*\n- \t * Since this format did not include a copy of the latest checkpoint\n- \t * record, we have to go rooting in the old XLOG to get that.\n- \t */\n- \trecord = ReadRecordV0(&oldfile->checkPoint,\n- \t\t\t\t\t\t (char *) malloc(_INTL_MAXLOGRECSZ));\n- \tif (record == NULL)\n- \t{\n- \t\t/*\n- \t\t * We have to guess at the checkpoint contents.\n- \t\t */\n- \t\tguessed = true;\n- \t\tControlFile.checkPointCopy.ThisStartUpID = 0;\n- \t\tControlFile.checkPointCopy.nextXid = (TransactionId) 514;\t\t/* XXX */\n- \t\tControlFile.checkPointCopy.nextOid = BootstrapObjectIdData;\n- \t\treturn true;\n- \t}\n- \toldchkpt = (CheckPointV0 *) XLogRecGetData(record);\n- \n- \tControlFile.checkPointCopy.ThisStartUpID = oldchkpt->ThisStartUpID;\n- \tControlFile.checkPointCopy.nextXid = oldchkpt->nextXid;\n- \tControlFile.checkPointCopy.nextOid = oldchkpt->nextOid;\n- \n- \treturn true;\n- }\n- \n- /*\n- * CRC-check an XLOG V0 record. 
We do not believe the contents of an XLOG\n- * record (other than to the minimal extent of computing the amount of\n- * data to read in) until we've checked the CRCs.\n- *\n- * We assume all of the record has been read into memory at *record.\n- */\n- static bool\n- RecordIsValidV0(XLogRecordV0 * record)\n- {\n- \tcrc64V0\t\tcrc;\n- \tuint32\t\tlen = record->xl_len;\n- \n- \t/*\n- \t * NB: this code is not right for V0 records containing backup blocks,\n- \t * but for now it's only going to be applied to checkpoint records, so\n- \t * I'm not going to worry about it...\n- \t */\n- \tINIT_CRC64V0(crc);\n- \tCOMP_CRC64V0(crc, XLogRecGetData(record), len);\n- \tCOMP_CRC64V0(crc, (char *) record + sizeof(crc64V0),\n- \t\t\t\t SizeOfXLogRecordV0 - sizeof(crc64V0));\n- \tFIN_CRC64V0(crc);\n- \n- \tif (!EQ_CRC64V0(record->xl_crc, crc))\n- \t\treturn false;\n- \n- \treturn (true);\n- }\n- \n- /*\n- * Attempt to read an XLOG V0 record at recptr.\n- *\n- * If no valid record is available, returns NULL.\n- *\n- * buffer is a workspace at least _INTL_MAXLOGRECSZ bytes long. It is needed\n- * to reassemble a record that crosses block boundaries. 
Note that on\n- * successful return, the returned record pointer always points at buffer.\n- */\n- static XLogRecordV0 *\n- ReadRecordV0(XLogRecPtr *RecPtr, char *buffer)\n- {\n- \tstatic int\treadFile = -1;\n- \tstatic uint32 readId = 0;\n- \tstatic uint32 readSeg = 0;\n- \tstatic uint32 readOff = 0;\n- \tstatic char *readBuf = NULL;\n- \n- \tXLogRecordV0 *record;\n- \tuint32\t\tlen,\n- \t\t\t\ttotal_len;\n- \tuint32\t\ttargetPageOff;\n- \n- \tif (readBuf == NULL)\n- \t\treadBuf = (char *) malloc(BLCKSZ);\n- \n- \tXLByteToSeg(*RecPtr, readId, readSeg);\n- \tif (readFile < 0)\n- \t{\n- \t\treadFile = XLogFileOpen(readId, readSeg);\n- \t\tif (readFile < 0)\n- \t\t\tgoto next_record_is_invalid;\n- \t\treadOff = (uint32) (-1);\t/* force read to occur below */\n- \t}\n- \n- \ttargetPageOff = ((RecPtr->xrecoff % XLogSegSize) / BLCKSZ) * BLCKSZ;\n- \tif (readOff != targetPageOff)\n- \t{\n- \t\treadOff = targetPageOff;\n- \t\tif (lseek(readFile, (off_t) readOff, SEEK_SET) < 0)\n- \t\t\tgoto next_record_is_invalid;\n- \t\tif (read(readFile, readBuf, BLCKSZ) != BLCKSZ)\n- \t\t\tgoto next_record_is_invalid;\n- \t\tif (!ValidXLOGHeaderV0((XLogPageHeaderV0) readBuf))\n- \t\t\tgoto next_record_is_invalid;\n- \t}\n- \tif ((((XLogPageHeaderV0) readBuf)->xlp_info & XLP_FIRST_IS_CONTRECORD) &&\n- \t\tRecPtr->xrecoff % BLCKSZ == SizeOfXLogPHDV0)\n- \t\tgoto next_record_is_invalid;\n- \trecord = (XLogRecordV0 *) ((char *) readBuf + RecPtr->xrecoff % BLCKSZ);\n- \n- \tif (record->xl_len == 0)\n- \t\tgoto next_record_is_invalid;\n- \n- \t/*\n- \t * Compute total length of record including any appended backup\n- \t * blocks.\n- \t */\n- \ttotal_len = SizeOfXLogRecordV0 + record->xl_len;\n- \n- \t/*\n- \t * Make sure it will fit in buffer (currently, it is mechanically\n- \t * impossible for this test to fail, but it seems like a good idea\n- \t * anyway).\n- \t */\n- \tif (total_len > _INTL_MAXLOGRECSZ)\n- \t\tgoto next_record_is_invalid;\n- \tlen = BLCKSZ - RecPtr->xrecoff % BLCKSZ;\n- 
\tif (total_len > len)\n- \t{\n- \t\t/* Need to reassemble record */\n- \t\tXLogContRecordV0 *contrecord;\n- \t\tuint32\t\tgotlen = len;\n- \n- \t\tmemcpy(buffer, record, len);\n- \t\trecord = (XLogRecordV0 *) buffer;\n- \t\tbuffer += len;\n- \t\tfor (;;)\n- \t\t{\n- \t\t\treadOff += BLCKSZ;\n- \t\t\tif (readOff >= XLogSegSize)\n- \t\t\t{\n- \t\t\t\tclose(readFile);\n- \t\t\t\treadFile = -1;\n- \t\t\t\tNextLogSeg(readId, readSeg);\n- \t\t\t\treadFile = XLogFileOpen(readId, readSeg);\n- \t\t\t\tif (readFile < 0)\n- \t\t\t\t\tgoto next_record_is_invalid;\n- \t\t\t\treadOff = 0;\n- \t\t\t}\n- \t\t\tif (read(readFile, readBuf, BLCKSZ) != BLCKSZ)\n- \t\t\t\tgoto next_record_is_invalid;\n- \t\t\tif (!ValidXLOGHeaderV0((XLogPageHeaderV0) readBuf))\n- \t\t\t\tgoto next_record_is_invalid;\n- \t\t\tif (!(((XLogPageHeaderV0) readBuf)->xlp_info & XLP_FIRST_IS_CONTRECORD))\n- \t\t\t\tgoto next_record_is_invalid;\n- \t\t\tcontrecord = (XLogContRecordV0 *) ((char *) readBuf + SizeOfXLogPHDV0);\n- \t\t\tif (contrecord->xl_len == 0 ||\n- \t\t\t\ttotal_len != (contrecord->xl_len + gotlen))\n- \t\t\t\tgoto next_record_is_invalid;\n- \t\t\tlen = BLCKSZ - SizeOfXLogPHDV0 - SizeOfXLogContRecordV0;\n- \t\t\tif (contrecord->xl_len > len)\n- \t\t\t{\n- \t\t\t\tmemcpy(buffer, (char *) contrecord + SizeOfXLogContRecordV0, len);\n- \t\t\t\tgotlen += len;\n- \t\t\t\tbuffer += len;\n- \t\t\t\tcontinue;\n- \t\t\t}\n- \t\t\tmemcpy(buffer, (char *) contrecord + SizeOfXLogContRecordV0,\n- \t\t\t\t contrecord->xl_len);\n- \t\t\tbreak;\n- \t\t}\n- \t\tif (!RecordIsValidV0(record))\n- \t\t\tgoto next_record_is_invalid;\n- \t\treturn record;\n- \t}\n- \n- \t/* Record does not cross a page boundary */\n- \tif (!RecordIsValidV0(record))\n- \t\tgoto next_record_is_invalid;\n- \tmemcpy(buffer, record, total_len);\n- \treturn (XLogRecordV0 *) buffer;\n- \n- next_record_is_invalid:;\n- \tclose(readFile);\n- \treadFile = -1;\n- \treturn NULL;\n- }\n- \n- /*\n- * Check whether the xlog header of a page just 
read in looks valid.\n- *\n- * This is just a convenience subroutine to avoid duplicated code in\n- * ReadRecord.\tIt's not intended for use from anywhere else.\n- */\n- static bool\n- ValidXLOGHeaderV0(XLogPageHeaderV0 hdr)\n- {\n- \tif (hdr->xlp_magic != XLOG_PAGE_MAGIC_V0)\n- \t\treturn false;\n- \tif ((hdr->xlp_info & ~XLP_ALL_FLAGS) != 0)\n- \t\treturn false;\n- \treturn true;\n- }\n- \n- /******************* end of routines for old XLOG format *******************/\n- \n- \n /*\n * Guess at pg_control values when we can't read the old ones.\n */\n--- 134,152 ----\n \t\t\treturn true;\n \t\t}\n \n! \t\tfprintf(stderr, \"pg_control exists but has invalid CRC; proceeding with caution.\\n\");\n \t\t/* We will use the data anyway, but treat it as guessed. */\n \t\tmemcpy(&ControlFile, buffer, sizeof(ControlFile));\n \t\tguessed = true;\n \t\treturn true;\n \t}\n \n \t/* Looks like it's a mess. */\n \tfprintf(stderr, \"pg_control exists but is broken or unknown version; ignoring it.\\n\");\n \treturn false;\n }\n \n \n /*\n * Guess at pg_control values when we can't read the old ones.\n */\n***************\n*** 676,684 ****\n * reset by RewriteControlFile().\n */\n static void\n! PrintControlValues(void)\n {\n! \tprintf(\"Guessed-at pg_control values:\\n\\n\"\n \t\t \"pg_control version number: %u\\n\"\n \t\t \"Catalog version number: %u\\n\"\n \t\t \"Current log file id: %u\\n\"\n--- 216,224 ----\n * reset by RewriteControlFile().\n */\n static void\n! PrintControlValues(bool guessed)\n {\n! \tprintf(\"\\n%spg_control values:\\n\\n\"\n \t\t \"pg_control version number: %u\\n\"\n \t\t \"Catalog version number: %u\\n\"\n \t\t \"Current log file id: %u\\n\"\n***************\n*** 692,697 ****\n--- 232,238 ----\n \t\t \"LC_COLLATE: %s\\n\"\n \t\t \"LC_CTYPE: %s\\n\",\n \n+ \t\t (guessed ? 
\"Guessed-at \" : \"\"),\n \t\t ControlFile.pg_control_version,\n \t\t ControlFile.catalog_version_no,\n \t\t ControlFile.logId,\n***************\n*** 1042,1048 ****\n \t */\n \tif ((guessed && !force) || noupdate)\n \t{\n! \t\tPrintControlValues();\n \t\tif (!noupdate)\n \t\t\tprintf(\"\\nIf these values seem acceptable, use -f to force reset.\\n\");\n \t\texit(1);\n--- 583,589 ----\n \t */\n \tif ((guessed && !force) || noupdate)\n \t{\n! \t\tPrintControlValues(guessed);\n \t\tif (!noupdate)\n \t\t\tprintf(\"\\nIf these values seem acceptable, use -f to force reset.\\n\");\n \t\texit(1);", "msg_date": "Thu, 10 Jan 2002 18:40:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pg_upgrade" }, { "msg_contents": "Bruce Momjian wrote:\n> Tom asked about pg_upgrade as part of our initdb for timezone.\n> \n> I have made some improvements to pg_upgrade in CVS and have successfully\n> migrated a regression database from a 7.2 to another 7.2 database using\n> it. (At least the tables show some data; very light testing.)\n> \n> pg_upgrade is still disabled in CVS, it doesn't install, and there is no\n> manual page so it is still an unused command. I have made the commit so\n> people can review where I have gone and make comments.\n> \n> To test it, you have to find the line that says 7.2 and remove the '#'\n> comment. This is for testing purposes only, so far.\n\nStatus report: I have completed all the steps necessary for pg_upgrade\nto work for 7.1->7.2 and for 7.2->7.2 databases. I will run tests\ntomorrow, and once I am sure it works, I will ask others to test.\n\nI will not enable it until everyone agrees. Are there people interested\nin this tool being in 7.2, or who are against this tool being in 7.2?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 Jan 2002 21:16:25 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade" }, { "msg_contents": "Bruce Momjian wrote:\n\n> \n> Status report: I have completed all the steps necessary for pg_upgrade\n> to work for 7.1->7.2 and for 7.2->7.2 databases. I will run tests\n> tomorrow, and once I am sure it works, I will ask others to test.\n> \n> I will not enable it until everyone agrees. Are there people interested\n> in this tool being in 7.2, or who are against this tool being in 7.2?\n> \n\n\nI have a good sized (~11 GB) database on 7.2b3. I'd like to try \npg_upgrade to move it to CVS tip and/or 7.2RC1, so I guess I can be one \nof your testers.\n\n-- Joe\n\n\n\n\n\n", "msg_date": "Thu, 10 Jan 2002 18:36:18 -0800", "msg_from": "Joe Conway <joseph.conway@home.com>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade" }, { "msg_contents": "Bruce Momjian writes:\n\n> I will not enable it until everyone agrees. Are there people interested\n> in this tool being in 7.2, or who are against this tool being in 7.2?\n\nYou're not going to like my opinion, but I'm going to put it forth anyway.\nWe've been working on this release for half a year, and there have been\nfar too many last-minute bright ideas that should have been postponed.\nThe fact that someone is going to want to upgrade their installation from\na previous release didn't just occur to us yesterday, so while this\ndevelopment effort is commendable, this is just not the time. I'm not\neven going to list any technical reasons here, you can make up your own\nlist because you're looking at the code. 
I'm just looking at the emails\nand it gives me the creeps already.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 11 Jan 2002 00:19:27 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I will not enable it until everyone agrees. Are there people interested\n> > in this tool being in 7.2, or who are against this tool being in 7.2?\n> \n> You're not going to like my opinion, but I'm going to put it forth anyway.\n> We've been working on this release for half a year, and there have been\n> far too many last-minute bright ideas that should have been postponed.\n> The fact that someone is going to want to upgrade their installation from\n> a previous release didn't just occur to us yesterday, so while this\n> development effort is commendable, this is just not the time. I'm not\n> even going to list any technical reasons here, you can make up your own\n> list because you're looking at the code. I'm just looking at the emails\n> and it gives me the creeps already.\n\nGives me the creeps too. :-)\n\nI am working on it only because there isn't other stuff to do and it\nisn't delaying anything because it is disabled anyway; we can keep it\nfor 7.3 if we wish. There was also the problem of a system catalog\nchange, and that got the fire moving. Also, certain commerical\ndistributors bug me about this from time to time so even if we don't\nofficially use it my guess is that some of them may enable it anyway for\ntheir distributions.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 11 Jan 2002 00:21:30 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade" }, { "msg_contents": ">>>Bruce Momjian said:\n > I will not enable it until everyone agrees. Are there people interested\n > in this tool being in 7.2, or who are against this tool being in 7.2?\n\nI have never trusted pg_upgrade on production databases. This is where it is \nactually targeted to (to minimize downtime). I have always done full \npg_dumpall and then full recreation of the database system, which \nunfortunately in my cases take few hours. With so many little changes in the \nstructures pg_upgrade it is probably not very safe method anyway?\n\nJust my opinion. :-)\n\nDaniel\n\n", "msg_date": "Fri, 11 Jan 2002 07:40:41 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade " }, { "msg_contents": "Daniel Kalchev wrote:\n> >>>Bruce Momjian said:\n> > I will not enable it until everyone agrees. Are there people interested\n> > in this tool being in 7.2, or who are against this tool being in 7.2?\n> \n> I have never trusted pg_upgrade on production databases. This is where it is \n> actually targeted to (to minimize downtime). I have always done full \n> pg_dumpall and then full recreation of the database system, which \n> unfortunately in my cases take few hours. With so many little changes in the \n> structures pg_upgrade it is probably not very safe method anyway?\n> \n> Just my opinion. :-)\n\nI agree. There are just too many people who complain to me about a lack\nof pg_upgrade that I have to do my best on it and let people decide if\nthey want to use it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 11 Jan 2002 00:42:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> ... I'm just looking at the emails\n> and it gives me the creeps already.\n\nFWIW, I would *never* trust a production database to pg_upgrade in its\ncurrent state; it's untested and can't possibly get enough testing\nbefore release to be trustable. But if Bruce wants to work on it,\nwhere's the harm? The discussions I've had with him over the past\ncouple days are more than valuable enough for development of a future\nbulletproof pg_upgrade, whether or not the current script ever helps\nanyone.\n\nThe only mistake we could make here is to advertise pg_upgrade as\nreliable. Which we will not do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jan 2002 01:00:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade " }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > ... I'm just looking at the emails\n> > and it gives me the creeps already.\n> \n> FWIW, I would *never* trust a production database to pg_upgrade in its\n> current state; it's untested and can't possibly get enough testing\n> before release to be trustable. But if Bruce wants to work on it,\n> where's the harm? The discussions I've had with him over the past\n> couple days are more than valuable enough for development of a future\n> bulletproof pg_upgrade, whether or not the current script ever helps\n> anyone.\n> \n> The only mistake we could make here is to advertise pg_upgrade as\n> reliable. Which we will not do.\n\nSome people have large, non-critical databases they want to upgrade to\n7.2. 
I can imagine some people using pg_upgrade for those cases.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 11 Jan 2002 01:09:55 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pg_upgrade" }, { "msg_contents": "Tom Lane writes:\n\n> FWIW, I would *never* trust a production database to pg_upgrade in its\n> current state; it's untested and can't possibly get enough testing\n> before release to be trustable. But if Bruce wants to work on it,\n> where's the harm?\n\nThere isn't any harm working on it, but the question was whether we want\nto enable it in the 7.2 release. Given that you would \"never\" trust it in\nits current state, and me just having seen the actual code, I think that\nit's barely worth being put into contrib. Where in fact it should\nprobably go.\n\n> The only mistake we could make here is to advertise pg_upgrade as\n> reliable. Which we will not do.\n\nOr ship pg_upgrade in a default installation and undermine the reliability\nreputation for people who don't read advertisements.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 13 Jan 2002 01:23:10 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_upgrade " } ]
[ { "msg_contents": "> I don't think we dare risk a wholesale change in the way \n> pltcl is built\n> for 7.2. We're already through the ports testing process for 7.2, and\n> I don't want to start it over again.\n\nOk, that is what I asked for, that you look at it, thanks. 7.3 then.\n(Of course there have been other portability problems introduced recently, \nso ...)\n\nThis leaves the question how many port testers actually built \nand tested --with-tcl and actually tried a load and pg_connect in tclsh. \nMy guess is not too many.\nImho we should start to require port reports to contain certain --with \nfeatures, like perl, tcl, ....\n\n> As for the libpgtcl part of the patch, I don't understand why the Tcl\n> library should be linked into libpgtcl. libpgtcl is supposed to be\n> loaded into a Tcl interpreter, not carry its own interpreter \n> along with\n> it; but it sure looks like that's what will happen with this patch.\n\nOn AIX this is actually an imports file. Now the tricky part is \nthe first line of this file. This line has to be correct.\n\nSince the tcl Interpreter is linked with this same exports file\nthe library is only supposed to be loaded once.\nDoes anybody have an idea how to actually test this (without tk) ?\n\npgtclsh does work, but the shlib with above Makefile cannot be loaded \nby tclsh because it misses a -bnoentry flag :-(\n\nWith -bnoentry it does load and pg_connect and pg_exec and pg_result, \nis this a sufficient test ?\nThis again is probably a bug in Makefile.shlib on aix, since I guess \nall of our shlibs don't have a main function. 
Unfortunately a different \nflag has to be used on different aix Versions :-( Don't know if we still \nneed to worry about AIX 4.2 and below, though.\n\nAndreas\n", "msg_date": "Wed, 9 Jan 2002 18:00:09 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: --with-tcl build on AIX (and others) fails " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> This leaves the question how many port testers actually built \n> and tested --with-tcl and actually tried a load and pg_connect in tclsh. \n> My guess is not too many.\n\nGood point.\n\n> Imho we should start to require port reports to contain certain --with \n> features, like perl, tcl, ....\n\nYes, it'd be nice to ask people to check those things. However if\nsomeone doesn't have Tcl installed, I'm not going to reject the\nport report...\n\n> pgtclsh does work, but the shlib with above Makefile cannot be loaded \n> by tclsh because it misses a -bnoentry flag :-(\n> With -bnoentry it does load and pg_connect and pg_exec and pg_result, \n> is this a sufficient test ?\n\nI think so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 12:05:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: --with-tcl build on AIX (and others) fails " } ]
[ { "msg_contents": "\n> > Imho it would be nice if we could allow \"select timestamp(xxx);\",\n> > and this has been the umpteenth request in this regard, and 7.2 is not even\n> > released yet.\n> \n> afaicr one of the very sticky areas is the SQL99-specified syntax for\n> date/time literals:\n> \n> timestamp(6) '2001-01-08 04:05:06'\n> \n> which is difficult to reconcile with a function named timestamp:\n\nBut since '2001-01-08 04:05:06' is in single quotes it can't be \na column label, which would be the only other \"token?\" after a function, \nno ?\n\nSo it is eighter timestamp(6) followed by ' a single quote, or timestamp \nis a function in this context ???\n\nAndreas\n", "msg_date": "Wed, 9 Jan 2002 18:07:52 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Time as keyword" } ]
[ { "msg_contents": "I notice that in some places we compare the result of getopt(3) to\n\"EOF\", and in some other places we compare it to \"-1\". I think we\nshould standardize on one or the other; anyone have an opinion\nwhich it should be?\n\nThe man pages I have here (HPUX and Linux) both describe the\nend-of-switches return value as being \"-1\". The glibc sources also\nuse \"-1\". Replacing this by EOF seems more readable but perhaps is\nnot strictly correct.\n\nAre there any platforms that define EOF as something other than -1?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 12:58:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Does getopt() return \"-1\", or \"EOF\", at end?" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> I notice that in some places we compare the result of getopt(3) to\n> \"EOF\", and in some other places we compare it to \"-1\". I think we\n> should standardize on one or the other; anyone have an opinion\n> which it should be?\n> \n> The man pages I have here (HPUX and Linux) both describe the\n> end-of-switches return value as being \"-1\". The glibc sources also\n> use \"-1\". Replacing this by EOF seems more readable but perhaps is\n> not strictly correct.\n> \n> Are there any platforms that define EOF as something other than -1?\n\nI don't know, but the Solaris getopt() manpage specifies it as\nreturning EOF rather than -1. I *think* POSIX mandates EOF == -1\nanyhow but I'm certainly not sure of that (and we run on non-POSIX\nsystems too I guess). \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n", "msg_date": "09 Jan 2002 15:35:53 -0500", "msg_from": "Doug McNaught <doug@wireboard.com>", "msg_from_op": false, "msg_subject": "Re: Does getopt() return \"-1\", or \"EOF\", at end?" 
}, { "msg_contents": "On Wed, 9 Jan 2002, Tom Lane wrote:\n\n> I notice that in some places we compare the result of getopt(3) to\n> \"EOF\", and in some other places we compare it to \"-1\". I think we\n> should standardize on one or the other; anyone have an opinion\n> which it should be?\n\nWhy not standardize on -1 - considering that's what the manpages I've been\nable to find, say. (Linux, AIX)\n\nUsing EOF where the documentation states -1 - to me, would be confusing.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Wed, 9 Jan 2002 14:44:49 -0600 (CST)", "msg_from": "\"Dominic J. Eidson\" <sauron@the-infinite.org>", "msg_from_op": false, "msg_subject": "Re: Does getopt() return \"-1\", or \"EOF\", at end?" }, { "msg_contents": "Tom Lane writes:\n\n> I notice that in some places we compare the result of getopt(3) to\n> \"EOF\", and in some other places we compare it to \"-1\". I think we\n> should standardize on one or the other; anyone have an opinion\n> which it should be?\n\nDefinitely \"-1\", since getopt() comes from unistd.h and EOF is in stdio.h\nso EOF is not necessarily available unless the program does stream-based\nI/O.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 9 Jan 2002 16:06:49 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Does getopt() return \"-1\", or \"EOF\", at end?" }, { "msg_contents": "Tom Lane wrote:\n> I notice that in some places we compare the result of getopt(3) to\n> \"EOF\", and in some other places we compare it to \"-1\". I think we\n> should standardize on one or the other; anyone have an opinion\n> which it should be?\n> \n> The man pages I have here (HPUX and Linux) both describe the\n> end-of-switches return value as being \"-1\". 
The glibc sources also\n> use \"-1\". Replacing this by EOF seems more readable but perhaps is\n> not strictly correct.\n> \n> Are there any platforms that define EOF as something other than -1?\n\nI think -1 is the only way to go. EOF just doesn't seem right for a\nnon-file access function.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Wed, 9 Jan 2002 16:10:15 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Does getopt() return \"-1\", or \"EOF\", at end?" }, { "msg_contents": "Tom Lane wrote:\n> \n> I notice that in some places we compare the result of getopt(3) to\n> \"EOF\", and in some other places we compare it to \"-1\". I think we\n> should standardize on one or the other; anyone have an opinion\n> which it should be?\n> \n> The man pages I have here (HPUX and Linux) both describe the\n> end-of-switches return value as being \"-1\". The glibc sources also\n> use \"-1\". Replacing this by EOF seems more readable but perhaps is\n> not strictly correct.\n> \n> Are there any platforms that define EOF as something other than -1?\n\nWould the correct question be, \"what does POSIX define?\". More \nand more systems (at least Unix systems) are defining POSIX\ninterfaces. I don't have my POSIX CD here with me or I would\nquote the getopt() definition. I ~think~ it says EOF, and\nthe target systems include files define what EOF means.", "msg_date": "Wed, 09 Jan 2002 14:27:47 -0700", "msg_from": "Doug Royer <Doug@royer.com>", "msg_from_op": false, "msg_subject": "Re: Does getopt() return \"-1\", or \"EOF\", at end?" }, { "msg_contents": "On Wed, Jan 09, 2002 at 12:58:45PM -0500, Tom Lane wrote:\n> I notice that in some places we compare the result of getopt(3) to\n> \"EOF\", and in some other places we compare it to \"-1\". 
I think we\n> should standardize on one or the other; anyone have an opinion\n> which it should be?\n> \n> The man pages I have here (HPUX and Linux) both describe the\n> end-of-switches return value as being \"-1\". The glibc sources also\n> use \"-1\". Replacing this by EOF seems more readable but perhaps is\n> not strictly correct.\n> \n> Are there any platforms that define EOF as something other than -1?\n\nOpenBSD's getopt(3):\n The getopt() function was once specified to return EOF instead of -1.\n This was changed by IEEE Std1003.2-1992 (``POSIX.2'') to decouple\n getopt() from <stdio.h>.\n\n-- \nDavid Terrell | \"If NNTP had a protocol extension for\ndbt@meat.net | administering a spanking (long overdue if\nNebcorp Prime Minister | you ask me), you'd be yelping right now.\"\nhttp://wwn.nebcorp.com/ | - Miguel Cruz\n", "msg_date": "Wed, 9 Jan 2002 13:29:49 -0800", "msg_from": "David Terrell <dbt@meat.net>", "msg_from_op": false, "msg_subject": "Re: Does getopt() return \"-1\", or \"EOF\", at end?" }, { "msg_contents": "Doug Royer <Doug@royer.com> writes:\n> Would the correct question be, \"what does POSIX define?\". More \n> and more systems (at least Unix systems) are defining POSIX\n> interfaces. I don't have my POSIX CD here with me or I would\n> quote the getopt() definition. I ~think~ it says EOF, and\n> the target systems include files define what EOF means.\n\nI looked at the Single Unix Specification at \nhttp://www.opengroup.org/onlinepubs/007908799/\nand their man page for getopt says \"-1\".\nI believe SUS is derived from POSIX among others.\nIf POSIX does say EOF then we might have a conflict,\nbut otherwise the tide seems to be running to -1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 16:42:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Does getopt() return \"-1\", or \"EOF\", at end? 
" }, { "msg_contents": "Tom Lane wrote:\n> \n> Doug Royer <Doug@royer.com> writes:\n> > Would the correct question be, \"what does POSIX define?\". More\n> > and more systems (at least Unix systems) are defining POSIX\n> > interfaces. I don't have my POSIX CD here with me or I would\n> > quote the getopt() definition. I ~think~ it says EOF, and\n> > the target systems include files define what EOF means.\n> \n> I looked at the Single Unix Specification at\n> http://www.opengroup.org/onlinepubs/007908799/\n> and their man page for getopt says \"-1\".\n> I believe SUS is derived from POSIX among others.\n> If POSIX does say EOF then we might have a conflict,\n> but otherwise the tide seems to be running to -1.\n\nIt's probabily the same.\n\nTom Lane wrote:\n> \n> Doug Royer <Doug@royer.com> writes:\n> > And if the default for int or char is unsigned as it can\n> > be on some systems, the code does exactly that.\n> \n> There are no systems where \"int\" means \"unsigned int\". That would break\n> (to a first approximation) every C program in existence, as well as\n> violate the ANSI C specification.\n\nYour right - oops.", "msg_date": "Wed, 09 Jan 2002 15:59:20 -0700", "msg_from": "Doug Royer <Doug@royer.com>", "msg_from_op": false, "msg_subject": "Re: Does getopt() return \"-1\", or \"EOF\", at end?" }, { "msg_contents": "David Terrell <dbt@meat.net> writes:\n> OpenBSD's getopt(3):\n> The getopt() function was once specified to return EOF instead of -1.\n> This was changed by IEEE Std1003.2-1992 (``POSIX.2'') to decouple\n> getopt() from <stdio.h>.\n\nAh, nothing like historical perspective to make it all clear. Thanks.\n\nLooks like -1 it shall be.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 18:20:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Does getopt() return \"-1\", or \"EOF\", at end? 
" }, { "msg_contents": "On Wed, 9 Jan 2002 16:10:15 -0500 (EST)\nBruce Momjian <pgman@candle.pha.pa.us> wrote:\n\n> Tom Lane wrote:\n> > I notice that in some places we compare the result of getopt(3) to\n> > \"EOF\", and in some other places we compare it to \"-1\". I think we\n> > should standardize on one or the other; anyone have an opinion\n> > which it should be?\n> > \n> > The man pages I have here (HPUX and Linux) both describe the\n> > end-of-switches return value as being \"-1\". The glibc sources also\n> > use \"-1\". Replacing this by EOF seems more readable but perhaps is\n> > not strictly correct.\n> > \n> > Are there any platforms that define EOF as something other than -1?\n> \n> I think -1 is the only way to go. EOF just doesn't seem right for a\n> non-file access function.\n\nFWIW, here's a quote from the FreeBSD man page:\n\n The getopt() function was once specified to return EOF instead of -1.\n This was changed by IEEE Std 1003.2-1992 (``POSIX.2'') to decouple\n getopt() from <stdio.h>.\n\n-- \nRichard Kuhns\t\t\trjk@grauel.com\nPO Box 6249\t\t\tTel: (765)477-6000 \\\n100 Sawmill Road\t\t\t\t x319\nLafayette, IN 47903\t\t (800)489-4891 /\n", "msg_date": "Thu, 10 Jan 2002 07:33:09 -0500", "msg_from": "Richard Kuhns <rjk@grauel.com>", "msg_from_op": false, "msg_subject": "Re: Does getopt() return \"-1\", or \"EOF\", at end?" } ]
[ { "msg_contents": "\nDid I ever send in a bug report about pg_dump 'crashing' while dumping a\ndatabase where one of the tables gets drop'd while the pg_dump is running?\n\nNot the easiest thing to reproduce, mind you, cause its a matter of that\none in a million timing thing ... but, if you run pg_dump against a\ndatabase where one of the tables yet to be dump gets drop'd, the pg_dump\nwill crash, as opposed to just skipping it and continue with those tables\nthat still exist ...\n\n\n\n", "msg_date": "Wed, 9 Jan 2002 16:49:23 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "pg_dump bug ... or not?" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Did I ever send in a bug report about pg_dump 'crashing' while dumping a\n> database where one of the tables gets drop'd while the pg_dump is running?\n> Not the easiest thing to reproduce, mind you, cause its a matter of that\n> one in a million timing thing ... but, if you run pg_dump against a\n> database where one of the tables yet to be dump gets drop'd, the pg_dump\n> will crash, as opposed to just skipping it and continue with those tables\n> that still exist ...\n\nI'd be inclined to fix this by having pg_dump issue a LOCK IN ACCESS\nSHARE MODE against each table as it reads the table name from pg_class.\nNot by allowing tables to disappear from under us after the dump starts.\nThe idea of pg_dump is to produce a consistent snapshot, no?\n\nEven that is not *perfectly* secure since the locking phase will take\nmore than zero time, but it seems close enough.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 09 Jan 2002 18:13:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_dump bug ... or not? " } ]
[ { "msg_contents": "\n-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nRegarding bzipping postgres tarballs:\n\nDid we ever reach a consensus on this issue? I'd like to see \nalternate forms start appearing for each version (moving forward) \nso that people can download either (for example) \npostgresql-7.2.tar.gz or postgresql-7.2.tar.bz2 as they wish. \nI'm not advocating *replacing* gz; I just think we should offer \nthis as an alternate form for those who have the correct tools \nand wish to save space and download time.\n\nThanks,\nGreg Sabino Mullane greg@turnstep.com\nPGP Key: 0x14964AC8 200201091657\n\n-----BEGIN PGP SIGNATURE-----\nComment: http://www.turnstep.com/pgp.html\n\niQA/AwUBPDy9RrybkGcUlkrIEQIEmgCfY6PxpYZH6wV3/IGIgYrOhnREpiEAnj0M\nYR6GY5XKY7oEW2w2YjTdJni2\n=Xv6B\n-----END PGP SIGNATURE-----\n\n\n\n", "msg_date": "Wed, 9 Jan 2002 22:18:52 -0000", "msg_from": "\"Greg Sabino Mullane\" <greg@turnstep.com>", "msg_from_op": true, "msg_subject": "Bzip2 postgres tarballs" } ]
[ { "msg_contents": "This patch to the python bindings adds C versions of the often-used query\nargs quoting routines, as well as support for quoting lists e.g.\ndbc.execute(\"SELECT * FROM foo WHERE blah IN %s\", (myblahlist,))\n\nPlease consider incorporating this patch into postgresql,\n-- Elliot", "msg_date": "Wed, 9 Jan 2002 20:37:52 -0500 (EST)", "msg_from": "Elliot Lee <sopwith@redhat.com>", "msg_from_op": true, "msg_subject": "postgresql-7.2b3-betterquote.patch" }, { "msg_contents": "\nThis has been saved for the 7.3 release:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n---------------------------------------------------------------------------\n\nElliot Lee wrote:\n> This patch to the python bindings adds C versions of the often-used query\n> args quoting routines, as well as support for quoting lists e.g.\n> dbc.execute(\"SELECT * FROM foo WHERE blah IN %s\", (myblahlist,))\n> \n> Please consider incorporating this patch into postgresql,\n> -- Elliot\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 24 Jan 2002 17:45:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgresql-7.2b3-betterquote.patch" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nElliot Lee wrote:\n> This patch to the python bindings adds C versions of the often-used query\n> args quoting routines, as well as support for quoting lists e.g.\n> dbc.execute(\"SELECT * FROM foo WHERE blah IN %s\", (myblahlist,))\n> \n> Please consider incorporating this patch into postgresql,\n> -- Elliot\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 22 Feb 2002 21:06:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: postgresql-7.2b3-betterquote.patch" } ]
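The patch itself is only attached above, so the following is a guess at the shape of the quoting it describes rather than the patch's actual code: scalars are quoted, and Python lists/tuples become parenthesized groups suitable for an IN (...) clause. The function name and the escaping rule here (doubled single quotes, no backslash handling) are my assumptions:

```python
def pg_quote(value):
    """Quote a Python value for interpolation into a query text: None
    becomes NULL, numbers pass through, sequences become a parenthesized
    list, and strings get single-quoted with embedded quotes doubled."""
    if value is None:
        return "NULL"
    if isinstance(value, bool):        # bool before int: bool subclasses int
        return "TRUE" if value else "FALSE"
    if isinstance(value, (int, float)):
        return str(value)
    if isinstance(value, (list, tuple)):
        return "(" + ", ".join(pg_quote(v) for v in value) + ")"
    return "'" + str(value).replace("'", "''") + "'"
```

With this, `"SELECT * FROM foo WHERE blah IN %s" % pg_quote([1, "o'clock"])` produces `SELECT * FROM foo WHERE blah IN (1, 'o''clock')`.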
[ { "msg_contents": "The attached curves are for pgbench in a scale-factor-500 database,\npostmaster options -F -N 100 -B 3800 on a 4-way Linux machine,\ncurrent CVS sources.\n\nI noticed that with the default WAL options, the system was spawning\na checkpoint process every fifteen seconds or so during the pgbench\nrun. Bad news. (Aside from the I/O implied by the checkpoint itself,\nthere's a big penalty in increased WAL traffic, since the first update\nof any page after a checkpoint dumps that whole page to WAL.)\nI bumped up the checkpoint_segments parameter to ensure checkpoints\nwouldn't happen so frequently, and for good measure kicked wal_files\nup too. As you can see, it helped noticeably --- at the price of\nseveral times as much WAL disk space, of course.\n\nCuriously, I'm seeing no improvement whatever from increasing\ncheckpoint_segments and wal_files in 7.1.3 on the same hardware and same\ntest conditions. 7.2 has a better algorithm for managing WAL segments\n(recycle rather than delete and recreate), but still it seems odd that\n7.1.3 can't benefit at all.\n\nBTW, has anyone experimented with the OSDB benchmark at\nhttp://osdb.sourceforge.net/ ? I'm wondering if it might give\nmore useful numbers than pgbench does.\n\n\t\t\tregards, tom lane", "msg_date": "Wed, 09 Jan 2002 22:01:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Increasing checkpoint distance helps 7.2 noticeably" }, { "msg_contents": "Tom Lane wrote:\n\n\n> BTW, has anyone experimented with the OSDB benchmark at\n> http://osdb.sourceforge.net/ ? 
I'm wondering if it might give\n> more useful numbers than pgbench does.\n\n\nGraphs like this are worth saving and adding to the documentation as an \naid to understanding various approaches to tuning.\n\nAlong with the scripts that generate the graphs as an example that folks \ncan use if they have to tune a particular set of queries on a particular \nset of data.\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Wed, 09 Jan 2002 19:27:42 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: Increasing checkpoint distance helps 7.2 noticeably" }, { "msg_contents": "Don Baccus wrote:\n> \n> Tom Lane wrote:\n> \n> > BTW, has anyone experimented with the OSDB benchmark at\n> > http://osdb.sourceforge.net/ ? I'm wondering if it might give\n> > more useful numbers than pgbench does.\n\nI heartily recommend OSDB. It seems to give reliable numbers, and\nsimulates multiple clients quite well.\n\nIt's the continuation of the code which Compaq sponsored for internal\ntesting of their own servers.\n\nIt's based on the AS3AP (ANSI SQL, Standard Scalable and Portable)\nbenchmark, with the exception it generates a lot more information about\nhow the database is doing, unlike the official benchmark which pretty\nmuch just generates one number at the end.\n\nThe AS3AP benchmark is described and documented at :\n\nhttp://www.benchmarkresources.com/handbook/5.html\n\nVersion 0.11 of the OSDB software\n(http://sourceforge.net/project/showfiles.php?group_id=18681&release_id=55146)\nsupports PostgreSQL 7.1+.\n\nI remember having trouble with hash indices, and having to modify the\ncode to use btree indices instead, as PostgreSQL seems to have\ndeadlocking problems with hash indices when multiple people access them.\n\nAlso, you have to generate your own dataset using a specific program (I\nhave it if anyone needs it), or download the test datasets. 
The test\ndatasets have a maximum of about 40MB, but if you use the program to\ngenerate your own data, you can generate test data up to about 44GB or\nso (or maybe more, I don't remember).\n\nHope that info is useful for someone.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> \n> Graphs like this are worth saving and adding to the documentation as an\n> aid to understanding various approaches to tuning.\n> \n> Along with the scripts that generate the graphs as an example that folks\n> can use if they have to tune a particular set of queries on a particular\n> set of data.\n> \n> --\n> Don Baccus\n> Portland, OR\n> http://donb.photo.net, http://birdnotes.net, http://openacs.org\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 10 Jan 2002 20:16:25 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Increasing checkpoint distance helps 7.2 noticeably" } ]
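The "checkpoint every fifteen seconds" observation follows from simple arithmetic: a WAL-volume-triggered checkpoint fires after checkpoint_segments segments of 16 MB each. A back-of-the-envelope helper (a sketch: 16 MB segments and a default of 3 were the values of that era, and it ignores checkpoint_timeout and the extra WAL from post-checkpoint full-page images):

```python
WAL_SEGMENT_BYTES = 16 * 1024 * 1024   # one WAL segment file

def seconds_between_checkpoints(wal_bytes_per_sec, checkpoint_segments=3):
    """Rough interval between checkpoints forced by WAL volume alone."""
    return checkpoint_segments * WAL_SEGMENT_BYTES / wal_bytes_per_sec
```

At roughly 3 MB/s of WAL, the default of 3 segments checkpoints about every 16 seconds, in line with the behaviour described; raising checkpoint_segments stretches that interval at the cost of more WAL disk space, as noted above.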
[ { "msg_contents": "Seen these Postgres Books?\n\nhttp://www.ptf.com/dossier/sets/Post.shtml\n\nChris\n\n", "msg_date": "Thu, 10 Jan 2002 11:37:15 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "DOSSIER postgresql stuff" } ]
[ { "msg_contents": "If you're interested in what DOSSIER is all about, check this:\n\nhttp://ezine.daemonnews.org/200201/meta.html\n\nChris\n\n", "msg_date": "Thu, 10 Jan 2002 11:38:30 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: DOSSIER" } ]
[ { "msg_contents": "Hi,\n\nI hope, that this is not too off topic for this list, but anyway, it \nshould be easy for any PG-expert.\n\nI need to run a shell script that logs in to Postgresql, executes a \nquery and logs off again.\n\nIn MySQL, I did it like this:\n\n'mysql -u user -ppassword < script.sh'\n\nMy problem is that I can't find a way to put the password in an 'psql' \nstatement at the prompt.\n\nAny suggestions are welcome!\n\nmfg\nALEX\n\n-- \n________________________________________________________\n\nInstitut fuer Geographie und Regionalforschung\nUniversit�t Wien\nKartografie und Geoinformation\n\nDepartement of Geography and Regional Research\nUniversity of Vienna\nCartographie and GIS\n\nUniversitaetstr. 7, A-1010 Wien, AUSTRIA\n\nTel: (+43 1) 4277 48644\nFax: (+43 1) 4277 48649\nE-mail: pucher@atlas.gis.univie.ac.at\n\nFTP: ftp://ftp.gis.univie.ac.at\nWWW: http://www.gis.univie.ac.at/karto\n________________________________________________________\n\n\"He that will not apply new remedies must expect new evils; for time is the greatest innovator\"--Francis Bacon\n\n \n\n\n", "msg_date": "Thu, 10 Jan 2002 09:07:50 +0100", "msg_from": "Alexander Pucher <pucher@atlas.gis.univie.ac.at>", "msg_from_op": true, "msg_subject": "Postgres in bash-mode" }, { "msg_contents": "I used to be able to do:\n\n(echo \"login\\npassword\\n\"; bench.sh) | psql -h system database\n\nDon't really know if it works anymore though. 
Started using SSL\nrelated tools.\n\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Alexander Pucher\" <pucher@atlas.gis.univie.ac.at>\nTo: \"pgsql-hackers\" <pgsql-hackers@postgresql.org>\nSent: Thursday, January 10, 2002 3:07 AM\nSubject: [HACKERS] Postgres in bash-mode\n\n\n> Hi,\n>\n> I hope, that this is not too off topic for this list, but anyway, it\n> should be easy for any PG-expert.\n>\n> I need to run a shell script that logs in to Postgresql, executes a\n> query and logs off again.\n>\n> In MySQL, I did it like this:\n>\n> 'mysql -u user -ppassword < script.sh'\n>\n> My problem is that I can't find a way to put the password in an\n'psql'\n> statement at the prompt.\n>\n> Any suggestions are welcome!\n>\n> mfg\n> ALEX\n>\n> --\n> ________________________________________________________\n>\n> Institut fuer Geographie und Regionalforschung\n> Universität Wien\n> Kartografie und Geoinformation\n>\n> Departement of Geography and Regional Research\n> University of Vienna\n> Cartographie and GIS\n>\n> Universitaetstr. 
7, A-1010 Wien, AUSTRIA\n>\n> Tel: (+43 1) 4277 48644\n> Fax: (+43 1) 4277 48649\n> E-mail: pucher@atlas.gis.univie.ac.at\n>\n> FTP: ftp://ftp.gis.univie.ac.at\n> WWW: http://www.gis.univie.ac.at/karto\n> ________________________________________________________\n>\n> \"He that will not apply new remedies must expect new evils; for time\nis the greatest innovator\"--Francis Bacon\n>\n>\n>\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Thu, 10 Jan 2002 23:31:23 -0500", "msg_from": "\"Rod Taylor\" <rbt@barchord.com>", "msg_from_op": false, "msg_subject": "Re: Postgres in bash-mode" }, { "msg_contents": "Alexander Pucher writes:\n\n> In MySQL, I did it like this:\n>\n> 'mysql -u user -ppassword < script.sh'\n\nThen you might as well not have any authentication at all, because every\nuser on your system can then read the password off the \"ps\" output.\n\n> My problem is that I can't find a way to put the password in an 'psql'\n> statement at the prompt.\n\nYou can put it into the environment variable PGPASSWORD, but that *might*\nsuffer from the same problems depending on your OS. 
If you want real\nnoninteractive login you will have to use a different authentication\nmethod, such as ident.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 10 Jan 2002 23:43:41 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Postgres in bash-mode" }, { "msg_contents": "On Thu, Jan 10, 2002 at 09:07:50AM +0100, Alexander Pucher wrote:\n> Hi,\n> \n> I hope, that this is not too off topic for this list, but anyway, it \n> should be easy for any PG-expert.\n> \n> I need to run a shell script that logs in to Postgresql, executes a \n> query and logs off again.\n> \n> In MySQL, I did it like this:\n> \n> 'mysql -u user -ppassword < script.sh'\n> \n> My problem is that I can't find a way to put the password in an 'psql' \n> statement at the prompt.\n> \n> Any suggestions are welcome!\n\n\"Don't do that\"\n\nYou CAN do something like \n% psql -f dbscript database\nPassword: <typeity>\n<stuff happens>\n129195981 INSERT\n%\n\n-- \nDavid Terrell | \"I went into Barnes and Noble to look for a \nPrime Minister, Nebcorp | book on A.D.D., but I got bored and left.\" \ndbt@meat.net | - Benjy Feen\nhttp://wwn.nebcorp.com/ |\n", "msg_date": "Sat, 12 Jan 2002 00:32:05 -0800", "msg_from": "David Terrell <dbt@meat.net>", "msg_from_op": false, "msg_subject": "Re: Postgres in bash-mode" }, { "msg_contents": "On Sat, 2002-01-12 at 03:32, David Terrell wrote:\n> On Thu, Jan 10, 2002 at 09:07:50AM +0100, Alexander Pucher wrote:\n> > Hi,\n> > \n> > I hope, that this is not too off topic for this list, but anyway, it \n> > should be easy for any PG-expert.\n> > \n> > I need to run a shell script that logs in to Postgresql, executes a \n> > query and logs off again.\n> > \n> > In MySQL, I did it like this:\n> > \n> > 'mysql -u user -ppassword < script.sh'\n> > \n> > My problem is that I can't find a way to put the password in an 'psql' \n> > statement at the prompt.\n> > \n> > Any suggestions are welcome!\n> \n> 
\"Don't do that\"\n> \n> You CAN do something like \n> % psql -f dbscript database\n> Password: <typeity>\n> <stuff happens>\n> 129195981 INSERT\n\nJust a bit of extra info. Passwords on the command line are sniffable.\nYou can obsure them somewhat, but AFAIK there is no way, or at least no\ngeneral way to secure them fully.\n\nIf you absolutely need to do something like this, look into expect.\n\n--\nKarl\n", "msg_date": "12 Jan 2002 09:03:44 -0500", "msg_from": "Karl DeBisschop <kdebisschop@range.infoplease.com>", "msg_from_op": false, "msg_subject": "Re: Postgres in bash-mode" } ]
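Pulling the thread's advice together: a password in argv (mysql-style -ppassword) is readable in ps output, PGPASSWORD in the environment is the lesser evil, and ident-style authentication is better still for batch jobs. A hedged sketch that only builds such an invocation (names are mine; nothing is executed):

```python
import os

def psql_batch_command(database, script_file, password=None):
    """Return (argv, env) for a non-interactive psql run. The password,
    if any, travels in PGPASSWORD rather than on the command line, so it
    does not show up in ps output (though on some systems the environment
    can leak too, as Peter notes above)."""
    env = dict(os.environ)
    if password is not None:
        env["PGPASSWORD"] = password
    return ["psql", "-f", script_file, database], env
```

The returned pair can be handed to a process spawner, e.g. `subprocess.call(argv, env=env)`.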
[ { "msg_contents": "Daniel wrote: (stripped to info I used)\n> NOTICE: Pages 17722: Changed 0, reaped 0, Empty 0, New 0; \n> Tup 1706202: Vac 0, \n> NOTICE: Index iplog_test_ipaddr_idx: Pages 5621; Tuples 1706202. CPU \n> NOTICE: Index iplog_test_ipdate_idx: Pages 4681; Tuples 1706202. CPU \n\n> -> Seq Scan on iplog_test (cost=0.00..56111.54 rows=284 width=16)\n> query runs for ca 3.5 minutes.\n\n> -> Index Scan using iplog_test_ipdate_idx on iplog_test \n> (cost=0.00..100505.94 rows=284 width=16)\n> query runs for ca 2.2 minutes.\n\nI cannot really see how 284 rows can have an estimated index cost of 100506 ?\n\n> 512 MB RAM, with 15000 RPM Cheetah for the database, running\n\n> Perhaps I need to tune this machine's costs to prefer more \n> disk intensive operations over CPU intensive operations?\n\nWhat is actually estimated wrong here seems to be the estimated\neffective cache size, and thus the cache ratio of page fetches.\nMost of your pages will be cached.\n\nThe tuning parameter is: effective_cache_size\n\nWith (an estimated) 50 % of 512 Mb for file caching that number would \nneed to be:\neffective_cache_size = 32768 # 8k pages\n\nCan you try this and tell us what happens ?\n\nAndreas\n", "msg_date": "Thu, 10 Jan 2002 11:55:20 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: again on index usage " }, { "msg_contents": ">>>\"Zeugswetter Andreas SB SD\" said:\n > What is actually estimated wrong here seems to be the estimated\n > effective cache size, and thus the cache ratio of page fetches.\n > Most of your pages will be cached.\n > \n > The tuning parameter is: effective_cache_size\n > \n > With (an estimated) 50 % of 512 Mb for file caching that number would \n > need to be:\n > effective_cache_size = 32768 # 8k pages\n > \n > Can you try this and tell us what happens ?\n\nI suspected this, but haven't really come to test it. 
On BSD/OS, the buffer \ncache is 10% of the RAM, in my case\n\nbuffer cache = 53522432 (51.04 MB)\n\nI guess effective_cache_size = 6400 will be ok.\n\nDaniel\n\n", "msg_date": "Thu, 10 Jan 2002 13:37:41 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": "[with the new effective_cache_size = 6400]\n\nexplain\nSELECT sum(input), sum(output) FROM iplog_gate200112\nWHERE \n'2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2001-12-02 00:00:00+02' AND \n'2001-12-01 00:00:00+02' <= ipdate AND ipdate < '2002-01-01 00:00:00+02' AND \nipaddr <<= '193.68.240.0/20' AND 'uni-gw' ~ router;\n\ngives\n\nAggregate (cost=56111.97..56111.97 rows=1 width=16)\n -> Seq Scan on iplog_gate200112 (cost=0.00..56110.54 rows=284 width=16)\n\ntakes 3 min to execute. (was 10 sec after fresh restart)\n\ndb=# set enable_seqscan to off;\n\nAggregate (cost=84980.10..84980.10 rows=1 width=16)\n -> Index Scan using iplog_gate200112_ipdate_idx on iplog_gate200112 \n(cost=0.00..84978.68 rows=284 width=16)\n\ntakes 1.8 min to execute. (was 2 sec after fresh reshart)\n\nStill proves my point, But the fresh restart performance is impressive. After \nfew minutes the database takes its normal load and in my opinion the buffer \ncache is too much cluttered with pages from other tables.\n\nWhich brings another question: with so much RAM recent equipment runs, it may \nbe good idea to specifically add to INSTALL instruction on tuning the system \nas soon as it is installed. 
Most people will stop there, especially after an \nupgrade (as I did).\n\nDaniel\n\n", "msg_date": "Thu, 10 Jan 2002 14:03:15 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> I cannot really see how 284 rows can have an estimated index cost of 100506 ?\n\nThe estimated number of indexscanned rows is more like 50k. The number\nyou are looking at includes the estimated selectivity of the\nnon-indexable WHERE clauses, too.\n\n> What is actually estimated wrong here seems to be the estimated\n> effective cache size, and thus the cache ratio of page fetches.\n\nGood point, but I think the estimates are only marginally sensitive\nto estimated cache size (if they're not, we have a problem, considering\nhow poorly we can estimate the kernel's disk buffer size). It would\nbe interesting for Daniel to try a few different settings of\neffective_cache_size and see how much the EXPLAIN costs change.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jan 2002 10:07:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage " } ]
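The effective_cache_size figures traded above are just byte counts divided by the 8 kB page size. A small helper that reproduces the thread's numbers (assuming the default BLCKSZ of 8192):

```python
BLCKSZ = 8192  # default PostgreSQL block size in bytes

def cache_pages(cache_bytes):
    """Convert an OS file-cache size in bytes to the 8 kB pages that
    effective_cache_size is measured in."""
    return cache_bytes // BLCKSZ
```

Daniel's 51 MB BSD/OS buffer cache (53522432 bytes) works out to about 6533 pages, rounded down to 6400 in the thread; Andreas's "half of 512 MB" suggestion is 32768 pages.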
[ { "msg_contents": "\nImho the simplification, that seq scan startup cost is 0.0 is\nonly valid when we expect to return most of the rows.\n\nWhen expecting only 1 row, imho the costs need to be 50 % for\nstartup and 50 % rest. Assuming, that on average the row will be \nin the middle of that relation file.\nWhen returning 10% of the rows startup would be 10 % ...\n\nThe reasoning beeing, that you need to read a few pages before you \nfind the first match. \n\nAndreas\n", "msg_date": "Thu, 10 Jan 2002 12:14:31 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "seq scan startup cost" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Imho the simplification, that seq scan startup cost is 0.0 is\n> only valid when we expect to return most of the rows.\n\n> When expecting only 1 row, imho the costs need to be 50 % for\n> startup and 50 % rest.\n\nThis is already accounted for in the code that makes use of the\nestimates. 
\"Startup cost\" is really the time before we can start trying\nto produce output, not the time till the first tuple is returned.\n\nAn example of the usage is this fragment from costsize.c:\n\n if (subplan->sublink->subLinkType == EXISTS_SUBLINK)\n {\n /* we only need to fetch 1 tuple */\n subcost = plan->startup_cost +\n (plan->total_cost - plan->startup_cost) / plan->plan_rows;\n }\n\nIf a single row is expected, then this will estimate the actual cost to\nfetch it as equal to total_cost, not startup_cost.\n\nIt's true that for a seqscan we might reasonably hope to find the wanted\nrow after scanning only half the file, but what of plans like Aggregate?\nThe startup/total-cost model isn't sufficiently detailed to capture this\ndifference, so I prefer to stick with the more conservative behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jan 2002 10:27:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: seq scan startup cost " } ]
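The costsize.c fragment quoted above interpolates linearly between startup and total cost; the same formula as a standalone function (variable names are mine):

```python
def exists_subplan_cost(startup_cost, total_cost, plan_rows):
    """Expected cost to fetch just one tuple from a subplan, as used for
    EXISTS sublinks: the startup cost plus a 1/plan_rows share of the
    run cost (total minus startup)."""
    return startup_cost + (total_cost - startup_cost) / plan_rows
```

With one expected row the estimate equals total_cost, which is Tom's point that a single-row fetch is charged the full cost, not just startup; with more rows it approaches startup_cost.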
[ { "msg_contents": "\n> [with the new effective_cache_size = 6400]\n\nThis seems way too low for a 512 Mb machine. Why does your OS\nonly use so little for filecache ? Is the rest used for processes ?\nFor the above number you need to consider OS cache and shared_buffers.\nYou can approximatly add them together minus a few %.\n\nWith the data you gave, a calculated value for effective_cache_size\nwould be 29370, assuming the random_page_cost is actually 4 on your\nmachine. 29370 might be a slight overestimate, since your new table\nwill probably still be somewhat sorted by date within one IP.\n\nTry to measure IO/s during the seq scan and during the index path\nand calculate the ratio. This should be done during an average workload\non the machine.\n\nAndreas\n", "msg_date": "Thu, 10 Jan 2002 15:46:54 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: again on index usage " }, { "msg_contents": ">>>\"Zeugswetter Andreas SB SD\" said:\n > \n > > [with the new effective_cache_size = 6400]\n > \n > This seems way too low for a 512 Mb machine. Why does your OS\n > only use so little for filecache ? Is the rest used for processes ?\n > For the above number you need to consider OS cache and shared_buffers.\n > You can approximatly add them together minus a few %.\n\nAs far as I am aware, 10% for buffer space is the default for BSD operating \nsystems... although I have seen buffer space = 50% on MacOS X. There is no \nproblem to increase the buffer space in kernel, although I am not very \nconfident this will give much better overall performance (well, more memory \ncan be added as well).\n\n > With the data you gave, a calculated value for effective_cache_size\n > would be 29370, assuming the random_page_cost is actually 4 on your\n > machine. 
29370 might be a slight overestimate, since your new table\n > will probably still be somewhat sorted by date within one IP.\n\nrandom_page_cost is 4.\n\nIf the select into then cluster do this, then yes, it is possible, but not \nguaranteed.\n\nI will try with increased effective_cache_size.\n\nPostmaster is started with -N 128 -B 256 -i -o \"-e -S 10240\" \n\nDaniel\n\n", "msg_date": "Thu, 10 Jan 2002 18:09:15 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": false, "msg_subject": "Re: again on index usage " } ]
[ { "msg_contents": "\n> > What is actually estimated wrong here seems to be the estimated\n> > effective cache size, and thus the cache ratio of page fetches.\n> \n> Good point, but I think the estimates are only marginally sensitive\n> to estimated cache size (if they're not, we have a problem, considering\n> how poorly we can estimate the kernel's disk buffer size). It would\n> be interesting for Daniel to try a few different settings of\n> effective_cache_size and see how much the EXPLAIN costs change.\n\nWell, the number I told him (29370) should clearly prefer the index.\nThe estimate is very sensitive to this value :-(\nWith 29370 (=229 Mb) the index cost is 1,364 instead of 3,887 with the \ndefault of 1000 pages ==> index scan.\n\n229 Mb file cache with 512Mb Ram is a reasonable value, I have\na lot more here:\nMemory Real Virtual\nfree 0 MB 218 MB\nprocs 95 MB 293 MB\nfiles 159 MB\ntotal 256 MB 512 MB\n\nAndreas\n", "msg_date": "Thu, 10 Jan 2002 17:04:29 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: again on index usage " }, { "msg_contents": ">>>\"Zeugswetter Andreas SB SD\" said:\n > \n > > > What is actually estimated wrong here seems to be the estimated\n > > > effective cache size, and thus the cache ratio of page fetches.\n > > \n > > Good point, but I think the estimates are only marginally sensitive\n > > to estimated cache size (if they're not, we have a problem, considering\n > > how poorly we can estimate the kernel's disk buffer size). It would\n > > be interesting for Daniel to try a few different settings of\n > > effective_cache_size and see how much the EXPLAIN costs change.\n > \n > Well, the number I told him (29370) should clearly prefer the index.\n > The estimate is very sensitive to this value :-(\n > With 29370 (=229 Mb) the index cost is 1,364 instead of 3,887 with the \n > default of 1000 pages ==> index scan.\n\nBut... 
if I understand it right (effective_cache_size to be related to kernel \nbuffer space). it turns out that the estimates are different with reality - my \nbuffer cache is ca. 50 MB and I still get at least twice the performance with \nindex scan instead of sequential scan - where as Tom explained things should \nbe much worse.\n\nI considered the possibility that the clustered table can still maintain some \nordering by ipdate after being clustered by ipaddr - but with over 65k ip \naddresses, almost evenly spread, this should be not so significant.\n\nBest Regards,\nDaniel\n\n", "msg_date": "Thu, 10 Jan 2002 18:42:13 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": false, "msg_subject": "Re: again on index usage " } ]
[ { "msg_contents": "\n> > > > What is actually estimated wrong here seems to be the estimated\n> > > > effective cache size, and thus the cache ratio of page fetches.\n> > > \n> > > Good point, but I think the estimates are only marginally sensitive\n> > > to estimated cache size (if they're not, we have a problem, considering\n> > > how poorly we can estimate the kernel's disk buffer size). It would\n> > > be interesting for Daniel to try a few different settings of\n> > > effective_cache_size and see how much the EXPLAIN costs change.\n> > \n> > Well, the number I told him (29370) should clearly prefer the index.\n> > The estimate is very sensitive to this value :-(\n> > With 29370 (=229 Mb) the index cost is 1,364 instead of 3,887 with the \n> > default of 1000 pages ==> index scan.\n> \n> But... if I understand it right (effective_cache_size to be related to kernel \n> buffer space). it turns out that the estimates are different with reality - my \n> buffer cache is ca. 50 MB and I still get at least twice the performance with \n> index scan instead of sequential scan - where as Tom explained things should \n> be much worse.\n\nSince pg only reads one 8k page at a time, the IO performance of a seq scan is\nprobably not nearly a good as it could be when a lot of other IO is done on the\nsame drive.\n\nFirst thing you should verify is if there is actually a measurable difference\nin IO throughput on the pg drive during the seq scan and the index scan. (iostat)\nIf there is not, then random_page_cost is too high in your scenario.\n(All assuming your data is not still clustered like Tom suggested)\n\nAndreas\n", "msg_date": "Thu, 10 Jan 2002 18:18:39 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: again on index usage " }, { "msg_contents": "\nThis topic seems to come up a lot. 
Is there something we are missing in\nthe FAQ?\n\n---------------------------------------------------------------------------\n\nZeugswetter Andreas SB SD wrote:\n> \n> > > > > What is actually estimated wrong here seems to be the estimated\n> > > > > effective cache size, and thus the cache ratio of page fetches.\n> > > > \n> > > > Good point, but I think the estimates are only marginally sensitive\n> > > > to estimated cache size (if they're not, we have a problem, considering\n> > > > how poorly we can estimate the kernel's disk buffer size). It would\n> > > > be interesting for Daniel to try a few different settings of\n> > > > effective_cache_size and see how much the EXPLAIN costs change.\n> > > \n> > > Well, the number I told him (29370) should clearly prefer the index.\n> > > The estimate is very sensitive to this value :-(\n> > > With 29370 (=229 Mb) the index cost is 1,364 instead of 3,887 with the \n> > > default of 1000 pages ==> index scan.\n> > \n> > But... if I understand it right (effective_cache_size to be related to kernel \n> > buffer space). it turns out that the estimates are different with reality - my \n> > buffer cache is ca. 50 MB and I still get at least twice the performance with \n> > index scan instead of sequential scan - where as Tom explained things should \n> > be much worse.\n> \n> Since pg only reads one 8k page at a time, the IO performance of a seq scan is\n> probably not nearly a good as it could be when a lot of other IO is done on the\n> same drive.\n> \n> First thing you should verify is if there is actually a measurable difference\n> in IO throughput on the pg drive during the seq scan and the index scan. 
(iostat)\n> If there is not, then random_page_cost is too high in your scenario.\n> (All assuming your data is not still clustered like Tom suggested)\n> \n> Andreas\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 10 Jan 2002 14:40:25 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": ">>>\"Zeugswetter Andreas SB SD\" said:\n > First thing you should verify is if there is actually a measurable differenc\n e\n > in IO throughput on the pg drive during the seq scan and the index scan. 
(io\n stat)\n > If there is not, then random_page_cost is too high in your scenario.\n > (All assuming your data is not still clustered like Tom suggested)\n\n\nAt this idle time (got to have other emergency at 5am in the office :) here is \nwhat I have (sd0 is the 'system' drive, sd1 is where postgres data lives):\n\n tin tout sps tps msps sps tps msps usr nic sys int idl\n 0 39 831 12 2.0 8962 121 3.6 4 26 7 0 63\n 0 13 215 4 7.7 9917 122 3.7 5 24 5 0 66\n 0 13 216 3 6.1 7116 115 4.1 5 23 4 0 68\n 0 13 220 3 5.0 9401 128 5.0 5 17 4 0 74\n 0 13 226 3 12.2 9232 122 3.8 4 24 4 0 67\n 0 13 536 26 8.5 11353 147 4.4 13 16 9 0 62\n 0 13 259 5 5.8 12102 165 4.1 8 14 8 0 70\n 0 13 492 20 7.2 13913 186 4.5 8 9 6 0 76\n 0 13 185 2 4.7 11423 184 5.0 14 6 8 0 72\n\nrunning index scan:\n\n 0 13 274 8 4.9 5786 145 4.4 18 10 8 0 64\n 0 13 210 3 8.1 5707 153 3.9 20 9 6 0 64\n 0 13 286 8 7.7 6283 139 4.3 21 9 8 0 62\n 0 13 212 3 9.7 5900 133 3.3 22 13 7 0 58\n 0 13 222 3 6.0 5811 148 3.5 20 12 6 0 61\n 0 13 350 16 7.5 5640 134 4.1 22 12 7 0 58\n\n(seems to be slowing down other I/O :)\n\nrunning seq scan:\n\n 0 13 50 4 1.9 4787 101 3.8 24 12 7 0 57\n 0 13 34 3 5.6 5533 105 3.4 24 12 6 0 58\n 0 13 42 4 3.1 5414 103 3.0 25 12 6 0 58\n 0 13 26 2 0.0 5542 102 3.9 28 12 6 0 54\n 0 13 52 5 2.8 5644 112 4.1 24 11 7 0 58\n 0 13 27 2 4.1 6462 122 4.0 26 8 7 0 60\n 0 13 36 3 2.0 5616 128 4.2 22 8 7 0 63\n\nI can't seem to find any difference... Perhaps this is because the \n'sequential' data is anyway scattered all around the disk.\n\nI have done this test first, now I will try the random() clustering Tom \nsuggested (although... isn't random not so random to trust it in this \nscenario? :)\n\nDaniel\n\n", "msg_date": "Fri, 11 Jan 2002 06:08:48 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": false, "msg_subject": "Re: again on index usage " } ]
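Daniel's conclusion ("I can't seem to find any difference") can be checked by averaging the data-drive (sd1) sps column of his samples. The numbers below are copied from the iostat tables above; treating anything under a 10% relative gap as "no difference" is an arbitrary editorial choice.

```python
# sd1 sectors-per-second figures, copied from the iostat samples above.
idle_baseline = [8962, 9917, 7116, 9401, 9232, 11353, 12102, 13913, 11423]
index_scan    = [5786, 5707, 6283, 5900, 5811, 5640]
seq_scan      = [4787, 5533, 5414, 5542, 5644, 6462, 5616]

def mean(xs):
    return sum(xs) / len(xs)

base, idx, seq = mean(idle_baseline), mean(index_scan), mean(seq_scan)
# Relative throughput gap between the two scan types.
rel_diff = abs(idx - seq) / max(idx, seq)
print(f"baseline {base:.0f} sps, index {idx:.0f} sps, "
      f"seq {seq:.0f} sps, gap {rel_diff:.1%}")
```

Both scan types move a similar number of sectors per second (well under a 10% gap), which is consistent with the guess that the "sequential" data is scattered around the disk anyway, so the sequential scan gets no locality benefit here.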
[ { "msg_contents": "I have had a postgres 7.1.3 database running for a few months. I have\na perl application that runs several sql commands and populates the\ndatabase. I have been running the following command without any\nerrors until a few days ago (keep in mind that no code has changed):\n\ninsert into w_host select datetime('2001-12-10 09:00:00') as start,\nint4(2001) as year, int4(50) as week, host, count(*) as jobs,\nsum(cputime) as cpu, int4(reltime(sum(starttime-queuetime))) as\npending, int4(reltime(sum((finishtime-starttime)))) as wallclock,\nint4(reltime(sum((finishtime-starttime)-interval(cputime-0)))) as\nsuspended from jobs where finishtime >= '2001-12-10 09:00:00' and\nfinishtime < '2001-12-10 10:00:00' and starttime != 0 and queuetime\n!=0 and finishtime != 0 group by host;\n\nThe following error is generated:\nERROR: Bad interval external representation '275'\n\nDoes anyone have any ideas on how to correct this error?\n\nJeff\n\n", "msg_date": "10 Jan 2002 09:23:30 -0800", "msg_from": "jeff.brickley@motorola.com (Jeff)", "msg_from_op": true, "msg_subject": "Bad interval external representation" }, { "msg_contents": "I have located the error. The cputime is of datatype double and the\nconversion to the temporal type interval was failing.\n\n\n\njeff.brickley@motorola.com (Jeff) wrote in message news:<dea33a90.0201100923.66fa599c@posting.google.com>...\n> I have had a postgres 7.1.3 database running for a few months. I have\n> a perl application that runs several sql commands and populates the\n> database. 
I have been running the following command without any\n> errors until a few days ago (keep in mind that no code has changed):\n> \n> insert into w_host select datetime('2001-12-10 09:00:00') as start,\n> int4(2001) as year, int4(50) as week, host, count(*) as jobs,\n> sum(cputime) as cpu, int4(reltime(sum(starttime-queuetime))) as\n> pending, int4(reltime(sum((finishtime-starttime)))) as wallclock,\n> int4(reltime(sum((finishtime-starttime)-interval(cputime-0)))) as\n> suspended from jobs where finishtime >= '2001-12-10 09:00:00' and\n> finishtime < '2001-12-10 10:00:00' and starttime != 0 and queuetime\n> !=0 and finishtime != 0 group by host;\n> \n> The following error is generated:\n> ERROR: Bad interval external representation '275'\n> \n> Does anyone have any ideas on how to correct this error?\n> \n> Jeff\n", "msg_date": "11 Jan 2002 06:20:28 -0800", "msg_from": "jeff.brickley@motorola.com (Jeff)", "msg_from_op": true, "msg_subject": "Re: Bad interval external representation" } ]
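Jeff's follow-up pins the failure on a double being funneled through interval's text input. The toy parser below reproduces the failure mode he reports, a bare number with no unit being rejected; the unit list, regex, and function are invented for illustration and are not the backend's real interval parser.

```python
import re

# Toy stand-in for an interval text parser that, like the behavior
# reported above, rejects input containing no unit word at all.
UNIT = re.compile(r'\b(second|minute|hour|day|week|month|year)s?\b')

def parse_interval(text):
    if not UNIT.search(text):
        raise ValueError(f"Bad interval external representation '{text}'")
    return text  # a real parser would build an interval value here

cputime = 275.0                      # double column value
try:
    parse_interval(str(int(cputime)))  # "275": no unit, rejected
except ValueError as e:
    print(e)

print(parse_interval(f"{int(cputime)} seconds"))  # unit-qualified, accepted
```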
[ { "msg_contents": "I believe I see the mechanism for the 7.2b4 failure reported by\nChristian Meunier this morning:\n\n> pg_dump: query to get data of sequence \"account_num_seq\" failed: FATAL 2:\n> open of /usr/local/pgsql/data/pg_clog/0000 failed: No such file or directory\n> server closed the connection unexpectedly\n> This probably means the server terminated abnormally\n> before or while processing the request.\n> \n> The only file i got in directory pg_clog is : 0002\n\nHere's what happened: when the sequence object was created, the single\nrow inserted in it was given the XID of the transaction creating the\nsequence. Let's suppose that the only thing done with the sequence for\na long time was nextval()s, never a \"select from account_num_seq\". The\nrow would remain present with unmodified info bits --- ie, same XID,\nnot known committed. VACUUM ignores sequences, so VACUUMs wouldn't\nchange the state of the row either. However, in the fullness of time\nVACUUM would decide that there could no longer be any unvacuumed\nreferences to the sequence-creating XID, and it would delete the CLOG\nsegment holding the state of that XID. Later still (this must be more\nthan a million xacts after the sequence's creation), a pg_dump is done,\nand it tries to do \"select from account_num_seq\", whereupon the CLOG\ncode is asked for the state of the long-ago transaction. Kaboom.\n\nIn short, VACUUM's assumption that it sees and marks every t_xmin in the\ndatabase is false, because it doesn't look at sequences, and every\nsequence contains a t_xmin field.\n\nI believe that the best way to fix this is for sequence creation to\nforcibly mark the sequence's lone tuple with t_xmin =\nFrozenTransactionId. In this way, the row will always be considered\ngood by SELECT with no further ado. 
This cannot cause any transaction\nto see a row that it shouldn't see --- if it can see the sequence\nobject's entry in pg_class, then it should be able to see the sequence's\ntuple.\n\nThe alternative, if anyone thinks that's unsafe, is for VACUUM to\nprocess sequences along with plain relations so that it can mark/freeze\nsequence rows along with regular rows. But that seems like an awful lot\nof cycles expended to solve the problem.\n\nAny objections to doing it the first way?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 10 Jan 2002 16:23:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Sequence rows need to have FrozenTransactionId" } ]
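The failure mechanism and the proposed fix can be modeled in a few lines: a toy commit log whose old segments get truncated, and a visibility check that skips the clog lookup entirely when xmin is FrozenTransactionId (xid 2 in the backend). The Clog class and xid values here are invented illustration, not backend code.

```python
FROZEN_XID = 2          # FrozenTransactionId in the backend

class Clog:
    """Toy commit log whose old entries VACUUM may discard."""
    def __init__(self):
        self.status = {}                 # xid -> 'committed' / 'aborted'
    def commit(self, xid):
        self.status[xid] = 'committed'
    def truncate_before(self, xid):
        # VACUUM drops status for xids it believes are fully vacuumed.
        self.status = {x: s for x, s in self.status.items() if x >= xid}
    def is_committed(self, xid):
        if xid not in self.status:
            raise IOError(f"open of pg_clog segment for xid {xid} failed")
        return self.status[xid] == 'committed'

def tuple_visible(xmin, clog):
    # A frozen tuple is visible without consulting the clog at all.
    if xmin == FROZEN_XID:
        return True
    return clog.is_committed(xmin)

clog = Clog()
clog.commit(1000)                 # xact that created the sequence's row
clog.truncate_before(2_000_000)   # much later, VACUUM truncates pg_clog

# Unfrozen sequence tuple: the first SELECT after truncation fails
# (the reported bug).
try:
    tuple_visible(1000, clog)
except IOError as e:
    print("bug reproduced:", e)

# With xmin forced to FrozenTransactionId at creation, no clog lookup:
print(tuple_visible(FROZEN_XID, clog))
```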
[ { "msg_contents": "Dear all,\n\nI am interested in learning more about HLR/VLR database systems for wireless \nroaming services. Do you know any free implementation under PostgreSQL?\n\nThe idea would be to offer a free HLR/VLR service in the European Union to \nall 802.11a user. Any information about interactive maps is also welcome.\n\nBest regards,\nJean-Michel POURE\n", "msg_date": "Fri, 11 Jan 2002 08:45:07 +0100", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": true, "msg_subject": "Home Location Registry (HLR) and VLR databases" } ]
[ { "msg_contents": "\n> > With the new pgbench, I ran a test with current and 7.1 and saw\n> > not-so-small differences. Any idea to get better performance on 7.2\n> > and AIX 5L combo?\n> \n> I'm thinking more and more that there must be something weird about the\n> cs() routine that we use for spinlocks on AIX. Could someone dig into\n> that and find exactly what it does and whether it's got any performance\n> issues?\n\nThe manual page sais:\n\n Note: The cs subroutine is only provided to support binary compatibility with\n AIX Version 3 applications. When writing new applications, it is not\n recommended to use this subroutine; it may cause reduced performance in the\n future. Applications should use the compare_and_swap subroutine, unless they\n need to use unaligned memory locations.\n\nI once tried to replace cs() with compare_and_swap() but saw worse performance\nfor the limited testing I did (probably on a single CPU). Maybe the \"threat\"\nthat performance will be reduced is actually true on AIX 5 now.\n\nThe thing would imho now be for Tatsuo to try to replace cs with compare_and_swap,\nand see what happens on AIX 5.\n\nAndreas\n\nPS: Would the __powerpc__ assembly work on AIX machines ?\n", "msg_date": "Fri, 11 Jan 2002 09:16:29 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L " }, { "msg_contents": "> > I'm thinking more and more that there must be something weird about the\n> > cs() routine that we use for spinlocks on AIX. Could someone dig into\n> > that and find exactly what it does and whether it's got any performance\n> > issues?\n> \n> The manual page sais:\n> \n> Note: The cs subroutine is only provided to support binary compatibility with\n> AIX Version 3 applications. When writing new applications, it is not\n> recommended to use this subroutine; it may cause reduced performance in the\n> future. 
Applications should use the compare_and_swap subroutine, unless they\n> need to use unaligned memory locations.\n> \n> I once tried to replace cs() with compare_and_swap() but saw worse performance\n> for the limited testing I did (probably on a single CPU). Maybe the \"threat\"\n> that performance will be reduced is actually true on AIX 5 now.\n> \n> The thing would imho now be for Tatsuo to try to replace cs with compare_and_swap,\n> and see what happens on AIX 5.\n> \n> Andreas\n> \n> PS: Would the __powerpc__ assembly work on AIX machines ?\n> \n\nI wish I could do that but...\n\n From the manual page of compare_and_swap (see below):\n\nWhat I'm not sure is this part:\n\n> Note If compare_and_swap is used as a locking primitive, insert an\n> isync at the start of any critical sections;\n\nWhat is \"isync\"? Also, how I can implement calling compare_and_swap in\nthe assembly language?\n--\nTatsuo Ishii\n\n-----------------------------------------------------------------------\nboolean_t compare_and_swap ( word_addr, old_val_addr, new_val)\n\natomic_p word_addr;\n\nint *old_val_addr;\n\nint new_val;\n\nDescription\n\nThe compare_and_swap subroutine performs an atomic operation which compares\nthe contents of a single word variable with a stored old value; If the\nvalues are equal, a new value is stored in the single word variable and TRUE\nis returned; otherwise, the old value is set to the current value of the\nsingle word variable and FALSE is returned;\n\nThe compare_and_swap subroutine is useful when a word value must be updated\nonly if it has not been changed since it was last read;\n\n Note The word containing the single word variable must be aligned on\n a full word boundary\n\n Note If compare_and_swap is used as a locking primitive, insert an\n isync at the start of any critical sections;\n\nParameters\n\nword_addr Specifies the address of the single word variable&#46;\n\nold_val_addr Specifies the address of the old value to be checked against\n(and 
conditionally updated with) the value of the single word variable&#46;\n\nnew_val Specifies the new value to be conditionally assigned to the single\nword variable&#46;\n\nReturn Values\n\nTRUE Indicates that the single word variable was equal to the old value, and\nhas been set to the new value&#46;\n\nFALSE Indicates that the single word variable was not equal to the old value,\nand that its current value has been returned in the location where the old\nvalue was previously stored&#46;\n\nImplementation Specifics\n\nImplementation Specifics\n\nThe compare_and_swap subroutine is part of the Base Operating System (BOS)\nRuntime\n\nRelated Information\n\nThe fetch_and_add (fetch_and_add Subroutine) subroutine, fetch_and_and\n(fetch_and_and or fetch_and_or Subroutine) subroutine, fetch_and_or\n(fetch_and_and or fetch_and_or Subroutine) subroutine&#46;\n\n", "msg_date": "Fri, 11 Jan 2002 17:28:01 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> [ compare_and_swap man page ]\n\nLooks kinda baroque. What about the referenced fetch_and_or routine?\nIf that's atomic it might be closer to TAS semantics.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jan 2002 12:14:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L " } ]
[ { "msg_contents": "If have some problem with updating `pg_relcheck' entries. Who can help ?\n\nPostgreSQL version:\n\n CVS of Jan 03 with some additions for my CREATE ERROR TRIGGER project\n (announced on this list). Among those is the following change in\n pg_relcheck:\n\n \tNameData\trcname;\n \ttext\t\trcbin;\n \ttext\t\trcsrc;\n+\tint2\t\trcerrhandlers;/* # of ON ERROR triggers,\n+\t\t\t\t\t * currently on 0 or 1 is supported */\n } FormData_pg_relcheck;\n \n rcerrhandlers is the number of error handlers for a CHECK constraint,\n i.e. the number of procedures to be called if a CHECK constraint fails.\n Error handlers are stored in pg_triggers like normal triggers. They\n reference CHECK constraints by OID in the same way as normal triggers\n reference relations by OID. For this to work I have still to add an OID\n to pg_relcheck, but the current question does not depend on this.\n\nError handlers are created by the following statement:\n\n=# create table mytab (arg text CHECK (length(arg)>10));\nCREATE\n=# CREATE ERROR TRIGGER ehtest on CONSTRAINT CHECK mytab_arg FOR EACH ROW EXECUTE PROCEDURE __proc('arg');\nCREATE\n\nIn executing the CREATE ERROR TRIGGER command I have to update the entry\nof the CHECK constraint in pg_relcheck and ro add an entry to pg_trigger.\nThe problem I have is related to updating the pg_relcheck entry.\n\nFor this I use the following code:\n\n\t/* Grab an exclusive lock on the pg_relcheck relation */\n\trcrel = heap_openr(RelCheckRelationName, RowExclusiveLock);\n\n\t/*\n\t * Create the scan key. We need to match the name of the CHECK\n\t * constraint.\n\t */\n\tScanKeyEntryInitialize(&key, 0, Anum_pg_relcheck_rcname,\n\t\t\t F_NAMEEQ, PointerGetDatum(constrName));\n\t/*\n\t * Begin scanning the heap\n\t */\n\trcscan = heap_beginscan(rcrel, 0, SnapshotNow, 1, &key);\n\n\t/*\n\t * We take the first CHECK constraint we can find.\n\t */\n\n\tif (! 
HeapTupleIsValid(rctup = heap_getnext(rcscan, 0)))\n\t{\n\t elog(ERROR,\"CreateErrorHandler: CHECK constraint \\\"%s\\\" does not exist\",\n\t constrName);\n\t}\n\n\t/**\n\t * Copy the tuple to be able to end the scan.\n\t */\n\trctup = heap_copytuple(rctup);\n\n\theap_endscan(rcscan);\n\n[snip]\n\n\t/**\n\t * update the error handler counter of the constraint\n\t */\n\t((Form_pg_relcheck) GETSTRUCT(rctup))->rcerrhandlers = found + 1;\n\tsimple_heap_update(rcrel, &rctup->t_self, rctup);\n\t\t\t\t\n\t/**\n\t * update the indices, too\n\t */\n[snip]\n\theap_freetuple(rctup);\n\n\theap_close(rcrel, RowExclusiveLock);\n\n\npg_relcheck before the update is done:\n\n# select * from pg_relcheck where rcname='mytab_arg';\nrcrelid | rcname | rcbin .. | rcsrc .. | rcerrhandlers\n 16570 | mytab_arg | ({ EXPR :typeOid 16 :opType op :oper { OPER :opno 521 :opid 147 :opresulttype 16 } :args ({ EXPR :typeOid 23 :opType func :oper { FUNC :funcid 1317 :functype 23 } :args ({ VAR :varno 1 :varattno 1 :vartype 25 :vartypmod -1 :varlevelsup 0 :varnoold 1 :varoattno 1})} { CONST :consttype 23 :constlen 4 :constbyval true :constisnull false :constvalue 4 [ 10 0 0 0 ] })}) | (length(arg) > 10) | 0\n(1 row)\n\npg_relcheck after the update is done:\n\n=# select * from pg_relcheck where rcname='mytab_arg';\n rcrelid | rcname | rcbin | rcsrc | rcerrhandlers \n---------+-----------+---------------+--------------------+---------------\n 16564 | mytab_arg | ({ EXPR :typ | (length(arg) > 10) | 0\n(1 row)\n\nFurthermore when I abort the transaction (with `elog') after the code\nsegment shown above is executed, the entry in pg_relcheck even vanishes !\n\nI'm really puzzled.\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Fri, 11 Jan 2002 11:52:39 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": true, "msg_subject": "Problems with simple_heap_update and Form_pg_relcheck" }, { "msg_contents": "On Fri, Jan 11, 2002 at 11:52:39AM +0100, Holger Krug wrote:\n> 
Furthermore when I abort the transaction (with `elog') after the code\n> segment shown above is executed, the entry in pg_relcheck even vanishes !\n\nThis remark applies to a slightly different version of the code, when\nI do not heap_copytuple but modify the value in place. OK, that's\nunderstandable, so please forget it. But the problem, that\n`rcerrhandlers' is not updated, remains.\n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Fri, 11 Jan 2002 11:59:34 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": true, "msg_subject": "Re: Problems with simple_heap_update and Form_pg_relcheck" }, { "msg_contents": "Holger Krug <hkrug@rationalizer.com> writes:\n> \t((Form_pg_relcheck) GETSTRUCT(rctup))->rcerrhandlers = found + 1;\n\nWhat's an \"rcerrhandlers\"? It doesn't appear in current sources.\n\n> pg_relcheck after the update is done:\n> [rcbin is messed up]\n\nI have a feeling you added rcerrhandlers after the variable-length\nfields. Not good, at least not if you want to access it via a C\nstruct. See, eg,\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/catalog/README\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jan 2002 11:57:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problems with simple_heap_update and Form_pg_relcheck " }, { "msg_contents": "On Fri, Jan 11, 2002 at 11:57:27AM -0500, Tom Lane wrote:\n> Holger Krug <hkrug@rationalizer.com> writes:\n> > \t((Form_pg_relcheck) GETSTRUCT(rctup))->rcerrhandlers = found + 1;\n> \n> What's an \"rcerrhandlers\"? It doesn't appear in current sources.\n\nYes, it's part of my project to add error handlers. It's the number\nof error handlers for this constraint, currently my code allows\nas values either 0 or 1.\n\n> I have a feeling you added rcerrhandlers after the variable-length\n> fields. Not good, at least not if you want to access it via a C\n> struct. 
See, eg,\n> http://developer.postgresql.org/cvsweb.cgi/pgsql/src/backend/catalog/README\n\nI have that feeling, too. Thanks for the prompt help. \n\n-- \nHolger Krug\nhkrug@rationalizer.com\n", "msg_date": "Fri, 11 Jan 2002 18:39:49 +0100", "msg_from": "Holger Krug <hkrug@rationalizer.com>", "msg_from_op": true, "msg_subject": "Re: Problems with simple_heap_update and Form_pg_relcheck" } ]
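Tom's diagnosis (the new fixed-width field added after the variable-length columns) boils down to compiled struct offsets no longer matching the on-disk layout past the first variable-length field. The record format below is invented to illustrate that; it is not pg_relcheck's actual storage.

```python
import struct

# Hypothetical on-disk record: a fixed int4, then a length-prefixed
# variable-length text field, then an int2 counter appended after it
# (the mistake: struct-overlay access assumes fixed offsets, which
# only hold up to the first variable-length column).
def pack_row(relid, src, errhandlers):
    body = src.encode()
    return (struct.pack('<ii', relid, len(body)) + body
            + struct.pack('<h', errhandlers))

row = pack_row(16570, '(length(arg) > 10)', 1)

# "Struct overlay" read: pretend the text column occupies a fixed
# 4 bytes, as a compiled C offset effectively would. The read lands
# in the middle of the text bytes and returns garbage.
bogus = struct.unpack_from('<h', row, 4 + 4 + 4)[0]

# Correct read: walk past the actual length of the variable field.
varlen = struct.unpack_from('<i', row, 4)[0]
good = struct.unpack_from('<h', row, 4 + 4 + varlen)[0]

print(bogus, good)
```

This is why the catalog README cited above requires all variable-length columns to come last: fixed-width fields stay at compile-time-known offsets only as long as nothing variable precedes them.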
[ { "msg_contents": "\n> This topic seems to come up a lot. Is there something we are missing in\n> the FAQ?\n\nMost of the reports we seem to see are the usual, \"but the seq scan is actually \nfaster\" case. Daniel actually has a case where the optimizer chooses a bad plan.\n\nThe difficulty seems to be, that the optimizer cooses a correct plan for an idle\nsystem, but with his workload the index path would be far better (2 vs 4 Minutes).\n\nThis is one of the main problems of the current optimizer which imho rather \naggressively chooses seq scans over index scans. During high load this does \nnot pay off. My preference would actually be a way to make the optimizer\nchoose a plan that causes minimal workload, and not shortest runtime \n(which will obviously only be fast with low overall workload)\nThe reasoning behind this is, that during low workload your response times\nwill be good enough with a \"bad\" plan, but during high workload your response \ntimes will be best with a plan that produces the least additional workload.\n\nAndreas\n", "msg_date": "Fri, 11 Jan 2002 12:46:42 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: again on index usage" }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> My preference would actually be a way to make the optimizer\n> choose a plan that causes minimal workload, and not shortest runtime \n\n?? I am not sure that I see the difference.\n\nWhat I think you are saying is that when there's lots of competing work,\nseqscans have less advantage over indexscans because the\nsequential-access locality advantage is lost when the disk drive has to\ngo off and service some other request. If that's the mechanism, I think\nthat the appropriate answer is just to reduce random_page_cost. It's\ntrue that the current default of 4.0 was chosen using measurements on\notherwise-unloaded systems. 
If you assume that the system is (a) too\nbusy to do read-aheads for you, and (b) has to move the disk arm to\nservice other requests between each of your requests, then it's not\nclear that sequential reads have any performance advantage at all :-(.\nI don't think I'd go as far as to lower random_page_cost to 1.0, but\ncertainly there's a case for using an intermediate value.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jan 2002 11:34:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": "Zeugswetter Andreas SB SD wrote:\n\n\n> This is one of the main problems of the current optimizer which imho rather \n> aggressively chooses seq scans over index scans. During high load this does \n> not pay off.\n\n\nBingo ... dragging huge tables through the buffer cache via a sequential \nscan guarantees that a) the next query sequentially scanning the same \ntable will have to read every block again (if the table's longer than \navailable PG and OS cache) b) on a high-concurrency system other queries \nend up doing extra I/O, too.\n\nOracle partially mitigates the second effect by refusing to trash its \nentire buffer cache on any given sequential scan. Or so I've been told \nby people who know Oracle well. A repeat of the sequential scan will \nstill have to reread the entire table but that's true anyway if the \ntable's at least one block longer than available cache.\n\nOf course, Oracle picks sequential scans in horribly and obviously wrong \ncases as well. 
On one project over the summer I had a query Oracle \nrefused to use an available index on until I told it to do so explictly, \nand when I did it sped up by a factor of about 100.\n\nAll optimizers will fail miserably for certain queries and datasets.\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Fri, 11 Jan 2002 08:41:11 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": "Don Baccus wrote:\n> Zeugswetter Andreas SB SD wrote:\n> \n> \n> > This is one of the main problems of the current optimizer which imho rather \n> > aggressively chooses seq scans over index scans. During high load this does \n> > not pay off.\n> \n> \n> Bingo ... dragging huge tables through the buffer cache via a sequential \n> scan guarantees that a) the next query sequentially scanning the same \n> table will have to read every block again (if the table's longer than \n> available PG and OS cache) b) on a high-concurrency system other queries \n> end up doing extra I/O, too.\n> \n> Oracle partially mitigates the second effect by refusing to trash its \n> entire buffer cache on any given sequential scan. Or so I've been told \n> by people who know Oracle well. A repeat of the sequential scan will \n> still have to reread the entire table but that's true anyway if the \n> table's at least one block longer than available cache.\n\nThat is on our TODO list, at least.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 11 Jan 2002 11:42:43 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": "Bruce Momjian wrote:\n\n\n>>Oracle partially mitigates the second effect by refusing to trash its \n>>entire buffer cache on any given sequential scan. Or so I've been told \n>>by people who know Oracle well. A repeat of the sequential scan will \n>>still have to reread the entire table but that's true anyway if the \n>>table's at least one block longer than available cache.\n>>\n> \n> That is on our TODO list, at least.\n\n\nI didn't realize this, it's good news. (I don't follow PG development \nclosely these days).\n\nBTW overall I think the cost-estimating portion of the PG optimizer does \nabout as well as Oracle's. Oracle is a lot smarter about doing \ntransformations of certain types of queries (turning \"scalar in (select \n...)\" into something akin to an \"exists\") but of course this has nothing \nto do with estimating the cost of index vs. sequential scans.\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Fri, 11 Jan 2002 09:01:46 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": ">>>Tom Lane said:\n > \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n > > My preference would actually be a way to make the optimizer\n > > choose a plan that causes minimal workload, and not shortest runtime \n > \n > ?? 
I am not sure that I see the difference.\n\nThere can be difference only if the optimizer takes into account already \nexecuting plans (by other backends).\n\n > What I think you are saying is that when there's lots of competing work,\n > seqscans have less advantage over indexscans because the\n > sequential-access locality advantage is lost when the disk drive has to\n > go off and service some other request.\n\nThis is exactly my point. The primary goal of the optimizer in my opinion \nshould be to avoid trashing. :-) Now, it is not easy to figure out when the \nsystem starts trashing - at least not a portable way I can think of \nimmediately.\n\n > I don't think I'd go as far as to lower random_page_cost to 1.0, but\n > certainly there's a case for using an intermediate value.\n\nThe question is: how does one find the proper value? That is, is it possible to design planner benchmarking utility to aid in tuning Postgres? One that does not execute single query and optimize on idle system.\n\nDaniel\n\n", "msg_date": "Fri, 11 Jan 2002 19:05:45 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": "On Fri, Jan 11, 2002 at 11:42:43AM -0500, Bruce Momjian wrote:\n> Don Baccus wrote:\n> > Zeugswetter Andreas SB SD wrote:\n> > \n> > \n> > > This is one of the main problems of the current optimizer which imho rather \n> > > aggressively chooses seq scans over index scans. During high load this does \n> > > not pay off.\n> > \n> > \n> > Bingo ... 
dragging huge tables through the buffer cache via a sequential \n> > scan guarantees that a) the next query sequentially scanning the same \n> > table will have to read every block again (if the table's longer than \n> > available PG and OS cache) b) on a high-concurrency system other queries \n> > end up doing extra I/O, too.\n> > \n> > Oracle partially mitigates the second effect by refusing to trash its \n> > entire buffer cache on any given sequential scan. Or so I've been told \n> > by people who know Oracle well. A repeat of the sequential scan will \n> > still have to reread the entire table but that's true anyway if the \n> > table's at least one block longer than available cache.\n> \n> That is on our TODO list, at least.\n> \n\nHmm, on Linux this sort of behavior (skip the pg buffers for sequential\nscans) would have interesting load senstive behavior: since Linux uses\nall not-otherwise in use RAM as buffer cache, if you've got a low-load\nsystem, even very large tables will be cached. Once other processes start\ncompeting for RAM, your buffers go away. Bruce, what does xBSD do?\n\nI like it, since anything that needs to be sensitive to system wide\ninformation, like the total load on the machine, should be handled by\nthe system, not a particular app.\n\nRoss\n", "msg_date": "Fri, 11 Jan 2002 11:22:09 -0600", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n>>> I don't think I'd go as far as to lower random_page_cost to 1.0, but\n>>> certainly there's a case for using an intermediate value.\n\n> The question is: how does one find the proper value? That is, is it\n> possible to design planner benchmarking utility to aid in tuning\n> Postgres?\n\nThe trouble is that getting trustworthy numbers requires huge test\ncases, because you have to swamp out the effects of the kernel's own\nbuffer caching. 
I spent about a week running 24-hour-constant-disk-\nbanging experiments when I came up with the 4.0 number we use now,\nand even then I didn't feel that I had a really solid range of test\ncases to back it up.\n\nMy advice to you is just to drop it to 2.0 and see if you like the plans\nyou get any better.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jan 2002 12:42:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": "Ross J. Reedstrom wrote:\n\n\n> Hmm, on Linux this sort of behavior (skip the pg buffers for sequential\n> scans) would have interesting load senstive behavior: since Linux uses\n> all not-otherwise in use RAM as buffer cache, if you've got a low-load\n> system, even very large tables will be cached. Once other processes start\n> competing for RAM, your buffers go away. Bruce, what does xBSD do?\n\n\nFor people who run dedicated database services simply not using pg \nbuffers for sequential scans is probably too simplistic. Assuming one \nallocates a very large pg buffer space, as I tend to do. Remember that \nshuffling data between a pg buffer and OS cache buffer takes cycles, and \nmodern machines tend to be starved for memory bandwidth - perhaps \nanother reason why Oracle requested and got writes that bypass the OS \ncache entirely? This bypasses the byte-shuffling.\n\nOf course, Oracle's preferred approach is to have you set up your OS \nenvironment so that Oracle pretty much eats the machine. They tell you \nto set SHMAX to 4GB in the installation docs, for instance, then the \ninstaller by default will configure Oracle to use about 1/3 of your \navailable RAM for its buffer cache. 
Books on tuning generally tell you \nthat's far too low.\n\nAnyway, I've been told that Oracle's approach is more along the lines of \n\"don't cache sequential scans that eat up more than N% of our own cache \nspace\".\n\nThen shorter tables still get fully cached when sequentially scanned, \nwhile humongous tables don't wipe out the cache causing dirty pages to \nbe flushed to the platter and other concurrent processes to do file I/O \nreads because everything but the huge table's disappeared.\n\nSomeone in an earlier post mentioned \"thrashing\" and that's what \ndragging a table bigger than cache causes on busy systems.\n\n\n> \n> I like it, since anything that needs to be sensitive to system wide\n> information, like the total load on the machine, should be handled by\n> the system, not a particular app.\n> \n> Ross\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n> .\n> \n> \n\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Fri, 11 Jan 2002 10:23:27 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": "Daniel Kalchev wrote:\n> >>>Tom Lane said:\n> > \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > > My preference would actually be a way to make the optimizer\n> > > choose a plan that causes minimal workload, and not shortest runtime \n> > \n> > ?? 
I am not sure that I see the difference.\n> \n> There can be difference only if the optimizer takes into account already \n> executing plans (by other backends).\n> \n> > What I think you are saying is that when there's lots of competing work,\n> > seqscans have less advantage over indexscans because the\n> > sequential-access locality advantage is lost when the disk drive has to\n> > go off and service some other request.\n> \n> This is exactly my point. The primary goal of the optimizer in my opinion \n> should be to avoid trashing. :-) Now, it is not easy to figure out when the \n> system starts trashing - at least not a portable way I can think of \n> immediately.\n\nI have always felt some feedback mechanism from the executor back to the\noptimizer was required but I was never sure quite how to implement it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 11 Jan 2002 13:24:20 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": "> > That is on our TODO list, at least.\n> > \n> \n> Hmm, on Linux this sort of behavior (skip the pg buffers for sequential\n> scans) would have interesting load senstive behavior: since Linux uses\n> all not-otherwise in use RAM as buffer cache, if you've got a low-load\n> system, even very large tables will be cached. Once other processes start\n> competing for RAM, your buffers go away. Bruce, what does xBSD do?\n\nFreeBSD does, NetBSD will soon, not sure about the others. I believe\nNetBSD will be tunable because page cache vs. i/o cache is not always\nbest done with a FIFO setup.\n\nAlso, when we pull from the kernel cache we have to read into our shared\nbuffers; much faster than disk i/o but slower than if they were already\nin the cache. 
For me the idea of doing non-cached sequential scans came\nfrom a Solaris internals book I was reading. I think it will be\npossible to mark large sequential scan buffer i/o with lower priority\ncaching that may help performance. However, if others are also doing\nsequential scans of the same table _only_, our normal caching may be\nbest. As you can see, this gets quite complicated and I am doubtful\nthere will be a general solution to this problem --- more of a feedback\nloop may be the best bet --- determine which sequential scans are wiping\nthe cache with little other purpose and start not caching them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 11 Jan 2002 13:30:53 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": "Tom Lane wrote:\n> What I think you are saying is that when there's lots of competing work,\n> seqscans have less advantage over indexscans because the\n> sequential-access locality advantage is lost when the disk drive has to\n> go off and service some other request. If that's the mechanism, I think\n> that the appropriate answer is just to reduce random_page_cost. It's\n> true that the current default of 4.0 was chosen using measurements on\n> otherwise-unloaded systems. If you assume that the system is (a) too\n> busy to do read-aheads for you, and (b) has to move the disk arm to\n> service other requests between each of your requests, then it's not\n> clear that sequential reads have any performance advantage at all :-(.\n> I don't think I'd go as far as to lower random_page_cost to 1.0, but\n> certainly there's a case for using an intermediate value.\n\nIt is even more interesting than simple sequential vs random scan\nthinking. 
Depending on the maker of the drive, even an unloaded system\nwill move the head randomly. Modern drives have almost no resemblance to\ntheir predecessors. Sectors are mapped however the OEM sees fit. A\nnumerically sequential read from a hard disk may have the drive heads\nbouncing all over the disk because the internal configuration of the\ndisk has almost nothing to do with the external representation.\n\nThink about a RAID device. What does a sequential scan mean to a RAID\nsystem? Very little depending on how the image is constructed. Storage\ndevices are now black boxes. The only predictable advantage a\n\"sequential scan\" can have on a modern computer is OS level caching.\n", "msg_date": "Fri, 11 Jan 2002 13:58:09 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> ... Storage\n> devices are now black boxes. The only predictable advantage a\n> \"sequential scan\" can have on a modern computer is OS level caching.\n\nYou mean read-ahead. True enough, but that \"only advantage\" is very\nsignificant. The 4.0 number did not come out of the air, it came\nfrom actual measurements.\n\nI think the real point in this thread is that measurements on an idle\nsystem might not extrapolate very well to measurements on a heavily\nloaded system. I can see the point, but I don't really have time to\ninvestigate it right now. 
I'd be willing to reduce the default value of\nrandom_page_cost to something around 2, if someone can come up with\nexperimental evidence justifying it ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jan 2002 14:09:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": "Bruce Momjian wrote:\n\n\n> \n> I have always felt some feedback mechanism from the executor back to the\n> optimizer was required but I was never sure quite how to implement it.\n\n\nThe folks at DEC (rdb???) wrote a paper on it a long time ago (duh, back \nwhen DEC existed). I ran across it in the Tuft's library about a year \nago, back when my girlfriend was in grad school.\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Fri, 11 Jan 2002 11:29:32 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": "Bruce Momjian wrote:\n\n\n> Also, when we pull from the kernel cache we have to read into our shared\n> buffers; much faster than disk i/o but slower than if they were already\n> in the cache.\n\n\nYes ... this is one reason why folks like Oracle want to be able to \nbypass the kernel cache.\n\n> For me the idea of doing non-cached sequential scans came\n> from a Solaris internals book I was reading. I think it will be\n> possible to mark large sequential scan buffer i/o with lower priority\n> caching that may help performance. However, if others are also doing\n> sequential scans of the same table _only_, our normal caching may be\n> best. 
As you can see, this gets quite complicated and I am doubtful\n> there will be a general solution to this problem --- more of a feedback\n> loop may be the best bet --- determine which sequential scans are wiping\n> the cache with little other purpose and start not caching them.\n\n\nIt would be interesting to learn more about the actual hueristic Oracle \nuses (straight percents of the buffer cache? Something based on \nconcurrency? I have no idea). The Oracle folks have got tons and tons \nof data on real-world big, busy db installations to draw from when they \ninvestigate such things.\n\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Fri, 11 Jan 2002 11:42:38 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": ">>>Tom Lane said:\n > mlw <markw@mohawksoft.com> writes:\n > > ... Storage\n > > devices are now black boxes. The only predictable advantage a\n > > \"sequential scan\" can have on a modern computer is OS level caching.\n > \n > You mean read-ahead. True enough, but that \"only advantage\" is very\n > significant. The 4.0 number did not come out of the air, it came\n > from actual measurements.\n\nOn what OS? Linux? Windows? BSD? OSF/1? System V? All these differ \nsignificantly in how buffer cache is managed. For example, the BSD 'soft \nupdates' will not penalize large directory updates, but not do any good for \nsequential reads (considering what was said already about modern disks). SCSI \ntag queueing will significantly improve raw disk reads ('sequential' again) \nbecause of the low overhead of host<->SCSI subsystem communication - any \ndecent SCSI host adapter will do bus-master DMA, without the interference of \nthe processor (simplified as much as to illustrate it :). Todays IDE drives on \nPC hardware don't do that! 
Which is not to say that only SCSI drive \ncontrollers are intelligent enough - I still remember an older Motorola VME \nbased UNIX system (that now can only server the purpose of coffee table :), \nwhere an MFM controller board had all the intelligence of the SCSI subsystem, \nalthough it operated with 'dump' MFM disks. So many examples can be given here.\n\n > I think the real point in this thread is that measurements on an idle\n > system might not extrapolate very well to measurements on a heavily\n > loaded system. I can see the point, but I don't really have time to\n > investigate it right now. I'd be willing to reduce the default value of\n > random_page_cost to something around 2, if someone can come up with\n > experimental evidence justifying it ...\n\nAgreed. My preference would be, that if you have reasonable enough test data, \nthat can be shared, many people on different platforms can run performance \ntests and come up with an array of recommended values for their particular \nOS/hardware configuration. I believe these two items are most significant for \nthe tuning of an installation.\n\nDaniel Kalchev\n\n", "msg_date": "Sat, 12 Jan 2002 13:51:23 +0200", "msg_from": "Daniel Kalchev <daniel@digsys.bg>", "msg_from_op": false, "msg_subject": "Re: again on index usage " }, { "msg_contents": "Don Baccus wrote:\n> \n> Zeugswetter Andreas SB SD wrote:\n> \n> > This is one of the main problems of the current optimizer which imho rather\n> > aggressively chooses seq scans over index scans. During high load this does\n> > not pay off.\n> \n> Bingo ... 
dragging huge tables through the buffer cache via a sequential\n> scan guarantees that a) the next query sequentially scanning the same\n> table will have to read every block again (if the table's longer than\n> available PG and OS cache) b) on a high-concurrency system other queries\n> end up doing extra I/O, too.\n> \n> Oracle partially mitigates the second effect by refusing to trash its\n> entire buffer cache on any given sequential scan. Or so I've been told\n> by people who know Oracle well. A repeat of the sequential scan will\n> still have to reread the entire table but that's true anyway if the\n> table's at least one block longer than available cache.\n\nOne radical way to get better-than-average cache behaviour in such \npathologigal casescases would be to discard a _random_ page instead of \nLRU page (perhaps tuned to not not select from 1/N of pages on that are\nMRU)\n\n-------------\nHannu\n", "msg_date": "Sat, 12 Jan 2002 17:08:24 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: again on index usage" }, { "msg_contents": "Hannu Krosing wrote:\n\n\n> One radical way to get better-than-average cache behaviour in such \n> pathologigal casescases would be to discard a _random_ page instead of \n> LRU page (perhaps tuned to not not select from 1/N of pages on that are\n> MRU)\n\n\nYep, that's one of the ways to improve performance when the same table's \nbeing scanned sequentially multiple times, or where different queries \nsometimes scan it sequentially, other times by index. 
MRU would help if \nyou're constantly doing sequential scans.\n\nSo would flipping the scan order depending on what's in the cache :)\n\nBut none of these would mitigate the effects on other concurrent queries \nthat don't query the large table at all.\n\n\n\n-- \nDon Baccus\nPortland, OR\nhttp://donb.photo.net, http://birdnotes.net, http://openacs.org\n\n", "msg_date": "Sat, 12 Jan 2002 07:44:29 -0800", "msg_from": "Don Baccus <dhogaza@pacifier.com>", "msg_from_op": false, "msg_subject": "Re: again on index usage" } ]
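[Editor's note] The LRU-versus-random-replacement trade-off raised at the end of the thread above can be made concrete with a tiny simulation. This is an editor's sketch in Python, not code from the thread; the cache and table sizes are arbitrary illustration values:

```python
import random

def scan_hits(policy, cache_size, table_pages, passes, seed=1):
    """Count buffer-cache hits over repeated sequential scans of one table."""
    rng = random.Random(seed)
    cache = []  # front of the list = least recently used page
    hits = 0
    for _ in range(passes):
        for page in range(table_pages):
            if page in cache:
                hits += 1
                if policy == "lru":
                    cache.remove(page)  # refresh recency on a hit
                    cache.append(page)
            else:
                if len(cache) >= cache_size:
                    victim = 0 if policy == "lru" else rng.randrange(len(cache))
                    cache.pop(victim)
                cache.append(page)
    return hits

# 150-page table, 100-page cache, five consecutive sequential scans
lru_hits = scan_hits("lru", 100, 150, 5)
rand_hits = scan_hits("random", 100, 150, 5)
```

With a table slightly larger than the cache, strict LRU evicts each page just before it is needed again and scores zero hits on every pass, while random replacement lets some pages survive between passes — the pathological case Hannu's random-eviction suggestion is aimed at.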
[ { "msg_contents": "\nWhy does postgresql have to die if the disk fills up? Can't it just\ngo into an idle state and complain or something?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 11 Jan 2002 10:54:55 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "why?" }, { "msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> Why does postgresql have to die if the disk fills up? Can't it just\n> go into an idle state and complain or something?\n\nAFAIR the curl-up-and-die response only occurs if we fail to obtain\nspace for XLOG or CLOG files; failure to extend ordinary data files\nisn't fatal.\n\nI don't really see a way around stopping the system when XLOG or CLOG\nis broken. With transaction support not working you can't do anything.\n\nA bright spot is that in 7.2, since we generally recycle rather than\ndelete/recreate XLOG files, adding space for XLOG is a rare event.\nCLOG doesn't grow very fast either (2 bits per transaction). So\nyou should be more likely to see out-of-space reflected as a user data\nfile extension failure before you run into XLOG/CLOG trouble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jan 2002 11:42:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: why? " } ]
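[Editor's note] Tom's figure in the thread above — two CLOG status bits per transaction — puts a number on how slowly CLOG grows. A quick back-of-the-envelope check (an illustration, not code from the thread):

```python
BITS_PER_XACT = 2  # commit-status bits recorded per transaction, per the reply above

def clog_bytes(transactions):
    """Bytes of CLOG needed to track the given number of transactions."""
    return transactions * BITS_PER_XACT // 8

million = clog_bytes(1_000_000)  # -> 250000 bytes, about a quarter megabyte
```

A million transactions cost only about 250 kB of CLOG, which is why a filling disk is far more likely to surface first as an ordinary data-file extension failure than as fatal XLOG/CLOG trouble.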
[ { "msg_contents": "Should the permissions of a deleted user get assigned to a new user\nas in the example below?\n\nMartin\n\n$ createdb test\nCREATE DATABASE\n$ psql test\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\ntest=# create table abc(a int, b int);\nCREATE\ntest=# select relacl from pg_class where relname='abc';\n relacl \n--------\n \n(1 row)\n\ntest=# create user martin;\nCREATE USER\ntest=# grant all on abc to martin;\nGRANT\ntest=# select relacl from pg_class where relname='abc';\n relacl \n-------------------------------------\n {=,postgres=arwdRxt,martin=arwdRxt}\n(1 row)\n\ntest=# drop user martin;\nDROP USER\ntest=# select relacl from pg_class where relname='abc';\n relacl \n----------------------------------\n {=,postgres=arwdRxt,101=arwdRxt}\n(1 row)\n\ntest=# create user tom;\nCREATE USER\ntest=# select relacl from pg_class where relname='abc';\n relacl \n----------------------------------\n {=,postgres=arwdRxt,tom=arwdRxt}\n(1 row)\n\ntest=# select version();\n version \n---------------------------------------------------------------------\n PostgreSQL 7.2b4 on i386-unknown-freebsd4.3, compiled by GCC 2.95.3\n(1 row)\n\ntest=# \n\n", "msg_date": "Fri, 11 Jan 2002 14:10:46 -0500", "msg_from": "Martin Renters <martin@datafax.com>", "msg_from_op": true, "msg_subject": "bug in permission handling?" }, { "msg_contents": "Martin Renters <martin@datafax.com> writes:\n> Should the permissions of a deleted user get assigned to a new user\n> as in the example below?\n\nThat can happen, since the default \"usesysid\" assignment is \"max\nexisting usesysid + 1\". If you delete the last user then their sysid\nbecomes a candidate for reassignment. 
This is not real good, but fixing\nit isn't that high on the priority list (and is difficult to do unless\nwe take away the option of hand-assigned sysids ... otherwise we could\njust have a sequence generator for sysids).\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 10:29:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in permission handling? " }, { "msg_contents": "On Mon, Jan 14, 2002 at 10:29:01AM -0500, Tom Lane wrote:\n> Martin Renters <martin@datafax.com> writes:\n> > Should the permissions of a deleted user get assigned to a new user\n> > as in the example below?\n> \n> That can happen, since the default \"usesysid\" assignment is \"max\n> existing usesysid + 1\". If you delete the last user then their sysid\n> becomes a candidate for reassignment. This is not real good, but fixing\n> it isn't that high on the priority list (and is difficult to do unless\n> we take away the option of hand-assigned sysids ... otherwise we could\n> just have a sequence generator for sysids).\n\nIsn't it possible for PostgreSQL to delete permissions on tables when a\nuser gets deleted? It seems to be a bit of a security issue when a new\nuser suddenly inherits permissions he shouldn't have.\n\nMartin\n", "msg_date": "Mon, 14 Jan 2002 11:12:48 -0500", "msg_from": "Martin Renters <martin@datafax.com>", "msg_from_op": true, "msg_subject": "Re: bug in permission handling?" }, { "msg_contents": "Martin Renters <martin@datafax.com> writes:\n> Isn't it possible for PostgreSQL to delete permissions on tables when a\n> user gets deleted?\n\nNot as long as users span multiple databases. The deleting backend\ncan't even get to the other databases in which the user might have\npermissions.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 11:15:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in permission handling? 
" }, { "msg_contents": "On Monday 14 January 2002 10:15 am, Tom Lane wrote:\n> Martin Renters <martin@datafax.com> writes:\n> > Isn't it possible for PostgreSQL to delete permissions on tables when a\n> > user gets deleted?\n>\n> Not as long as users span multiple databases. The deleting backend\n> can't even get to the other databases in which the user might have\n> permissions.\n>\n\nCould we have it delete all the users permissions in the current database? \nOr at least do this when we have schema support, as I think that there will \ntypically be only one database (I think that is what I have heard).\n\nI think that extranious permissions whether they are misassgned to a new \nuser, or not assigned to anyone are a bad thing.\n", "msg_date": "Mon, 14 Jan 2002 14:59:02 -0600", "msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: bug in permission handling?" }, { "msg_contents": "Matthew T. O'Connor writes:\n\n> I think that extranious permissions whether they are misassgned to a new\n> user, or not assigned to anyone are a bad thing.\n\nWell, Unix systems have been working like that for decades and no one has\ncome up with a bright idea how to fix it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 14 Jan 2002 18:42:52 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: bug in permission handling?" }, { "msg_contents": "At 10:29 AM 14-01-2002 -0500, Tom Lane wrote:\n>Martin Renters <martin@datafax.com> writes:\n>> Should the permissions of a deleted user get assigned to a new user\n>> as in the example below?\n>\n>That can happen, since the default \"usesysid\" assignment is \"max\n>existing usesysid + 1\". If you delete the last user then their sysid\n>becomes a candidate for reassignment. 
This is not real good, but fixing\n>it isn't that high on the priority list (and is difficult to do unless\n>we take away the option of hand-assigned sysids ... otherwise we could\n>just have a sequence generator for sysids).\n\nI think the sequence way is good for now - don't reuse user ids. However\nwhat's the max possible user id? \n\nAlso that way if someone screws up and deletes the wrong user, it might\nstill be possible to restore the user and permissions.\n\nCheerio,\nLink.\n\n", "msg_date": "Tue, 15 Jan 2002 10:44:04 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: bug in permission handling? " }, { "msg_contents": "On Mon, 14 Jan 2002, Tom Lane wrote:\n\n> Martin Renters <martin@datafax.com> writes:\n> > Should the permissions of a deleted user get assigned to a new user\n> > as in the example below?\n> \n> That can happen, since the default \"usesysid\" assignment is \"max\n> existing usesysid + 1\". If you delete the last user then their sysid\n> becomes a candidate for reassignment. This is not real good, but fixing\n> it isn't that high on the priority list (and is difficult to do unless\n> we take away the option of hand-assigned sysids ... otherwise we could\n> just have a sequence generator for sysids).\n\nAnother slight bug with CreateUser() -- there does not appear to be any\nchecking for potential overflow of sysid. The function scans pg_shadow to\nfind the largest usrsysid. Once obtained:\n\n /* If no sysid given, use max existing id + 1 */\n if (!havesysid)\n sysid = max_id + 1;\n\nGavin\n\n", "msg_date": "Tue, 15 Jan 2002 14:51:21 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: bug in permission handling? 
" }, { "msg_contents": "On Tue, 15 Jan 2002, Gavin Sherry wrote:\n\n> On Mon, 14 Jan 2002, Tom Lane wrote:\n> \n> > Martin Renters <martin@datafax.com> writes:\n> > > Should the permissions of a deleted user get assigned to a new user\n> > > as in the example below?\n> > \n> > That can happen, since the default \"usesysid\" assignment is \"max\n> > existing usesysid + 1\". If you delete the last user then their sysid\n> > becomes a candidate for reassignment. This is not real good, but fixing\n> > it isn't that high on the priority list (and is difficult to do unless\n> > we take away the option of hand-assigned sysids ... otherwise we could\n> > just have a sequence generator for sysids).\n> \n> Another slight bug with CreateUser() -- there does not appear to be any\n> checking for potential overflow of sysid. The function scans pg_shadow to\n> find the largest usrsysid. Once obtained:\n> \n> /* If no sysid given, use max existing id + 1 */\n> if (!havesysid)\n> sysid = max_id + 1;\n> \n\nLeft this bit off:\n\ntemplate1=# create user def with sysid 2147483647;\nCREATE USER\ntemplate1=# create user def2;\nCREATE USER\ntemplate1=# create user def3;\nERROR: Cannot insert a duplicate key into unique index \npg_shadow_usesysid_index\ntemplate1=# select usesysid from pg_shadow where usename ~ 'def';\n usesysid\n-------------\n 2147483647\n -2147483648\n(2 rows)\n\ntemplate1=# select version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.2b4 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\ntemplate1=#\n\nGavin\n\n\n", "msg_date": "Tue, 15 Jan 2002 15:06:23 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: bug in permission handling? " }, { "msg_contents": "On Mon, 14 Jan 2002, Peter Eisentraut wrote:\n\n> Matthew T. 
O'Connor writes:\n> \n> > I think that extranious permissions whether they are misassgned to a new\n> > user, or not assigned to anyone are a bad thing.\n> \n> Well, Unix systems have been working like that for decades and no one has\n> come up with a bright idea how to fix it.\n\nSorry to bring this up again a few weeks later. It occurs to me that this\nreally isn't an answer. When adding a new user to a UNIX system, the\nrelevant command would have *at least* to scan the entire file system to\ndetermine if the max(uid + 1) (from /etc/passwd) owned anything. This is\nunreasonable. \n\nIn the case of postgres, however, all objects in the system are\nnecessarily registered in the system tables. One could easily determine a\nsysid which owns no objects by scanning the attributes of those relations\nwhich reference objects in the system -- pg_aggregate.aggowner,\npg_class.relowner, etc -- and add one to the maximum sysid found.\n\nI was going to run up a patch for this, but it wold be premature given\nthe introduction of schemas in 7.3. Once implemented, it would be trivial\nto add a test of schema ownership and incorporate this into the idea\nabove.\n\nGavin\n\n", "msg_date": "Sat, 26 Jan 2002 16:27:32 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: bug in permission handling?" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> I was going to run up a patch for this, but it wold be premature given\n> the introduction of schemas in 7.3. Once implemented, it would be trivial\n> to add a test of schema ownership and incorporate this into the idea\n> above.\n\nUnfortunately, looking into databases you aren't connected to is far\nfrom trivial. 
As long as users span multiple databases this problem\nis not really soluble...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 26 Jan 2002 01:24:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: bug in permission handling? " }, { "msg_contents": "On Sat, 26 Jan 2002, Tom Lane wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > I was going to run up a patch for this, but it wold be premature given\n> > the introduction of schemas in 7.3. Once implemented, it would be trivial\n> > to add a test of schema ownership and incorporate this into the idea\n> > above.\n> \n> Unfortunately, looking into databases you aren't connected to is far\n> from trivial. As long as users span multiple databases this problem\n> is not really soluble...\n\nArgh. Of course, my bad.\n\nGavin\n", "msg_date": "Sat, 26 Jan 2002 19:09:32 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: bug in permission handling? " } ]
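[Editor's note] Both behaviours reported in the thread above — reuse of a dropped user's id and the wraparound Gavin demonstrated — fall out of the "max existing usesysid + 1" rule. A hypothetical Python sketch of that rule (not the actual CreateUser() C code), emulating the two's-complement wraparound observed in the session:

```python
INT32_MIN, INT32_MAX = -2**31, 2**31 - 1

def next_sysid(existing_sysids):
    """Hypothetical sketch of the allocation rule described above:
    max existing usesysid + 1, stored as a signed 32-bit integer."""
    candidate = max(existing_sysids) + 1
    # emulate signed 32-bit wraparound on overflow
    return (candidate - INT32_MIN) % 2**32 + INT32_MIN

# A dropped top id becomes reusable: the next CREATE USER gets it back,
# along with any ACL entries that still mention the old numeric id.
assert next_sysid([1, 100]) == 101
# And allocating past INT32_MAX wraps negative, as Gavin observed:
assert next_sysid([INT32_MAX]) == INT32_MIN
```

Running the rule once more with both 2147483647 and -2147483648 present yields -2147483648 again, which is exactly the duplicate-key failure on pg_shadow_usesysid_index shown in Gavin's session.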
[ { "msg_contents": "I was running a stock 7.2b4 Postgres with pgbench.\n\nRedHat Linux 7.2, stock everything.\nDual PIII 650MHZ\nSMP kernel\n1G RAM\nEXT3 file system\n\nAdaptec SCSI disks, one for /base and ont for/pg_xlog\n\npgbench scaled to 50\n\n./pgbench -n -t 100 -h $HOST -c 100 bench\n\nIt hung in the checkpoint. I wasn't able to trace it.\n", "msg_date": "Fri, 11 Jan 2002 22:53:08 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "checkpoint hang in 7.2b4" }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> ./pgbench -n -t 100 -h $HOST -c 100 bench\n\n> It hung in the checkpoint. I wasn't able to trace it.\n\nHung, or just took longer than you expected?\n\nI don't see how you expect us to do anything with that report.\nI can assure you I've run quite a few pgbenches on stock RH \nLinux 7.2, with no sign of a problem. So you'll have to do some\ninvestigation of your own, if you see a problem...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 11 Jan 2002 23:15:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: checkpoint hang in 7.2b4 " }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > ./pgbench -n -t 100 -h $HOST -c 100 bench\n> \n> > It hung in the checkpoint. I wasn't able to trace it.\n> \n> Hung, or just took longer than you expected?\n\nNo, hung, 20 minutes. Load average dropped to zero.\n\n> \n> I don't see how you expect us to do anything with that report.\n> I can assure you I've run quite a few pgbenches on stock RH\n> Linux 7.2, with no sign of a problem. So you'll have to do some\n> investigation of your own, if you see a problem...\n\nHey I didn't have the debugger going, I was doing a performance test. I will\nlook into it if it happens again. This wasn't a request for help, it was a\nheads up. 
If what I saw was anomalous that's great, but if it isn't I want to\nmake sure that you guys heard about it, just in case.\n", "msg_date": "Fri, 11 Jan 2002 23:30:19 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: checkpoint hang in 7.2b4" }, { "msg_contents": "mlw wrote:\n> \n> I was running a stock 7.2b4 Postgres with pgbench.\n> \n> RedHat Linux 7.2, stock everything.\n> Dual PIII 650MHZ\n> SMP kernel\n> 1G RAM\n> EXT3 file system\n> \n> Adaptec SCSI disks, one for /base and ont for/pg_xlog\n> \n> pgbench scaled to 50\n> \n> ./pgbench -n -t 100 -h $HOST -c 100 bench\n> \n> It hung in the checkpoint. I wasn't able to trace it.\n\nI had a similar experience once with -c 5. I sent output of \n\"ps ax| grep post\" to this list and Tom claimed it to be \nfixed in CVS head.\n\n------------\nHannu\n", "msg_date": "Sat, 12 Jan 2002 16:51:23 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: checkpoint hang in 7.2b4" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n>> It hung in the checkpoint. I wasn't able to trace it.\n\n> I had a similar experience once with -c 5. I sent output of \n> \"ps ax| grep post\" to this list and Tom claimed it to be \n> fixed in CVS head.\n\nOh, you're thinking of that three-way deadlock condition. I've\nforgotten, was that in b4?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Jan 2002 11:21:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: checkpoint hang in 7.2b4 " }, { "msg_contents": "On Sat, 2002-01-12 at 21:21, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> >> It hung in the checkpoint. I wasn't able to trace it.\n> \n> > I had a similar experience once with -c 5. I sent output of \n> > \"ps ax| grep post\" to this list and Tom claimed it to be \n> > fixed in CVS head.\n> \n> Oh, you're thinking of that three-way deadlock condition. 
I've\n> forgotten, was that in b4?\n\nYes.\n\n--------------\nHannu\n", "msg_date": "12 Jan 2002 22:40:14 +0500", "msg_from": "Hannu Krosing <hannu@krosing.net>", "msg_from_op": false, "msg_subject": "Re: checkpoint hang in 7.2b4" } ]
[ { "msg_contents": "I've committed fixes for the recently reported problem with adding\ntimestamp with time zone to intervals. As pointed out earlier, there\nwere one or two places which did not make the right choice between\n\"zoneful\" and \"zoneless\" forms of support routines.\n\nI've also committed some additions to the horology regression test to\nexercise more possible input formats of date/time types. These should be\nexpanded further after 7.2 is released.\n\nRegression tests pass, and Tom Lane has volunteered to fix up the\nalternate horology regression outputs.\n\n - Thomas\n", "msg_date": "Sat, 12 Jan 2002 07:14:09 +0000", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": true, "msg_subject": "timestamptz + interval fixed" }, { "msg_contents": "Thomas Lockhart wrote:\n> I've committed fixes for the recently reported problem with adding\n> timestamp with time zone to intervals. As pointed out earlier, there\n> were one or two places which did not make the right choice between\n> \"zoneful\" and \"zoneless\" forms of support routines.\n\nActually, Tatsuo/SRA found the problem. I only did the legwork on\nchecking it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Jan 2002 22:47:30 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: timestamptz + interval fixed" } ]
[ { "msg_contents": "I have been doing some benchmarking on RedHat 7.2\nThe machine\nDual PIII 650, 1 Gig RAM.\n18G IBM SCSI LVD main drive with /pg_xlog\n9G Seagate UW SE with /base.\n2 Adaptec SCSI controllers.\n\nChanges to default postgresql.conf:\ntcpip_socket = true\nmax_connections = 128\nshared_buffers = 4096\nsort_mem = 2048\nwal_files = 32\nwal_sync_method = fdatasync\nrandom_page_cost = 2\n\nThe script: (from Tom)\n#! /bin/sh\nHOST=slave2\nDB=bench\ntotxacts=10000\n\nfor c in 10 25 50 100\ndo\n t=`expr $totxacts / $c`\n psql -h $HOST -c 'vacuum' $DB\n psql -h $HOST -c 'checkpoint' $DB\n echo \"===== sync ======\" 1>&2\n sync;sync;sync;sleep 10\n echo $c concurrent users... 1>&2\n ./pgbench -n -t $t -h $HOST -c $c $DB\ndone\n\nThe client machine was a separate Dual PIII 600 also running RedHat 7.2\n\n\nThe procedure:\nRedHat uses ext3 by default. Ext3, if you do not already know, is ext2 with\njournalling enabled. This is important because an ext3 file system can be\nmounted as an ext2 file system without any changes. Simply specify it as such.\n\nThe first two tests used a stock RedHat 7.2 system with ext3.\nThe next two tests used that same system rebooted and mounting the file systems\nas ext2.\n\next3.txt are the tests run against the ext3 file systems with fsync set to\nfalse.\next3.fsync.txt are the tests with fsync set to true.\next2.txt are the tests run against the database mounted as ext2 and fsync set\nto false.\next2.fsync.txt are the tests run with fsync set to true.\n\nJournalling always affects performance. This is no surprise. If you have fsync\nenabled, the effect is less pronounced. (this is also no\nsurprise). 
One\ninteresting thing, as the number of concurrent connections goes up, the impact\nof journalling and fsync are reduced.\n\nAfter running these benchmarks, I decided to run a series of benchmarks\nchanging the number of buffers.\n\next2.1024.txt\next2.2048.txt\n(4096 was the default in the previous tests)\next2.8192.txt\n\nI'm not sure of the digestion of all these numbers, but I thought some of you\nguys would be interested in comparing notes.", "msg_date": "Sat, 12 Jan 2002 14:15:51 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "benchmarking journalling file systems, fsync, and buffers." }, { "msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I have been doing some benchmarking on RedHat 7.2\n\nI think you neglected to mention the pgbench scale factor?\n\n> Journallng always affects performance. This is no surprise. If you\n> have fsync enabled, the affect is less pronounced. (this is also no\n> surprise). One interesting thing, as the number of concurrent\n> connections goes up, the impact of journalling and fsync are reduced.\n\nIf you are approaching the scale factor then that just means that the\nbackends are spending too much CPU on contending for row locks ...\n\n> I'm not sure of the digestion of all these numbers, but I thought some of you\n> guys would be interested in comparing notes.\n\nI find it easier to digest graphs than numbers, so here are a couple of\nGIFs of Mark's results. The first is the filesystem/fsync comparison,\nthe second the NBuffers comparison.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 12 Jan 2002 18:10:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: benchmarking journalling file systems, fsync, and buffers. 
" }, { "msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > I have been doing some benchmarking on RedHat 7.2\n> \n> I think you neglected to mention the pgbench scale factor?\n\nIt is in the output of the benchmark files, it is 50.\n\n> I find it easier to digest graphs than numbers, so here are a couple of\n> GIFs of Mark's results. The first is the filesystem/fsync comparison,\n> the second the NBuffers comparison.\n\nThanks.\n", "msg_date": "Sat, 12 Jan 2002 18:36:54 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": true, "msg_subject": "Re: benchmarking journalling file systems, fsync, and buffers." } ]
[ { "msg_contents": "I just spent some time trying to understand the mechanism behind the\n\"XLogFlush: request is not satisfied\" startup errors we've seen reported\noccasionally with 7.1. The only apparent way for this to happen is for\nXLogFlush to be given a garbage WAL record pointer (ie, one pointing\nbeyond the current end of WAL), which presumably must be coming from\na corrupted LSN field in a data page. Well, that's not too hard to\nbelieve during normal operation: say the disk drive drops some bits in\nthe LSN field, and we read the page in, and don't have any immediate\nneed to change it (which would cause the LSN to be overwritten); but we\ndo find some transaction status hint bits to set, so the page gets\nmarked dirty. Then when the page is written out, bufmgr will try to\nflush xlog using the corrupted LSN pointer.\n\nHowever, what about the cases where this error occurs during WAL\nrecovery? As far as I can see, all pages that WAL recovery touches will\nbe marked with a valid LSN pointing at the WAL record that caused them\nto be touched. So the bufmgr flush operations done during recovery\ncan't see a bad LSN even if the disk dropped some bits.\n\nWith one exception. In 7.1, pages of pg_log are held by the buffer\nmanager but we never touch their LSN fields. Ordinarily the LSNs are\nalways zero and don't cause any flush problems. But what if pg_log\nbecomes corrupt? If there's a sufficiently large garbage value in the\nLSN of a pg_log page that WAL recovery tries to set commit status in,\nthen bingo: we'll get \"request is not satisfied\" every time, because\nthe recovery process will always try to flush that page without fixing\nits LSN.\n\nSo the failure-to-start-up problem can be blamed entirely on 7.1's\nfailure to do anything with LSN fields in pg_log pages. I was able to\nget some experimental confirmation of this train of thought when I\nlooked at Jeff Lu's files (he's the most recent reporter of failure\nto start up with this message). 
Sure enough, his pg_log contains\ngarbage.\n\nIf that's true, then the startup problem should be gone in 7.2, since\n7.2 doesn't use the buffer manager to access CLOG pages and doesn't\nexpect CLOG pages to have an LSN.\n\nHowever we could still see failures during normal operation due to\ndropped bits on disk. So I am still dissatisfied with doing elog(STOP)\nfor this condition, as I regard it as an overly strong reaction to\ncorrupted data; moreover, it does nothing to fix the problem and indeed\ngets in the way of fixing the problem. I propose the attached patch.\nWhat do you think?\n\n\t\t\tregards, tom lane\n\n\n*** src/backend/access/transam/xlog.c.orig\tFri Dec 28 13:16:41 2001\n--- src/backend/access/transam/xlog.c\tSat Jan 12 13:24:20 2002\n***************\n*** 1262,1276 ****\n \t\t\tWriteRqst.Write = WriteRqstPtr;\n \t\t\tWriteRqst.Flush = record;\n \t\t\tXLogWrite(WriteRqst);\n- \t\t\tif (XLByteLT(LogwrtResult.Flush, record))\n- \t\t\t\telog(STOP, \"XLogFlush: request %X/%X is not satisfied --- flushed only to %X/%X\",\n- \t\t\t\t\t record.xlogid, record.xrecoff,\n- \t\t\t\t LogwrtResult.Flush.xlogid, LogwrtResult.Flush.xrecoff);\n \t\t}\n \t\tLWLockRelease(WALWriteLock);\n \t}\n \n \tEND_CRIT_SECTION();\n }\n \n /*\n--- 1262,1301 ----\n \t\t\tWriteRqst.Write = WriteRqstPtr;\n \t\t\tWriteRqst.Flush = record;\n \t\t\tXLogWrite(WriteRqst);\n \t\t}\n \t\tLWLockRelease(WALWriteLock);\n \t}\n \n \tEND_CRIT_SECTION();\n+ \n+ \t/*\n+ \t * If we still haven't flushed to the request point then we have a\n+ \t * problem; most likely, the requested flush point is past end of XLOG.\n+ \t * This has been seen to occur when a disk page has a corrupted LSN.\n+ \t *\n+ \t * Formerly we treated this as a STOP condition, but that hurts the\n+ \t * system's robustness rather than helping it: we do not want to take\n+ \t * down the whole system due to corruption on one data page. 
In\n+ \t * particular, if the bad page is encountered again during recovery then\n+ \t * we would be unable to restart the database at all! (This scenario\n+ \t * has actually happened in the field several times with 7.1 releases.\n+ \t * Note that we cannot get here while InRedo is true, but if the bad\n+ \t * page is brought in and marked dirty during recovery then\n+ \t * CreateCheckpoint will try to flush it at the end of recovery.)\n+ \t *\n+ \t * The current approach is to ERROR under normal conditions, but only\n+ \t * NOTICE during recovery, so that the system can be brought up even if\n+ \t * there's a corrupt LSN. Note that for calls from xact.c, the ERROR\n+ \t * will be promoted to STOP since xact.c calls this routine inside a\n+ \t * critical section. However, calls from bufmgr.c are not within\n+ \t * critical sections and so we will not force a restart for a bad LSN\n+ \t * on a data page.\n+ \t */\n+ \tif (XLByteLT(LogwrtResult.Flush, record))\n+ \t\telog(InRecovery ? NOTICE : ERROR,\n+ \t\t\t \"XLogFlush: request %X/%X is not satisfied --- flushed only to %X/%X\",\n+ \t\t\t record.xlogid, record.xrecoff,\n+ \t\t\t LogwrtResult.Flush.xlogid, LogwrtResult.Flush.xrecoff);\n }\n \n /*\n", "msg_date": "Sat, 12 Jan 2002 15:46:30 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Theory about XLogFlush startup failures" }, { "msg_contents": "Tom Lane wrote:\n> \n> I just spent some time trying to understand the mechanism behind the\n> \"XLogFlush: request is not satisfied\" startup errors we've seen reported\n> occasionally with 7.1. The only apparent way for this to happen is for\n> XLogFlush to be given a garbage WAL record pointer (ie, one pointing\n> beyond the current end of WAL), which presumably must be coming from\n> a corrupted LSN field in a data page. 
Well, that's not too hard to\n> believe during normal operation: say the disk drive drops some bits in\n> the LSN field, and we read the page in, and don't have any immediate\n> need to change it (which would cause the LSN to be overwritten); but we\n> do find some transaction status hint bits to set, so the page gets\n> marked dirty. Then when the page is written out, bufmgr will try to\n> flush xlog using the corrupted LSN pointer.\n\nI agree with you at least at the point that we had better\ncontinue FlushBufferPool() even though STOP-error occurs.\n\nBTW doesn't the LSN corruption imply the possibility\nof the corruption of other parts (of e.g. pg_log) ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 15 Jan 2002 11:23:44 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Theory about XLogFlush startup failures" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> BTW doesn't the LSN corruption imply the possibility\n> of the corruption of other parts (of e.g. pg_log) ?\n\nIndeed. Not sure what we can do about that.\n\nIn the case I examined (Jeff Lu's recent problem), pg_log appeared\nperfectly valid up through the end of the page containing the last\ntransaction ID recorded in pg_control. However, this ID was close to\nthe end of the page, and WAL entries contained XIDs reaching into the\nnext page of pg_log. That page contained complete garbage. Even\nmore interesting, there was about 400K of complete garbage beyond that\npage, in pages that Postgres should never have touched at all. (This\nseemed like a lot, considering the valid part of pg_log was less than\n200K.)\n\nMy bet is that the garbaged pages were there before Postgres got to\nthem. 
Both normal operation and WAL recovery would've died at the first\nattempt to write out the first garbage page, because of its bad LSN.\nAlso, AFAICT 7.1 and before contained no explicit code to zero a newly\nused pg_log page (it relied on the smgr to fill in zeroes when reading\nbeyond EOF); nor did the pg_log updating code stop to notice whether the\ntransaction status bits it was about to overwrite looked sane. So there\nwould've been no notice before trying to write the garbage page back out.\n(These last two holes, at least, are plugged in 7.2. But if the OS\ngives us back a page of garbage instead of the data we wrote out, how\nwell can we be expected to survive that?)\n\nSince Jeff was running on a Cygwin/Win2K setup, I'm quite happy to write\nthis off as an OS hiccup, unless someone can think of a mechanism inside\nPostgres that could have provoked it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 21:49:49 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Theory about XLogFlush startup failures " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > BTW doesn't the LSN corruption imply the possibility\n> > of the corruption of other parts (of e.g. pg_log) ?\n> \n> Indeed. Not sure what we can do about that.\n\nOne thing I can think of is to prevent a corrupted page\nfrom spoiling other pages by jumping the page boundary\nin the buffer pool.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 15 Jan 2002 12:49:51 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Theory about XLogFlush startup failures" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> Indeed. 
Not sure what we can do about that.\n\n> One thing I can think of is to prevent a corrupted page\n> from spoiling other pages by jumping the page boundary\n> in the buffer pool.\n\nWe do that already, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 22:55:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Theory about XLogFlush startup failures " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> Indeed. Not sure what we can do about that.\n> \n> > One thing I can think of is to prevent a corrupted page\n> > from spoiling other pages by jumping the page boundary\n> > in the buffer pool.\n> \n> We do that already, no?\n\nOh I may be missing something.\nWhere is it checked ?\n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 15 Jan 2002 13:22:59 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Theory about XLogFlush startup failures" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> One thing I can think of is to prevent a corrupted page\n> from spoiling other pages by jumping the page boundary\n> in the buffer pool.\n>> \n>> We do that already, no?\n\n> Oh I may be missing something.\n> Where is it checked ?\n\nI know PageRepairFragmentation is real paranoid about this, because I\nmade it so recently. I suppose it might be worth adding some more\nsanity checks to PageAddItem, maybe PageZero (is that ever called on a\npre-existing page?), and PageIndexTupleDelete. 
Seems like that should\nabout cover it --- noplace else inserts items on disk pages or\nreshuffles disk page contents, AFAIK.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 23:38:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Theory about XLogFlush startup failures " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > One thing I can think of is to prevent a corrupted page\n> > from spoiling other pages by jumping the page boundary\n> > in the buffer pool.\n> >>\n> >> We do that already, no?\n> \n> > Oh I may be missing something.\n> > Where is it checked ?\n> \n> I know PageRepairFragmentation is real paranoid about this, because I\n> made it so recently. I suppose it might be worth adding some more\n> sanity checks to PageAddItem, maybe PageZero (is that ever called on a\n> pre-existing page?), and PageIndexTupleDelete. Seems like that should\n> about cover it --- noplace else inserts items on disk pages or\n> reshuffles disk page contents, AFAIK.\n\nWhat about PageGetItem ? It seems to be able to touch the item\nvia HeapTupleSatisfies etc. \n\nregards,\nHiroshi Inoue\n", "msg_date": "Tue, 15 Jan 2002 13:52:01 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Theory about XLogFlush startup failures" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> I know PageRepairFragmentation is real paranoid about this, because I\n>> made it so recently. I suppose it might be worth adding some more\n>> sanity checks to PageAddItem, maybe PageZero (is that ever called on a\n>> pre-existing page?), and PageIndexTupleDelete. Seems like that should\n>> about cover it --- noplace else inserts items on disk pages or\n>> reshuffles disk page contents, AFAIK.\n\n> What about PageGetItem ? It seems to be able to touch the item\n> via HeapTupleSatisfies etc. \n\nHmm. 
Strictly speaking I think you are right, but I'm hesitant to add a\nbunch of new tests to PageGetItem --- that is much more of a hot spot\nthan PageAddItem, and it'll cost us something in speed I fear.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 23:58:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Theory about XLogFlush startup failures " }, { "msg_contents": "I said:\n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> What about PageGetItem ? It seems to be able to touch the item\n>> via HeapTupleSatisfies etc. \n\n> Hmm. Strictly speaking I think you are right, but I'm hesitant to add a\n> bunch of new tests to PageGetItem --- that is much more of a hot spot\n> than PageAddItem, and it'll cost us something in speed I fear.\n\nI wasn't totally comfortable with this (and I'm sure you weren't\neither), but after more thought I still feel it's the right tradeoff.\nHere are a couple of heuristic arguments why we don't need more error\nchecking in PageGetItem:\n\n1. tqual.c won't ever try to set tuple status bits until it's checked\nt_xmin or t_xmax against TransactionIdDidCommit/DidAbort. If messed-up\npage headers have caused us to compute a bogus item pointer, one would\nexpect that a more-or-less-random transaction ID will be passed to \nTransactionIdDidCommit/DidAbort. Now in 7.2, it's unlikely that more\nthan about 2 segments (2 million transactions' worth) of CLOG will exist\nat any instant, so the odds that asking about a random XID will produce\nan answer and not elog(STOP) are less than 1 in 1000.\n\n2. If this happened in the field, the signature would be one or two bits\nset apparently at random in an otherwise-okay page. In the data\ncorruption cases I've been able to examine personally, I can't recall\never having seen such a case. The usual form of corruption is dozens of\nconsecutive bytes worth of garbage overlaying part of an otherwise-valid\npage. 
While I tend to blame such stuff on hardware glitches (especially\nwhen the damage is aligned on power-of-2 byte boundaries), it's\ncertainly possible that it comes from a misdirected memcpy, which is why\nI think it's a good idea to introduce more bounds checking in\nPageAddItem and so forth.\n\nIf we start to see failure reports that look like they might have been\ncaused by tqual.c let loose on the wrong bits, we can certainly revisit\nthis decision. But right now I think that adding more checks in \nPageGetItem would waste a lot of cycles to little purpose.\n\nBTW, to close the loop back to the original topic: I think it's quite\nlikely that some of the elog(STOP)s in clog.c will need to be reduced to\nlesser error levels once we see what sorts of problems arise in the\nfield, just as we found that this particular elog(STOP) in xlog.c was\noverkill. But I want to wait and see which ones cause problems before\nbacking off the error severity.\n\nI will go and add a few more sanity checks to bufpage.c.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Jan 2002 11:47:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Theory about XLogFlush startup failures " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n>\n> I said:\n> > Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> >> What about PageGetItem ? It seems to be able to touch the item\n> >> via HeapTupleSatisfies etc.\n>\n> > Hmm. Strictly speaking I think you are right, but I'm hesitant to add a\n> > bunch of new tests to PageGetItem --- that is much more of a hot spot\n> > than PageAddItem, and it'll cost us something in speed I fear.\n>\n> I wasn't totally comfortable with this (and I'm sure you weren't\n> either), but after more thought I still feel it's the right tradeoff.\n> Here are a couple of heuristic arguments why we don't need more error\n> checking in PageGetItem:\n\nWhat I have minded is e.g. 
the following case.\nUndoubtedly the page is corrupted(too big offset number).\nI'm suspicious if other pages are safe under such a situation.\n\nregards,\nHiroshi Inoue\n\n a part of the report [GENERAL] Database corruption?\n by Alvaro Herrera [alvherre@atentus.com]\n\n> DEBUG: --Relation delay_171--\n> NOTICE: Rel delay_171: TID 15502/4279: OID IS INVALID. TUPGONE 0.\n> NOTICE: Rel delay_171: TID 15502/4291: OID IS INVALID. TUPGONE 1.\n> NOTICE: Rel delay_171: TID 15502/4315: OID IS INVALID. TUPGONE 1.\n> NOTICE: Rel delay_171: TID 15502/4375: OID IS INVALID. TUPGONE 0.\n> NOTICE: Rel delay_171: TID 15502/4723: OID IS INVALID. TUPGONE 1.\n> NOTICE: Rel delay_171: TID 15502/4771: OID IS INVALID. TUPGONE 0.\n> NOTICE: Rel delay_171: TID 15502/4783: OID IS INVALID. TUPGONE 0.\n> NOTICE: Rel delay_171: TID 15502/4831: OID IS INVALID. TUPGONE 1.\n> NOTICE: Rel delay_171: TID 15502/4843: OID IS INVALID. TUPGONE 0.\n> NOTICE: Rel delay_171: TID 15502/4867: InsertTransactionInProgress 0 -\ncan't shrink relation\n> NOTICE: Rel delay_171: TID 15502/4867: OID IS INVALID. TUPGONE 0.\n> [a lot similarly looking lines]\n> NOTICE: Rel delay_171: TID 15502/6067: OID IS INVALID. TUPGONE 0.\n> Server process (pid 22773) exited with status 139 at Sun Oct 21 02:30:27\n2001\n> Terminating any active server processes...\n\n", "msg_date": "Wed, 16 Jan 2002 05:49:50 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Theory about XLogFlush startup failures " }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> What I have minded is e.g. the following case.\n> Undoubtedly the page is corrupted(too big offset number).\n> I'm suspicious if other pages are safe under such a situation.\n\nYou have a point, but I still don't like slowing down PageGetItem.\n\nHow about this instead: whenever we read in a page, check to see\nif its page header data is sane. 
We could do this right after the\nsmgrread call in ReadBufferInternal, and follow the \"status = SM_FAIL\"\nexit path if we see trouble.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Jan 2002 17:19:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Theory about XLogFlush startup failures " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > What I have minded is e.g. the following case.\n> > Undoubtedly the page is corrupted(too big offset number).\n> > I'm suspicious if other pages are safe under such a situation.\n> \n> You have a point, but I still don't like slowing down PageGetItem.\n> \n> How about this instead: whenever we read in a page, check to see\n> if its page header data is sane. We could do this right after the\n> smgrread call in ReadBufferInternal, and follow the \"status = SM_FAIL\"\n> exit path if we see trouble.\n\nAgreed. What we really expect is to not see such troubles\nfrom the first.\n\nregards,\nHiroshi Inoue\n", "msg_date": "Wed, 16 Jan 2002 09:19:11 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Theory about XLogFlush startup failures" } ]
[ { "msg_contents": "For those of you who don't already know, the catalog version was updated\ntoday to fix problems with timestamp with/without timezone. This will\nrequire an initdb for all 7.2beta users.\n\nThe good news is that I have successfully migrated a 7.2 regression\ndatabase without an initdb using pg_upgrade. It will be a few more days\nuntil pg_upgrade is ready for serious testing. I will make an\nannouncement at that time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Jan 2002 20:18:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "initdb, pg_upgrade" } ]
[ { "msg_contents": "> > What is \"isync\"? Also, how I can implement calling \n> \n> sorry no idea :-(\n> \n> > compare_and_swap in the assembly language?\n> \n> In assembly language you would do the locking yourself,\n> the code would be identical, or at least very similar to\n> the __APPLE__ __ppc__ code.\n> \n> sample lock code supplied in the PowerPC\n> Architecture book (page 254):\n> \n> unlock: sync\n> stw 0, lock_location\n> blr\n> \n> In the unlock case the sync is all that is necessary to make all changes\n> protected by the lock globally visible. Note that no lwarx or stwcx. is\n> needed.\n> \n> lock:\n> 1: lwarx r5, lock_location\n> cmpiw r5, 0\n> bne 2f:\n> stwcx. 1, lock_location\n> bne 1b\n> isync\n> blr\n> 2: need to indicate the lock is already locked (could spin if you want to\n> in this case or put on a sleep queue)\n> blr\n> \n> isync only affects the running processor.\n\nI have tried LinuxPPC's TAS code but AIX's assembler complains that\nlwarx and stwcx are unsupported op. So it seems that we need to tweak\nyour code actually.\n--\nTatsuo Ishii\n\n", "msg_date": "Sun, 13 Jan 2002 12:07:25 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L " } ]
[ { "msg_contents": "I am testing pg_upgrade. I successfully did a pg_upgrade of a 7.2\nregression database into a fresh 7.2 install. I compared the output of\npg_dump from both copies and found that c_star dump caused a crash. I\nthen started doing more testing of the regression database and found\nthat the regression database does not load in cleanly. These failures\ncause pg_upgrade files not to match the loaded schema.\n\nLooks like there is a problem with inheritance, patch attached listing\nthe pg_dump load failures. I also see what looks like a crash in the\nserver logs:\n\t\n\tDEBUG: pq_flush: send() failed: Broken pipe\n\tFATAL 1: Socket command type 1 unknown\n\nLooks like it should be fixed before final.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n-- TOC Entry ID 2 (OID 18411)\n--\n-- Name: \"widget_in\" (opaque) Type: FUNCTION Owner: postgres\n--\nCREATE FUNCTION \"widget_in\" (opaque) RETURNS widget AS '/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/test/regress/regress.so', 'widget_in' LANGUAGE 'C';\nNOTICE: ProcedureCreate: type widget is not yet defined\nCREATE\n--\n-- TOC Entry ID 3 (OID 18412)\n--\n-- Name: \"widget_out\" (opaque) Type: FUNCTION Owner: postgres\n--\n--\nCREATE TABLE \"stud_emp\" (\n\t\"percent\" integer\n)\nINHERITS (\"emp\", \"student\");\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"name\"\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"age\"\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"location\"\nCREATE\n--\n-- TOC Entry ID 61 (OID 18465)\n--\n-- Name: city Type: TABLE Owner: postgres\n--\n--\nCREATE TABLE \"d_star\" (\n\t\"dd\" double precision\n)\nINHERITS (\"b_star\", \"c_star\");\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute 
\"class\"\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"aa\"\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"a\"\nCREATE\n--\n-- TOC Entry ID 73 (OID 18510)\n--\n-- Name: e_star Type: TABLE Owner: postgres\n--\n--\nCREATE TABLE \"d\" (\n\t\"dd\" text\n)\nINHERITS (\"b\", \"c\", \"a\");\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"aa\"\nNOTICE: CREATE TABLE: merging multiple inherited definitions of attribute \"aa\"\nCREATE\n--\n-- TOC Entry ID 100 (OID 135278)\n--\n-- Name: street Type: VIEW Owner: postgres\n-- Data for TOC Entry ID 325 (OID 18512)\n--\n-- Name: f_star Type: TABLE DATA Owner: postgres\n--\nCOPY \"f_star\" FROM stdin;\nERROR: copy: line 1, pg_atoi: error in \"((1,3),(2,4))\": can't parse \"((1,3),(2,4))\"\nlost synchronization with server, resetting connection\n--\n-- Data for TOC Entry ID 326 (OID 18517)\n--\n-- Name: aggtest Type: TABLE DATA Owner: postgres", "msg_date": "Sat, 12 Jan 2002 23:18:29 -0500", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Problem reloading regression database" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am testing pg_upgrade. I successfully did a pg_upgrade of a 7.2\n> regression database into a fresh 7.2 install. I compared the output of\n> pg_dump from both copies and found that c_star dump caused a crash. I\n> then started doing more testing of the regression database and found\n> that the regression database does not load in cleanly.\n\nNo kidding. That's been a known issue for *years*, Bruce. Without a\nway to reorder the columns in COPY, it can't be fixed. 
That's the main\nreason why we have a TODO item to allow column specification in COPY.\n\n> I also see what looks like a crash in the server logs:\n\t\n> \tDEBUG: pq_flush: send() failed: Broken pipe\n> \tFATAL 1: Socket command type 1 unknown\n\nNo, that's just the COPY failing (and resetting the connection). That's\nnot going to be fixed before final either, unless you'd like us to\ndevelop a new frontend COPY protocol before final...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 12 Jan 2002 23:45:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am testing pg_upgrade. I successfully did a pg_upgrade of a 7.2\n> > regression database into a fresh 7.2 install. I compared the output of\n> > pg_dump from both copies and found that c_star dump caused a crash. I\n> > then started doing more testing of the regression database and found\n> > that the regression database does not load in cleanly.\n> \n> No kidding. That's been a known issue for *years*, Bruce. Without a\n> way to reorder the columns in COPY, it can't be fixed. That's the main\n> reason why we have a TODO item to allow column specification in COPY.\n> \n> > I also see what looks like a crash in the server logs:\n> \t\n> > \tDEBUG: pq_flush: send() failed: Broken pipe\n> > \tFATAL 1: Socket command type 1 unknown\n> \n> No, that's just the COPY failing (and resetting the connection). That's\n> not going to be fixed before final either, unless you'd like us to\n> develop a new frontend COPY protocol before final...\n\nI used to test regression dumps a long time ago. 
It seems I haven't\ndone so recently; guess this is a non-problem or at least a known,\nminor one.\n\nIt also means my pg_upgrade is working pretty well if the rest of it\nworked fine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Jan 2002 23:48:30 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "[2002-01-12 23:45] Tom Lane said:\n| Bruce Momjian <pgman@candle.pha.pa.us> writes:\n| > I am testing pg_upgrade. I successfully did a pg_upgrade of a 7.2\n| > regression database into a fresh 7.2 install. I compared the output of\n| > pg_dump from both copies and found that c_star dump caused a crash. I\n| > then started doing more testing of the regression database and found\n| > that the regression database does not load in cleanly.\n| \n| No kidding. That's been a known issue for *years*, Bruce. Without a\n| way to reorder the columns in COPY, it can't be fixed. That's the main\n| reason why we have a TODO item to allow column specification in COPY.\n\nThe attached patch is a first-cut at implementing column specification \nin COPY FROM with the following syntax.\n\n COPY atable (col1,col2,col3,col4) FROM ...\n\nThe details:\n Add \"List* attlist\" member to CopyStmt parse node.\n Adds <please supply term ;-)> to gram.y allowing opt_column_list\n in COPY FROM Node.\n In CopyFrom, if attlist present, create Form_pg_attribute* ordered\n same as attlist.\n If all columns in the table are not found in attlist, elog(ERROR).\n Continue normal operation.\n\nRegression tests all still pass. There is still a problem where\nduplicate columns in the list will allow the operation to succeed,\nbut I believe this is the only problem. 
If this approach is sane,\nI'll clean it up later today.\n\ncomments?\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman", "msg_date": "Sun, 13 Jan 2002 10:30:19 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> In CopyFrom, if attlist present, create Form_pg_attribute* ordered\n> same as attlist.\n\nDoesn't seem like this can possibly work as-is. The eventual call to\nheap_formtuple must supply the column values in the order expected\nby the table, but I don't see you remapping from input-column indices to\ntable-column indices anywhere in the data processing loop.\n\nAlso, a reasonable version of this capability would allow the input\ncolumn list to be just a subset of the table column list; with the\ncolumn default expressions, if any, being evaluated to fill the missing\ncolumns. This would answer the requests we keep having for COPY to be\nable to load a table containing a serial-number column.\n\nDon't forget that if the syntax allows COPY (collist) TO file, people\nwill expect that to work too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Jan 2002 11:41:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database " }, { "msg_contents": "[2002-01-13 11:41] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > In CopyFrom, if attlist present, create Form_pg_attribute* ordered\n| > same as attlist.\n| \n| Doesn't seem like this can possibly work as-is. 
The eventual call to\n| heap_formtuple must supply the column values in the order expected\n| by the table, but I don't see you remapping from input-column indices to\n| table-column indices anywhere in the data processing loop.\n\nyup. back to the drawing board ;-)\n\n| Also, a reasonable version of this capability would allow the input\n| column list to be just a subset of the table column list; with the\n| column default expressions, if any, being evaluated to fill the missing\n| columns. This would answer the requests we keep having for COPY to be\n| able to load a table containing a serial-number column.\n\nright.\n\n| Don't forget that if the syntax allows COPY (collist) TO file, people\n| will expect that to work too.\n\n;-) darnit!\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sun, 13 Jan 2002 12:33:17 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "[2002-01-13 12:33] Brent Verner said:\n| [2002-01-13 11:41] Tom Lane said:\n| | Brent Verner <brent@rcfile.org> writes:\n| | > In CopyFrom, if attlist present, create Form_pg_attribute* ordered\n| | > same as attlist.\n| | \n| | Doesn't seem like this can possibly work as-is. The eventual call to\n| | heap_formtuple must supply the column values in the order expected\n| | by the table, but I don't see you remapping from input-column indices to\n| | table-column indices anywhere in the data processing loop.\n| \n| yup. 
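The remapping Tom is asking for — and the int*-map approach the patch moves to — can be sketched outside the backend in plain C. This is only a miniature: `remap_fields`, `attmap`, and string-valued columns are illustrative stand-ins, not copy.c's actual data structures, where the values would be Datums handed to heap_formtuple in table-column order.

```c
#include <assert.h>
#include <stddef.h>

/* attmap[i] gives the zero-based table-column position of the i-th
 * field in the COPY column list.  Parsed values arrive in input order
 * and are scattered into table order before the tuple is formed;
 * columns absent from the list stay NULL here (defaults come later). */
static void remap_fields(const char **in_values, int nfields,
                         const int *attmap, const char **table_values)
{
    for (int i = 0; i < nfields; i++)
        table_values[attmap[i]] = in_values[i];
}
```

For a four-column table and `COPY t (b, a) FROM ...`, attmap would be {2, 1}: the first input field lands in column b (table position 2), the second in column a (position 1).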
back to the drawing board ;-)\n\nI fixed this by making an int* mapping from specified collist \nposition to actual rd_att->attrs position.\n\n| | Also, a reasonable version of this capability would allow the input\n| | column list to be just a subset of the table column list; with the\n| | column default expressions, if any, being evaluated to fill the missing\n| | columns. This would answer the requests we keep having for COPY to be\n| | able to load a table containing a serial-number column.\n| \n| right.\n\nI'm still a bit^W^W lost as hell on how the column default magic \nhappens. It appears that in the INSERT case, the query goes thru\nthe planner and picks up the necessary Node* representing the\ndefault(s) for a relation, then later evaluates those nodes if\nnot attisset.\n\nShould I be looking to call \n ExecEvalFunc(stringToNode(adbin),ec,&rvnull,NULL);\nwhen an attr is not specified and it has a default? Or is there\na more straightforward way of getting the default for an att? \n(I sure hope there is ;-)\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sun, 13 Jan 2002 14:40:37 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> I fixed this by making an int* mapping from specified collist \n> position to actual rd_att->attrs position.\n\nSounds better.\n\n> I'm still a bit^W^W lost as hell on how the column default magic \n> happens.\n\nI'd say use build_column_default() in src/backend/optimizer/prep/preptlist.c\nto set up a default expression (possibly just NULL) for every column\nthat's not supplied by the input. 
That routine's not exported now, but\nit could be, or perhaps it should be moved somewhere else. (Suggestions\nanyone? Someplace in src/backend/catalog might be a more appropriate\nplace for it.)\n\nThen in the per-tuple loop you use ExecEvalExpr, or more likely\nExecEvalExprSwitchContext, to execute the default expressions.\nThe econtext wanted by ExecEvalExpr can be had from the estate\nthat CopyFrom already creates; use GetPerTupleExprContext(estate).\n\nYou'll need to verify that you have got the memory context business\nright, ie, no memory leak across rows. I think the above sketch is\nsufficient, but check it with a memory-eating default expression\nevaluated for a few million input rows ... and you are doing your\ntesting with --enable-cassert, I trust, to catch any dangling pointers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Jan 2002 15:17:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database " }, { "msg_contents": "[2002-01-13 15:17] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > I fixed this by making an int* mapping from specified collist \n| > position to actual rd_att->attrs position.\n| \n| Sounds better.\n| \n| > I'm still a bit^W^W lost as hell on how the column default magic \n| > happens.\n| \n| I'd say use build_column_default() in src/backend/optimizer/prep/preptlist.c\n| to set up a default expression (possibly just NULL) for every column\n| that's not supplied by the input. That routine's not exported now, but\n| it could be, or perhaps it should be moved somewhere else. (Suggestions\n| anyone? 
Someplace in src/backend/catalog might be a more appropriate\n| place for it.)\n\ngotcha.\n\n| Then in the per-tuple loop you use ExecEvalExpr, or more likely\n| ExecEvalExprSwitchContext, to execute the default expressions.\n| The econtext wanted by ExecEvalExpr can be had from the estate\n| that CopyFrom already creates; use GetPerTupleExprContext(estate).\n\nmany, many thanks!\n\n| You'll need to verify that you have got the memory context business\n| right, ie, no memory leak across rows. I think the above sketch is\n| sufficient, but check it with a memory-eating default expression\n| evaluated for a few million input rows ... \n\nYes, the above info should get me through.\n\n| and you are doing your\n| testing with --enable-cassert, I trust, to catch any dangling pointers.\n\n<ducks>\nI am now :-o\n\nthank you.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sun, 13 Jan 2002 16:42:06 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "[2002-01-13 16:42] Brent Verner said:\n| [2002-01-13 15:17] Tom Lane said:\n| | Brent Verner <brent@rcfile.org> writes:\n| | > I fixed this by making an int* mapping from specified collist \n| | > position to actual rd_att->attrs position.\n| | \n| | Sounds better.\n| | \n| | > I'm still a bit^W^W lost as hell on how the column default magic \n| | > happens.\n| | \n| | I'd say use build_column_default() in src/backend/optimizer/prep/preptlist.c\n| | to set up a default expression (possibly just NULL) for every column\n| | that's not supplied by the input. That routine's not exported now, but\n| | it could be, or perhaps it should be moved somewhere else. (Suggestions\n| | anyone? 
Someplace in src/backend/catalog might be a more appropriate\n| | place for it.)\n| \n| gotcha.\n| \n| | Then in the per-tuple loop you use ExecEvalExpr, or more likely\n| | ExecEvalExprSwitchContext, to execute the default expressions.\n| | The econtext wanted by ExecEvalExpr can be had from the estate\n| | that CopyFrom already creates; use GetPerTupleExprContext(estate).\n| \n| many, many thanks!\n| \n| | You'll need to verify that you have got the memory context business\n| | right, ie, no memory leak across rows. I think the above sketch is\n| | sufficient, but check it with a memory-eating default expression\n| | evaluated for a few million input rows ... \n| \n| Yes, the above info should get me through.\n\nround two...\n\n 1) I (kludgily) exported build_column_default() from its current\n location.\n 2) defaults expressions are now tried if a column is not in the\n COPY attlist specification.\n\nThere are still some kinks... (probably more than I've thought of)\n 1) a column in attlist that is not in the table will cause a segv \n in the backend.\n 2) duplicate names in attlist are still not treated as an error.\n\nI believe the memory context issues are handled correctly, but I've\nnot run the few million copy tests yet, and I probably won't be able\nto until late(r) tomorrow. No strangeness running compiled with \n--enable-cassert. Regression tests still pass.\n\nSanity checks much appreciated.\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
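The unlisted-column handling described above reduces to a small per-row fill step. A plain-C miniature follows — `fill_defaults` and the precomputed string defaults are hypothetical; in the backend the default expression comes from build_column_default() and is evaluated per tuple with ExecEvalExpr rather than precomputed like this:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* supplied[j] marks whether table column j appeared in the COPY
 * attlist.  Unlisted columns pick up their default, or NULL when the
 * column has no default (NOT NULL violations are caught later). */
static void fill_defaults(const char **values, const bool *supplied,
                          const char **defaults, int ncols)
{
    for (int j = 0; j < ncols; j++)
        if (!supplied[j])
            values[j] = defaults[j];
}
```

This is what makes `COPY t (c) FROM ...` workable for a table with a serial column: the sequence-backed default fills the columns the file never mentions.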
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman", "msg_date": "Sun, 13 Jan 2002 21:39:24 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "[2002-01-13 21:39] Brent Verner said:\n|\n| I believe the memory context issues are handled correctly, but I've\n\nI must retract this assertion. As posted, this patch dies on the\nsecond line of a COPY file... argh. What did I break?\n\n b\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sun, 13 Jan 2002 22:48:41 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> I must retract this assertion. As posted, this patch dies on the\n> second line of a COPY file... argh. What did I break?\n\nFirst guess is that you allocated some data structure in the per-tuple\ncontext that needs to be in the per-query context (ie, needs to live\nthroughout the copy).\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Jan 2002 22:51:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database " }, { "msg_contents": "[2002-01-13 22:51] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > I must retract this assertion. As posted, this patch dies on the\n| > second line of a COPY file... argh. What did I break?\n| \n| First guess is that you allocated some data structure in the per-tuple\n| context that needs to be in the per-query context (ie, needs to live\n| throughout the copy).\n\nyup. 
The problem sneaks up when I get a default value for a \"text\"\ncolumn via ExecEvalExprSwitchContext. Commenting out the pfree above \nheap_formtuple makes the error go away, but I know that's not the\nright answer. Should I avoid freeing the !attbyval items when they've\ncome from ExecEvalExpr -- I don't see any other examples of freeing\nreturns from this fn.\n\n b\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Sun, 13 Jan 2002 23:34:44 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> yup. The problem sneaks up when I get a default value for a \"text\"\n> column via ExecEvalExprSwitchContext. Commenting out the pfree above \n> heap_formtuple makes the error go away, but I know that's not the\n> right answer.\n\nOh, the pfree for the attribute values? Ah so. I knew that would\nbite us someday. See, the way this code presently works is that all of\ncopy.c runs in the per-query memory context. It calls all of the\ndatatype conversion routines in that same context. It assumes that\nthe routines that return pass-by-ref datatypes will return palloc'd\nvalues (and not, say, pointers to constant values) --- which is not\na good assumption IMHO, even though I think it's true at the moment.\nThis assumption is what's needed to justify the pfree's at the bottom of\nthe loop. 
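The ownership problem Tom describes — a blanket pfree that is only safe while every conversion routine happens to return freshly palloc'd storage — can be shown with plain malloc/free. The struct and function names below are illustrative, not backend code; the point is that a value which may alias constant storage (the T_Const case) cannot be freed unconditionally:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* A conversion result that records whether the caller owns the bytes.
 * copy.c's loop effectively assumes owns == true for every
 * pass-by-ref value; a pointer into a Const node breaks that. */
struct value {
    const char *data;
    bool        owns;
};

static struct value from_heap(const char *s)    /* fresh allocation */
{
    char *copy = malloc(strlen(s) + 1);
    strcpy(copy, s);
    return (struct value){ copy, true };
}

static struct value from_const(const char *s)   /* aliases a constant */
{
    return (struct value){ s, false };          /* not ours to free */
}

static void release(struct value v)
{
    if (v.owns)
        free((void *) v.data);  /* freeing the const case would be UB */
}
```

The reset-a-context approach Tom recommends next avoids this bookkeeping entirely: nothing is freed retail, so ownership never has to be tracked per value.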
What's even worse is that it assumes that the conversion\nroutines leak no other memory; if any conversion routine palloc's\nsomething it doesn't pfree, then over the course of a long enough copy\nwe run out of memory.\n\nIn the case of ExecEvalExpr, if the expression is just a T_Const node\nthen what you get back (for a pass-by-ref datatype) is a pointer to\nthe value sitting in the Const node. pfreeing this is bad juju.\n\n> Should I avoid freeing the !attbyval items when they've\n> come from ExecEvalExpr -- I don't see any other examples of freeing\n> returns from this fn.\n\nI believe the correct solution is to get rid of the retail pfree's\naltogether. The clean way to run this code would be to switch to\nthe per-tuple context at the head of the per-tuple loop (say, right\nafter ResetPerTupleExprContext), run all the datatype conversion\nroutines *and* ExecEvalExpr in this context, and then switch back\nto per-query context just before heap_formtuple. Then at the\nloop bottom the only explicit free you need is the heap_freetuple.\nThe individual attribute values are inside the per-tuple context\nand they'll be freed by the ResetPerTupleExprContext at the start\nof the next loop. Fewer cycles, works right whether the values are\npalloc'd or not, and positively prevents any problems with leaks\ninside the datatype conversion routines --- since any leaked pallocs\nwill also be inside the per-tuple context.\n\nAn even more radical approach would be to try to run the whole loop in\nper-tuple context, but I think that will probably break things; the\nindex insertion code, at least, expects to be called in per-query\ncontext because it sometimes makes allocations that must live across\ncalls. 
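The scheme sketched above — reset a per-tuple context at the top of each iteration, do all per-row allocation inside it, free nothing retail — is the classic region/arena pattern. A self-contained miniature, with a fixed bump allocator standing in for the backend's MemoryContext machinery (sizes and names are arbitrary):

```c
#include <assert.h>
#include <stddef.h>

struct arena {
    char   buf[4096];
    size_t used;
};

static void *arena_alloc(struct arena *a, size_t n)
{
    if (a->used + n > sizeof a->buf)
        return NULL;            /* one row's scratch data must fit */
    void *p = a->buf + a->used;
    a->used += n;
    return p;
}

static void arena_reset(struct arena *a)
{
    a->used = 0;                /* reclaims the whole row at once */
}

/* Simulate the COPY loop: every row allocates several attribute
 * values, and the single reset at the top of each iteration frees
 * them all -- leaks inside "conversion routines" cannot accumulate. */
static size_t copy_loop(struct arena *a, int nrows)
{
    size_t peak = 0;
    for (int r = 0; r < nrows; r++) {
        arena_reset(a);
        for (int col = 0; col < 8; col++)
            assert(arena_alloc(a, 32) != NULL);
        if (a->used > peak)
            peak = a->used;
    }
    return peak;
}
```

Anything that must survive the whole COPY (the column map, attribute_buf) would instead live in the longer-lived per-query context, allocated before the loop ever switches contexts.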
(Cleaning that up is on my long-term to-do list; I'd prefer\nto see almost all of the executor run in per-tuple contexts, so as\nto avoid potential memory leaks very similar to the situation here.)\n\nYou'll need to make sure that the code isn't expecting to palloc\nanything first-time-through and re-use it on later loops, but I\nthink that will be okay. (The attribute_buf is the most obvious\nrisk, but that's all right, see stringinfo.c.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 00:03:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database " }, { "msg_contents": "[2002-01-14 00:03] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > yup. The problem sneaks up when I get a default value for a \"text\"\n| > column via ExecEvalExprSwithContext. Commenting out the pfree above \n| > heap_formtuple makes the error go away, but I know that's not the\n| > right answer.\n| \n| Oh, the pfree for the attribute values? Ah so. I knew that would\n| bite us someday. See, the way this code presently works is that all of\n| copy.c runs in the per-query memory context. It calls all of the\n| datatype conversion routines in that same context. It assumes that\n| the routines that return pass-by-ref datatypes will return palloc'd\n| values (and not, say, pointers to constant values) --- which is not\n| a good assumption IMHO, even though I think it's true at the moment.\n| This assumption is what's needed to justify the pfree's at the bottom of\n| the loop. What's even worse is that it assumes that the conversion\n| routines leak no other memory; if any conversion routine palloc's\n| something it doesn't pfree, then over the course of a long enough copy\n| we run out of memory.\n\ncheck. I just loaded 3mil records (with my hacked copy.c), and had\nended up around 36M... 
<phew!!> I'm gonna load a similar file \nwith a clean copy.c, just to see if that leak is present without\nmy changes -- I suspect it's not, but I'd like to see the empirical\neffect of my change(s)....\n\nThanks for the commentary. It really helps glue together the\nthoughts I had from reading over the memory context code.\n\n| In the case of ExecEvalExpr, if the expression is just a T_Const node\n| then what you get back (for a pass-by-ref datatype) is a pointer to\n| the value sitting in the Const node. pfreeing this is bad juju.\n\nyup, that seems like it'd explain my symptom.\n\n| > Should I avoid freeing the !attbyval items when they've\n| > come from ExecEvalExpr -- I don't see any other examples of freeing\n| > returns from this fn.\n| \n| I believe the correct solution is to get rid of the retail pfree's\n| altogether. The clean way to run this code would be to switch to\n| the per-tuple context at the head of the per-tuple loop (say, right\n| after ResetPerTupleExprContext), run all the datatype conversion\n| routines *and* ExecEvalExpr in this context, and then switch back\n| to per-query context just before heap_formtuple. Then at the\n| loop bottom the only explicit free you need is the heap_freetuple.\n| The individual attribute values are inside the per-tuple context\n| and they'll be freed by the ResetPerTupleExprContext at the start\n| of the next loop. Fewer cycles, works right whether the values are\n| palloc'd or not, and positively prevents any problems with leaks\n| inside the datatype conversion routines --- since any leaked pallocs\n| will also be inside the per-tuple context.\n\nGotcha. This certainly sounds like it will alleviate my pfree \nproblem. 
I'll get back to this tomorrow evening.\n\n| An even more radical approach would be to try to run the whole loop in\n| per-tuple context, but I think that will probably break things; the\n| index insertion code, at least, expects to be called in per-query\n| context because it sometimes makes allocations that must live across\n| calls. (Cleaning that up is on my long-term to-do list; I'd prefer\n| to see almost all of the executor run in per-tuple contexts, so as\n| to avoid potential memory leaks very similar to the situation here.)\n| \n| You'll need to make sure that the code isn't expecting to palloc\n| anything first-time-through and re-use it on later loops, but I\n| think that will be okay. (The attribute_buf is the most obvious\n| risk, but that's all right, see stringinfo.c.)\n\nSo I /can't/ palloc some things /before/ switching context to \nper-tuple-context? I ask because I'm palloc'ing a couple of \narrays, that would have to be MaxHeapAttributeNumber long to \nmake sure we've enough space. Though, thinking about it, an\nadditional 13k of static storage in the binary is not all that\nmuch.\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Mon, 14 Jan 2002 00:29:32 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> | You'll need to make sure that the code isn't expecting to palloc\n> | anything first-time-through and re-use it on later loops, but I\n> | think that will be okay. (The attribute_buf is the most obvious\n> | risk, but that's all right, see stringinfo.c.)\n\n> So I /can't/ palloc some things /before/ switching context to \n> per-tuple-context?\n\nOh, sure you can. 
That's the point of having a per-query context.\nWhat I was wondering was whether there were any pallocs executed\n*after* entering the loop that the code expected to live across\nloop cycles. I don't think so, I'm just mentioning the risk as\npart of your education ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 00:41:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database " }, { "msg_contents": "[2002-01-14 00:41] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > | You'll need to make sure that the code isn't expecting to palloc\n| > | anything first-time-through and re-use it on later loops, but I\n| > | think that will be okay. (The attribute_buf is the most obvious\n| > | risk, but that's all right, see stringinfo.c.)\n| \n| > So I /can't/ palloc some things /before/ switching context to \n| > per-tuple-context?\n| \n| Oh, sure you can. That's the point of having a per-query context.\n| What I was wondering was whether there were any pallocs executed\n| *after* entering the loop that the code expected to live across\n| loop cycles. I don't think so, I'm just mentioning the risk as\n| part of your education ;-)\n\ngotcha. No, I don't think anything inside that loop expects to \npersist across iterations. The attribute_buf is static to the\nfile, and initialized in DoCopy.\n\nWhat I ended up doing is switching to per-tuple-context prior to \nthe input loop, then switching back to the (saved) query-context\nafter exiting the loop. 
I followed ResetTupleExprContext back, and\nit doesn't seem to do anything that would require a switch per loop.\nAre there any problems this might cause that I'm not seeing with \nmy test case?\n\nMemory use is now under control, and things look good (stable around \n2.8M).\n\nsleepy:/usr/local/pg-7.2/bin\nbrent$ ./psql -c '\\d yyy'\n Table \"yyy\"\n Column | Type | Modifiers \n--------+---------+------------------------------------------------\n id | integer | not null default nextval('\"yyy_id_seq\"'::text)\n a | integer | not null default 1\n b | text | not null default 'test'\n c | integer | \nUnique keys: yyy_id_key\n\nsleepy:/usr/local/pg-7.2/bin\nbrent$ wc -l mmm\n3200386 mmm\nsleepy:/usr/local/pg-7.2/bin\nbrent$ head -10 mmm\n\\N\n\\N\n\\N\n20\n10\n20\n20\n40\n50\n20\nsleepy:/usr/local/pg-7.2/bin\nbrent$ ./psql -c 'copy yyy(c) from stdin' < mmm\nsleepy:/usr/local/pg-7.2/bin\n\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Mon, 14 Jan 2002 21:30:38 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> gotcha. No, I don't think anything inside that loop expects to \n> persist across iterations. The attribute_buf is static to the\n> file, and initialized in DoCopy.\n\nThere is more to attribute_buf than meets the eye ;-)\n\n> What I ended up doing is switching to per-tuple-context prior to \n> the input loop, then switching back to the (saved) query-context\n> after exiting the loop. 
I followed ResetTupleExprContext back, and\n> it doesn't seem to do anything that would require a switch per loop.\n> Are there any problems this might cause that I'm not seeing with \n> my test case?\n\nI really don't feel comfortable with running heap_insert or the\nsubsequent operations in a per-tuple context. Have you tried any\ntest cases that involve triggers or indexes?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 21:52:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database " }, { "msg_contents": "[2002-01-14 21:52] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > gotcha. No, I don't think anything inside that loop expects to \n| > persist across iterations. The attribute_buf is static to the\n| > file, and initialized in DoCopy.\n| \n| There is more to attribute_buf than meets the eye ;-)\n\nI certainly don't doubt that, especially when it's my eye :-O\n\n| > What I ended up doing is switching to per-tuple-context prior to \n| > the input loop, then switching back to the (saved) query-context\n| > after exiting the loop. I followed ResetTupleExprContext back, and\n| > it doesn't seem to do anything that would require a switch per loop.\n| > Are there any problems this might cause that I'm not seeing with \n| > my test case?\n| \n| I really don't feel comfortable with running heap_insert or the\n| subsequent operations in a per-tuple context. Have you tried any\n| test cases that involve triggers or indexes?\n\nno, I dropped the index for the 3mil COPY. I will run with some\ntriggers and indexes in the table.\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Mon, 14 Jan 2002 22:05:51 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "[2002-01-14 21:52] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > gotcha. No, I don't think anything inside that loop expects to \n| > persist across iterations. The attribute_buf is static to the\n| > file, and initialized in DoCopy.\n| \n| There is more to attribute_buf than meets the eye ;-)\n| \n| > What I ended up doing is switching to per-tuple-context prior to \n| > the input loop, then switching back to the (saved) query-context\n| > after exiting the loop. I followed ResetTupleExprContext back, and\n| > it doesn't seem to do anything that would require a switch per loop.\n| > Are there any problems this might cause that I'm not seeing with \n| > my test case?\n| \n| I really don't feel comfortable with running heap_insert or the\n| subsequent operations in a per-tuple context. Have you tried any\n| test cases that involve triggers or indexes?\n\nYes, and I'm seeing no new problems (so far), but there is a problem \nin the current copy.c. Running the following on unmodified 7.2b5 \ncauses the backend to consume 17-18Mb of memory. 
Removing the \nREFERENCES on yyy.b causes memory use to be normal.\n\nbash$ cat copy.sql \nDROP table yyy;\nDROP SEQUENCE yyy_id_seq ;\nDROP TABLE zzz;\nDROP SEQUENCE zzz_id_seq ;\nCREATE TABLE zzz (\n id SERIAL,\n a INT,\n b TEXT NOT NULL DEFAULT 'test' PRIMARY KEY,\n c INT NOT NULL DEFAULT 1\n);\nCREATE TABLE yyy (\n id SERIAL,\n a INT,\n b TEXT NOT NULL DEFAULT 'test' REFERENCES zzz(b),\n c INT NOT NULL DEFAULT 1\n);\n-- make sure there is a 'test' value in zzz.b\nINSERT INTO zzz (a) VALUES (10);\nCOPY yyy FROM '/tmp/sometmpfilehuh'\n\nbash$ for i in `seq 1 200000`; do echo \"$i $i test $i\" >> /tmp/sometmpfilehuh; done\n\nbash$ head -1 /tmp/sometmpfilehuh; tail -1 /tmp/sometmpfilehuh\n1 1 test 1\n200000 200000 test 200000\n\nbash$ ./psql < copy.sql\n\n\nAny ideas? I'm looking around ExecBRInsertTriggers() to see what \nmight need to be freed around that call.\n\nthanks.\n brent\n\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Tue, 15 Jan 2002 00:44:38 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "[2002-01-15 00:44] Brent Verner said:\n\n| I'm looking around ExecBRInsertTriggers() to see what \n| might need to be freed around that call.\n\nscratch this idea. this bit is not even hit in my test case... sorry.\n\n b\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Tue, 15 Jan 2002 00:56:52 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> Yes, and I'm seeing no new problems (so far), but there is a problem \n> in the current copy.c. Running the following on unmodified 7.2b5 \n> causes the backend to consume 17-18Mb of memory.\n\nProbably that's just the space consumed for the pending-trigger events\ncreated by the AFTER trigger that implements the foreign key check.\nThere should be a provision for shoving that list out to disk when\nit gets too large ... but it ain't happening for 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 15 Jan 2002 01:07:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database " }, { "msg_contents": "[2002-01-15 01:07] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > Yes, and I'm seeing no new problems (so far), but there is a problem \n| > in the current copy.c. Running the following on unmodified 7.2b5 \n| > causes the backend to consume 17-18Mb of memory.\n| \n| Probably that's just the space consumed for the pending-trigger events\n| created by the AFTER trigger that implements the foreign key check.\n| There should be a provision for shoving that list out to disk when\n| it gets too large ... but it ain't happening for 7.2.\n\ngotcha. I'll move on along then...\n\nthanks.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n", "msg_date": "Tue, 15 Jan 2002 01:10:57 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "[2002-01-14 21:52] Tom Lane said:\n| Brent Verner <brent@rcfile.org> writes:\n| > gotcha. No, I don't think anything inside that loop expects to \n| > persist across iterations. The attribute_buf is static to the\n| > file, and initialized in DoCopy.\n| \n| There is more to attribute_buf than meets the eye ;-)\n| \n| > What I ended up doing is switching to per-tuple-context prior to \n| > the input loop, then switching back to the (saved) query-context\n| > after exiting the loop. I followed ResetTupleExprContext back, and\n| > it doesn't seem to do anything that would require a switch per loop.\n| > Are there any problems this might cause that I'm not seeing with \n| > my test case?\n| \n| I really don't feel comfortable with running heap_insert or the\n| subsequent operations in a per-tuple context. Have you tried any\n| test cases that involve triggers or indexes?\n\nYes. The attached patch appears to do the right thing with all \nindexes and triggers (RI) that I've tested. I'm still doing the\nMemoryContextSwitchTo() outside the main loop, and have added some \nmore sanity checking for column name input.\n\nIf anyone could test this (with non-critical data ;-) or otherwise \ngive feedback, I'd appreciate it; especially if someone could test\nwith a BEFORE INSERT trigger.\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. 
To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman", "msg_date": "Tue, 15 Jan 2002 02:02:13 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Problem reloading regression database" }, { "msg_contents": "\nThis has been saved for the 7.3 release:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n---------------------------------------------------------------------------\n\nBrent Verner wrote:\n> [2002-01-14 21:52] Tom Lane said:\n> | Brent Verner <brent@rcfile.org> writes:\n> | > gotcha. No, I don't think anything inside that loop expects to \n> | > persist across iterations. The attribute_buf is static to the\n> | > file, and initialized in DoCopy.\n> | \n> | There is more to attribute_buf than meets the eye ;-)\n> | \n> | > What I ended up doing is switching to per-tuple-context prior to \n> | > the input loop, then switching back to the (saved) query-context\n> | > after exiting the loop. I followed ResetTupleExprContext back, and\n> | > it doesn't seem to do anything that would require a switch per loop.\n> | > Are there any problems this might cause that I'm not seeing with \n> | > my test case?\n> | \n> | I really don't feel comfortable with running heap_insert or the\n> | subsequent operations in a per-tuple context. Have you tried any\n> | test cases that involve triggers or indexes?\n> \n> Yes. The attached patch appears to do the right thing with all \n> indexes and triggers (RI) that I've tested. I'm still doing the\n> MemoryContextSwitchTo() outside the main loop, and have added some \n> more sanity checking for column name input.\n> \n> If anyone could test this (with non-critical data ;-) or otherwise \n> give feedback, I'd appreciate it; especially if someone could test\n> with a BEFORE INSERT trigger.\n> \n> cheers.\n> brent\n> \n> -- \n> \"Develop your talent, man, and leave the world something. Records are \n> really gifts from people. 
To think that an artist would love you enough\n> to share his music with anyone is a beautiful thing.\" -- Duane Allman\n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 24 Jan 2002 20:14:49 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Problem reloading regression database" } ]
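The test case earlier in this thread builds its COPY input file with a 200000-iteration shell loop, one append per row, which is slow. The same space-separated four-column file can be produced by a single awk process; this is a sketch, where `ROWS` and `OUTFILE` are just stand-in names for the values used in the thread.

```shell
# Generate "i i test i" rows in one awk process instead of a shell loop.
# ROWS and OUTFILE are illustrative stand-ins for the thread's values.
ROWS=200000
OUTFILE=/tmp/sometmpfilehuh
awk -v n="$ROWS" 'BEGIN { for (i = 1; i <= n; i++) print i, i, "test", i }' > "$OUTFILE"

# Same sanity check as in the thread:
head -1 "$OUTFILE"   # 1 1 test 1
tail -1 "$OUTFILE"   # 200000 200000 test 200000
```

awk's `print` with comma-separated operands joins them with the default output field separator (a space), so the rows come out byte-identical to `echo "$i $i test $i"`.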
[ { "msg_contents": "pgman wrote:\n> I am testing pg_upgrade. I successfully did a pg_upgrade of a 7.2\n> regression database into a fresh 7.2 install. I compared the output of\n> pg_dump from both copies and found that c_star dump caused a crash. I\n> then started doing more testing of the regression database and found\n> that the regression database does not load in cleanly. These failures\n> cause pg_upgrade files not to match the loaded schema.\n> \n> Looks like there is a problem with inheritance, patch attached listing\n> the pg_dump load failures. I also see what looks like a crash in the\n> server logs:\n> \t\n> \tDEBUG: pq_flush: send() failed: Broken pipe\n> \tFATAL 1: Socket command type 1 unknown\n> \n> Looks like it should be fixed before final.\n\nI should have been clearer how to reproduce this:\n\n\t1) run regression tests\n\t2) pg_dump regression > /tmp/dump\n\t3) dropdb regression\n\t4) createdb regression\n\t5) psql regression < /tmp/dump > out 2> err\n\nLook at the err file.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 12 Jan 2002 23:22:29 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Problem reloading regression database" } ]
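Step 5 above sends reload errors to an `err` file; when there are many failures it is quicker to collapse them into distinct messages with counts than to read the file top to bottom. A sketch follows; the sample `err` contents in the here-document are invented for illustration, not taken from a real run.

```shell
# Build a small stand-in for the 'err' file produced by step 5.
# These sample lines are fabricated for the demo.
cat > /tmp/reload.err <<'EOF'
ERROR:  Relation 'a_star' does not exist
ERROR:  Relation 'a_star' does not exist
ERROR:  Relation 'b_star' does not exist
EOF

# Count each distinct ERROR message, most frequent first.
grep '^ERROR' /tmp/reload.err | sort | uniq -c | sort -rn
```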
[ { "msg_contents": "OK, I have tested pg_upgrade with 7.1->7.2 and 7.2->7.2 databases, and\nit seems to work fine. It is ready for wider testing.\n\nTo use pg_upgrade, copy it from CVS to somewhere in your path. Edit the\nfile and change ENABLE=\"N\" to ENABLE=\"Y\". Then, read pg_upgrade.sgml\nfor the steps required for its use.\n\npg_upgrade isn't enabled in the SGML build right now so you have to read\nthe SGML source at this point.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 13 Jan 2002 00:46:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "pg_upgrade ready for testing" } ]
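The enable step described above (edit your copy of pg_upgrade and change ENABLE="N" to ENABLE="Y") can also be done non-interactively. A sketch; it creates a tiny stand-in for the copied script so it is self-contained, and edits via a scratch file plus `mv` rather than the non-portable `sed -i`.

```shell
# Stand-in for the pg_upgrade copy placed in your path.
script=/tmp/pg_upgrade.copy
printf '%s\n' '#!/bin/sh' 'ENABLE="N"' > "$script"

# Flip the flag; write to a scratch file, then move it into place.
sed 's/^ENABLE="N"$/ENABLE="Y"/' "$script" > "$script.new" &&
    mv "$script.new" "$script"

grep '^ENABLE=' "$script"   # ENABLE="Y"
```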
[ { "msg_contents": "I was just looking at the latest pg_upgrade revision. Maybe it works for\nparticular installations, but before we can think about releasing this, at\nleast the issues below need to be addressed.\n\nIt seems to require that the data directory is under a directory named\n\"data\" and that your current directory is above that. This is not\nappropriate for real software. I suggest using standard -D options and/or\nthe PGDATA environment variable.\n\nThe locations of directories like the INFODIR and the OLDDIR should be\nconfigurable. Don't just put them where they happen to fall. (Remember\nthat possibly a lot of data will end up there.)\n\nThe way temp files are allocated looks very insecure. And does this thing\neven check under what user it's running?\n\n'test -e' is not portable.\n\n'grep -q' is not portable. (At least it doesn't portably behave as you\nmight think it does.)\n\nAlthough 'head' is probably portable, it has never been explored, because\n'sed 1q' is better.\n\nIf you set an exit trap, then 'exit 1' is not portable. (I'm not kidding,\nsee pg_regress.)\n\nYou can't nest \" inside ` inside \".\n\nPattern matching with 'expr' should be invoked like this: expr x\"$OBJ\" :\nx'pg_', or it might blow up when $OBJ has funny values.\n\nMoving directories with 'mv' is not necessarily a good idea.\n\nShould do a lot more error checking.\n\npsql, pg_ctl, etc. should not just be called from the path. You know the\ninstall directory, so use that.\n\nawk should be called as determined by configure.\n\nPoor quality (spelling, wording) of messages.\n\nThe man page is very confusing.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 13 Jan 2002 01:16:19 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "About pg_upgrade" }, { "msg_contents": "\nThese are all excellent points. Peter, I will be out for a few hours so\nI can't get to this right away. 
If you want to get started, the current\nversion is in CVS; feel free to wack it around. I have been committing\nto CVS so people would view/modify it. I will send you email before I\nget started with any changes.\n\nMoving to /contrib I think makes sense, particularly because you need\n7.2 pg_upgrade while 7.1 is still installed. I believe this will\nremain true for future releases as well. I was going to add a mention\nof that to pg_upgrade.sgml.\n\nOne more change I need which I haven't yet is to force sequence\nregeneration even for 7.2 because the XID sequence status changed from\n7.2beta4 to 7.2beta5. I will fix that today.\n\nI had no great plans for pg_upgrade; I just knew it had to be done some\nday and I figured now was a good time. The group can decide when and\nhow to use it.\n\n---------------------------------------------------------------------------\n\nPeter Eisentraut wrote:\n> I was just looking at the latest pg_upgrade revision. Maybe it works for\n> particular installations, but before we can think about releasing this, at\n> least the issues below need to be addressed.\n> \n> It seems to require that the data directory is under a directory named\n> \"data\" and that your current directory is above that. This is not\n> appropriate for real software. I suggest using standard -D options and/or\n> the PGDATA environment variable.\n> \n> The locations of directories like the INFODIR and the OLDDIR should be\n> configurable. Don't just put them where they happen to fall. (Remember\n> that possibly a lot of data will end up there.)\n> \n> The way temp files are allocated looks very insecure. And does this thing\n> even check under what user it's running?\n> \n> 'test -e' is not portable.\n> \n> 'grep -q' is not portable. 
(At least it doesn't portably behave as you\n> might think it does.)\n> \n> Although 'head' is probably portable, it has never been explored, because\n> 'sed 1q' is better.\n> \n> If you set an exit trap, then 'exit 1' is not portable. (I'm not kidding,\n> see pg_regress.)\n> \n> You can't nest \" inside ` inside \".\n> \n> Pattern matching with 'expr' should be invoked like this: expr x\"$OBJ\" :\n> x'pg_', or it might blow up when $OBJ has funny values.\n> \n> Moving directories with 'mv' is not necessarily a good idea.\n> \n> Should do a lot more error checking.\n> \n> psql, pg_ctl, etc. should not just be called from the path. You know the\n> install directory, so use that.\n> \n> awk should be called as determined by configure.\n> \n> Poor quality (spelling, wording) of messages.\n> \n> The man page is very confusing.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 13 Jan 2002 10:03:50 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: About pg_upgrade" }, { "msg_contents": "> One more change I need which I haven't yet is to force sequence\n> regeneration even for 7.2 because the XID sequence status changed from\n> 7.2beta4 to 7.2beta5. I will fix that today.\n\nI have made this change and committed the fix.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 13 Jan 2002 12:52:31 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: About pg_upgrade" }, { "msg_contents": "Peter Eisentraut wrote:\n> I was just looking at the latest pg_upgrade revision. Maybe it works for\n> particular installations, but before we can think about releasing this, at\n> least the issues below need to be addressed.\n> \n> It seems to require that the data directory is under a directory named\n> \"data\" and that your current directory is above that. This is not\n> appropriate for real software. I suggest using standard -D options and/or\n> the PGDATA environment variable.\n\nI have added code to use PGDATA, which can be overridden with a new -D\noption, as you suggested.\n\n> The locations of directories like the INFODIR and the OLDDIR should be\n> configurable. Don't just put them where they happen to fall. (Remember\n> that possibly a lot of data will end up there.)\n\nWell, actually, we want the INFODIR and SAVEDATA (was OLDDIR) to be in\nthe same filesystem as PGDATA. In fact, we use mv so the data files are\njust moved into different directories and not actually moved. I added\ncode to put these in the directory _above_ PGDATA. Does that help?\n\n> The way temp files are allocated looks very insecure. And does this thing\n> even check under what user it's running?\n\nIt does now --- It checks for root user and makes sure you can read the\nPGDATA directory.\n\n> 'test -e' is not portable.\n\nOK, changed to -f and -h.\n\n> 'grep -q' is not portable. (At least it doesn't portably behave as you\n> might think it does.)\n\nOK, changed to grep \"\" > /dev/null 2>&1\n\n> Although 'head' is probably portable, it has never been explored, because\n> 'sed 1q' is better.\n\nI changed it so \"sed -n '1p'\" which is a more general solution; sed\n'Xq' only works for the top X lines, i.e. 
sed '2q' prints lines 1 and 2.\n\n> If you set an exit trap, then 'exit 1' is not portable. (I'm not kidding,\n> see pg_regress.)\n\nFixed -- I change 'exit 1' to 'return 1' and changed the calls to:\n\n\tfunc || exit \"$?\"\n\n> You can't nest \" inside ` inside \".\n\nCan you show me where? I don't see it.\n\n> Pattern matching with 'expr' should be invoked like this: expr x\"$OBJ\" :\n> x'pg_', or it might blow up when $OBJ has funny values.\n\nDone.\n\n> Moving directories with 'mv' is not necessarily a good idea.\n\nCan you be more specific? Seems pretty portable.\n\n> Should do a lot more error checking.\n\nSuggestions? Already lots in there.\n\n> psql, pg_ctl, etc. should not just be called from the path. You know the\n> install directory, so use that.\n\nWell, we do use the same script on the old install and the new one so I\nam not sure if looking at the install or coupling it with 'configure'\nrun is a good idea --- it may cause more problems than it solves.\n\n> awk should be called as determined by configure.\n\nAgain, requires configure handling.\n\n> Poor quality (spelling, wording) of messages.\n\nImproved. Let me know if I missed something.\n\n> The man page is very confusing.\n\nI have improved it. All moved to /contrib/pg_upgrade.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 Jan 2002 23:45:30 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: About pg_upgrade" }, { "msg_contents": "Bruce Momjian writes:\n\n> Well, actually, we want the INFODIR and SAVEDATA (was OLDDIR) to be in\n> the same filesystem as PGDATA. In fact, we use mv so the data files are\n> just moved into different directories and not actually moved. I added\n> code to put these in the directory _above_ PGDATA. 
Does that help?\n\nHmm, I think that's actually worse, because the directory above PGDATA\nmight in fact be a different file system. It's probably better to keep it\nwithin one directory then. And using \"dirname\" isn't portable anyway.\n\n> > The way temp files are allocated looks very insecure. And does this thing\n> > even check under what user it's running?\n>\n> It does now --- It checks for root user and makes sure you can read the\n> PGDATA directory.\n\nStill not sure about those temp files. People like to see a possible\nexploit in every temp file.\n\n> > 'test -e' is not portable.\n>\n> OK, changed to -f and -h.\n\n-h isn't portable either. I think you have to do something like\n\nif (test -L \"$name\") >/dev/null 2>&1 && (test -h \"$name\") >/dev/null 2>&1\n\nIf should also add here that \"if ! command\" is not portable, you'd need to\nwrite \"if command; then : ; else action fi\"\n\n> > If you set an exit trap, then 'exit 1' is not portable. (I'm not kidding,\n> > see pg_regress.)\n>\n> Fixed -- I change 'exit 1' to 'return 1' and changed the calls to:\n>\n> \tfunc || exit \"$?\"\n\nThat doesn't matter. You have to write (exit x); exit x so the exit code\nis actually seen outside pg_upgrade.\n\n> > You can't nest \" inside ` inside \".\n>\n> Can you show me where? I don't see it.\n\nLine 87 looks suspicious. Especially with the echo and backslash in there\nit is not very portable.\n\n> > Moving directories with 'mv' is not necessarily a good idea.\n>\n> Can you be more specific? Seems pretty portable.\n\nI was only concerned about moving across file systems, which you're not\ndoing.\n\n> > Should do a lot more error checking.\n>\n> Suggestions? Already lots in there.\n\nIt looks better already.\n\n> > psql, pg_ctl, etc. should not just be called from the path. 
You know the\n> > install directory, so use that.\n>\n> Well, we do use the same script on the old install and the new one so I\n> am not sure if looking at the install or coupling it with 'configure'\n> run is a good idea --- it may cause more problems than it solves.\n\nIt's better than using some random tool that might not even be in the\npath. You have to run configure anyway, so what's the point in avoiding\nit?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 15 Jan 2002 00:25:23 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Re: About pg_upgrade" }, { "msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > Well, actually, we want the INFODIR and SAVEDATA (was OLDDIR) to be in\n> > the same filesystem as PGDATA. In fact, we use mv so the data files are\n> > just moved into different directories and not actually moved. I added\n> > code to put these in the directory _above_ PGDATA. Does that help?\n> \n> Hmm, I think that's actually worse, because the directory above PGDATA\n> might in fact be a different file system. It's probably better to keep it\n> within one directory then. And using \"dirname\" isn't portable anyway.\n\nWell, actually, it may not be better or worse. We actually use PGDATA\nto point not to the top-level PostgreSQL directory but to the /data part\nof that directory, so the old behavior was to put it above /data and the\nnew code will do exactly that too. Am I confused?\n\n> > > The way temp files are allocated looks very insecure. And does this thing\n> > > even check under what user it's running?\n> >\n> > It does now --- It checks for root user and makes sure you can read the\n> > PGDATA directory.\n> \n> Still not sure about those temp files. People like to see a possible\n> exploit in every temp file.\n\nWell, yes, if you get the pid, you can create symlink files in /tmp and\noverwrite things. 
How do I handle this properly, probably a directory\nin /tmp that I create but I have to set my umask first -- is that a\nplan?\n\n> \n> > > 'test -e' is not portable.\n> >\n> > OK, changed to -f and -h.\n> \n> -h isn't portable either. I think you have to do something like\n\nOh, great!\n\n> \n> if (test -L \"$name\") >/dev/null 2>&1 && (test -h \"$name\") >/dev/null 2>&1\n\nI am confused about this, I don't have -L and the code seems to say that\nthe second part executes only if the first part is OK --- I don't really\nneed the test, I could just try the 'mv' and report a failure.\n\n> If should also add here that \"if ! command\" is not portable, you'd need to\n> write \"if command; then : ; else action fi\"\n\nWow, I have written portable scripts but never went to these lengths ---\nare there actually that many broken OS's?\n\n> > > If you set an exit trap, then 'exit 1' is not portable. (I'm not kidding,\n> > > see pg_regress.)\n> >\n> > Fixed -- I change 'exit 1' to 'return 1' and changed the calls to:\n> >\n> > \tfunc || exit \"$?\"\n> \n> That doesn't matter. You have to write (exit x); exit x so the exit code\n> is actually seen outside pg_upgrade.\n\nSo 'return 1' doesn't propagate to the caller?\n\n> > > You can't nest \" inside ` inside \".\n> >\n> > Can you show me where? I don't see it.\n> \n> Line 87 looks suspicious. Especially with the echo and backslash in there\n> it is not very portable.\n\nWow, that command was really stupid --- I just changed it to make the\nassignment without the 'echo.'\n\n> \n> > > Moving directories with 'mv' is not necessarily a good idea.\n> >\n> > Can you be more specific? Seems pretty portable.\n> \n> I was only concerned about moving across file systems, which you're not\n> doing.\n\nI sure hope so. :-) If not, the script will fail.\n\n> \n> > > Should do a lot more error checking.\n> >\n> > Suggestions? Already lots in there.\n> \n> It looks better already.\n\nGood.\n\n> > > psql, pg_ctl, etc. 
should not just be called from the path. You know the\n> > > install directory, so use that.\n> >\n> > Well, we do use the same script on the old install and the new one so I\n> > am not sure if looking at the install or coupling it with 'configure'\n> > run is a good idea --- it may cause more problems than it solves.\n> \n> It's better than using some random tool that might not even be in the\n> path. You have to run configure anyway, so what's the point in avoiding\n> it?\n\nBut I don't assume they have run configure on the new source tree when\npg_upgrade is run in phase one --- very likely they will grab\npg_upgrade, do phase one, then configure/compile the new code and\ninstall it as part of the upgrade steps.\n\nIf they don't have a working 'awk' in their path, they have bigger\nproblems than pg_upgrade not working --- I know there are quirky\nproblems with awk (I saw them in pgmonitor) but these are vanilla awk\nscripts that would work on any one.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 Jan 2002 00:38:08 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: About pg_upgrade" }, { "msg_contents": "> > Still not sure about those temp files. People like to see a possible\n> > exploit in every temp file.\n> \n> Well, yes, if you get the pid, you can create symlink files in /tmp and\n> overwrite things. How do I handle this properly, probably a directory\n> in /tmp that I create but I have to set my umask first -- is that a\n> plan?\n\nForget what I said, you don't need to change the umask, just do:\n\n\ttrap \"rm -rf /tmp/$$\" 0 1 2 3 15\n\tmkdir /tmp/$$ || exit 1\n\nand you call all your temp files /tmp/$$/XXX, right? 
Once you create\nthe directory, you own it and no one else can write into there.\n\nI just did a Google search and no one came up with this idea, though I\nbelieve X11 uses /tmp directories for this exact reason, right?\n\nI finally found one mention of it: Seems Suse uses it, but they did\n'mkdir -p' which doesn't return an error if it fails so it was a\nsecurity problem itself:\n\n http://groups.google.com/groups?q=tmp+security+race+directory+script+mkdir&hl=en&selm=bugtraq/Pine.LNX.4.30.0101170202040.15609-100000%40dent.suse.de&rnum=1\n\nI just looked in /usr/bin on BSD/OS and found a whole bunch that do the\ninsecure /tmp/$$ trick I currently do in pg_upgrade:\n\t\n\t#$ file `grep -l '\\$\\$' *` | grep shell \n\tcvsbug: Bourne shell script text\n\tigawk: Bourne shell script text\n\tlorder: Bourne shell script text\n\tmkdep: Bourne shell script text\n\tpppattach: Korn shell script text\n\trcsfreeze: Bourne shell script text\n\tsendbug: Bourne shell script text\n\tuupick: Bourne shell script text\n\nFor example, cvsbug does:\n\t\n\t[ -z \"$TMPDIR\" ] && TMPDIR=/tmp\n\t\n\tTEMP=$TMPDIR/p$$\n\tBAD=$TMPDIR/pbad$$\n\tREF=$TMPDIR/pf$$\n\nBet everyone has that one on their system. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 Jan 2002 01:15:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: About pg_upgrade" } ]
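The portability points settled in this thread can be collected into one small runnable sketch: the trap-plus-mkdir secure temp directory, a `grep -q` replacement, `sed 1q` in place of `head`, and `expr` operands padded with a leading x.

```shell
# Secure temp area: mkdir fails if the path already exists (for
# instance an attacker's symlink), and the trap cleans up on exit.
TMPBASE="${TMPDIR:-/tmp}/pgup$$"
trap 'rm -rf "$TMPBASE"' 0 1 2 3 15
mkdir "$TMPBASE" || exit 1

# 'grep -q' replacement that behaves the same on old greps:
if echo "pg_class" | grep "^pg_" >/dev/null 2>&1; then
    echo "matched"
fi

# 'head -1' replacement:
printf 'first\nsecond\n' | sed 1q   # first

# 'expr' match guarded against operand values that look like options:
OBJ="pg_toast"
expr x"$OBJ" : x"pg_" >/dev/null && echo "system object"
```

The x-padding works because `expr` anchors its pattern match at the start of the string, so prefixing both operands with the same literal character changes nothing except protecting against operands beginning with `-`.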
[ { "msg_contents": "On 01/12/2002 06:03:44 PM Karl DeBisschop wrote:\n> On Sat, 2002-01-12 at 03:32, David Terrell wrote:\n> > On Thu, Jan 10, 2002 at 09:07:50AM +0100, Alexander Pucher wrote:\n> > > I need to run a shell script that logs in to Postgresql, executes a\n> > > query and logs off again.\n\n> > > My problem is that I can't find a way to put the password in an \n'psql'\n> > > statement at the prompt.\n\n> If you absolutely need to do something like this, look into expect.\n\nWould be useful if there's an example expect script for this somewhere in \nthe distribution or documentation. Lots of people don't know expect, and \ndon't want to learn it. They just want to automate database tasks.\n\nMaarten\n\n----\n\nMaarten Boekhold, maarten.boekhold@reuters.com\n\nReuters Consulting / TIBCO Finance Technology Inc.\nDubai Media City\nBuilding 1, 5th Floor\nPO Box 1426\nDubai, United Arab Emirates\ntel:+971(0)4 3918300 ext 249\nfax:+971(0)4 3918333\nmob:+971(0)505526539\n\n------------------------------------------------------------- ---\n Visit our Internet site at http://www.reuters.com\n\nAny views expressed in this message are those of the individual\nsender, except where the sender specifically states them to be\nthe views of Reuters Ltd.", "msg_date": "Sun, 13 Jan 2002 10:20:53 +0400", "msg_from": "Maarten.Boekhold@reuters.com", "msg_from_op": true, "msg_subject": "Re: Postgres in bash-mode" }, { "msg_contents": "On Sun, 2002-01-13 at 01:20, Maarten.Boekhold@reuters.com wrote:\n> On 01/12/2002 06:03:44 PM Karl DeBisschop wrote:\n> > On Sat, 2002-01-12 at 03:32, David Terrell wrote:\n> > > On Thu, Jan 10, 2002 at 09:07:50AM +0100, Alexander Pucher wrote:\n> > > > I need to run a shell script that logs in to Postgresql, executes a\n> > > > query and logs off again.\n> \n> > > > My problem is that I can't find a way to put the password in an \n> 'psql'\n> > > > statement at the prompt.\n> \n> > If you absolutely need to do something like this, look into expect.\n> \n> Would be useful if there's an example expect script for this somewhere in \n> the distribution or documentation. Lots of people don't know expect, and \n> don't want to learn it. They just want to automate database tasks.\n\nBefore you forge ahead with expect, perhaps you also want to read the\ndocs for pg_hba.conf.\n\nAs I said, expect can be secure if you are careful (make sure no one else\ncan read the script). But it's not a lot of fun to maintain.\n\nThere are other options. If you want to avoid admin hassles, I'd\nsuggest looking into ident on a close set of machines. 
If the machine in\nquestion is not close by, then try ssh to make them seem closer.\n\nBasically, you can trust identd if and only if you know that it's your\nidentd. But with identd, you can have a large number of scripts that\ncontinue to work after you change the password. (Note that you cannot\nuse identd on the unix socket).\n\nWe are presently revamping our own security. If I had a good example of\na system in a final state, I'd post it. I don't know, but maybe someone\nelse can. Or maybe in a few weeks I'll post ours, if I can assure myself\nthat disclosure won't reduce security (iff well designed, that should be\nthe case, I think). \n\nKarl\n", "msg_date": "13 Jan 2002 06:41:06 -0500", "msg_from": "Karl DeBisschop <kdebisschop@range.infoplease.com>", "msg_from_op": false, "msg_subject": "Re: Postgres in bash-mode" }, { "msg_contents": "On Sun, Jan 13, 2002 at 06:41:06AM -0500, Karl DeBisschop wrote:\n> Basically, you can trust identd if and only if you know that it's your\n> identd. But with identd, you can have a large number of scripts that\n> continue to work after you change the password. (Note that you cannot\n> use identd on the unix socket).\n> \n> We are presently revamping our own security. If I had a good example of\n> a system in a final state, I'd post it. I don't know, but maybe someone\n> else can. Or maybe in a few weeks I'll post ours, if I can assure myself\n> that disclosure won't reduce security (iff well designed, that should be\n> the case, I think). 
\n\n7.2 can run something like ident but better (guaranteed accurate) on \nthe unix socket on BSD and linux.\n\n-- \nDavid Terrell | \"The fact that you can't name the place\ndbt@meat.net | you're going to die doesn't mean you\nhttp://wwn.nebcorp.com/ | shouldn't pay attention to your health.\" -whg3\n", "msg_date": "Sun, 13 Jan 2002 03:43:25 -0800", "msg_from": "David Terrell <dbt@meat.net>", "msg_from_op": false, "msg_subject": "Re: Postgres in bash-mode" }, { "msg_contents": "On Sun, 2002-01-13 at 06:43, David Terrell wrote:\n> On Sun, Jan 13, 2002 at 06:41:06AM -0500, Karl DeBisschop wrote:\n> > Basically, you can trust identd if and only if you know that its your\n> > identd. But with identd, you can have a large number of scripts that\n> > continue to work after you change the password. (Note that you cannot\n> > use identd on the ub=nix socket).\n> \n> 7.2 can run something like ident but better (guaranteed accurate) on \n> the unix socket on BSD and linux.\n\n\nThat's cool. Looks like I just got my rationale for doing the 7.2\nupgrade sooner rather than later.\n\n--\nKarl\n", "msg_date": "13 Jan 2002 06:50:48 -0500", "msg_from": "Karl DeBisschop <kdebisschop@range.infoplease.com>", "msg_from_op": false, "msg_subject": "Re: Postgres in bash-mode" } ]
[ { "msg_contents": "Hi,\n\nI just read one of those MySQL-PGSQL comparisons in a German Linux magazine\nthat's not exactly known to be well informed. :-)\n\nThis article really cries for a reply. Here are some extracts:\n\nThe only benchmarks that everyone can download for free to test MySQL\nversus PostgreSQL are the MySQL benchmarks.\n...\nThis benchmark suite is pretty old and was tuned over the years to be as\nfair as possible to PostgreSQL.\n...\nThe results show a huge difference between PostgreSQL and MySQL.\n...\nThe MySQL developers tried to make the benchmarks as fair as possible.\nUnfortunately PostgreSQL never answered to the repeated questions about\noptimal tuning.\n\nSince this is not the first time stuff like that has come up, I wonder if we\nhave some kind of standard answer. I for one don't like to read that kind of\nFUD about us.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sun, 13 Jan 2002 09:53:33 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "mysql-pgsql comparison" }, { "msg_contents": "On Sunday 13 January 2002 09:53, Michael Meskes wrote:\n> The MySQL developers tried to make the benchmarks as fair as possible.\n> Unfortunately PostgreSQL never answered to the repeated questions about\n> optimal tuning.\n\nYeah. This article was probably written directly by the MySQL team. This is \nwhat you learn in the computing business. If you want something published .. \nwrite it yourself and invite the journalist in a restaurant. I did it 2 or 3 \ntimes when working for a telecommunication company.\n\nDon't worry about the results of such an article. It is simply not possible \nto benchmark MySQL against PostgreSQL because MySQL lacks many PostgreSQL \nfeatures. A serious user immediately understands this.\n\nLately I resigned from the MySQL list with some comment like \"sorry I use \nPostgreSQL\".
Monty replied to me with a series of ready-made emails explaining \nwhy MySQL was superior and all this crap. When you think MySQL is not a \ntransactional database, it makes you laugh. I had to send him emails like \n\"Thank you for this answer and Happy new year\" and all this stuff to stop the \nspam.\n\nThe point of view is: MySQL is just a trademark and marketing software. \nMonty is prepared to spend hours selling his crap. \n\nWhy don't you start a mailing list for \"Propaganda\" and \"Public relations\"? \nIt should be possible to meet the main journalists and contact them on a \nregular basis with ready-to-publish articles.\n\nWith a network of one person per country (US, Canada, France, Germany, Italy) \nyou would improve dramatically the presence of PostgreSQL in the world. It is \nnot difficult to have articles published ...\n\nBest regards,\nJean-Michel POURE\n", "msg_date": "Sun, 13 Jan 2002 14:23:44 +0100", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison" }, { "msg_contents": "I am more concerned with the PHP/MySQL connection. It seems like it is a\nde facto grouping. Engineers I know, who do not know SQL, will just use MySQL\nbecause \"everyone else does.\"\n\nInvariably, MySQL (at first) seems like a good decision, then they have to do a\ncomplex query. MySQL just cannot execute the queries that a real SQL database\ncan. If an engineer does not know SQL, he will walk away thinking SQL is a\nlimiting environment.\n\nA contractor came on to the company with a solid MySQL background. He was a\nbright guy, but we are an Oracle/PostgreSQL shop. More than a few times, I had\nto rewrite his code to use the SQL engine with a more complex, but overall more\nefficient, query. His response is always, \"Wow I didn't know you could do\nthat.\" \n\nThe word that needs to be heard is that a real SQL database is easier than a\nsimplistic SQL-like environment such as MySQL.
I have tried several projects\nwith MySQL and have never been able to finish them with it, because MySQL is\njust too limited.\n", "msg_date": "Sun, 13 Jan 2002 09:56:23 -0500", "msg_from": "mlw <markw@mohawksoft.com>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison" }, { "msg_contents": "Michael Meskes writes:\n\n> I just read one of those MySQL-PGSQL comparisons in a German Linux magazine\n> that's not exactly known to be well informed. :-)\n\nI think the article you're speaking of is here:\n\nhttp://www.linuxenterprise.de/artikel/my_sql.pdf\n\nThe last part of this, which is the supposed comparison, was pretty much\ncopied straight out of their manual. I have a response to that here:\n\nhttp://www.ca.postgresql.org/~petere/comparison.html\n\nHowever, if you don't look hard enough, it's actually true that the only\nbenchmark that runs on both systems is the MySQL benchmark. I think the\nOSDB benchmark should also work, though.\n\nWhat we could do is:\n\n1. Actually run the MySQL benchmark, to see what's wrong with it. (Note\nthat this is not the crashme test.)\n\n2. Port pgbench to MySQL. Pgbench seems to be our daily benchmark of\nchoice, so it's not unimportant to verify how it does elsewhere.\n\n3. Check out the OSDB benchmark more closely.\n\nI think I should also submit my article to their interactive docs, so that\nthose that do comparisons in the future by just reading the manual will\nget a clue.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 13 Jan 2002 11:53:51 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> What we could do is:\n\n> 1. Actually run the MySQL benchmark, to see what's wrong with it. (Note\n> that this is not the crashme test.)\n\nThis would be a worthwhile thing to try, just to see if it exposes any\nPG weaknesses we didn't already know about.
I have run crashme in the\npast (not recently though); but I've never found time for their\nbenchmark.\n\n> 2. Port pgbench to MySQL. Pgbench seems to be our daily benchmark of\n> choice, so it's not unimportant to verify how it does elsewhere.\n\nWe use pgbench mainly because it happens to be sitting in contrib ;-).\nI'm not convinced that it's a really good benchmark. It certainly\nemphasizes performance of only a very small part of the system. For\nexample, we could make a huge improvement in pgbench results just by\nfixing the repeated-index-lookups-of-dead-tuples problem that's been\ndiscussed so often. But I'm not sure that that problem is as bad in\nthe real world as it is in pgbench.\n\n> 3. Check out the OSDB benchmark more closely.\n\nI've been planning to take a hard look at that myself, but haven't found\nthe time. If it's reasonably easy to install and run, it probably ought\nto become our standard benchmarking tool. (For anyone who wants to\ntake a look, OSDB lives at osdb.sourceforge.net.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Jan 2002 12:09:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison " }, { "msg_contents": "On Sun, 13 Jan 2002, Tom Lane wrote:\n\n> > 3. Check out the OSDB benchmark more closely.\n>\n> I've been planning to take a hard look at that myself, but haven't found\n> the time. If it's reasonably easy to install and run, it probably ought\n> to become our standard benchmarking tool. (For anyone who wants to\n> take a look, OSDB lives at osdb.sourceforge.net.)\n\nI tried it with current CVS and make test fails (didn't try\nto find what's the problem). Also, you have to edit configure script -\nit tries to find postgres_fe.h in include directory, which now lives under\ninclude/internal directory.\n\nError in test load() at (14252)osdb-pg.c:314:\n...
cmd\nPQresultStatus: 7\npostgres reports: ERROR: copy: line 740, value too long for type character varying(80)\nOSDB_ERROR: ERR_DML, (null)\n***** osdb-pg-emb *****\n\n\nError in test Counting tuples at (14285)../../osdb.c:280:\n... empty database -- empty results\nperror() reports: No child processes\nOSDB_ERROR.error: 0***** osdb-pg-ui *****\n\n\nError in test Counting tuples at (14323)../../osdb.c:280:\n... empty database -- empty results\nperror() reports: Resource temporarily unavailable\nmake[1]: Leaving directory `/db1/u/megera/app/pgsql/bench/osdb/cvs/osdb'\n\n\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Sun, 13 Jan 2002 20:55:28 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison " }, { "msg_contents": "> Yeah. This article was probably written directly by the MySQL team. This is \n> what you learn in the computing business. If you want something published .. \n> write it yourself and invite the journalist in a restaurant. I did it 2 or 3 \n> times when working for a telecommunication company.\n\nOn a related note, I've been thinking about a \"Linux Journal\" or\n\"Linux Magazine\" level article on user-defined types and functions\nand how they can make life much easier.
\n\nA related topic is views and rules, again how they can be used\nto eliminate otherwise difficult problems.\n\nMy PKIX stuff is a good example of this. I defy anyone to implement\nthe following table definitions in MySQL:\n\n create table certs (\n cert x509 not null,\n\n key text not null unique\n constraint check (key = iands_hash(cert)),\n\n subject text not null unique\n constraint check (subject = subject(cert)),\n\n issuer text not null\n constraint check (issuer = issuer(cert)),\n\n primary key(key),\n\n foreign key(issuer) references certs(subject) deferrable\n );\n\n create view cert_insert as select cert from certs;\n\n create rule certi as on insert into cert_insert do instead\n insert into certs (cert, key, subject, issuer)\n values (new.cert, iands_hash(new.cert), subject(new.cert), \n issuer(new.cert));\n\nThere's a performance hit with all of these constraints,\nof course, but that's more than offset by the confidence\nthat I (as database developer) can have in the database as\na whole. A solid database means that I can keep my application\ncode thin.\n\n(As an aside, I hope to get release 0.4 out this evening; it\ndocuments all PKIX types and functions and includes support\nfor public key encryption. But I digress....)\n\nThe problem with this example is that it's too obscure - it\ngets people who understand this domain excited, but everyone\nelse gets sidetracked by it.\n\nCan anyone think of a better example? I've come up with\nthree possibilities, each with its own problems:\n\n - geographical positions - latitude, longitude and possibly\n elevation, with related functions. But few constraints\n make sense.\n\n - credit cards. Constraints would be the number of digits,\n the check digit and the leading digit (4xxx is Visa, 5xxx \n is Mastercard).
A \"neat\" feature would be a function that\n masks the credit card information - you could use views/rules\n so you could insert or update credit card info, but it would\n be masked during access.\n\n But this strikes me as evil. This information is still\n in the database, still accessible in any db dump.\n If you have credit card info online, you *must* use\n strong crypto. (Such as libpkix 0.4 et seq.)\n\n - email and netnews. Create a new RFC822 type that understands\n the RFC822 format and provides access to the headers via\n accessor functions.\n\n The benefits are that this type has all of the features\n you would want. \"Message-ID\" should be unique. \"References\"\n should give you referential integrity checks (although \n it would not be possible to make it an actual constraint\n in a live system.) \n\n You could even illustrate advanced techniques by looking up\n the sender's nominal email address with DNS. If it's not\n a valid address, it goes into the bit bucket as spam. Even\n if it is valid, a small configuration change and you're\n checking RBL sites instead of the DNS servers and rejecting\n mail from spammers.\n\n The downside is that, like PKIX, this is too heavyweight\n for an introductory article. But it may still be best -\n the article would just need to gloss over the gory details.\n\n> Don't worry about the results of such an article. It is simply not possible \n> to benchmark MySQL against PostgreSQL because MySQL lacks many PostgreSQL \n> features. A serious user immediately understands this.\n\nThe problem is that there are a lot of people out there who want\nto use a relational database, but don't understand just how much\nthey give up with MySQL. Besides, if 4 of the 5 hosting companies\nthey considered supported MySQL instead of PostgreSQL it must be\nbetter for their needs, right?\n\nI've even caught friends making this mistake. For simple\nschemas and light loads MySQL looks soooo good.
The stuff\nthat I say is crucial for maintainability of even modestly\nlarge databases (50 MB+) seems so abstract.\n\n> Why don't you start a mailing list for \"Propaganda\" and \"Public relations\". \n> It should be possible to meet the main journalists and contact them on a \n> regular basis with ready-to-publish articles.\n\nAn even better approach is to find a vict... person willing and\nable to write a regular \"Database Guru\" column for LJ, LM, or the\nlike. Most already have Kernel Korner, sysadmin, and similar columns.\n\nMaybe 2 out of 3 months will be basic. How do you access the database\nwith Perl or JDBC? Using ODBC to quietly replace SQL Server.\nWhy ESQL/C (ecpg, pro*c) is a tool every C/C++ developer should\nhave in their toolbox. Even basic things like actually setting\nup the configuration files for the first time.\n\nThat odd month... that's when you pull out the advanced topics\nthat separate the real databases from the toys. Views and Rules.\nUser defined types. Clustering. Strong authentication of clients.\nCrypto.\n\nMySQL has mindspace for one reason alone - it's perceived as \nfaster. \n\nPostgreSQL shouldn't try to compete in the same mindspace, it\nshould point out the many things that PostgreSQL supports but\nMySQL doesn't... while quietly pointing out the folly in selecting\na system on the basis of single-user responsiveness. MySQL handles\na single query faster, but PostgreSQL handles far more concurrent\nqueries and does not catastrophically fail under heavy loads.\n\n", "msg_date": "Sun, 13 Jan 2002 11:35:51 -0700 (MST)", "msg_from": "Bear Giles <bear@coyotesong.com>", "msg_from_op": false, "msg_subject": "articles [was: mysql-pgsql comparison]" }, { "msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> On Sun, 13 Jan 2002, Tom Lane wrote:\n> 3. Check out the OSDB benchmark more closely.\n\n> I tried it with current CVS and make test fails (didn't try\n> to find what's the problem).
Also, you have to edit configure script -\n> it tries to find postgres_fe.h in include directory, which now lives under\n> include/internal directory.\n\nI tried to build it on HPUX just now, and was rather sadly disillusioned\nabout its portability. After fixing several problems in the configure\nscript, removing hard-coded references to include files and libraries\nthat don't exist here, etc, I ended up with\n\nmake[2]: Entering directory `/home/tgl/tmp/osdb/src/callable-sql/postgres-call'\ngcc -L/home/postgres/testversion/lib -lpq -g osdb-pg.o ../callable-sql.o ../../program-control.o ../../osdb.o -o osdb-pg\n/usr/ccs/bin/ld: Unsatisfied symbols:\n forkpty (code)\ncollect2: ld returned 1 exit status\n\nAnybody know what forkpty() does? I see it exists in libutil.a on Linux,\nbut there's no man page for it.\n\nOh, OSDB also seems to assume that ecpg is in your $PATH. In general,\nit's not prepared to deal with a Postgres that's not installed in the\n\"standard\" place.\n\nThis thing needs some work :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Jan 2002 14:51:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison " }, { "msg_contents": "On Sun, 13 Jan 2002, Tom Lane wrote:\n\n> Anybody know what forkpty() does? I see it exists in libutil.a on\n> Linux, but there's no man page for it.\n\nIt's a BSD function:\n\nhttp://www.freebsd.org/cgi/man.cgi?query=forkpty\n\nglibc does have info documentation about it. (Yes, info\nis foul, but glibc's doco is actually pretty decent, and\n\"pinfo\" makes it just about bearable.)\n\nMatthew.\n\n", "msg_date": "Sun, 13 Jan 2002 21:14:38 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison " }, { "msg_contents": "Hi Guys,\n\nI'm not sure how long MySQL will be \"non-transactional\" and stuff for.\n\nThey have a version 4.0x out the door.
Whilst it's still beta status,\nthey're recommending it for production use:\n\nhttp://www.mysql.com/products/mysql-4.0/index.html\n\nConsider me concerned.\n\nRegards and best wishes,\n\nJustin Clift\n\n\nmlw wrote:\n> \n> I am more concerned with the PHP/MySQL connection. It seems like it is a\n> de facto grouping. Engineers I know, who do not know SQL, will just use MySQL\n> because \"everyone else does.\"\n> \n> Invariably, MySQL (at first) seems like a good decision, then they have to do a\n> complex query. MySQL just can not execute the queries that a real SQL database\n> can. If an engineer does not know SQL, he will walk away thinking SQL is a\n> limiting environment.\n> \n> A contractor came on to the company with a solid MySQL background. He was a\n> bright guy, but we are an Oracle/PostgreSQL shop. More than a few times, I had\n> to rewrite his code to use the SQL engine with a more complex, but overall more\n> efficient, query. His response is always, \"Wow I didn't know you could do\n> that.\"\n> \n> The word that needs to be heard is that a real SQL database is easier than a\n> simplistic SQL-like environment such as MySQL. I have tried several projects\n> with MySQL and have never been able to finish them with it, because MySQL is\n> just too limited.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit.
He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 14 Jan 2002 09:08:21 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison" }, { "msg_contents": "\n\nPeter Eisentraut wrote:\n> \n> Michael Meskes writes:\n> \n> > I just read one of those MySQL-PGSQL comparisons in a German Linux magazine\n> > that's not exactly known to be well informed. :-)\n> \n> I think the article you're speaking of is here:\n> \n> http://www.linuxenterprise.de/artikel/my_sql.pdf\n> \n> The last part of this, which is the supposed comparison, was pretty much\n> copied straight out of their manual. I have a response to that here:\n> \n> http://www.ca.postgresql.org/~petere/comparison.html\n> \n> However, if you don't look hard enough, it's actually true that the only\n> benchmark that runs on both systems is the MySQL benchmark. I think the\n> OSDB benchmark should also work, though.\n> \n> What we could do is:\n> \n> 1. Actually run the MySQL benchmark, to see what's wrong with it. (Note\n> that this is not the crashme test.)\n> \n> 2. Port pgbench to MySQL. Pgbench seems to be our daily benchmark of\n> choice, so it's not unimportant to verify how it does elsewhere.\n> \n> 3. Check out the OSDB benchmark more closely.\n> \n> I think I should also submit my article to their interactive docs, so that\n> those that do comparisons in the future by just reading the manual will\n> get a clue.\n\nI used to have a pointer to the OSDB benchmark in their interactive\ndocs. It would be very crappy of them to have removed it, as OSDB does\nwork for both MySQL and PostgreSQL.
:(\n\n+ Justin\n\n> \n> --\n> Peter Eisentraut peter_e@gmx.net\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 14 Jan 2002 09:10:49 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison" }, { "msg_contents": "Hi Tom,\n\nIt definitely works under Linux for PostgreSQL 7.1.x. BUT, as you can\nsee the code is fairly simple and I don't believe it would be too hard\nto make it far more cross-platform. In fact, with OSDB this is what is\nwanted.\n\nBTW - If anyone has patches for it they'd like to have committed, I'm an\nofficial developer for it (username \"Vapour\") with CVS commit access.\n\nAndy Riebs, the guy who is in charge of it, is a nice guy and I'm sure\nwould allow you and experienced others to commit to CVS. He's just\nshort of time, the same as I.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nTom Lane wrote:\n> \n> Oleg Bartunov <oleg@sai.msu.su> writes:\n> > On Sun, 13 Jan 2002, Tom Lane wrote:\n> > 3. Check out the OSDB benchmark more closely.\n> \n> > I tried it with current CVS and make test fails (didn't try\n> > to find what's the problem). Also, you have to edit configure script -\n> > it tries to find postgres_fe.h in include directory, which now lives under\n> > include/internal directory.\n> \n> I tried to build it on HPUX just now, and was rather sadly disillusioned\n> about its portability.
After fixing several problems in the configure\n> script, removing hard-coded references to include files and libraries\n> that don't exist here, etc, I ended up with\n> \n> make[2]: Entering directory `/home/tgl/tmp/osdb/src/callable-sql/postgres-call'\n> gcc -L/home/postgres/testversion/lib -lpq -g osdb-pg.o ../callable-sql.o ../../program-control.o ../../osdb.o -o osdb-pg\n> /usr/ccs/bin/ld: Unsatisfied symbols:\n> forkpty (code)\n> collect2: ld returned 1 exit status\n> \n> Anybody know what forkpty() does? I see it exists in libutil.a on Linux,\n> but there's no man page for it.\n> \n> Oh, OSDB also seems to assume that ecpg is in your $PATH. In general,\n> it's not prepared to deal with a Postgres that's not installed in the\n> \"standard\" place.\n> \n> This thing needs some work :-(\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Mon, 14 Jan 2002 09:19:01 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison" }, { "msg_contents": "Tom Lane writes:\n\n> > 1. Actually run the MySQL benchmark, to see what's wrong with it. (Note\n> > that this is not the crashme test.)\n>\n> This would be a worthwhile thing to try, just to see if it exposes any\n> PG weaknesses we didn't already know about.
I have run crashme in the\n> past (not recently though); but I've never found time for their\n> benchmark.\n\nWell, I attempted to run the benchmark today, and it's not dissimilar to\ncrashme in the sense that a) it tends to crash, and b) it's quite useless.\nIt's the same single-user \"run select a whole bunch of times\" sort of\nthing that's not really interesting anymore. The MySQL 4.0 server was\nconsistently killed by the kernel during this test. The stable\n3.something version didn't quite work either because of some \"unknown\nerrors\", so I only got some tests to run. But when was the last time\nsomeone actually cared about the speed of alter table or drop index?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 13 Jan 2002 20:29:18 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison " }, { "msg_contents": "On Sun, Jan 13, 2002 at 02:23:44PM +0100, Jean-Michel POURE wrote:\n> Yeah. This article was probably written directly by the MySQL team. This is \n> what you learn in the computing business. If you want something published .. \n> write it yourself and invite the journalist in a restaurant. I did it 2 or 3 \n> times when working for a telecommunication company.\n\n:-)\n\n> Don't worry about the results of such an article. It is simply not possible \n> to benchmark MySQL against PostgreSQL because MySQL lacks many PostgreSQL \n> features. A serious user immediately understands this.\n\nBut too many people decide without this knowledge. :-(\n\n> Why don't you start a mailing list for \"Propaganda\" and \"Public relations\". \n> It should be possible to meet the main journalists and contact them on a \n> regular basis with ready-to-publish articles.\n\nGood idea. Anyone else interested?\n\n> With a network of one person per country (US, Canada, France, Germany, Italy) \n> you would improve dramatically the presence of PostgreSQL in the world.
It is \n> not difficult to have articles published ...\n\nI would be willing to be the one person in Germany. After all we could\npublish the same article in different countries.\n\nBut I wonder how easy it is to get the articles out. The last time I tried,\nI was asked how much I would pay for writing the articles. Of course it was\nformulated as \"how much advertisements do you order?\" :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 14 Jan 2002 13:37:06 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: mysql-pgsql comparison" }, { "msg_contents": "On Sun, Jan 13, 2002 at 11:53:51AM -0500, Peter Eisentraut wrote:\n> I think the article you're speaking of is here:\n> \n> http://www.linuxenterprise.de/artikel/my_sql.pdf\n\nYes, that's it. I didn't know it was online.\n\n> The last part of this, which is the supposed comparison, was pretty much\n> copied straight out of their manual. I have a response to that here:\n> \n> http://www.ca.postgresql.org/~petere/comparison.html\n\nThanks. I doubt they are interested in any comment. They never printed\nsomething like a comment. :-)\n\nBut their sales rep calls about once per week. :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Mon, 14 Jan 2002 13:47:25 +0100", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "Re: mysql-pgsql comparison" }, { "msg_contents": "On Monday 14 January 2002 13:37, Michael Meskes wrote:\n> But I wonder how easy it is to get the articles out. The last time I tried,\n> I was asked how much I would pay for writing the articles.
Of course it was\n> formulated as \"how much advertisements do you order?\" :-)\n\nFrom what I have experienced, it is free and you don't have to buy \nadvertising space, unless you are a large company (and this can also be \ndiscussed: I worked for a large company and never had to buy advertising \nspace).\n\nJournalists are interested in product comparisons (Pentium vs. Athlon). What \nthey need is ready-to-publish articles/figures/charts. They do not always \nhave the time to verify the information. Journalists are not \"liars\", they just \nneed the right information ...\n\nIf you read Monty's emails, he never mentions technical information when \ncomparing MySQL vs PostgreSQL. He rather writes \"read this article in which \nyou will find that...\" or \"a recent survey showed\" or \"MySQL has a market \nshare of\". Some of these emails went to the basket immediately for obvious \nreasons.\n\n> I would be willing to be the one person in Germany. After all we could\n> publish the same article in different countries.\n\nI could be the one for France (long live the French friendship) and \nthe two of us could start with an experimental \nhttp://postgresql-propaganda.org site, with secure access for \nthe team.\n\nI can handle the creation of a small database with newspaper addresses and \njournalist names.\n\nCheers,\nJean-Michel POURE\n-- \nTel : +33(0)1 39 64 87 61\nMobile : +33 (0)6 76 88 60 29\nFax : +33(0)1 39 64 96 72\nMailto:jm.poure@freesurf.fr\n38 bld de la République\n95160 MONTMORENCY\nFrance\n", "msg_date": "Mon, 14 Jan 2002 14:34:10 +0100", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: mysql-pgsql comparison" }, { "msg_contents": "On Sun, 13 Jan 2002, Bear Giles wrote:\n\nLots of great stuff - then\n\n> - geographical positions - latitude, longitude and possibly\n> elevation, with related functions. But few constraints\n> make sense.
But few constraints\n> make sense.\n\nPostGIS?\n\n\nCheers,\nRod\n-- \n Let Accuracy Triumph Over Victory\n\n Zetetic Institute\n \"David's Sling\"\n Marc Stiegler\n\n", "msg_date": "Mon, 14 Jan 2002 07:44:31 -0800 (PST)", "msg_from": "\"Roderick A. Anderson\" <raanders@tincan.org>", "msg_from_op": false, "msg_subject": "Re: articles [was: mysql-pgsql comparison]" }, { "msg_contents": "> MySQL has mindspace for one reason alone - it's perceived as \n> faster. \n> \n> PostgreSQL shouldn't try to compete in the same mindspace, it\n> should point out the many things that PostgreSQL supports but\n> MySQL doesn't... while quietly pointing out the folly in selecting\n> a system on the basis of single-user responsiveness. MySQL handles\n> a single query faster, but PostgreSQL handles far more concurrent\n> queries and does not catastrophically fail under heavy loads.\n\nThis is the crux of the issue -- I most people who are objective would\nchoose PostgreSQL; it is the \"just go with what most people are using\"\ncrowd that we have trouble attracting, and that is the largest group of\nthe population.\n\nGreat Bridge helped with that mind share, and Red Hat is in that space\nnow. We are clearly on the upswing, it is just taking time to get where\nwe want to be. Certainly any articles/comments people can make would be\na help in that direction. [ Not sure why this disucssion is taking\nplace on hackers and not in general, where most of our users live, CC'ing.]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 14 Jan 2002 10:45:24 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] articles [was: mysql-pgsql comparison]" } ]
[ { "msg_contents": "Commercial: New Book!! PostgreSQL book is released\ninto the market\n\nA Complete Guide to PostgreSQL (Redhat Database)\nSecond Edition (revised and enlarged and covers latest\nversion of PostgreSQL 7.1.3),\nPublished Jan 2002\nAuthor: Al Dev\n\nInternational Edition: \n-----------------------\nThe cost of the book is very minimal at about $20.99.\n\nAsian Economy Edition: \n------------------------\nFor consumers in India, Bangladesh, Bhutan, Maldives,\nNepal, Pakistan, Sri Lanka, Myanmar and Thailand\nplease click on the button which says Rs. 330 (Indian\nRupees).\nYou can also buy this book for Rs. 330 from \nhttp://www.gtcdrom.com , Bangalore, INDIA as well at\nall GTCdrom branches \nin every major Indian city.\n\nThe book is sold at http://www.aldev.8m.com\n\nThe first edition of this book (under different title)\ncovered\nPostgreSQL 6.1 and sold about 400,000 copies in\nUSA/Canada. The \nsecond edition is available for sale world-wide and\ntranslated version in\nJapanese and German will be available in near future.\n\nEdition bound, paper back.\nThe book has 43 chapters and is about 550 pages.\nLevel: Intermediate to Advanced User.\n\nAbout 40% of the book is generic and can be used for\nother databases\nlike MySQL, Oracle, MS SQL server and other 60% is\nspecific to\nPostgreSQL.\n\nBooks are immensely important for open-source projects\nlike\nPostgreSQL as they help the new & experienced users.\nThere are several books on PostgreSQL in the market\nwhich directly and indirectly are helping the\ndevelopment of PostgreSQL.\n\nPostgreSQL is a very sophisticated software and to use\nsuch\na sophisticated database, you need a book.\n\n\"A Complete Guide to PostgreSQL (Redhat Database)\" is \na \"must have\" book as it is very difficult to use\nPostgreSQL without\nthis book.
This book had evolved over a very long\nperiod of time.\n\nVisit http://www.aldev.8m.com to buy this book.\n\nThe Table of Contents are as follows:\n\nTable Contents\n--------------\n\nPART I\n\nCHAPTER 1. Introduction \nCHAPTER 2. Principles of Database Management\nCHAPTER 3. PostgreSQL Solution \nCHAPTER 4. Evaluation and Selection of SQL Servers\nCHAPTER 5. PostgreSQL, Embedded Databases and LDAP \nCHAPTER 6. The PostgreSQL Database Architecture \nCHAPTER 7. The PostgreSQL Instance Architecture \nCHAPTER 8. PostgreSQL Database Administration\nOverview \nCHAPTER 9. PostgreSQL Quick Installation Instructions\n \nCHAPTER 10. Quick Start Guide \nCHAPTER 11. PostgreSQL Supports Extremely Large\nDatabases greater than 500 Gig \nCHAPTER 12. Setup accurate Time on a SQL Server\nCHAPTER 13. Replication Management \n\nPART II\n\nCHAPTER 14. Object-Oriented Features \nCHAPTER 15. Java Capabilities in PostgreSQL \nCHAPTER 16. Performance Tuning of PostgreSQL Server \nCHAPTER 17. Security Management \nCHAPTER 18. Backup and Recovery \nCHAPTER 19. PSQL Utility \nCHAPTER 20. GUI FrontEnd Tool for PostgreSQL\n(Graphical User Interface)\nCHAPTER 21. Interface Drivers for PostgreSQL \nCHAPTER 22. Perl Database Interface Driver for\nPostgreSQL \nCHAPTER 23. PostgreSQL Management Tools \n\nPART III\n\nCHAPTER 24. Web-Application-Servers for PostgreSQL\nCHAPTER 25. Applications and Tools for PostgreSQL\nCHAPTER 26. Database Design Tool - Entity Relation\nDiagram Tool\nCHAPTER 27. Web Database Design/Implementation tool\nfor PostgreSQL\nCHAPTER 28. PHP Hypertext Preprocessor (Server-side\nHTML-embedded scripting language)\nCHAPTER 29. Web Gem package \nCHAPTER 30. Python Interface for PostgreSQL\nCHAPTER 31. Gateway between PostgreSQL and the WWW\nCHAPTER 32. \"C\", \"C++\", ESQL/C language Interfaces and\nBitwise Operators for PostgreSQL\nCHAPTER 33. Procedural Languages \n\nPART IV\n\nCHAPTER 34. Japanese Language Interface/Kanji Code for\nPostgreSQL, NLS support\nCHAPTER 35. 
PostgreSQL Port to Windows 95/Windows\nNT/2000/XP \nCHAPTER 36. Mailing Lists \nCHAPTER 37. Documentation and Reference Books \nCHAPTER 38. Technical support for PostgreSQL \nCHAPTER 39. Economic and Business Aspects \nCHAPTER 40. FAQ - Questions on PostgreSQL \n\nAPPENDIX\n\n41. APPENDIX A - SYNTAX OF ANSI/ISO SQL 1992,\nANSI/ISO SQL 1998 \n42. APPENDIX B - SQL TUTORIAL FOR BEGINNERS\n42.1 TUTORIAL FOR POSTGRESQL\n42.2 INTERNET URL POINTERS\n42.3 ON-LINE SQL TUTORIALS\n43. APPENDIX C - MIDGARD INSTALLATION\n43.1 SECURITY with OPENSSL\n\nVisit http://www.aldev.8m.com to buy this book.\n\n\n__________________________________________________\nDo You Yahoo!?\nSend FREE video emails in Yahoo! Mail!\nhttp://promo.yahoo.com/videomail/\n", "msg_date": "Sun, 13 Jan 2002 10:08:57 -0800 (PST)", "msg_from": "Al Dev <pgsqlbook@yahoo.com>", "msg_from_op": true, "msg_subject": "Commercial: New Book!! PostgreSQL book is released into the market" }, { "msg_contents": "Dear PostgreSQL community,\n\n Most of the longer standing members of the PostgreSQL\n developers or users communities know very well what to do\n with information provided by Al Dev (aka Alavoor Vasudevan).\n\n For those who don't know yet, some personal notes that might\n help you to make your decision:\n\n * The first edition of Al Dev's book was published by\n iUniverse under the title \"Database-SQL-RDBMS Howto\", ISBN\n 0-595-13675-3. It is currently on rank 439,321 at\n amazon.com. Rank! Not to be mistakenly read as sold copies.\n\n * The book contains Al Dev's original Database-HOWTO. The\n PostgreSQL coreteam had a hard time to get this document\n \"removed\" from official sites like linuxdoc.org. It can\n still be found on the net, google for the keywords:\n\n \"al dev\" postgresql howto nuclear weapons in ancient india\n\n An obvious relationship ... 
no?\n\n * For more examples of his style to put important information\n together visit his website\n\n http://www.aldev.8m.com\n\n The table of content for the new book looks the same again.\n This is no real drawback and since it is all important\n information, one has to read it all anyway, so there was no\n need to waste time by putting it into any specific order.\n\n * Every now and then Al Dev seems to feel the need to clearly\n point out the superior advantage of open source technology\n over commercial products. He then posts his important new\n information on a number of mailing lists.\n\n I do appreciating when people think open source technology\n is a good thing in general. I also prefer arguments based\n on technical facts without polemic bashing.\n\n I have to admit that I am quite courious about it. Since this\n one seems not to be available online, I might have order it,\n only to write a readers review.\n\n\nJan\n\nAl Dev wrote:\n> Commercial: New Book!! PostgreSQL book is released\n> into the market\n>\n> A Complete Guide to PostgreSQL (Redhat Database)\n> Second Edition (revised and enlarged and covers latest\n> version of PostgreSQL 7.1.3),\n> Published Jan 2002\n> Author: Al Dev\n>\n> International Edition:\n> -----------------------\n> The cost of the book is very minimal at about $20.99.\n>\n> Asian Economy Edition:\n> ------------------------\n> For consumers in India, Bangladesh, Bhutan, Maldives,\n> Nepal, Pakistan, Sri Lanka, Myanmar and Thailand\n> please click on the button which says Rs. 330 (Indian\n> Rupees).\n> You can also buy this book for Rs. 330 from\n> http://www.gtcdrom.com , Bangalore, INDIA as well at\n> all GTCdrom branches\n> in every major Indian city.\n>\n> The book is sold at http://www.aldev.8m.com\n>\n> The first edition of this book (under different title)\n> covered\n> PostgreSQL 6.1 and sold about 400,000 copies in\n> USA/Canada. 
The\n> second edition is available for sale world-wide and\n> translated version in\n> Japanese and German will be available in near future.\n>\n> Edition bound, paper back.\n> The book has 43 chapters and is about 550 pages.\n> Level: Intermediate to Advanced User.\n>\n> About 40% of the book is generic and can be used for\n> other databases\n> like MySQL, Oracle, MS SQL server and other 60% is\n> specific to\n> PostgreSQL.\n>\n> Books are immensely important for open-source projects\n> like\n> PostgreSQL as they help the new & experienced users.\n> There are several books on PostgreSQL in the market\n> which directly and indirectly are helping the\n> development of PostgreSQL.\n>\n> PostgreSQL is a very sophisticated software and to use\n> such\n> a sophisticated database, you need a book.\n>\n> \"A Complete Guide to PostgreSQL (Redhat Database)\" is\n> a \"must have\" book as it is very difficult to use\n> PostgreSQL without\n> this book. This book had evolved over a very long\n> period of time.\n>\n> Visit http://www.aldev.8m.com to buy this book.\n>\n> The Table of Contents are as follows:\n>\n> Table Contents\n> --------------\n>\n> PART I\n>\n> CHAPTER 1. Introduction\n> CHAPTER 2. Principles of Database Management\n> CHAPTER 3. PostgreSQL Solution\n> CHAPTER 4. Evaluation and Selection of SQL Servers\n> CHAPTER 5. PostgreSQL, Embedded Databases and LDAP\n> CHAPTER 6. The PostgreSQL Database Architecture\n> CHAPTER 7. The PostgreSQL Instance Architecture\n> CHAPTER 8. PostgreSQL Database Administration\n> Overview\n> CHAPTER 9. PostgreSQL Quick Installation Instructions\n>\n> CHAPTER 10. Quick Start Guide\n> CHAPTER 11. PostgreSQL Supports Extremely Large\n> Databases greater than 500 Gig\n> CHAPTER 12. Setup accurate Time on a SQL Server\n> CHAPTER 13. Replication Management\n>\n> PART II\n>\n> CHAPTER 14. Object-Oriented Features\n> CHAPTER 15. Java Capabilities in PostgreSQL\n> CHAPTER 16. Performance Tuning of PostgreSQL Server\n> CHAPTER 17. 
Security Management\n> CHAPTER 18. Backup and Recovery\n> CHAPTER 19. PSQL Utility\n> CHAPTER 20. GUI FrontEnd Tool for PostgreSQL\n> (Graphical User Interface)\n> CHAPTER 21. Interface Drivers for PostgreSQL\n> CHAPTER 22. Perl Database Interface Driver for\n> PostgreSQL\n> CHAPTER 23. PostgreSQL Management Tools\n>\n> PART III\n>\n> CHAPTER 24. Web-Application-Servers for PostgreSQL\n> CHAPTER 25. Applications and Tools for PostgreSQL\n> CHAPTER 26. Database Design Tool - Entity Relation\n> Diagram Tool\n> CHAPTER 27. Web Database Design/Implementation tool\n> for PostgreSQL\n> CHAPTER 28. PHP Hypertext Preprocessor (Server-side\n> HTML-embedded scripting language)\n> CHAPTER 29. Web Gem package\n> CHAPTER 30. Python Interface for PostgreSQL\n> CHAPTER 31. Gateway between PostgreSQL and the WWW\n> CHAPTER 32. \"C\", \"C++\", ESQL/C language Interfaces and\n> Bitwise Operators for PostgreSQL\n> CHAPTER 33. Procedural Languages\n>\n> PART IV\n>\n> CHAPTER 34. Japanese Language Interface/Kanji Code for\n> PostgreSQL, NLS support\n> CHAPTER 35. PostgreSQL Port to Windows 95/Windows\n> NT/2000/XP\n> CHAPTER 36. Mailing Lists\n> CHAPTER 37. Documentation and Reference Books\n> CHAPTER 38. Technical support for PostgreSQL\n> CHAPTER 39. Economic and Business Aspects\n> CHAPTER 40. FAQ - Questions on PostgreSQL\n>\n> APPENDIX\n>\n> 41. APPENDIX A - SYNTAX OF ANSI/ISO SQL 1992,\n> ANSI/ISO SQL 1998\n> 42. APPENDIX B - SQL TUTORIAL FOR BEGINNERS\n> 42.1 TUTORIAL FOR POSTGRESQL\n> 42.2 INTERNET URL POINTERS\n> 42.3 ON-LINE SQL TUTORIALS\n> 43. APPENDIX C - MIDGARD INSTALLATION\n> 43.1 SECURITY with OPENSSL\n>\n> Visit http://www.aldev.8m.com to buy this book.\n>\n>\n> __________________________________________________\n> Do You Yahoo!?\n> Send FREE video emails in Yahoo! 
Mail!\n> http://promo.yahoo.com/videomail/\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 14 Jan 2002 15:30:54 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Commercial: New Book!! PostgreSQL book is released into" }, { "msg_contents": "Jan Wieck wrote:\n> \n> Dear PostgreSQL community,\n> \n> Most of the longer standing members of the PostgreSQL\n> developers or users communities know very well what to do\n> with information provided by Al Dev (aka Alavoor Vasudevan).\n> ....\n\n\nIs all I did was visit his web site, and got a bill from\n\"billpoint@billpoint.com\" for $25.98. I DID NOT order his book:\n\n> Subject: Billpoint Invoice for Item # , 1 , for $25.98\n> Date: Tue, 15 Jan 2002 17:02:14 -0800 (PST)\n> From: Billpoint System Notification <billpoint@billpoint.com>\n> To: Doug@royer.com\n>\n> ...\n>\n> This is a Billpoint Invoice from alavoor@yahoo.com requesting\n> payment of $25.98 for \"1\" . To pay for this item, click on the\n> link below, or copy and paste it into your browser. (If the\n> link appears on multiple lines, you may need to copy each line\n> separately.) 
\n>\n> ...\n>\n> Invoice Details \n> --------------- \n> ...\n> Seller: alavoor@yahoo.com \n> Seller email: alavoor@yahoo.com\n> Seller Message: A Complete Guide to PostgreSQL.\n> ...\n\n\n\nWhen I called the WHOIS database phone number for billpoint,\nthe answering maching answered BILL - POINT, then I hit '0'\nand E-BAY answered.", "msg_date": "Wed, 16 Jan 2002 13:57:04 -0700", "msg_from": "Doug Royer <Doug@royer.com>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] Commercial: New Book!! PostgreSQL book is " }, { "msg_contents": "Hi Doug,\n\nI just had a similar experience. I emailed him to request a copy of his\nbook for review, and bingo, I get an invoice in the email to cough up\ncash from Billpoint as well.\n\nThis is ridiculous, and is giving us a bad rep.\n\nI'll pursue this further.\n\nBTW, has anyone else suffered this probably-illegal experience with Al\nDev?\n\nRegards and best wishes,\n\nJustin Clift\n\n\nDoug Royer wrote:\n> \n> Jan Wieck wrote:\n> >\n> > Dear PostgreSQL community,\n> >\n> > Most of the longer standing members of the PostgreSQL\n> > developers or users communities know very well what to do\n> > with information provided by Al Dev (aka Alavoor Vasudevan).\n> > ....\n> \n> Is all I did was visit his web site, and got a bill from\n> \"billpoint@billpoint.com\" for $25.98. I DID NOT order his book:\n> \n> > Subject: Billpoint Invoice for Item # , 1 , for $25.98\n> > Date: Tue, 15 Jan 2002 17:02:14 -0800 (PST)\n> > From: Billpoint System Notification <billpoint@billpoint.com>\n> > To: Doug@royer.com\n> >\n> > ...\n> >\n> > This is a Billpoint Invoice from alavoor@yahoo.com requesting\n> > payment of $25.98 for \"1\" . To pay for this item, click on the\n> > link below, or copy and paste it into your browser. 
(If the\n> > link appears on multiple lines, you may need to copy each line\n> > separately.)\n> >\n> > ...\n> >\n> > Invoice Details\n> > ---------------\n> > ...\n> > Seller: alavoor@yahoo.com\n> > Seller email: alavoor@yahoo.com\n> > Seller Message: A Complete Guide to PostgreSQL.\n> > ...\n> \n> When I called the WHOIS database phone number for billpoint,\n> the answering maching answered BILL - POINT, then I hit '0'\n> and E-BAY answered.\n> \n> ------------------------------------------------------------------------\n> Name: Doug.vcf\n> Doug.vcf Type: VCard (text/x-vcard)\n> Encoding: 7bit\n> Description: Card for Doug Royer\n> \n> Part 1.3 Type: Plain Text (text/plain)\n> Encoding: binary\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Thu, 17 Jan 2002 13:34:45 +1100", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] [ANNOUNCE] Commercial: New Book!! PostgreSQL book is" }, { "msg_contents": "Justin Clift wrote:\n> \n> Hi Doug,\n> \n> I just had a similar experience. I emailed him to request a copy of his\n> book for review, and bingo, I get an invoice in the email to cough up\n> cash from Billpoint as well.\n> \n> This is ridiculous, and is giving us a bad rep.\n> \n> I'll pursue this further.\n> \n> BTW, has anyone else suffered this probably-illegal experience with Al\n> Dev?\n\n Billpoint, Inc.\n 2145 Hamilton Avenue\n San Jose, CA 95125\n (408) 626 4910\n (FAX) (408) 626 4901\n\nI would recommend that everyone that has problems call to complain.\nLike I said in the previous email, it seems to also be E-BAY.\nThey DO NOT want a bad reputation.", "msg_date": "Thu, 17 Jan 2002 17:05:35 -0700", "msg_from": "Doug Royer <Doug@royer.com>", "msg_from_op": false, "msg_subject": "Re: [ANNOUNCE] Commercial: New Book!! 
PostgreSQL book is" } ]
[ { "msg_contents": "Is there ever going to be a release?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Sun, 13 Jan 2002 20:35:34 -0500 (EST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Release time" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Is there ever going to be a release?\n\nMarc was supposed to roll a beta5 this weekend, but has shown no sign\nof life since Friday. (OTOH, maybe he's been busy fixing the servers.\nCertainly the mailing list response for the last day or so has been\nway better than it was last week.)\n\nWe were talking about an RC1 in a week, and official release a week\nlater, barring any nasty bugs surfacing.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Jan 2002 22:18:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Release time " }, { "msg_contents": "\nI thought I was waiting for word from you concerning changes to the\nregression test for Thomas' fixes for the timestamp stuff? *puzzled look*\nI swear that was the last thing that I recall being mentioned :( If its\nwaiting on me, I'll roll her first thing in the morning, and apologize\nprofusely for the delay ... :(\n\nOn Sun, 13 Jan 2002, Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Is there ever going to be a release?\n>\n> Marc was supposed to roll a beta5 this weekend, but has shown no sign\n> of life since Friday. 
(OTOH, maybe he's been busy fixing the servers.\n> Certainly the mailing list response for the last day or so has been\n> way better than it was last week.)\n>\n> We were talking about an RC1 in a week, and official release a week\n> later, barring any nasty bugs surfacing.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Mon, 14 Jan 2002 00:27:46 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Release time " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> I thought I was waiting for word from you concerning changes to the\n> regression test for Thomas' fixes for the timestamp stuff? *puzzled look*\n\nYou musta missed the \"all clear\" I sent Saturday. We're ready to roll\nAFAIK.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 13 Jan 2002 23:30:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Release time " }, { "msg_contents": "\nmust have ... :( okay, will do the packaging first thing in the morning\n... beta5 on Mon, and if all is quiet by friday, RC1 on Friday *cross\nfingers*\n\n\nOn Sun, 13 Jan 2002, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > I thought I was waiting for word from you concerning changes to the\n> > regression test for Thomas' fixes for the timestamp stuff? *puzzled look*\n>\n> You musta missed the \"all clear\" I sent Saturday. We're ready to roll\n> AFAIK.\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Mon, 14 Jan 2002 01:19:57 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Release time " }, { "msg_contents": "\ntag'd, packaged and up on the main ftp site ... 
will do a full announce\nlater on this evening, after its had a chance to propogate out ...\n\nOn Sun, 13 Jan 2002, Tom Lane wrote:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Is there ever going to be a release?\n>\n> Marc was supposed to roll a beta5 this weekend, but has shown no sign\n> of life since Friday. (OTOH, maybe he's been busy fixing the servers.\n> Certainly the mailing list response for the last day or so has been\n> way better than it was last week.)\n>\n> We were talking about an RC1 in a week, and official release a week\n> later, barring any nasty bugs surfacing.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Mon, 14 Jan 2002 09:56:41 -0400 (AST)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Release time " }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> tag'd, packaged and up on the main ftp site ... will do a full announce\n> later on this evening, after its had a chance to propogate out ...\n\nLooks good from here.\n\nDon't forget to mention in the announcement that an initdb is needed\nto update from earlier betas.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 14:20:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Release time " } ]
[ { "msg_contents": "I am pleased to announce the release of libpkixpq 0.4, a major release.\n\nNEW FEATURES\n------------\n\n - documentation\n\n - support for PKCS7 asymmetric encrypted messages.\n (OpenPGP symmetric encryption support has been dropped, at\n least temporarily.) The PKCS7 asymmetric encryption \n includes support for recovery keys, but not for multiple\n recipients.\n\n - support for the OpenSSL TYPE_print functions as user \n defined functions for the standard types,\n\n - a new text type, XML, which indicates X.509 certificates,\n X.509 CRLs (rsn), and public keys should be written or\n parsed as specified by the W3C XML Signature schema. \n\n This means that the XML representation of a certificate\n can be intuitively obtained via a query such as:\n\n select cast(cert as xml) from certs where ...\n\n The corresponding input functions are not yet implemented.\n\n - the initial release of a JSP/JSP tags X.509 certificate\n repository, part of a CA tentatively named 'beastmark.'\n This repository can be searched in all manners covered by\n Gutman et al, and is designed to use XML natively (hence\n the prior item) and XSLT to convert results to the \n appropriate format for the client - text/html,\n application/x509-certificate, application/x-x509-ca-cert,\n etc.\n\nSOURCE CODE AVAILABILITY\n------------------------\n\nSource code can be downloaded from http://www.dimensional.com/~bgiles/ ;\nas always US export laws apply.\n\nDocumentation online at http://www.dimensional.com/~bgiles/pkixdoc/\n\nMessage cc'd to crypt@bxa.doc.gov\n", "msg_date": "Sun, 13 Jan 2002 22:07:11 -0700 (MST)", "msg_from": "Bear Giles <bear@coyotesong.com>", "msg_from_op": true, "msg_subject": "Announcement: libpkixpq 0.4 (pkix + strong crypto for db)" } ]
[ { "msg_contents": "Hi\n\nIf I'm not mistaken, somewhere, some time ago, Tom Lane mentioned that he's working on Tablespaces on his personal TODO list.\nI'm just throwing an idea out here; perhaps Tom has already thought of it. But I'll send it anyway.\n\nIf the Tablespace is the same concept as in Oracle, then it can take advantage of clustering.\nSearching for certain data with a Seq Scan would not mean searching the *whole* table but only the part of the table that resides in that cluster, if the key is part of the cluster key.\n\nBest regards\nAndy", "msg_date": "Mon, 14 Jan 2002 13:11:34 +0700", "msg_from": "\"Andy Samuel\" <andysamuel@geocities.com>", "msg_from_op": true, "msg_subject": "Tablespace and clustering" } ]
[ { "msg_contents": "\n> > isync only affects the running processor.\n> \n> I have tried LinuxPPC's TAS code but AIX's assembler complains that\n> lwarx and stwcx are unsupported op. So it seems that we need to tweak\n> your code actually.\n\nThe problem is, that the default on AIX is to produce architecture independent\ncode (arch=COM). Unfortunately not all AIX architectures seem to have these \ninstructions. With arch=ppc it works (two lines adjusted .globl .tas and .tas:).\nMy worry is, that the Architecture book says that the isync is necessary on SMP. \nI wonder why that would not also apply to LinuxPPC or Apple.\n\nAndreas\n", "msg_date": "Mon, 14 Jan 2002 10:16:43 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L " }, { "msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> The problem is, that the default on AIX is to produce architecture\n> independent code (arch=COM). Unfortunately not all AIX architectures\n> seem to have these instructions.\n\nAIX does more than one architecture? Hmm, s_lock.h doesn't know that...\n\n> With arch=ppc it works (two lines\n> adjusted .globl .tas and .tas:). My worry is, that the Architecture\n> book says that the isync is necessary on SMP. I wonder why that would\n> not also apply to LinuxPPC or Apple.\n\nI doubt we've had anyone test on SMP PPC machines, other than Tatsuo's\ntests on AIX. Worse, I'd imagine that any failures from a missing sync\ninstruction would be rare and tough to reproduce. So there may indeed\nbe a lurking problem here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 14 Jan 2002 10:17:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L " } ]
[ { "msg_contents": "Tom wrote:\n> > The question is: how does one find the proper value? That is, is it\n> > possible to design planner benchmarking utility to aid in tuning\n> > Postgres?\n> \n> The trouble is that getting trustworthy numbers requires huge test\n> cases, because you have to swamp out the effects of the kernel's own\n> buffer caching. I spent about a week running 24-hour-constant-disk-\n> banging experiments when I came up with the 4.0 number we use now,\n> and even then I didn't feel that I had a really solid range of test\n> cases to back it up.\n\nWell, I certainly think you did a great job on your test cases :-)\nIf you look at Daniel's idle system he has a measured factor of ~4.8.\n\nThe number is also (imho correctly) rather an underestimate than an \noverestimate. That is, I haven't been able to empirically proove mlw's\npoint about modern disks not beeing sensitive to scan with larger blocks\nvs. random 8k.\n(My tests typically used raw devices circumventing the OS completely,\nsince I do the tests for Informix servers)\n\n> My advice to you is just to drop it to 2.0 and see if you like the plans\n> you get any better.\n\nYup, maybe even lower. I cannot see a difference in disk IO troughput \nduring seq or index scan in Daniel's test case during his normal workload test. \nThis is because his system had a CPU bottleneck during his normal workload\ntest. (Shows again how bad OS's distribute processes (1 CPU serves 2 backends\nthat have a CPU bottleneck, 1 CPU is idle))\n\nAndreas\n", "msg_date": "Mon, 14 Jan 2002 12:22:21 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: again on index usage " } ]
[ { "msg_contents": "dear sir\n\nI have 7.1.3 sql statement \" select '<chinese> 中文測試' \" In unicode datbase ,\nbut I try 7.2b4 can't run \n", "msg_date": "Mon, 14 Jan 2002 22:56:34 +0800", "msg_from": "\"guard\" <guard29@seed.net.tw>", "msg_from_op": true, "msg_subject": "unicode words" }, { "msg_contents": "> dear sir\n> \n> I have 7.1.3 sql statement \" select '<chinese> ' \" In unicode datbase ,\n> but I try 7.2b4 can't run \n[\"Chinese\" character removed]\n\nPlease do not post raw Chinese characters. Instead you could give the\ncharacters in hexadecimal.\n\nAlso, the words \"can't run\" are vague. Could you tell me exactly what\nhappened?\n--\nTatsuo Ishii\n", "msg_date": "Tue, 15 Jan 2002 09:59:16 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: unicode words" } ]
[ { "msg_contents": "Tom writes:\n> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > The problem is, that the default on AIX is to produce architecture\n> > independent code (arch=COM). Unfortunately not all AIX architectures\n> > seem to have these instructions.\n> \n> AIX does more than one architecture? Hmm, s_lock.h doesn't \n> know that...\n\nIt does not need to, since all of them currently use cs(). \nThe compilers by default generate executables that run on all\nof the different processors (they are all Risc).\n\nAndreas\n", "msg_date": "Mon, 14 Jan 2002 17:02:44 +0100", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L " } ]
[ { "msg_contents": "Um, and then there's darwin on the dual g4's.... SMP POSIX code on PPC is\nnot limited to AIX and Linux (ugh). freebsd 5.0 is rumored to be smp and run\non ppc as well.\n\nalex\n\n", "msg_date": "Mon, 14 Jan 2002 13:26:32 -0500", "msg_from": "Alex Avriette <a_avriette@acs.org>", "msg_from_op": true, "msg_subject": "Re: 7.1 vs. 7.2 on AIX 5L " } ]
[ { "msg_contents": "Now that I have moved pg_upgrade into /contrib, can I enable the script\nso you don't have to change ENABLE=\"Y\" to run it? Seems safe.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 15 Jan 2002 00:05:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "pg_upgrade activated?" } ]