[
{
"msg_contents": "I have found current CVS doesn't work with Perl 5.005 patch 3. It tries\nto do:\n\n\t#$ pod2man Pg.pm Pg.3\n /usr/bin/pod2man: Need one and only one podpage argument\n usage: /usr/bin/pod2man [options] podpage\n Options are:\n --section=manext (default \"1\")\n --release=relpatch (default \"perl 5.005, patch 03\")\n --center=string (default \"User Contributed Perl Documentation\")\n --date=string (default \"2/Jun/2002\")\n --fixed=font (default \"CW\")\n --official (default NOT)\n --lax (default NOT)\n\nI have applied the following patch which seems to do the same thing and\nworks with my perl here.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: src/interfaces/perl5/GNUmakefile\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/perl5/GNUmakefile,v\nretrieving revision 1.7\ndiff -c -r1.7 GNUmakefile\n*** src/interfaces/perl5/GNUmakefile\t28 May 2002 16:57:53 -0000\t1.7\n--- src/interfaces/perl5/GNUmakefile\t2 Jun 2002 21:20:58 -0000\n***************\n*** 53,59 ****\n \ttouch $@\n \n Pg.$(perl_man3ext): Pg.pm\n! \t$(POD2MAN) $< $@\n \n \n # During install, we must guard against the likelihood that we don't\n--- 53,59 ----\n \ttouch $@\n \n Pg.$(perl_man3ext): Pg.pm\n! \t$(POD2MAN) --section=$(perl_man3ext) $< > Pg.$(perl_man3ext)\n \n \n # During install, we must guard against the likelihood that we don't",
"msg_date": "Sun, 2 Jun 2002 17:36:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "perl pod2man bug"
}
] |
[
{
"msg_contents": "Krzysztof Stachlewski wrote:\n> ----- Original Message -----\n> From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> To: <stach@toya.net.pl>; <pgsql-bugs@postgresql.org>\n> Sent: Sunday, June 02, 2002 10:45 PM\n> Subject: Re: [BUGS] Bug #655: win32 client and bytea column\n> \n> > Yes, this is certainly our error message:\n> >\n> > pg_exec() query failed: pqReadData() -- read() failed: errno=0 No error\n> >\n> > Of course, the \"0 No error\" is odd. I think that just means that read()\n> > itself didn't set errno for the failure.\n> >\n> > You are clearly base64 encoding the storage info, meaning it isn't some\n> > strange character in the data. Are you using 7.2 for the server and the\n> > client?\n> \n> Yes. libpq.dll is version 7.2.1. So is the server.\n> \n> > My guess is that Win isn't handling some of the larger packets,\n> > but I may be wrong. If it fails reliably, could you find the exact\n> > length where it fails. That may help.\n> \n> It is exactly 6106 bytes of raw data.\n> That is 8144 bytes of base64 encoded data.\n> Below this everything works just fine.\n\nCould it be that Win32 somehow can't handle large tcp packets? Would\nyou create a TEXT field, and stuff 9k of a string into it and see if\nthat fails?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 2 Jun 2002 17:53:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] Bug #655: win32 client and bytea column"
}
] |
[
{
"msg_contents": "I think its already been determined that the cygwin option is too low\nperforming.\n\nHowever, the apache stuff could be quite useful - but if that effort\nwere to be undertaken, it would make more sense to move all versions of the\ncode the\nthe apache runtime, for all platforms. Are there any other runtime\nlibraries out there\nthat are cross platform, open/free and high performance? I know the mozilla\nXPCOM\nlibraries work quite nicely, but are geared more towards multithreaded\napps - and the\nCOM-alike infrastructure is something we wouldn't need.\n\n~Jon\n\n----- Original Message -----\nFrom: Bruce Momjian <pgman@candle.pha.pa.us>\nTo: mlw <markw@mohawksoft.com>\nCc: Tom Lane <tgl@sss.pgh.pa.us>; Marc G. Fournier <scrappy@hub.org>;\n<pgsql-hackers@postgresql.org>\nSent: Sunday, June 02, 2002 8:49 PM\nSubject: Re: [HACKERS] HEADS UP: Win32/OS2/BeOS native ports\n\n\n> mlw wrote:\n> > Like I told Marc, I don't care. You spec out what you want and I'll\nwrite it\n> > for Windows.\n> >\n> > That being said, a SysV IPC interface for native Windows would be kind\nof cool\n> > to have.\n>\n> I am wondering why we don't just use the Cygwin shm/sem code in our\n> project, or maybe the Apache stuff; why bother reinventing the wheel.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n\n",
"msg_date": "Sun, 2 Jun 2002 21:11:02 -0400",
"msg_from": "\"coventry\" <coventry@one.net>",
"msg_from_op": true,
"msg_subject": "Re: HEADS UP: Win32/OS2/BeOS native ports"
}
] |
[
{
"msg_contents": "\nOn Mon, June 03 Bruce wrote:\n> > On Wed, May 08, 2002 at 06:47:46PM +0200, Zeugswetter SB SD Andreas wrote:\n> > > When we are talking about the places where you need double escaping \n> > > (once for parser, once for input function) to make it work, I would also \n> > > say that that is very cumbersome (not broken, since it is thus documented) :-) \n> > > I would also default to strict ANSI, but not depricate the escaping when set.\n> > > All imho of course.\n\n> Yes, these are good points. Our big problem is that we use backslash\n> for two things, one for escaping single quotes and for escaping standard\n> C characters, like \\n. While we can use the standard-supported '' to\n> insert single quotes, what should we do with \\n? The problem is\n> switching to standard ANSI solution reduces our functionality.\n\nThe problem imho is, that this (no doubt in many cases valuable)\nfeature reduces the functionality from the ANSI SQL perspective.\nConsider a field that is supposed to store Windows filenames,\nnam_file='C:\\node1\\resend\\b.dat' :-)\n\nThus I think a GUC to turn off all escaping except '' would be valuable.\n\nAndreas\n",
"msg_date": "Mon, 3 Jun 2002 13:20:13 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: non-standard escapes in string literals"
},
{
"msg_contents": "At 01:20 PM 6/3/02 +0200, Zeugswetter Andreas SB SD wrote:\n> > for two things, one for escaping single quotes and for escaping standard\n> > C characters, like \\n. While we can use the standard-supported '' to\n> > insert single quotes, what should we do with \\n? The problem is\n> > switching to standard ANSI solution reduces our functionality.\n>\n>The problem imho is, that this (no doubt in many cases valuable)\n>feature reduces the functionality from the ANSI SQL perspective.\n>Consider a field that is supposed to store Windows filenames,\n>nam_file='C:\\node1\\resend\\b.dat' :-)\n>\n>Thus I think a GUC to turn off all escaping except '' would be valuable.\n\nWith current behaviour 'C:\\node1\\resend\\b.dat' can be quoted as \n'C:\\\\node1\\\\resend\\\\b.dat'\n\nBut for the ANSI standard how does one stuff \\r\\n\\t and other control \ncharacters into the database?\n\nIf there's no way other than actually sending the control characters then \nthat is a bad idea especially from a security viewpoint.\n\nCheerio,\nLink.\n\n",
"msg_date": "Mon, 03 Jun 2002 23:25:10 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: non-standard escapes in string literals"
},
{
"msg_contents": "Lincoln Yeoh writes:\n\n> But for the ANSI standard how does one stuff \\r\\n\\t and other control\n> characters into the database?\n>\n> If there's no way other than actually sending the control characters then\n> that is a bad idea especially from a security viewpoint.\n\nWhy??\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 4 Jun 2002 21:58:27 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: non-standard escapes in string literals"
},
{
"msg_contents": "At 09:58 PM 6/4/02 +0200, Peter Eisentraut wrote:\n>Lincoln Yeoh writes:\n>\n> > But for the ANSI standard how does one stuff \\r\\n\\t and other control\n> > characters into the database?\n> >\n> > If there's no way other than actually sending the control characters then\n> > that is a bad idea especially from a security viewpoint.\n>\n>Why??\n\nQuoting is to help separate data from commands. Though '' is sufficient for \nquoting ' it seems to me not sufficient for control characters.\n\nThere could be control characters that cause problems with the DB, and \npeople may not be sufficiently aware of potential problems. If you just \nremove the problematic characters, it means you can't store them in the \ndatabase - the db can become less useful.\n\nWhereas with the current way of quoting control characters, if you are \nunsure what to quote, you could safely quote every \"untrusted\" character. \nLess chance of things going wrong. Also being able to quote allows you to \nstore control characters in the database.\n\nAn example of what could go wrong: a RDBMS may treat raw backspaces as part \nof the command stream and not the data, and thus\n\ninsert into pics (data) values ('$CGIPARAM')\ncould become -\ninsert into pics (data) values('....JFIF^H^H^H^H^H^H...^H^H^HUPDATE row \nfrom IMPORTANT where (rowid='1')\nWhich is treated as\nUPDATE row from IMPORTANT where (rowid='1')\n\nAnd so a file upload becomes an insiduous alteration of important data.\n\nHope that helps,\nLink.\n\n\n",
"msg_date": "Wed, 05 Jun 2002 12:20:08 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: non-standard escapes in string literals"
}
] |
[
{
"msg_contents": "Please apply attached patch to contrib/intarray (7.2, 7.3).\n\n Fixed bug with '=' operator for gist__int_ops and\n define '=' operator for gist__intbig_ops opclass.\n Now '=' operator is consistent with standard 'array' type.\n <br>\n Tnanks Achilleus Mantzios for bug report and suggestion.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83",
"msg_date": "Mon, 3 Jun 2002 19:45:03 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "patch for contrib/intarray (7.2 and 7.3)"
}
] |
[
{
"msg_contents": "It is possible de make a cluster with postgresql ?\n\nmichael\n\n\n\n\n\n\n\n\nIt is possible de make a cluster with postgresql \n?\n \nmichael",
"msg_date": "Tue, 4 Jun 2002 16:51:06 +0200",
"msg_from": "\"Vergoz Michael\" <mvergoz@sysdoor.com>",
"msg_from_op": true,
"msg_subject": "clustering"
},
{
"msg_contents": "There are several replication projects underway that can provide varying levels of functionality - by utilizing one of these, you could have a 'cluster', but it would not be the same functionality as you would get from a comercial cluster such as Oracle...\n\nsee the /contrib folder for several of these projects, and search the archive of the mailing list for the recent discusions.\n\n----- Original Message ----- \n From: Vergoz Michael \n To: pgsql-hackers@postgresql.org \n Sent: Tuesday, June 04, 2002 10:51 AM\n Subject: [HACKERS] clustering\n\n\n It is possible de make a cluster with postgresql ?\n\n michael\n\n\n\n\n\n\n\nThere are several replication projects underway \nthat can provide varying levels of functionality - by utilizing one of these, \nyou could have a 'cluster', but it would not be the same functionality as you \nwould get from a comercial cluster such as Oracle...\n \nsee the /contrib folder for several of these \nprojects, and search the archive of the mailing list for the recent \ndiscusions.\n \n----- Original Message ----- \n\nFrom:\nVergoz \n Michael \nTo: pgsql-hackers@postgresql.org\n\nSent: Tuesday, June 04, 2002 10:51 \n AM\nSubject: [HACKERS] clustering\n\n\nIt is possible de make a cluster with postgresql \n ?\n \nmichael",
"msg_date": "Wed, 5 Jun 2002 12:45:24 -0400",
"msg_from": "\"Jon Franz\" <coventry@one.net>",
"msg_from_op": false,
"msg_subject": "Re: clustering"
}
] |
[
{
"msg_contents": "This report was submitted as a Debian bug.\n\n-----Forwarded Message-----\n\nFrom: russell@coker.com.au\nTo: submit@bugs.debian.org\nSubject: Bug#149056: postgresql: should not try in a busy loop when allocating resources\nDate: 04 Jun 2002 23:06:35 +0200\n\nPackage: postgresql\nVersion: N/A\nSeverity: normal\n\nWhen trying to create a semaphore Postgresql 7.2.1-3 will try 400,000 times per\nsecond if it has problems. I think this is excessive, trying a mere 100 times\na second will give the same chance of getting a good result without wasting\nexcessive CPU time.\n\nNB If another process which is CPU bound is using all the resources then\nleaving CPU time free will improve the results...\n\n-- System Information\nDebian Release: 3.0\nKernel Version: Linux lyta 2.4.18lsm #1 Wed May 22 14:20:37 CEST 2002 i686 unknown\n\n-----End of Forwarded Message-----\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"But without faith it is impossible to please him; for \n he that cometh to God must believe that he is, and \n that he is a rewarder of them that diligently seek \n him.\" Hebrews 11:6",
"msg_date": "04 Jun 2002 22:34:38 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "[Fwd: Bug#149056: postgresql: should not try in a busy loop when\n\tallocating resources]"
},
{
"msg_contents": "Oliver Elphick <olly@lfix.co.uk> forwards:\n> When trying to create a semaphore Postgresql 7.2.1-3 will try 400,000 times=\n> per\n> second if it has problems.\n\nAFAICS it will try *once* and abort if it fails. Can you provide a\nreproducible test case for the above behavior?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 08 Jun 2002 22:03:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Bug#149056: postgresql: should not try in a busy loop when\n\tallocating resources]"
},
{
"msg_contents": "Tom Lane wrote:\n> Oliver Elphick <olly@lfix.co.uk> forwards:\n> > When trying to create a semaphore Postgresql 7.2.1-3 will try 400,000 times=\n> > per\n> > second if it has problems.\n> \n> AFAICS it will try *once* and abort if it fails. Can you provide a\n> reproducible test case for the above behavior?\n\nI assume he meant tries to grab a semaphore 400,000 times, but I may be wrong.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 8 Jun 2002 22:09:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Bug#149056: postgresql: should not try in a busy"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I assume he meant tries to grab a semaphore 400,000 times, but I may\n> be wrong.\n\nI don't believe that would happen either ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 08 Jun 2002 22:21:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: Bug#149056: postgresql: should not try in a busy "
}
] |
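The rate-limited retry the Debian reporter asks for in the thread above can be sketched as follows. This is a hypothetical Python illustration, not PostgreSQL's actual C code (which, per Tom Lane, attempts the operation only once): sleeping roughly 10 ms between attempts caps the retry rate near the suggested 100 tries per second instead of spinning 400,000 times per second.

```python
import time

def acquire_with_backoff(try_acquire, attempts=100, delay=0.01):
    """Retry a resource-acquisition callback at a bounded rate.

    Sleeping between attempts (10 ms here, roughly 100 tries per
    second) avoids a busy loop and leaves CPU time free for whichever
    process currently holds the resource.
    """
    for _ in range(attempts):
        if try_acquire():
            return True
        time.sleep(delay)
    return False  # give up after `attempts` tries instead of spinning

# Toy stand-in for a semaphore that frees up on the third try.
state = {"calls": 0}
def try_acquire():
    state["calls"] += 1
    return state["calls"] >= 3

print(acquire_with_backoff(try_acquire, attempts=5, delay=0.001))  # True after 3 calls
```

The bounded `attempts` also matters: a caller that can never succeed gets a clean failure instead of an infinite loop.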
[
{
"msg_contents": "OK, I was wrong. '' can be sufficient. The DB just has to treat everything \nbetween single quotes as data except for '' which is treated as a ' in the \ndata.\n\nHowever raw control characters can still cause problems in the various \nstages from the source to the DB.\n\nCheerio,\nLink.\n\nLincoln Yeoh wrote:\nQuoting is to help separate data from commands. Though '' is sufficient for \nquoting ' it seems to me not sufficient for control characters.\n\n",
"msg_date": "Wed, 05 Jun 2002 12:27:25 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": true,
"msg_subject": "Re: non-standard escapes in string literals"
},
{
"msg_contents": "Lincoln Yeoh writes:\n\n> However raw control characters can still cause problems in the various\n> stages from the source to the DB.\n\nI still don't see why. You are merely speculating about implementation\nfallacies that aren't there.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 6 Jun 2002 19:10:34 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: non-standard escapes in string literals"
},
{
"msg_contents": "Yes it's speculation. The implementation at the DB isn't there, neither are \nthe associated DBD/JDBC/ODBC drivers for it.\n\nBasically if the fallacies aren't in postgresql _if_ the decision is to \nimplement it, I'd be happy.\n\nI was just noting (perhaps superfluously) that backspaces and friends \n(nulls) have been useful for exploiting databases (and other programs). \nRecently at least one multibyte character (0x81a2) allowed potential \nsecurity problems with certain configurations/installations of Postgresql. \nWould switching to the standard cause such problems to be less or more \nlikely? Would making it an option make such problems more likely?\n\nCheerio,\nLink.\n\np.s. Even +++AT[H]<cr>(remove square brackets and <cr> = carriage return) \nas data can cause problems sometimes - esp with crappy modems. Once there \nwas a site whose EDI metadata had lots of +++ and they were experiencing \n\"bad connections\" <grin>...\n\n\nAt 07:10 PM 6/6/02 +0200, Peter Eisentraut wrote:\n>Lincoln Yeoh writes:\n>\n> > However raw control characters can still cause problems in the various\n> > stages from the source to the DB.\n>\n>I still don't see why. You are merely speculating about implementation\n>fallacies that aren't there.\n>\n>--\n>Peter Eisentraut peter_e@gmx.net\n\n\n",
"msg_date": "Fri, 07 Jun 2002 03:00:49 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": true,
"msg_subject": "Re: non-standard escapes in string literals"
}
] |
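The behaviour debated in this thread (backslash escapes layered on top of the SQL-standard doubled quote) can be sketched with a small Python helper. This is an illustration only, with hypothetical names; PostgreSQL's real escaping lives in the backend lexer, and emitting octal escapes is just one way to keep raw control bytes, such as the backspaces Lincoln Yeoh mentions, out of the query text entirely.

```python
def quote_pg_literal(s: str) -> str:
    """Quote a string in the pre-ANSI style discussed above:
    backslash escapes plus the SQL-standard doubled single quote.

    Control characters are emitted as octal \\ooo escapes so that no
    raw control byte ever appears in the query string.
    """
    out = []
    for ch in s:
        if ch == "'":
            out.append("''")        # SQL-standard way to embed a quote
        elif ch == "\\":
            out.append("\\\\")      # backslash is itself an escape char
        elif ord(ch) < 32:
            out.append("\\%03o" % ord(ch))  # e.g. newline -> \012
        else:
            out.append(ch)
    return "'" + "".join(out) + "'"

print(quote_pg_literal("C:\\node1\\b.dat"))  # 'C:\\node1\\b.dat'
print(quote_pg_literal("a'b\n"))            # 'a''b\012'
```

Under strict ANSI rules only the `''` branch would apply, which is exactly the functionality trade-off Andreas and Bruce discuss above.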
[
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Tuesday, June 04, 2002 9:34 PM\n> To: PostgreSQL-development\n> Subject: [HACKERS] Roadmap for a Win32 port\n> \n> \n> OK, I think I am now caught up on the Win32/cygwin \n> discussion, and would\n> like to make some remarks.\n> \n> First, are we doing enough to support the Win32 platform? I think the\n> answer is clearly \"no\". There are 3-5 groups/companies \n> working on Win32\n> ports of PostgreSQL. We always said there would not be \n> PostgreSQL forks\n> if we were doing our job to meet user needs. Well, \n> obviously, a number\n> of groups see a need for a better Win32 port and we aren't \n> meeting that\n> need, so they are. I believe this is one of the few cases \n> where groups\n> are going out on their own because we are falling behind.\n> \n> So, there is no question in my mind we need to do more to encourage\n> Win32 ports. Now, on to the details.\n> \n> INSTALLER\n> ---------\n> \n> We clearly need an installer that is zero-hassle for users. \n> We need to\n> decide on a direction for this.\n> \n> GUI\n> ---\n> \n> We need a slick GUI. pgadmin2 seems to be everyone's favorite, with\n> pgaccess on Win32 also an option. What else do we need here?\n\nNothing else. It is better than any commercial tools in current use.\nAn excellent piece of work.\n \n> BINARY\n> ------\n> \n> This is the big daddy. It is broken down into several sections:\n> \n> FORK()\n> \n> How do we handle fork()? Do we use the cygwin method that copies the\n> whole data segment, or put the global data in shared memory and copy\n> that small part manually after we create a new process?\n\nDo not try to do a fork() on Win32. The one at PW32 is better, but\nstill awful. Win32 just does not have fascilities for fork().\n\nIf you use Cygwin, it will kill the project for commercial use (at least\nfor many institutions). 
That's fine, but it will become an academic\nexercise instead of a viable commercial tool. If they are comfortable\nin that [Cygwin] environment, it makes no sense to use Cygwin instead of\nRedhat. The Redhat version will fork() 100 times faster. After all, if\nthey are going to use unix tools in a unix interface with Unix scripts\nyou might as well use UNIX. And Cygwin requires a license for\ncommercial use.\nhttp://cygwin.com/licensing.html\n \n> THREADING\n> \n> Related to fork(), do we implement an optionally threaded postmaster,\n> which eliminates CreateProcess() entirely? I don't think we will have\n> superior performance on Win32 without it. (This would greatly help\n> Solaris as well.)\n\nCreateProcess() works well for Win32. That is the approach that we used\nand also the approach used by the Japanese team.\nIt is very simple. Simply do a create process call and then perform the\nsame operations that were done up to that point. It isn't difficult.\nThreading is another possibility. I think create process is better,\nbecause you can clone the rights of the one who attaches for the spawned\nserver (if you want to do that).\n\n> \n> IPC\n> \n> We can use Cygwin, MinGW, Apache, or our own code for this. Are there\n> other options?\n\nWe wrote our own from scratch. Cygwin will kill it. If there is a\nMinGW version it might be OK, but if MinGW is GPL, that will kill it.\nHave a look at ACE:\nhttp://www.cs.wustl.edu/~schmidt/ACE.html\nTheir license is on the same level as a BSD license. Now, they use C++,\nbut you can always write:\nextern \"C\" {\n}\nwrappers for stuff and keep PostgreSQL itself in pure, vanilla C. GCC\ndoes come with a C++ compiler, so it isn't going to cut anyone off.\n \n> ENVIRONMENT\n> \n> Lots of our code requires a Unix shell and utilities. Will \n> we continue\n> using cygwin for this?\n\nWe wrote our own utilities from scratch (e.g. initdb). 
The Japanese\ngroup that did the port did the same thing.\n \n> --------------------------------------------------------------\n> -------------\n> \n> As a roadmap, it would be good to get consensus on as many of these\n> items as possible so people can start working in these areas. We can\n> keep a web page of decisions we have made to help rally developers to\n> the project.\n\nIf you want a roadmap, the Japanese group laid it out for you. They\ndid the exact same steps as we did. Now, I don't know if we will be\nable to contribute or not (it is very much up in the air). And we had\nto do a lot of hacking of the source, so you might not want it if we\nvolunteered.\n\nSuggestion:\nAsk the Japanese group if they would like to post their changes back or\nexpose them so that the programming team can get ideas form it.\n\nI actually like what they did better than what we did (A giant DLL and\nall the binaries are microscopic -- it was how I suggested to do it here\nbut it was vetoed).\n\nAnyway, here is a roadmap laid out for you exactly. Just do what it\nsays and you will be fine:\nhttp://hp.vector.co.jp/authors/VA023283/PostgreSQLe.html\n\nLook at where it says \"Gists for patch\" and do that.\n",
"msg_date": "Tue, 4 Jun 2002 22:02:14 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Roadmap for a Win32 port"
},
{
"msg_contents": "Dan,\n\nThe following is to help keep the archives accurate and should not be\nconstrued as an argument against the native Win32 port.\n\nOn Tue, Jun 04, 2002 at 10:02:14PM -0700, Dann Corbit wrote:\n> And Cygwin requires a license for commercial use.\n> http://cygwin.com/licensing.html\n\nThe above is not necessarily true:\n\n Red Hat sells a special Cygwin License for customers who are unable\n to provide their application in open source code form.\n\nNote that the above only comes into play if your application links\nwith the Cygwin DLL. This is easily avoidable by using JDBC, ODBC,\nWin32 libpq, etc. Hence, most people will not be required to purchase\nthis license from Red Hat.\n\nJason\n",
"msg_date": "Wed, 05 Jun 2002 08:07:06 -0400",
"msg_from": "Jason Tishler <jason@tishler.net>",
"msg_from_op": false,
"msg_subject": "Re: Roadmap for a Win32 port"
},
{
"msg_contents": "Jason Tishler wrote:\n> Dan,\n> \n> The following is to help keep the archives accurate and should not be\n> construed as an argument against the native Win32 port.\n> \n> On Tue, Jun 04, 2002 at 10:02:14PM -0700, Dann Corbit wrote:\n> > And Cygwin requires a license for commercial use.\n> > http://cygwin.com/licensing.html\n> \n> The above is not necessarily true:\n> \n> Red Hat sells a special Cygwin License for customers who are unable\n> to provide their application in open source code form.\n> \n> Note that the above only comes into play if your application links\n> with the Cygwin DLL. This is easily avoidable by using JDBC, ODBC,\n> Win32 libpq, etc. Hence, most people will not be required to purchase\n> this license from Red Hat.\n\nSo apps written using client libraries are BSD, while server-side\nchanges would have to release source. Makes sense, though we have never\nhad this distinction before. I assume plpgsql stored procedures would\nhave be open source, but of course those are stored in plaintext on the\nserver so that isn't a problem. If companies created custom C stored\nprocedures, those would have to be open source if using cygwin.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 Jun 2002 11:17:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Roadmap for a Win32 port"
}
] |
[
{
"msg_contents": "I apologize for my English language message. I am unable to speak\nJapanese. We do have a native Japanese speaker here, who could be\ncalled upon if necessary.\n\nThe PostgreSQL team is planning to do a native Win32 port. Perhaps you\nwould like to help with the effort. In that way, your changes will get\npropagated back up the source code tree and you can gain the benefits\nfrom future development efforts without performing any work.\n\nWe did a port to Win32 also, but your approach seems much better. We\nhave very fat executables and you have a marvelous DLL approach.\nProbably, the way that you perform the operations is much better.\n",
"msg_date": "Tue, 4 Jun 2002 22:06:31 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Cooperation"
},
{
"msg_contents": "Dann Corbit wrote:\n\n> I apologize for my English language message. I am unable to speak\n> Japanese. We do have a native Japanese speaker here, who could be\n> called upon if necessary.\n\nThere is no need to aplogize writing an e-mail in English.\nIt's global standards, but some portion is a bit difficult\nto understand. Anyhow, we must firstly express our thanks\nfor your interest in our project, though we are facing also\nhard obstacles as listed on the Web site. \n \n> The PostgreSQL team is planning to do a native Win32 port. Perhaps you\n> would like to help with the effort. In that way, your changes will get\n> propagated back up the source code tree and you can gain the benefits\n> from future development efforts without performing any work.\n\nIt is nice to hear that the PostgreSQL development team has also working\non this subject. Will you please illustrate the procedure more clearly how\nto we contribute our effort to your project. The last four words in the\nabove clause mean that once we supply you with the changed source, then\neverything afterwords could be handled by the team? How the copy right\nwill be dealt with? \n\nThe development has been continued by the volunteer developers here,\nhowever, we have to admit that businesses (companies) are also involved\nto support those people providing time to work on the development,\nnot to commercialization purpose but expecting some return, e.g. earning\ncompany's prestige. So, we have to regulate those backgrouds first based\nupon your proposal. We are positive to help you with our effort anyway,\nif things goes well.\n\n> We did a port to Win32 also, but your approach seems much better. We\n> have very fat executables and you have a marvelous DLL approach.\n> Probably, the way that you perform the operations is much better.\n\nThanks.\n\nToshi\n\n",
"msg_date": "Thu, 6 Jun 2002 16:38:55 +0900",
"msg_from": "ISHIKAWA Toshiyuki <t-ishikawa@astrodesign.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Cooperation"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 05 June 2002 21:00\n> To: Mike Mascari\n> Cc: Rod Taylor; Dave Page; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Operator Comments\n> \n> \n> Mike Mascari wrote:\n> > Here's the history, FWIW:\n> > \n> > I implemented COMMENT ON for just TABLES and COLUMNS, like Oracle.\n> > \n> > Bruce requested it for all objects\n> > \n> > I extended for all objects - including databases (my bad) ;-)\n> > \n> > Peter E. was rewriting psql and wanted the COMMENT on operators to \n> > reflect a COMMENT on the underlying function\n> > \n> > I submitted a patch to do that - I just do what I'm told ;-)\n> \n> Actually, the use of function comments for operators goes \n> back to when I added comments to system tables in \n> include/catalog. I wanted to avoid duplication of comments \n> so I placed them only on the functions and let the operators \n> display the function comments. Were there cases where we \n> don't want the function comments for certain operators? I \n> never anticipated that.\n> \n> Anyway, I looked at the new psql code and it works fine, \n> tries pg_operator description first, then pg_proc if missing.\n\nThe problem that I found was that if you update the comment on an\noperator (a trivial task in pgAdmin which is what I was coding at the\ntime) it updates the comment on the underlying function - not so good as\nthe new comment may no longer make sense when read from the perspective\nof the function. Of course, if the function can be used by different\noperators or even for other uses, then this situation is more likely to\noccur.\n\nDefaulting to the functions comment sounds OK, but I think an update\nshould be stored against the operators oid, not the functions.\n\nRegards, Dave.\n",
"msg_date": "Wed, 5 Jun 2002 21:28:46 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Operator Comments"
},
{
"msg_contents": "Dave Page wrote:\n> The problem that I found was that if you update the comment on an\n> operator (a trivial task in pgAdmin which is what I was coding at the\n> time) it updates the comment on the underlying function - not so good as\n> the new comment may no longer make sense when read from the perspective\n> of the function. Of course, if the function can be used by different\n> operators or even for other uses, then this situation is more likely to\n> occur.\n> \n> Defaulting to the functions comment sounds OK, but I think an update\n> should be stored against the operators oid, not the functions.\n\nYes, agreed. Operator-specific comments are better, if that is what is\nspecirfied by the user.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 5 Jun 2002 16:45:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Operator Comments"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Thomas Lockhart [mailto:thomas@fourpalms.org]\n> Sent: Wednesday, June 05, 2002 3:03 PM\n> To: Bruce Momjian\n> Cc: Igor Kovalenko; PostgreSQL-development\n> Subject: Re: [HACKERS] Roadmap for a Win32 port\n> \n> \n> ...\n> > Good summary. I think we would support both threaded and fork()\n> > operation, and users can control which they prefer. For a \n> web backend\n> > where many sessions are a single query, people may want to \n> give up the\n> > stability of fork() and go with threads, even on Unix.\n> \n> I would think that we would build on our strengths of having \n> a fork/exec\n> model for separate clients. A threaded model *could* benefit \n> individual\n> clients who are doing queries on multiprocessor servers, and \n> I would be\n> supportive of efforts to enable that.\n> \n> But the requirements for that may be less severe than for managing\n> multiple clients within the same process, and imho there is not strong\n> requirement to enable the latter for our current crop of well \n> supported\n> targets. If it came for free then great, but if it came with \n> a high cost\n> then the choice is not as obvious. It is also not a \n> *requirement* if we\n> were instead able to do the multiple threads for a single client\n> scenerio first.\n\nNotion:\nHave one version do both. Your server can fork(), and your sever can\nthread. It can fork() and thread, it can fork() or thread.\n\nThat gives the best of all worlds. One client who has his attachments\nto a database all setup might want to do a bunch of similar queries.\nHence a threaded model is nice.\n\nA server may be set up to clone the rights of the attaching process for\nsecurity reasons. Then you launch a new server with fork().\n",
"msg_date": "Wed, 5 Jun 2002 15:09:29 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Roadmap for a Win32 port"
},
{
"msg_contents": "...\n> Notion:\n> Have one version do both. Your server can fork(), and your sever can\n> thread. It can fork() and thread, it can fork() or thread.\n> That gives the best of all worlds. One client who has his attachments\n> to a database all setup might want to do a bunch of similar queries.\n> Hence a threaded model is nice.\n> A server may be set up to clone the rights of the attaching process for\n> security reasons. Then you launch a new server with fork().\n\nRight. If/when that is possible then let's do it, as long as the cost is\nnot too high. But the intermediate steps are a possibility also, and are\nnot precluded from discussion.\n\nThis will all work out as a *convergence* of interests imho. And there\nis no great identifiable benefit for our current crop of platforms for\ngoing to a threaded model *unless* that enables queries for a single\nclient to execute in parallel (all imho of course ;). \n\nSo our convergence of interests for all platforms is in enabling\nthreading for these two purposes, and focusing on enabling the\nmultithreaded single client *first* means that the current crop of\nclients don't have to accept all negatives while we start on the road to\nbetter support of Win32 machines.\n\n - Thomas\n",
"msg_date": "Wed, 05 Jun 2002 15:21:22 -0700",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Roadmap for a Win32 port"
}
] |
[
{
"msg_contents": "I've been having a lot of fun here at the SIGMOD annual conference,\nattaching faces to names like Stonebraker, Hellerstein, Aoki,\nSeltzer (if these do not ring a bell, you ain't read enough Postgres\nsource code lately). I felt I had to pass along this gem from Joe\nHellerstein, right after he observed that he knew the PG sources\nquite well, and he'd noticed MySQL was a lot smaller:\n\n\"Postgres is bloatware by design: it was built to house PhD theses.\"\n\nimmediately followed by\n\n\"The current maintainers [he's looking right at me while he says this]\nhave done a great job of trimming the fat. I know that *my* thesis\nis gone entirely.\"\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 Jun 2002 01:18:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Straight-from-the-horses-mouth dept"
},
{
"msg_contents": "On Thu, 2002-06-06 at 07:18, Tom Lane wrote:\n> I've been having a lot of fun here at the SIGMOD annual conference,\n> attaching faces to names like Stonebraker, Hellerstein, Aoki,\n> Seltzer (if these do not ring a bell, you ain't read enough Postgres\n> source code lately). I felt I had to pass along this gem from Joe\n> Hellerstein, right after he observed that he knew the PG sources\n> quite well, and he'd noticed MySQL was a lot smaller:\n> \n> \"Postgres is bloatware by design: it was built to house PhD theses.\"\n> \n> immediately followed by\n> \n> \"The current maintainers [he's looking right at me while he says this]\n> have done a great job of trimming the fat. I know that *my* thesis\n> is gone entirely.\"\n\nI hope it is removed in such a clean way that it can be put back _as an\ninstallable module_ if needed.\n\nBloatware or not, one of the main advantages of PG is that it is\ndesigned to be extensible.\n\n<rant>\n\nOne thing I think we have stripped too much is time travel. I hope that\nthere will be possibility to put back hooks for the following:\n\n1) logging dead tuples as they are removed,either to text file or\narchive table/database depending on installed logging function)\n\n2) have VACUUM delete only tuples dead before transaction N (I'd prefer\nsome timestamp, but that would need logging transaction times or moving\ntransaction iods to 64bits so that they can embed time)\n\n3) some way to tell postgres to get data as it was at transaction N - it\nwould be a great way for recovering accidentally deleted data (I send\nout my quite crappy python code for retrieving dead tuples from data\nfiles 1-2 times a month :)\n\nSELECT * FROM MYTABLE AS OF TRANSACTION(123456) would be great.\n\n4) some sparse logging of transaction times, say log current trx nr\nevery 5 minutes, making 3) more usable.\n\n</rant>\n\n-------------------\nHannu, anxiousy wating for more stuff the alleged horses have to say ;)\n\n\n\n\n",
"msg_date": "06 Jun 2002 13:00:44 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Straight-from-the-horses-mouth dept"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> <rant>\n> One thing I think we have stripped too much is time travel.\n\nActually, I was just discussing that at last night's dinner with someone\nwhose name I forget at the moment (I have his card, but not on me).\nHe claimed to know how to support time travel as an optional feature\n--- ie, you don't pay for it if you don't need it. I'm hoping to hear\nmore about this after the conference is over...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 06 Jun 2002 12:13:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Straight-from-the-horses-mouth dept "
},
{
"msg_contents": "On Thu, 2002-06-06 at 21:13, Tom Lane wrote:\n> Hannu Krosing <hannu@tm.ee> writes:\n> > <rant>\n> > One thing I think we have stripped too much is time travel.\n> \n> Actually, I was just discussing that at last night's dinner with someone\n> whose name I forget at the moment (I have his card, but not on me).\n> He claimed to know how to support time travel as an optional feature\n> --- ie, you don't pay for it if you don't need it. I'm hoping to hear\n> more about this after the conference is over...\n\nI guess that we could do something similar to oracle (yes, they have\nsome limited time travel starting from ver 9i ;-p )\n\n1. They log transaction times at some rather coarse interval - this is\nthe cheap part if done relatively seldom.\n\n2. then they have gone through much of trouble to get historic data from\nthe logs.\n\nThe part that could make it cheap for us is that we don't need to go to\nlogs, just having an option to tell the executor to assume it is in some\nother transaction in some part of the tree would be enough (and to\nignore the new dead-tuple bit in indexes now that it is there) - this\nshould be possible at very low extra cost.\n\nso we could resurrect old tuples by selecting them into new table:\n\nCREATE TABLE salvage_mytable \nAS\nSELECT oid as oldoid,tmin as oldtmin, ..., *\n FROM mytable\nAS OF YESTERDAY;\n\n-------------\nHannu\n\n\n\n\n",
"msg_date": "07 Jun 2002 00:11:51 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Straight-from-the-horses-mouth dept"
}
] |
[
{
"msg_contents": "\n\nProblem:\n\nWin2000 and Cygwin/postgresql-7.1 produce\n\"defunct\" processes after each run which can ONLY be killed\nby Win Task Manager. If you allow too many \"defunct\" processes\nyour database requests slow down and your number of connections\nincreases, i.e. if you\nhave 32 connections specified in your postgresql.conf file and\nand 32 \"defunct\" processes and try to run again\nthe connection will be refused.\nWhen you are killing \"defunct\" processes and reach the one which was\nfirst formed\nthe postmaster restarts postgresql.\n\nHow to get read of those \"defunct\" processes?\n\nMuch obliged.\n\nSteven.\n\nP.S. You do not have these \"defunct\" processes under Win98!!!???\n\n",
"msg_date": "Thu, 06 Jun 2002 16:23:29 +0930",
"msg_from": "Steven Vajdic <svajdic@asc.corp.mot.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL and Windows2000 and defunct processes"
},
{
"msg_contents": "Hi.\n\nOn Thu, 06 Jun 2002 16:23:29 +0930\nSteven Vajdic <svajdic@asc.corp.mot.com> wrote:\n\n> How to get read of those \"defunct\" processes?\n\nWhat version of cygipc and cygwin do you use?\n\n---\nYutaka tanida<yutaka@hi-net.zaq.ne.jp>\n謎のWebsite http://www.hi-net.zaq.ne.jp/yutaka/\n\n",
"msg_date": "Thu, 06 Jun 2002 23:43:09 +0900",
"msg_from": "Yutaka tanida <yutaka@hi-net.zaq.ne.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL and Windows2000 and defunct processes"
},
{
"msg_contents": "Hi Michael/Yutaka,\n\nThanks for your reply.\n\nI discovered the cause of the problem (I think) - I just do not know why\nit happens.\n\nPROBLEM (again): DEFUNCT processes under Win2K not seen under Win98.\n\nI am running cygwin-2.125.2.10 (postgresql 7.1) and cigipc-1.11-1\ninstalled in Jan 2002.\n\nPrior to that I struggled with prior cygwin version - no proper\npostgresql run at all.\n\nIn April/May I tried then new version of cygwin (postgresql 7.2) and\ncigipc BUT\ncygwin environment was very poor and without some essential unix\ncommands\nand I gave up.\n\nI can try again with now latest versions if that will help (!!!???).\n\nThe problem is in closing pgSQL database. Somehow the process does not\nfinish.\nUnder Win98 cygwin does not show (ps -ef) defunct processes BUT if you\nrun a program which\nopens/closes pgSQL 2 times in succession the second time the connection\nwill be refused - like it\nwas not properly closed the first time!!!???\n\nUnder Win2K (same version og cygwin/cigipc) this does not happen BUT a\ndefunct process remains\nand accumulates/multiplies after each run\n\nI'll try with Michael's suggestion RE: threads and process closure but I\ndo not think I'll discover\nanything new - the \"postgresql\" process does not finish for some reason\n- the question is why:\nbad cygwin/cigipc, or bad postgresql.conf\nparameters or bad postmaster/postgresql run or ...\n\nPlease, do not ask why I run under Win2K/Win98 - I normally run my\nHTML/PHP/pgSQL\napplication under linux (redhat, mandrake, SuSe) and it works fine BUT\nWin is a much more used\nOS and one has to take that into account.\n\nMore advice/help would be appreciated.\n\nCheers,\n\nSteven.\n\n\n--\n***********************************************\n\nSteven Vajdic (BSc/Hon, MSc)\nSenior Software Engineer\nMotorola Australia Software Centre (MASC)\n2 Second Avenue, Technology Park\nAdelaide, South Australia 5095\nemail: Steven.Vajdic@motorola.com\nemail: svajdic@asc.corp.mot.com\nPh.: +61-8-8168-3543\nFax: +61-8-8168-3501\nFront Office (Ph): +61-8-8168-3500\n\n----------------------------------------\nmobile: +61 (0)419 860 903\nAFTER WORK email: steven_vajdic@ivillage.com\nHome address: 6 Allawah Av., Glen Osmond SA 5064, Australia\n----------------------------------------\n\n***********************************************\n\n\n\n",
"msg_date": "Fri, 07 Jun 2002 11:56:39 +0930",
"msg_from": "Steven Vajdic <svajdic@asc.corp.mot.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL and Windows2000 and defunct processes"
}
] |
[
{
"msg_contents": "Steven,\n\nI found the following snippet in MSDN under CreateProcess:\n\n\"The created process remains in the system until all threads within the\nprocess have terminated and all handles to the process and any of its\nthreads have been closed through calls to CloseHandle. The handles for both\nthe process and the main thread must be closed through calls to CloseHandle.\nIf these handles are not needed, it is best to close them immediately after\nthe process is created.\"\n\nIf this is your case, what is unknown is why it is \"not\" happening in\nwin98?? Can you run a test and close the PROCESS_INFORMATION HANDLEs for\nthe main thread and the process itself and see if that makes a difference\n(and any other HANDLEs that you might have gotten through OpenProcess()\ncalls...)?\n\nMike Shelton\n\n\n-----Original Message-----\nFrom: Steven Vajdic [mailto:svajdic@asc.corp.mot.com]\nSent: Thursday, June 06, 2002 12:53 AM\nTo: pgsql-general@postgresql.org; pgsql-hackers@postgresql.org;\nsvajdic@asc.corp.mot.com; steven_vajdic@yahoo.com.au\nSubject: [HACKERS] PostgreSQL and Windows2000 and defunct processes\n\n\n\n\nProblem:\n\nWin2000 and Cygwin/postgresql-7.1 produce\n\"defunct\" processes after each run which can ONLY be killed\nby Win Task Manager. If you allow too many \"defunct\" processes\nyour database requests slow down and your number of connections\nincreases, i.e. if you\nhave 32 connections specified in your postgresql.conf file and\nand 32 \"defunct\" processes and try to run again\nthe connection will be refused.\nWhen you are killing \"defunct\" processes and reach the one which was\nfirst formed\nthe postmaster restarts postgresql.\n\nHow to get read of those \"defunct\" processes?\n\nMuch obliged.\n\nSteven.\n\nP.S. You do not have these \"defunct\" processes under Win98!!!???\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 6: Have you searched our list archives?\n\nhttp://archives.postgresql.org\n",
"msg_date": "Thu, 6 Jun 2002 08:26:23 -0700 ",
"msg_from": "\"SHELTON,MICHAEL (Non-HP-Boise,ex1)\" <michael_shelton@non.hp.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL and Windows2000 and defunct processes"
}
] |
[
{
"msg_contents": "Patch to contrib/intarray is attached to this message.\nPlease apply it to 7.2 and CVS\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Mon, 3 Jun 2002 19:45:03 +0300 (GMT)\nFrom: Oleg Bartunov <oleg@sai.msu.su>\nTo: Pgsql Hackers <pgsql-hackers@postgresql.org>\nSubject: [HACKERS] patch for contrib/intarray (7.2 and 7.3)\n\nPlease apply attached patch to contrib/intarray (7.2, 7.3).\n\n  Fixed bug with '=' operator for gist__int_ops and\n  define '=' operator for gist__intbig_ops opclass.\n  Now '=' operator is consistent with standard 'array' type.\n  <br>\n  Tnanks Achilleus Mantzios for bug report and suggestion.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n\n\r\n---------------------------(end of broadcast)---------------------------\r\nTIP 2: you can get off all lists at once with the unregister command\r\n    (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\r\n",
"msg_date": "Thu, 6 Jun 2002 19:43:26 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "patch for contrib/intarray (7.2 and 7.3) (fwd)"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nOleg Bartunov wrote:\n> Patch to contrib/intarray is attached to this message.\n> Please apply it to 7.2 and CVS\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> ---------- Forwarded message ----------\n> Date: Mon, 3 Jun 2002 19:45:03 +0300 (GMT)\n> From: Oleg Bartunov <oleg@sai.msu.su>\n> To: Pgsql Hackers <pgsql-hackers@postgresql.org>\n> Subject: [HACKERS] patch for contrib/intarray (7.2 and 7.3)\n> \n> Please apply attached patch to contrib/intarray (7.2, 7.3).\n> \n> Fixed bug with '=' operator for gist__int_ops and\n> define '=' operator for gist__intbig_ops opclass.\n> Now '=' operator is consistent with standard 'array' type.\n> <br>\n> Tnanks Achilleus Mantzios for bug report and suggestion.\n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Jun 2002 10:43:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch for contrib/intarray (7.2 and 7.3) (fwd)"
},
{
"msg_contents": "\nPatch applied to 7.2.X and 7.3.  Thanks.\n\n---------------------------------------------------------------------------\n\n\nOleg Bartunov wrote:\n> Patch to contrib/intarray is attached to this message.\n> Please apply it to 7.2 and CVS\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n> \n> ---------- Forwarded message ----------\n> Date: Mon, 3 Jun 2002 19:45:03 +0300 (GMT)\n> From: Oleg Bartunov <oleg@sai.msu.su>\n> To: Pgsql Hackers <pgsql-hackers@postgresql.org>\n> Subject: [HACKERS] patch for contrib/intarray (7.2 and 7.3)\n> \n> Please apply attached patch to contrib/intarray (7.2, 7.3).\n> \n> Fixed bug with '=' operator for gist__int_ops and\n> define '=' operator for gist__intbig_ops opclass.\n> Now '=' operator is consistent with standard 'array' type.\n> <br>\n> Tnanks Achilleus Mantzios for bug report and suggestion.\n> \n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\nContent-Description: \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n  Bruce Momjian                        |  http://candle.pha.pa.us\n  pgman@candle.pha.pa.us               |  (610) 853-3000\n  +  If your life is a hard drive,     |  830 Blythe Avenue\n  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Jun 2002 17:52:39 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: patch for contrib/intarray (7.2 and 7.3) (fwd)"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Steve Howe [mailto:howe@carcass.dhs.org] \n> Sent: 06 June 2002 02:37\n> To: Bruce Momjian\n> Cc: PostgreSQL-development\n> Subject: Re: Roadmap for a Win32 port\n> \n> \n> Hello Bruce,\n> \n> Wednesday, June 5, 2002, 1:33:44 AM, you wrote:\n> \n> BM> INSTALLER\n> BM> ---------\n> \n> BM> We clearly need an installer that is zero-hassle for \n> users. We need \n> BM> to decide on a direction for this.\n> I suggest Nullsoft install system \n> (http://www.nullsoft.com/free/nsis/). It's > real good and very \n> simple to use. I can help on this if you want.\n\nI think that a Windows Installer compatible package would be better as\nit would allow us to build the package as a merge module which others\ncould use in their installers for their PostgreSQL based apps, allowing\none installation to install everything they require easily and (more\nimportantly) correctly. An example of this can be found in the psqlODBC\ninstaller.\n\nI can handle this if required.\n\n> BM> ENVIRONMENT\n> \n> I also would like to empathize that probably a small GUI for \n> controlling the PostgreSQL service/application would be nice. \n\nI'm happy to add such code to pgAdmin - seems like the natural thing to\ndo (to me at least!).\n\nRegards, Dave.\n",
"msg_date": "Fri, 7 Jun 2002 08:42:33 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Roadmap for a Win32 port"
}
] |
[
{
"msg_contents": "Hi,\nI'm trying to start to program with the PostgreSQL's geometric primitive\ntypes, and have started to write some code using them (PostgreSQL\nversion 7.1.3, installed from source). However when I\ninclude the file utils/geo_decls.h I get an error starting that the type\nPGFunction (found in a file included from geo_decls.h) cannot be found.\n\nI've done a search of all the header files in my installation (and also\nall the source files that I compiled), and cannot find the definition of\nthe PGFunction type. Does anyone have any idea of where I can find this\ndefinition, or of why it might be missing.\n-- \nTony\n\n---------------------------------\nDr. Tony Griffiths\nResearch Fellow\nInformation Management Group,\nDepartment of Computer Science,\nThe University of Manchester,\nOxford Road,\nManchester M13 9PL, \nUnited Kingdom\n\nTel. +44 (0) 161 275 6139\nFax +44 (0) 161 275 6236\nemail tony.griffiths@cs.man.ac.uk\n---------------------------------\n",
"msg_date": "Fri, 07 Jun 2002 11:11:30 +0100",
"msg_from": "Tony Griffiths <tony.griffiths@cs.man.ac.uk>",
"msg_from_op": true,
"msg_subject": "Missing types in C header files"
},
{
"msg_contents": "Tony Griffiths writes:\n\n> I've done a search of all the header files in my installation (and also\n> all the source files that I compiled), and cannot find the definition of\n> the PGFunction type. Does anyone have any idea of where I can find this\n> definition, or of why it might be missing.\n\nfmgr.h\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 8 Jun 2002 00:27:29 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Missing types in C header files"
},
{
"msg_contents": "Tony Griffiths wrote:\n> \n> Hi,\n> I'm trying to start to program with the PostgreSQL's geometric primitive\n> types, and have started to write some code using them (PostgreSQL\n> version 7.1.3, installed from source). However when I\n> include the file utils/geo_decls.h I get an error starting that the type\n> PGFunction (found in a file included from geo_decls.h) cannot be found.\n> \n> I've done a search of all the header files in my installation (and also\n> all the source files that I compiled), and cannot find the definition of\n> the PGFunction type. Does anyone have any idea of where I can find this\n> definition, or of why it might be missing.\n\nHmm. I'm not sure (having not integrated client-side handling of these\ntypes using Postgres' own definition of the structures) but I think that\nthese definitions are not intended for you to use directly. There is a\nset of include files intended for client-side programs, and perhaps this\ndata type needs to be mentioned there. Or perhaps we make no provision\nfor that (yet) and you are on your own. Perhaps someone else will speak\nup on The Right Way To Do This?\n\n - Thomas\n",
"msg_date": "Fri, 07 Jun 2002 18:03:30 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Missing types in C header files"
},
{
"msg_contents": "I've looked in fmgr.h and there is no definition of this type there - it\nuses the type, but does not define it.\n\nPeter Eisentraut wrote:\n> \n> Tony Griffiths writes:\n> \n> > I've done a search of all the header files in my installation (and also\n> > all the source files that I compiled), and cannot find the definition of\n> > the PGFunction type. Does anyone have any idea of where I can find this\n> > definition, or of why it might be missing.\n> \n> fmgr.h\n> \n> --\n> Peter Eisentraut peter_e@gmx.net\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \nTony\n\n---------------------------------\nDr. Tony Griffiths\nResearch Fellow\nInformation Management Group,\nDepartment of Computer Science,\nThe University of Manchester,\nOxford Road,\nManchester M13 9PL, \nUnited Kingdom\n\nTel. +44 (0) 161 275 6139\nFax +44 (0) 161 275 6236\nemail tony.griffiths@cs.man.ac.uk\n---------------------------------\n",
"msg_date": "Sun, 09 Jun 2002 23:42:04 +0100",
"msg_from": "Tony Griffiths <tony.griffiths@cs.man.ac.uk>",
"msg_from_op": true,
"msg_subject": "Re: Missing types in C header files"
},
{
"msg_contents": "I've found the mistake - as usual it's down to me! I didn't realise that \nI had to include postgres.h before including geo_decls.h All now \ncompiles ok.\n\nTony\n\nTony Griffiths wrote:\n\n>I've looked in fmgr.h and there is no definition of this type there - it\n>uses the type, but does not define it.\n>\n>Peter Eisentraut wrote:\n>\n>>Tony Griffiths writes:\n>>\n>>>I've done a search of all the header files in my installation (and also\n>>>all the source files that I compiled), and cannot find the definition of\n>>>the PGFunction type. Does anyone have any idea of where I can find this\n>>>definition, or of why it might be missing.\n>>>\n>>fmgr.h\n>>\n>>--\n>>Peter Eisentraut peter_e@gmx.net\n>>\n>>---------------------------(end of broadcast)---------------------------\n>>TIP 6: Have you searched our list archives?\n>>\n>>http://archives.postgresql.org\n>>\n>\n\n\n\n\n\n\nI've found the mistake - as usual it's down to me! I didn't realise that\nI had to include postgres.h before including geo_decls.h All now compiles\nok.\n\nTony\n\nTony Griffiths wrote:\n\nI've looked in fmgr.h and there is no definition of this type there - ituses the type, but does not define it.Peter Eisentraut wrote:\n\nTony Griffiths writes:\n\nI've done a search of all the header files in my installation (and alsoall the source files that I compiled), and cannot find the definition ofthe PGFunction type. Does anyone have any idea of where I can find thisdefinition, or of why it might be missing.\n\nfmgr.h--Peter Eisentraut peter_e@gmx.net---------------------------(end of broadcast)---------------------------TIP 6: Have you searched our list archives?http://archives.postgresql.org",
"msg_date": "Mon, 10 Jun 2002 14:27:24 +0100",
"msg_from": "\"Tony Griffiths(RA)\" <griffitt@cs.man.ac.uk>",
"msg_from_op": false,
"msg_subject": "Re: Missing types in C header files"
},
{
"msg_contents": "Tony Griffiths <tony.griffiths@cs.man.ac.uk> writes:\n> I've looked in fmgr.h and there is no definition of this type there - it\n> uses the type, but does not define it.\n\nEh?\n\n\ttypedef Datum (*PGFunction) (FunctionCallInfo fcinfo);\n\nLooks like a definition to me ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Jun 2002 10:04:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Missing types in C header files "
}
] |
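
The thread above resolves into an include-ordering rule: `PGFunction` lives in `fmgr.h`, but PostgreSQL's server-side headers assume `postgres.h` has been included first. A minimal sketch of the include order for C code using the geometric types (illustrative only; nothing else about the original program is known, and paths are relative to the server include directory):

```c
#include "postgres.h"         /* must come first: Datum, int32, and friends */
#include "fmgr.h"             /* defines PGFunction and the fmgr call interface */
#include "utils/geo_decls.h"  /* Point, LSEG, PATH, BOX, ... */
```
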
[
{
"msg_contents": "\nI have just committed the last of the schema related changes to pgAdmin\nto CVS. This was a significant amount of work and there are bound to be\nsome bugs, so if you have a Windows PC & a little spare time, I would\nappreciate it if you could find some time to try it out with any of\nPostgreSQL 7.1.x, 7.2.x or 7.3dev.\n\nThe source code can be found at: http://cvs.pgadmin.org\n\nPrecompiled development binaries, with installation instructions are at:\nhttp://cvs.pgadmin.org/cgi-bin/viewcvs.cgi/binaries/binaries.tar.gz?tarb\nall=1\n\nBelow is a list of changes from v1.2.0:\n\n- New resize code in frmMain - allows adjustment of the\nListview/Definition pane split.\n- Fixed a bug where selecting privilege ALL did not disable the Rule\nprivilege.\n- Hide System Objects in the SQL Wizard.\n- Updated icons.\n- Added Refresh button to DataGrid.\n- Added Select All/Select None buttons to potentially large listviews.\n- Make System Objects not applicable for Revision Control.\n- Simplified Listview handling code.\n- Reworded text on encrypted passwords in frmOptions.\n- Added Query Log Recorder.\n- Fixed a bug that cleared default values instead of updating them.\n- Check the PostgreSQL version when connecting and handle correctly.\n- Updated the PostgreSQL docs to the 7.2 Release version.\n- Clear Upgrade Wizard listview before populating.\n- REVOKE privileges from groups correctly.\n- Allow creation of tables with no columns, just inherits.\n- Don't include inherited columns & checks in table definitions.\n- Fixed a bug in the Import Wizard data parser.\n- Fix mouse pointer and allow display of errors when timer is stopped\n(Mark A. 
Taff).\n- Fixed a bug in the query parser in the SQL output grid.\n- Allow pseudo modification of views with PostgreSQL 7.2+\n- Views can now be renamed.\n- Fixed a bug in the trigger reverse engineering that prepended the\nexecution conditions of previous triggers to the current.\n- Added an option to enable or disable Auto Row Counts.\n- Added Rows property to View objects.\n- Set default database encoding to \"SQL_ASCII\".\n- Quote function definition when needed.\n- Fixed a bug that prepended a carriage return when loading SQL queries\nfrom file.\n- Improved warnings about dropping objects that are not up to date in\nthe Revision Control System.\n- Added AllowConnections property to database objects, display it in\npgAdmin, and check it before attempting to connect to a database.\n- Invalidate Caches before refreshing hierarchy in pgSchema.\n- Standardised db name access method throughout pgSchema's classes, and\nadded caching.\n- Added support for renaming Sequences & Indexes.\n- Committing a table now also commits sub objects.\n- Added a '-wine' command line option to disable modal dialogues (they\ndon't seem to work under Wine).\n- Prevent connection to databases until they are selected. 
Added an\noption to revert to old behaviour.\n- Allow commit of entire database to revision control.\n- Added support for dropping checks with PostgreSQL 7.2+.\n- Excel Exporter: Format the cell for the data type & inserts the data\nusing the cells 'FormulaR1C1' property (David Horwitz).\n- Allow selection of font for display of data.\n- Added a guide to setting up a development environment.\n- Added a HOWTO on using MD5 Encrypted Passwords.\n- Added support for viewing statistics on PostgreSQL 7.2+\n- Cancelling closure of child windows will now cancel application exit.\n- Allow sorting of listview/statsview by clicking the column headers.\n- Use Primary Keys for updating/deleting rows in the data editor where\npossible.\n- Fix EXPLAIN for PostgreSQL 7.3+\n- Added support for Domains in PostgreSQL 7.3+\n- Enhanced the query parser to detect queries with functions and\nsubselects in the column list, and aliased columns as non-updateable.\n- Quote sequence names properly when using setval.\n- Treat all objects named pgadmin_* as system objects.\n- Fixed a bug in the Operator Cache which also cached the left & right\ntypes.\n- Allow addition & removal of NOT NULL constraints on columns with\nPostgreSQL 7.3+.\n- Check the ODBC driver version correctly for EXPLAIN.\n- Added support for Schemas in PostgreSQL 7.3+.\n- Removed Revision Control due to it's complexity and lack of use\nfollowing an RFD on the hackers and support lists.\n- Rewrote the code that associates treeview nodes with pgSchema objects.\nThe new code is faster and more reliable.\n- Correctly recognise functions with one opaque argument.\n- Filter functions listed when creating Types & Operators to only those\nthat are suitable.\n- Only quote identifiers when required.\n\nRegards, Dave.\n",
"msg_date": "Fri, 7 Jun 2002 14:15:05 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "pgAdmin schema updates complete"
}
] |
[
{
"msg_contents": "\n> Anyway, I am pretty sure that PostgreSQL is not the culprit here. As it \n> happens this project is back on the table for me so it is interesting that \n> your email popped up now. I just compiled the latest version of PostgreSQL \n> on my AIX system and it generated lots of errors and then completed and \n> installed fine. Makes me sort of nervous. We'll see how it goes. Anyone \n> have any horror/success stories about PostgreSQL on AIX for me?\n\nThe \"errors\" are mostly duplicate symbol warnings, that are part of generating\na shared lib on AIX (in a mostly gcc and xlc independent way), and can be safely\nignored.\n\nThe imho most needed effort for AIX would be to switch the TAS stuff from\ncs() to fetch_and_or() or a PowerPC assembler or the test_and_set() that is \nundocumented/intended for kernel, see discussions from last year.\n\nThe fetch_and_or() is a lot faster on multi processor systems but a little\nslower on single processor. But cs() is documented as depricated, so ...\n\nI might get round to doing this.\n\nAndreas\n",
"msg_date": "Fri, 7 Jun 2002 16:23:42 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] PostgreSQL on AIX"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> > Anyway, I am pretty sure that PostgreSQL is not the culprit here. As it \n> > happens this project is back on the table for me so it is interesting that \n> > your email popped up now. I just compiled the latest version of PostgreSQL \n> > on my AIX system and it generated lots of errors and then completed and \n> > installed fine. Makes me sort of nervous. We'll see how it goes. Anyone \n> > have any horror/success stories about PostgreSQL on AIX for me?\n> \n> The \"errors\" are mostly duplicate symbol warnings, that are part of generating\n> a shared lib on AIX (in a mostly gcc and xlc independent way), and can be safely\n> ignored.\n> \n> The imho most needed effort for AIX would be to switch the TAS stuff from\n> cs() to fetch_and_or() or a PowerPC assembler or the test_and_set() that is \n> undocumented/intended for kernel, see discussions from last year.\n\nYes, TODO has:\n\n\t* Evaluate AIX cs() spinlock macro for performance optimizations (Tatsuo)\n\n> The fetch_and_or() is a lot faster on multi processor systems but a little\n> slower on single processor. But cs() is documented as depricated, so ...\n\nShould I update the TODO item?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Jun 2002 22:53:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PostgreSQL on AIX"
}
] |
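
The cs()/fetch_and_or() discussion above concerns the atomic primitive behind PostgreSQL's spinlocks. As a hedged sketch of what a test-and-set (TAS) spinlock looks like, here is a version built on the GCC `__sync` builtins rather than the AIX primitives under discussion; the names mirror PostgreSQL's spinlock API (`slock_t`, TAS, unlock) only for familiarity, and this is not the project's actual implementation:

```c
/* Sketch of a test-and-set spinlock using GCC atomic builtins.
 * Assumption: a GCC-compatible compiler providing __sync_lock_test_and_set
 * and __sync_lock_release; the AIX-specific primitives in the thread
 * above would take this role on that platform. */
typedef volatile int slock_t;

/* Atomically set the lock word to 1 and return its previous value:
 * 0 means the lock was free and is now held by the caller,
 * nonzero means it was already held. */
int tas(slock_t *lock)
{
    return __sync_lock_test_and_set(lock, 1);
}

void s_unlock(slock_t *lock)
{
    __sync_lock_release(lock);   /* store 0 with release semantics */
}

void s_lock(slock_t *lock)
{
    while (tas(lock))
        ;                        /* spin; a real lock would back off or sleep */
}
```

The single-vs-multiprocessor tradeoff Andreas mentions lives entirely inside the primitive chosen for `tas()`; the surrounding lock logic stays the same.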
[
{
"msg_contents": "\nThis problem was discovered in 7.1.2. Was wondering whether this is a known problem or not; we plan to test this on the latest postgres sometime later.\n\nWe have a large table, lets call it A, millions of rows. And in the table is a field called time, which is TIMESTAMP type. We have an index on it.\n\nOftentimes we like to get the latest row inserted by time on a given constraint. So we do a:\n\n\nSELECT * FROM A WHERE someconstraint = somerandomnumber ORDER BY time desc limit 1;\n\nPostgres intellegently uses the index to scan through the table from the end forward.\n\nIf there are no items that fit the constraint, the query will take a long time (cause it has to scan the whole table).\n\nIf there are items (plural important here, read below) that fit the constraint, the database finds the first item, and returns it right away (fairly quickly if the item is near the end).\n\nHowever, if there is only ONE item, postgres still scans the whole database. Not sure why. We also find out that if:\n\nThere are 2 items that match the criteria, and you do a LIMIT 2, it scans the whole table as well. Limit 1 returns quickly. Basically it seems like postgres is looking for one more item than it needs to.\n\n-rchit\n",
"msg_date": "Fri, 7 Jun 2002 16:26:34 -0700 ",
"msg_from": "Rachit Siamwalla <rachit@ensim.com>",
"msg_from_op": true,
"msg_subject": "Question whether this is a known problem in 7.1.2"
},
{
"msg_contents": "Rachit Siamwalla <rachit@ensim.com> writes:\n> There are 2 items that match the criteria, and you do a LIMIT 2, it\n> scans the whole table as well. Limit 1 returns quickly. Basically it\n> seems like postgres is looking for one more item than it needs to. \n\nThis is not a bug; or at least it's not something I'm prepared to break\nother things to change.\n\nIf you can figure out a way to change nodeLimit.c to not get confused\nabout change-of-fetch-direction without the extra fetch, then send a\npatch.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 08 Jun 2002 11:52:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question whether this is a known problem in 7.1.2 "
}
] |
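
Tom's reply attributes the behavior to nodeLimit.c fetching one tuple past the limit before it can detect a change of fetch direction. The following is a loose model of that behavior (assumed for illustration; it is not the actual executor code) showing why a query whose match count exactly equals the LIMIT degenerates into a full backward scan:

```c
/* Loose model of a Limit node over a backward index scan that keeps
 * pulling tuples until it has seen limit+1 matches (or exhausts the
 * input), mirroring the "one more item than it needs" behavior
 * described in the report.  Returns the number of rows examined. */
int rows_scanned(const int *table, int nrows, int key, int limit)
{
    int scanned = 0, matched = 0;
    for (int i = nrows - 1; i >= 0; i--)   /* backward index scan */
    {
        scanned++;
        if (table[i] == key)
        {
            matched++;
            if (matched > limit)           /* the extra fetch past the limit */
                break;
        }
    }
    return scanned;
}
```

With exactly `limit` matching rows, `matched > limit` never fires and every row is examined; one additional matching row lets the scan stop almost immediately, which is exactly the asymmetry the report observes.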
[
{
"msg_contents": "Developers,\n\nHere's part to of my proposal to enhance, improve, and fix Timestamp and \nInterval in PostgreSQL. Part I is included after Part II in case everyone \nhas forgotten it.\n\nPlease give me feedback on this. My interest is that I develop calendaring \napps based on Postgresql, and the current Timestamp + Interval limitations \nand wierdnesses are giving me tsuris. Thus I'm not particularly attached \nto the specifics of my proposals, so long as we do *something* to fix the \nissues.\n\nPart II\n\nInterval\n-----------------------------------\n\nThere are a few problems currently with the Interval data type. The biggest \nis that the current rules give us no clear path for implementation of a full \nset of operators. The SQL92 standard is no help here; its implementation is \nunintuitive and extremely limited ... more limited, in fact, than the current \nincomplete implementation in PostgreSQL.\n\n\nProposal #3: We should support the addition of \"whole days\".\n\nDescription: Interval should support a \"Weeks to Days\" increment which is \natomic per day, and not as a aggregate of hours.\n\nReason: Currently, the \"days\" increment in Interval is treated as \"x 24 hours\" \nand not as whole days. This can cause some confusion when date calculations \nbreak over a DST change; users do *not* expect events to get an hour earlier \nor later in the fall or the spring. 
The current result is that a lot of \nusers give up on utilizing time zones because they can't deal with the time \nshift in calendar applications.\n\n\nProposal #4: Create to_char(INTERVAL, 'format string') Function.\n\nDescription: We could really use a built-in function that supports output \nformatting of Intervals.\n\nReason: self-evident, I think.\n\n\nProposal #5: Two alternate proposals for overhaul of the interval data type.\n\nDescription: Interval needs some radical changes to calculations and \noperators.\n\nReason: Currently, it is nearly impossible to conceive of an implementation \nfor a full set of operators for the interval data type ( + - / * ) because of \nthe variability of conversions from one interval increment to another. For \nexample, what exactly should be the result of '3 months' / '4 days'? Here \nare two alternatives.\n\nAlternative #1: Treat Interval Increments as Atomic, and Round\n\nIf we implemented this, each of the 3 sub-types of Interval (Year to Month, \nWeek to Day, and Hour to Millesecond per proposal #3) would be treated as \n\"atomic\" and not renderable in terms of smaller increments, in the same way \nthat integers are not divisible beyond a prime. In fact, rather than \nexpressing remainders in smaller increments, the modulo ( % ) operator would \nbe used to express the remainder.\n\nFurther, we would need to create a set of casting functions that allows for \nthe conversion of one interval subtype into another, using rounding by \napproximates, such as 1 year = 365 days, 1 month = 30 days, 1 day = 24 hours, \netc. This is not that different from how the DATE data type works. If users \nattempt multiplication and division with intervals of different subtypes, an \nimplicit cast would be made into the subtype of the smallest value. \n\nFinally, multiplication and division by floats would be disallowed and \nreplaced by multiplication and division by integers. 
Thus:\n\n'1 month' + '33 days' = '1 month 33 days'\n'1 month 33 days'::INTERVAL WEEK TO DAY = '63 days'\n'1 month' + '33 days'::INTERVAL YEAR TO MONTH = '2 months'\n'5 months' / '2 months' = 2\n'5 months' % '2 months' = '1 month'\n'5 months' / 2 = '2 months'\n'5 months' % 2 = '1 month'\n'9 months' / '2 weeks' = '270 days' / '14 days' = 19\n'15 hours' * 20 = '300 hours' (not '12 days 12 hours')\netc.\n\nPros: It's simple and relatively intuitive. This approach also is similar \nto the SQL92 spec, which focuses on interval subtypes.\nCons: It requires an annoying implementation of subtypes, which is cumbersome \nand difficult to manage when you have mixed intervals (e.g. '4 days 8 hours 9 \nminutes'). And, with every operation, rounding is being used which can \nresult in some ghastly inequalities:\n'1 year'/12 --> '1 month'::INTERVAL WEEK TO DAY --> '30 days' * 12 \n--> '360 days' / '1 year' = 0\n\n\nAlternative #2: Tie Intervals to a Specific Timestamp\n\nThis is the most robust interval implementation I can imagine. The basic idea \nis this: instead of intervals being an \"absolute\" value, they would be \nrooted in a specific timestamp. For example, rather than:\n\tINTERVAL '45 Days'\nWe would use:\n\tINTERVAL '2002-03-30 +45 days'\nThis would allow us to ground our intervals in the real calendar, and any \nsubtype conversion problems could be eliminated by resorting to the calendar. \nWe would know, for example, that:\n\t'2002-05-30 +2 months' / '2002-05-30 +2 weeks' = 4.35714...\nand even that\n\t'2002-05-30 +2 months' / 14 = '2002-05-30 +4 days 8 hours 34 min 17 sec ...'\n\nFor simplicity, users would be allowed to use intervals which did not state a \nstart date. In this case, the start date would be assumed to be a default \nstart date, such as '2000-01-01 00:00:00'. Also, start dates could be \nassumed from timestamp math:\n\n'2002-07-30' - '2002-05-30' = '2002-07-30 -61 days'\n\nOf course, this does not get us entirely away from subtyping. 
For example, if \nwe did arithmetic with disparate dates, increments would have to be applied \nper subtype. That is:\n\n'2002-05-30 +61 days' = '2002-05-30 +2 months'\nbut '2002-05-30 +2 months' + '2002-01-28' \n\t= '2002-01-28 +2 months' < '2002-01-28 +61 days'\n\nAlso, interval to interval math would no longer be commutative, because we \nwould need to use the start date of the first interval in the case of \ndisparate start dates:\n\n'2002-05-30 + 61 days' + '2002-01-28 +59 days' \n\t= '2002-05-30 +120 days' < '2002-05-30 + 4 months'\neven though '2002-05-30 + 61 days' = '2002-05-30 + 2 months'\nand '2002-01-28 +59 days' = '2002-01-28 +2 months' \n\nPros: The most accurate interval calculations possible.\nCons: How the heck would we implement it? And *explain* it? And it's pretty \ndarn far from the SQL92 implementation.\n\n---------------------------------------------\nAnd, a re-hash of Part I:\n\nPROPOSAL FOR ADJUSTMENTS OF POSTGRESQL TIMESTAMP AND INTERVAL HANDLING\nDraft 0.2\n\nTimestamp\n------------------------------\nProposal #1: TIMESTAMP WITHOUT TIME ZONE as default\n\nDescription: Currently, the data type invoked when users select TIMESTAMP is\nTIMESTAMP WITH TIME ZONE. We should change this so that TIMESTAMP defaults to\nTIMESTAMP WITHOUT TIME ZONE unless WITH TIME ZONE is specified.\n\nReason: Handling time zones is tricky and non-intuitive for the beginning\nuser. TIMESTAMP WITH TIME ZONE should be reserved for DBAs who know what\nthey're doing.\n\nResolution: Taken care of in 7.3.\n\n\nProposal #2: We need more time zones.\n\nDescription: We need to add, or be able to add, many new time zones to\nPostgresql. Ideal would be some kind of \"create time zone\" statement.\n\nReason: Current included time zones do not cover all real-world time zones,\nand the situation is likely to get worse as various governments play with\ntheir calendars. For example, there is no current time zone which would be\nappropriate for the state of Arizona, i.e. 
\"Central Standard Time without\nDaylight Savings Time\". \n\nFurther: A CREATE TIME ZONE statement would have the following syntax:\nCREATE TIME ZONE GMT_adjustment, abbreviation, uses_DST, DST_starts \n(optional),\nDST_ends (optional) \nThis would allow, to some degree, DBA creation of time zones to take into\naccount local laws and wierdnesses.\n\nAlternative: We can allow users to designate timezones according to GMT \noffset and whether or not they support DST. Example \"-8:00 DST\" for PST/PDT, \nand \"-7:00 NDS\" for the Arizona example above.\n\n\n-- \n-Josh Berkus\n Techdocs Writer\n\n",
"msg_date": "Fri, 7 Jun 2002 16:34:34 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": true,
"msg_subject": "Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "> Please give me feedback on this...\n> There are a few problems currently with the Interval data type. The biggest\n> is that the current rules give us no clear path for implementation of a full\n> set of operators. The SQL92 standard is no help here; its implementation is\n> unintuitive and extremely limited ... more limited, in fact, than the current\n> incomplete implementation in PostgreSQL.\n\nPlease define \"a full set of operators\". Or do the subsequent proposals\ndefining new behaviors and some operations constitute that list?\n\n> Proposal #3: We should support the addition of \"whole days\".\n> Description: Interval should support a \"Weeks to Days\" increment which is\n> atomic per day, and not as a aggregate of hours.\n> Reason: Currently, the \"days\" increment in Interval is treated as \"x 24 hours\"\n> and not as whole days. This can cause some confusion when date calculations\n> break over a DST change; users do *not* expect events to get an hour earlier\n> or later in the fall or the spring. The current result is that a lot of\n> users give up on utilizing time zones because they can't deal with the time\n> shift in calendar applications.\n\nYou are overstating the problem imho, but there is a problem for some\nusers. SQL9x avoids the issue by defining *only* constant offsets for\ntime zones. That doesn't work in the real world :/\n\nWe would expand the storage size by at least 4 bytes to accomodate the\n\"qualitative day\" information. Currently takes 12 bytes, and will take\n16 or more. We will need to check for overflows during date/time math,\nwe will need some heuristics for conversions between hours and days\nduring calculations, and some users will need to cope with the changed\nbehavior. Operations like math and comparisons will be more expensive\n(though may not be a hugely noticable effect).\n\n> Proposal #4: Create to_char(INTERVAL, 'format string') Function.\n> Reason: self-evident, I think.\n\nOh. 
Didn't know it wasn't already there.\n\n> Proposal #5: Two alternate proposals for overhaul of the interval data type.\n> Description: Interval needs some radical changes to calculations and\n> operators.\n> Reason: Currently, it is nearly impossible to conceive of an implementation\n> for a full set of operators for the interval data type ( + - / * ) because of\n> the variability of conversions from one interval increment to another. For\n> example, what exactly should be the result of '3 months' / '4 days'? Here\n> are two alternatives.\n> Alternative #1: Treat Interval Increments as Atomic, and Round\n\nYuck (imho of course ;)\n\n> If we implemented this, each of the 3 sub-types of Interval (Year to Month,\n> Week to Day, and Hour to Millesecond per proposal #3) would be treated as\n> \"atomic\" and not renderable in terms of smaller increments, in the same way\n> that integers are not divisible beyond a prime. In fact, rather than\n> expressing remainders in smaller increments, the modulo ( % ) operator would\n> be used to express the remainder.\n> \n> Further, we would need to create a set of casting functions that allows for\n> the conversion of one interval subtype into another, using rounding by\n> approximates, such as 1 year = 365 days, 1 month = 30 days, 1 day = 24 hours,\n> etc. This is not that different from how the DATE data type works. If users\n> attempt multiplication and division with intervals of different subtypes, an\n> implicit cast would be made into the subtype of the smallest value.\n\n\n> Finally, multiplication and division by floats would be disallowed and\n> replaced by multiplication and division by integers. Thus:\n\nOverly restrictive I think. 
There *is* a use for maintaining precision\nduring math operations, though apparently not for your use cases.\n\n> '1 month' + '33 days' = '1 month 33 days'\n> '1 month 33 days'::INTERVAL WEEK TO DAY = '63 days'\n> '1 month' + '33 days'::INTERVAL YEAR TO MONTH = '2 months'\n> '5 months' / '2 months' = 2\n> '5 months' % '2 months' = '1 month'\n> '5 months' / 2 = '2 months'\n> '5 months' % 2 = '1 month'\n> '9 months' / '2 weeks' = '270 days' / '14 days' = 19\n> '15 hours' * 20 = '300 hours' (not '12 days 12 hours')\n> etc.\n> \n> Pros: It's simple and relatively intuitive. This approach also is similar\n> to the SQL92 spec, which focuses on interval subtypes.\n> Cons: It requires an annoying implementation of subtypes, which is cumbersome\n> and difficult to manage when you have mixed intervals (e.g. '4 days 8 hours 9\n> minutes'). And, with every operation, rounding is being used which can\n> result in some ghastly inequalities:\n> '1 year'/12 --> '1 month'::INTERVAL WEEK TO DAY --> '30 days' * 12\n> --> '360 days' / '1 year' = 0\n> \n> Alternative #2: Tie Intervals to a Specific Timestamp\n\nDouble yuck. You already have this capability by your choice of schema;\nintervals are intervals and timestamps are timestamps. The behaviors you\ndiscuss above (both current and possible) handle this.\n\n> ---------------------------------------------\n> And, a re-hash of Part I:\n> \n> PROPOSAL FOR ADJUSTMENTS OF POSTGRESQL TIMESTAMP AND INTERVAL HANDLING\n> Draft 0.2\n> Proposal #2: We need more time zones.\n> Description: We need to add, or be able to add, many new time zones to\n> Postgresql. Ideal would be some kind of \"create time zone\" statement.\n> Reason: Current included time zones do not cover all real-world time zones,\n> and the situation is likely to get worse as various governments play with\n> their calendars. For example, there is no current time zone which would be\n> appropriate for the state of Arizona, i.e. 
"Central Standard Time without\n> Daylight Savings Time\".\n\nBad example, and I'm not following your argument here. PostgreSQL\nsupports *many* time zones (Peter E. has said \"too many\") and any change\nfor the Arizona example will be at odds with how dates and times are\nexpected to be handled in, uh, Arizona. They use Mountain Standard Time\n(MST), except for years when they didn't, and are covered by specifying\n\"MST\" on input and \"SET TIME ZONE 'America/Phoenix'\" (and perhaps others\ntoo; it seems that \"MST6\" gives me consistent behavior on my Linux box).\n\n> Further: A CREATE TIME ZONE statement would have the following syntax:\n> CREATE TIME ZONE GMT_adjustment, abbreviation, uses_DST, DST_starts\n> (optional),\n> DST_ends (optional)\n> This would allow, to some degree, DBA creation of time zones to take into\n> account local laws and weirdnesses.\n> Alternative: We can allow users to designate timezones according to GMT\n> offset and whether or not they support DST. Example \"-8:00 DST\" for PST/PDT,\n> and \"-7:00 NDS\" for the Arizona example above.\n\nI can't imagine that you are not finding a workable solution with the\ncurrent capabilities. That said, we are considering adopting the\nhistoric zinc package to support time zones within PostgreSQL (sounds\nlike you might be doing some of the development ;). And for time zone\nlookup (not supported in the zinc API) it *would* be nice to move to a\nDBMS table-based implementation, rather than the hardcoded tables we\nhave now. They may have been good enough for the last 12 years, but\ncertainly lookup stuff seems like it should be in a database table, eh?\n\n - Thomas\n",
"msg_date": "Fri, 07 Jun 2002 18:48:31 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "Thomas,\n\n> Please define \"a full set of operators\". Or do the subsequent\n> proposals\n> defining new behaviors and some operations constitute that list?\n\n+ - / * < > = and, if appropriate, %\nWhere support is lacking is * and /\n\nDon't get me wrong. PostgreSQL has the best implementation of\ndate/time/interval handling in any database I use. It's just that\nthere are a few limitations and weirdnesses left, and I'd really like\nto see them ironed out so that we can call our implementation \"near\nperfect\". Also, so I can stop coding workarounds into my database\napps.\n\n> You are overstating the problem imho, but there is a problem for some\n> users. SQL9x avoids the issue by defining *only* constant offsets for\n> time zones. That doesn't work in the real world :/\n> \n> We would expand the storage size by at least 4 bytes to accommodate\n> the\n> \"qualitative day\" information. Currently takes 12 bytes, and will\n> take\n> 16 or more. We will need to check for overflows during date/time\n> math,\n> we will need some heuristics for conversions between hours and days\n> during calculations, and some users will need to cope with the\n> changed\n> behavior. Operations like math and comparisons will be more expensive\n> (though may not be a hugely noticeable effect).\n\nI can see why you've put off doing it. At a basic level, though,\ncurrent behaviour is counter-intuitive, so we'll need to do it someday.\n\n> Oh. Didn't know it wasn't already there.\n\nNot in 7.2.1. And if you don't know about it, probably not in 7.3\neither.\n\n> > Alternative #1: Treat Interval Increments as Atomic, and Round\n> \n> Yuck (imho of course ;)\n\nHey, I did ask for an opinion. <grin>\n\n> > Alternative #2: Tie Intervals to a Specific Timestamp\n> \n> Double yuck. You already have this capability by your choice of\n> schema;\n> intervals are intervals and timestamps are timestamps. 
The behaviors\n> you\n> discuss above (both current and possible) handle this.\n\nHmmm? How much is '1 month' / '4 days' then?\n\nThe current implementation does not support the / and * operators; that\nis, they are supported for some type combos, but not for others, and\nthe results are inconsistent and sometimes confusing.\n\n> Bad example, and I'm not following your argument here. PostgreSQL\n> supports *many* time zones (Peter E. has said \"too many\") and any\n> change\n> for the Arizona example will be at odds with how dates and times are\n> expected to be handled in, uh, Arizona. They use Mountain Standard\n> Time\n> (MST), except for years when they didn't, and are covered by\n> specifying\n> \"MST\" on input and \"SET TIME ZONE 'America/Phoenix'\" (and perhaps\n> others\n> too; it seems that \"MST6\" gives me consistent behavior on my Linux\n> box).\n\nActually, the real problems I have encountered with time zones would be\nsolved mostly by adding the 'WEEKS TO DAYS' subtype above. Currently\nI'm forced to use TIMESTAMP WITHOUT TIMEZONE in order to avoid the\nweird one-hour shifts in my calendaring app.\n\n> I can't imagine that you are not finding a workable solution with the\n> current capabilities. That said, we are considering adopting the\n> historic zinc package to support time zones within PostgreSQL (sounds\n> like you might be doing some of the development ;). And for time zone\n> lookup (not supported in the zinc API) it *would* be nice to move to\n> a\n> DBMS table-based implementation, rather than the hardcoded tables we\n> have now. They may have been good enough for the last 12 years, but\n> certainly lookup stuff seems like it should be in a database table,\n> eh?\n\nYeah. I'd love to have somebody explain this to me. I noticed when\nzinc was mentioned, but I don't know *what* it is. 
Care to send me a\nlink?\n\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n",
"msg_date": "Sat, 08 Jun 2002 16:45:59 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n> Yeah. I'd love to have somebody explain this to me. I noticed when\n> zinc was mentioned, but I don't know *what* it is. Care to send me a\n> link?\n\nI think http://www.twinsun.com/tz/tz-link.htm is the underlying timezone\ndatabase that Thomas is referring to. I can't find anything named zinc\nthat seems relevant.\n\nI'm not as excited about sticking the info into Postgres tables as\nThomas seems to be. I think that's (a) unnecessary and (b) likely to\ncreate severe startup problems, since the postmaster needs access to\ntimezone info to interpret the TZ environment variable, but it can't\nread the database. It seems to me that a precalculated timezone table\nis plenty good enough.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 09 Jun 2002 12:30:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2 "
},
{
"msg_contents": "On Fri, Jun 07, 2002 at 06:48:31PM -0700, Thomas Lockhart wrote:\n> \n> > Proposal #4: Create to_char(INTERVAL, 'format string') Function.\n> > Reason: self-evident, I think.\n> \n> Oh. Didn't know it wasn't already there.\n\n I'm _sure_ that to_char() is there for interval.\n\ntestt=# select to_char('33s 15h 10m 5month'::interval, 'HH:MI:SS Month');\n to_char \n--------------------\n 03:10:33 May\n(1 row)\n\ntest=# select version();\n version \n-------------------------------------------------------------\n PostgreSQL 7.2 on i686-pc-linux-gnu, compiled by GCC 2.95.4\n(1 row)\n\n\n And it's in the docs too....\n\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 10 Jun 2002 09:58:59 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "\n> > I'm _sure_ that to_char() is there for interval.\n> > \n> > testt=# select to_char('33s 15h 10m 5month'::interval, 'HH:MI:SS Month');\n> > to_char \n> > --------------------\n> > 03:10:33 May\n> > (1 row)\n> \n> Does \"May\" make sense for an _interval _ ? (Feb 22 + May = Jul 22)?\n> \n> Would not \"5 months\" make more sense ?\n\n to_char() convert interval to 'tm' and make output like this struct,\n I don't know what other is possible do with it.\n\n> Or is it some ISO standard ?\n> \n> Ditto for 15h -> 03 .\n\n HH vs. HH24\n\ntest=# select to_char('33s 15h 10m 5months'::interval, 'HH24:MI:SS Month');\n to_char \n--------------------\n 15:10:33 May \n \n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 10 Jun 2002 10:49:30 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Mon, 2002-06-10 at 09:58, Karel Zak wrote:\n> On Fri, Jun 07, 2002 at 06:48:31PM -0700, Thomas Lockhart wrote:\n> > \n> > > Proposal #4: Create to_char(INTERVAL, 'format string') Function.\n> > > Reason: self-evident, I think.\n> > \n> > Oh. Didn't know it wasn't already there.\n> \n> I'm _sure_ that to_char() is there for interval.\n> \n> testt=# select to_char('33s 15h 10m 5month'::interval, 'HH:MI:SS Month');\n> to_char \n> --------------------\n> 03:10:33 May\n> (1 row)\n\nDoes \"May\" make sense for an _interval _ ? (Feb 22 + May = Jul 22)?\n\nWould not \"5 months\" make more sense ?\n\nOr is it some ISO standard ?\n\nDitto for 15h -> 03 .\n\n--------------------\nHannu\n\n\n\n",
"msg_date": "10 Jun 2002 11:13:29 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Mon, Jun 10, 2002 at 04:26:47PM +0200, Hannu Krosing wrote:\n \n> > to_char() convert interval to 'tm' and make output like this struct,\n> \n> My point is that to_char-ing intervals by converting them to dates is\n> non-intuitive.\n> \n> It is really confusing to say that an interval of 5 months = \"May\"\n> and 15months == \"1 March\" ;(\n> \n> > I don't know what other is possible do with it.\n> \n> perhaps show them with the precision specified and keep data for bigger\n> units in biggest specified unit.\n> \n> to_char('2years 1min 4sec'::interval, 'MM SS'); ==> '24mon 64sec'\n> to_char('2years 1min 4sec'::interval, 'MM MI SS'); ==> '24mon 1min 4sec'\n> \n\n Hmmm, but it's really out of to_char(). For example 'MM' is defined\n as number in range 1..12.\n \n The to_char() convert date/time data to string and not to better formatted \n interval. The right name for your request is to_interval(). \n \n TODO?\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 10 Jun 2002 15:43:34 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Mon, 2002-06-10 at 10:49, Karel Zak wrote:\n> \n> > > I'm _sure_ that to_char() is there for interval.\n> > > \n> > > testt=# select to_char('33s 15h 10m 5month'::interval, 'HH:MI:SS Month');\n> > > to_char \n> > > --------------------\n> > > 03:10:33 May\n> > > (1 row)\n> > \n> > Does \"May\" make sense for an _interval _ ? (Feb 22 + May = Jul 22)?\n> > \n> > Would not \"5 months\" make more sense ?\n> \n> to_char() convert interval to 'tm' and make output like this struct,\n\nMy point is that to_char-ing intervals by converting them to dates is\nnon-intuitive.\n\nIt is really confusing to say that an interval of 5 months = \"May\"\nand 15months == \"1 March\" ;(\n\n> I don't know what other is possible do with it.\n\nperhaps show them with the precision specified and keep data for bigger\nunits in biggest specified unit.\n\nto_char('2years 1min 4sec'::interval, 'MM SS'); ==> '24mon 64sec'\nto_char('2years 1min 4sec'::interval, 'MM MI SS'); ==> '24mon 1min 4sec'\n\n\n> > Or is it some ISO standard ?\n\nDoes anyone know what standard says about interval formats?\n\n------------\nannu\n\n",
"msg_date": "10 Jun 2002 16:26:47 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Mon, 2002-06-10 at 15:43, Karel Zak wrote:\n> On Mon, Jun 10, 2002 at 04:26:47PM +0200, Hannu Krosing wrote:\n> \n> > > to_char() convert interval to 'tm' and make output like this struct,\n> > \n> > My point is that to_char-ing intervals by converting them to dates is\n> > non-intuitive.\n> > \n> > It is really confusing to say that an interval of 5 months = \"May\"\n> > and 15months == \"1 March\" ;(\n> > \n> > > I don't know what other is possible do with it.\n> > \n> > perhaps show them with the precision specified and keep data for bigger\n> > units in biggest specified unit.\n> > \n> > to_char('2years 1min 4sec'::interval, 'MM SS'); ==> '24mon 64sec'\n> > to_char('2years 1min 4sec'::interval, 'MM MI SS'); ==> '24mon 1min 4sec'\n> > \n> \n> Hmmm, but it's really out of to_char(). For example 'MM' is defined\n> as number in range 1..12.\n> \n> The to_char() convert date/time data to string and not to better formatted \n> interval. The right name for your request is to_interval(). \n\nif there were a to_interval() then it should convert char data to\ninterval, like to_date(), to_number() and to_timestamp() do\n\nactually we currently have to_char(x,t) functions for formatting the\nfollowing input types, where the second arg is always the format - and\nthey do take different format strings for different types (i.e. 
we dont\nconvert int or double to timestamp and then format that)\n\nto_char | bigint, text\nto_char | double precision, text\nto_char | integer, text\nto_char | interval, text\nto_char | numeric, text\nto_char | real, text\nto_char | timestamp with time zone, text\nto_char | timestamp without time zone, text\n\nif our current implementation just converts interval to date it is\nsurely wrong, at least because the year will be 0000 which does not\nexist (AFAIK, the year before 0001 was -0001)\n\nhannu=# select to_char('33s 15h 10m 5months'::interval, 'YYYY.MM.DD\nHH24:MI:SS');\n to_char \n---------------------\n 0000.05.00 15:10:33\n(1 row)\n\nIMHO there should be INTERVAL-specific format characters - calling\n5-month period \"a May\" is stupid (calling 1-month period \"a January\" is\neven stupider :)\n\nIf folks want to convert interval to datetime they can always do it by\nadding an interval to some base date - doing it automatically by adding\nit to non-existing base date 000-00-00 will confuse people \n\nand it is not supported in \"plain\" postgresql\n\nhannu=# select ('33s 15h 10m 5months'::interval::timestamp);\nERROR: Cannot cast type 'interval' to 'timestamp with time zone'\n\n> TODO?\n\nhaving strictly defined to_interval would be nice, but I think this\nwould be _another_ todo :)\n\n--------------------------------\nHannu\n\n\n",
"msg_date": "10 Jun 2002 19:18:44 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "Karel, Hannu,\n\nTo be perfectly honest, I was looking at my 7.1 documentation (courtesy\nof DOSSIER) and hadn't realized that 7.2's implementation had got as\nfar as a function. I had tried to_char(interval) on 7.2.1, received\nwhat looked like gibberish in return, and assumed that it was\nunimplemented.\n\n> if there were a to_interval() then it should convert char data to\n> interval, like to_date(), to_number() and to_timestamp() do\n\nCan we put THAT on the to-do list? I find it highly inconsistent that\nthe function for creating intervals is \"interval\". Currently, I deal\nwith it by creating my own to_interval function in template1. \n\n> actually we currently have to_char(x,t) functions for formatting the\n> following input types, where the second arg is always the format -\n> and\n> they do take different format strings for different types (i.e. we\n> dont\n> convert int or double to timestamp and then format that)\n<snip>\n> IMHO there should be INTERVAL-specific format characters - calling\n> 5-month period \"a May\" is stupid (calling 1-month period \"a January\"\n> is\n> even stupider :)\n\nI wholeheartedly agree with Hannu, here. 
Might I suggest:\n\nM# - Number of Months - abbr (Interval)\nMM# - Number of Months (interval)\nY# - Number of years - abbr (Interval)\nYY# - Number of years (Interval)\nD# - Number of Days (interval)\nW# - Number of weeks -abbr (interval)\nWW# - number of weeks (interval)\nHH# - Number of hours (interval)\nMI# - Number of minutes (interval)\nSS# - Number of seconds (interval)\n\nThus allowing:\n\nhannu=# select to_char('33s 15h 10m 5months'::interval, 'M# D# HH# MI#\nSS#');\n       to_char \n---------------------\n 5 mon 0 days 15 hrs 10 min 33 sec\n\nor:\n\nhannu=# select to_char('33s 15h 10m 5months'::interval, 'MM# D# HH# MI#\nSS#');\n       to_char \n---------------------\n 5 months 0 days 15 hrs 10 min 33 sec\n\nThis needs more polishing, of course, but you can see where I'm going\nwith it.\n\n-Josh\n\n\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n",
"msg_date": "Mon, 10 Jun 2002 11:12:08 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Mon, Jun 10, 2002 at 07:18:44PM +0200, Hannu Krosing wrote:\n\n OK, I add to_interval() to may TODO (but it's unsure for 7.3).\n\n> hannu=# select to_char('33s 15h 10m 5months'::interval, 'YYYY.MM.DD\n> HH24:MI:SS');\n> to_char \n> ---------------------\n> 0000.05.00 15:10:33\n> (1 row)\n\n I think, we can keep this behaviour for to_char(), the good thing\n is that you can formatting interval to strings that seems like\n standard time (15:10:33), etc.\n\n The to_interval() will have another (you wanted) behaviour.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 11 Jun 2002 09:34:36 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Mon, Jun 10, 2002 at 03:43:34PM +0200, Karel Zak wrote:\n> On Mon, Jun 10, 2002 at 04:26:47PM +0200, Hannu Krosing wr ote:\n> > perhaps show them with the precision specified and keep data for bigger\n> > units in biggest specified unit.\n> > \n> > to_char('2years 1min 4sec'::interval, 'MM SS'); ==> '24mon 64sec'\n> > to_char('2years 1min 4sec'::interval, 'MM MI SS'); ==> '24mon 1min 4sec'\n> > \n> \n> Hmmm, but it's really out of to_char(). For example 'MM' is defined\n> as number in range 1..12.\n\nAnd 'DD' is defined as in range 1..31...\nWhat if I try to select '100 days'?\n\nfduch=> SELECT to_char('100days'::interval, 'YYYY-MM-DD HH24:MI:SS');\n to_char\n---------------------\n 0000-00-10 00:00:00\n\nEven more:\nDDD is day of year, but\n\nfduch=> SELECT to_char('100days'::interval, 'YYYY-MM-DDD HH24:MI:SS');\n to_char\n----------------------\n 0000-00-069 00:00:00\n\nHowever, this works fine:\nfduch=> SELECT extract(DAY from '100days'::interval);\n date_part\n-----------\n 100\n\t\nfduch=> SELECT version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.2.1 on i386-portbld-freebsd4.6, compiled by GCC 2.95.3\n\n\nI think, interval is too different from timestamp,\nand to_char(interval) needs another format syntax and logics...\n\n-- \nFduch M. Pravking\n",
"msg_date": "Tue, 11 Jun 2002 12:37:09 +0400",
"msg_from": "Fduch the Pravking <fduch@antar.bryansk.ru>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Tue, 2002-06-11 at 09:34, Karel Zak wrote:\n> On Mon, Jun 10, 2002 at 07:18:44PM +0200, Hannu Krosing wrote:\n> \n> OK, I add to_interval() to may TODO (but it's unsure for 7.3).\n> \n> > hannu=# select to_char('33s 15h 10m 5months'::interval, 'YYYY.MM.DD\n> > HH24:MI:SS');\n> > to_char \n> > ---------------------\n> > 0000.05.00 15:10:33\n> > (1 row)\n\nI have not checked the SQL9x standards, but it seems from reading the\nfollowing links that Interval in Oracle and MimerSQL is actually 2\ndistinct types (YEAR-MONTH interval and DAY-HOUR-MINUTE-SECOND interval)\nwhich can't be mixed (it is impossible to know if 1 \"month\" is 28, 29,\n30 or 31 days\n\nhttp://otn.oracle.com/products/rdb7/htdocs/y2000.htm\n\nhttp://developer.mimer.com/documentation/Mimer_SQL_Reference_Manual/Syntax_Rules4.html#1113356\n\n> I think, we can keep this behaviour for to_char(), the good thing\n> is that you can formatting interval to strings that seems like\n> standard time (15:10:33), etc.\n\nBut interval _is_ _not_ point-in-time, it is a time_span_ .\n\nIt can be either good if it gives the results you want or bad if it does\ngive wrong results like returning 03:10:33 for the above \n\nI would suggest that a separate to_char function would be written that\nwould be _specific_to_interval_ datatype - so wheb i do\n\nto_char('33s 15h 10m'::interval, 'SS') I will get the actual length of \n\ninterval in seconds, 15*3600+10*60+33 = 54633s and not just the seconds part (33)\n\nwhereas to_char('33s 15h 10m'::interval, 'MI SS') would give \n\n15*60+10=910 min 33 sec ('910 33')\n\n\n-----------------\nHannu\n\n",
"msg_date": "11 Jun 2002 11:16:13 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Tue, Jun 11, 2002 at 11:16:13AM +0200, Hannu Krosing wrote:\n> On Tue, 2002-06-11 at 09:34, Karel Zak wrote:\n\n> > I think, we can keep this behaviour for to_char(), the good thing\n> > is that you can formatting interval to strings that seems like\n> > standard time (15:10:33), etc.\n> \n> But interval _is_ _not_ point-in-time, it is a time_span_ .\n> \n> It can be either good if it gives the results you want or bad if it does\n> give wrong results like returning 03:10:33 for the above \n> \n> I would suggest that a separate to_char function would be written that\n> would be _specific_to_interval_ datatype - so wheb i do\n> \n> to_char('33s 15h 10m'::interval, 'SS') I will get the actual length of \n> \n> interval in seconds, 15*3600+10*60+33 = 54633s and not just the seconds part (33)\n>\n> whereas to_char('33s 15h 10m'::interval, 'MI SS') would give \n> \n> 15*60+10=910 min 33 sec ('910 33')\n\n Well, If the to_char() for interval will output result that you want,\n how can I output '15:10:33'?\n\n For this I want two direffent function or anothers format marks for \n to_char() like\n\n to_char('33s 15h 10m'::interval, '#MI #SS');\n ---\n '910 33'\n\n but for \"standard\" marks (that now works like docs describe :-) will output\n MI in 0..59 range.\n\n to_char('33s 15h 10m'::interval, 'MI:SS');\n ---\n '10:33'\n\n IMHO it's acceptable. I don't want close the way for output formatting\n in \"standard\" date/time ranges. We can support _both_ ways. Or not?\n \n Thomas, you are quiet? :-)\n \n Karel\n\n\nPS. the PostgreSQL converting intervals to \"standard\" format too:\n\ntest=# select '33h 15m'::interval - '10h 2m 3s'::interval ;\n ?column? \n----------\n 23:12:57\n(1 row)\n\ntest=# select '45h 15m'::interval - '10h 2m 3s'::interval ;\n ?column? \n----------------\n 1 day 11:12:57\n\n(hmm.. I unsure if this is really released 7.2, I maybe have\n some pre-7.2 version now. 
Is this 7.2 behaviour?)\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 11 Jun 2002 11:21:40 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Tue, Jun 11, 2002 at 12:37:09PM +0400, Fduch the Pravking wrote:\n \n> And 'DD' is defined as in range 1..31...\n> What if I try to select '100 days'?\n> \n> fduch=> SELECT to_char('100days'::interval, 'YYYY-MM-DD HH24:MI:SS');\n> to_char\n> ---------------------\n> 0000-00-10 00:00:00\n\n I already said it. The to_char() is 'tm' struct interpreter and use\n standard internal PG routines for interval to 'tm' conversion. We can\n talk about why 100days is converted to '10' days and months aren't\n used. I agree this example seems strange. Thomas?\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 11 Jun 2002 11:31:49 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Tue, 2002-06-11 at 11:31, Karel Zak wrote:\n> On Tue, Jun 11, 2002 at 12:37:09PM +0400, Fduch the Pravking wrote:\n> \n> > And 'DD' is defined as in range 1..31...\n> > What if I try to select '100 days'?\n> > \n> > fduch=> SELECT to_char('100days'::interval, 'YYYY-MM-DD HH24:MI:SS');\n> > to_char\n> > ---------------------\n> > 0000-00-10 00:00:00\n> \n> I already said it. The to_char() is 'tm' struct interpreter and use\n> standard internal PG routines for interval to 'tm' conversion.\n\nThe point is it should _not_ do that for interval. \n\nIt does not convert to 'tm' for other types:\n\nhannu=# select to_char(3.1415927,'0009D9');\n to_char \n---------\n 0003.1\n(1 row)\n\nalso, afaik there is no conversion of interval to datetime in\npostgresql:\n\nhannu=# select '25mon37d1s'::interval::timestamp;\nERROR: Cannot cast type 'interval' to 'timestamp with time zone'\n\n> We can\n> talk about why 100days is converted to '10' days and months aren't\n> used. I agree this example seems strange. Thomas?\n\nYou can't convert days to months as there is no universal month length.\n\nthis is the current (correct) behaviour:\n\nhannu=# select '25mon37d1s'::interval;\n interval \n--------------------------------\n 2 years 1 mon 37 days 00:00:01\n(1 row)\n\n\n------------------\nHannu\n\n",
"msg_date": "11 Jun 2002 12:52:41 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Tue, 2002-06-11 at 11:21, Karel Zak wrote:\n> On Tue, Jun 11, 2002 at 11:16:13AM +0200, Hannu Krosing wrote:\n> > On Tue, 2002-06-11 at 09:34, Karel Zak wrote:\n> \n> > > I think, we can keep this behaviour for to_char(), the good thing\n> > > is that you can formatting interval to strings that seems like\n> > > standard time (15:10:33), etc.\n> > \n> > But interval _is_ _not_ point-in-time, it is a time_span_ .\n> > \n> > It can be either good if it gives the results you want or bad if it does\n> > give wrong results like returning 03:10:33 for the above \n> > \n> > I would suggest that a separate to_char function would be written that\n> > would be _specific_to_interval_ datatype - so wheb i do\n> > \n> > to_char('33s 15h 10m'::interval, 'SS') I will get the actual length of \n> > \n> > interval in seconds, 15*3600+10*60+33 = 54633s and not just the seconds part (33)\n> >\n> > whereas to_char('33s 15h 10m'::interval, 'MI SS') would give \n> > \n> > 15*60+10=910 min 33 sec ('910 33')\n> \n> Well, If the to_char() for interval will output result that you want,\n> how can I output '15:10:33'?\n> \n> For this I want two direffent function or anothers format marks for \n> to_char() like\n> \n> to_char('33s 15h 10m'::interval, '#MI #SS');\n> ---\n> '910 33'\n\nand it is probably easyer to implement too - no need to first collect\nall possible format chars.\n\n> but for \"standard\" marks (that now works like docs describe :-) will output\n> MI in 0..59 range.\n> \n> to_char('33s 15h 10m'::interval, 'MI:SS');\n> ---\n> '10:33'\n>\n> IMHO it's acceptable. I don't want close the way for output formatting\n> in \"standard\" date/time ranges. We can support _both_ ways. 
Or not?\n\nperhaps we should do as to_char does for floats -- return ### if\nargument can't be shown with given format ?\n\nhannu=# select to_char(1000.0,'0000D00') as good, \nhannu-# to_char(1000.0, '000D00') as bad;\n good | bad \n----------+---------\n 1000.00 | ###.##\n(1 row)\n\n\nno need to change current documented behaviour without good reason \n\n> Thomas, you are quiet? :-)\n> \n> Karel\n> \n> \n> PS. the PostgreSQL converting intervals to \"standard\" format too:\n> \n> test=# select '33h 15m'::interval - '10h 2m 3s'::interval ;\n> ?column? \n> ----------\n> 23:12:57\n> (1 row)\n> \n> test=# select '45h 15m'::interval - '10h 2m 3s'::interval ;\n> ?column? \n> ----------------\n> 1 day 11:12:57\n> \n> (hmm.. I unsure if this is really released 7.2, I maybe have\n> some pre-7.2 version now. Is this 7.2 behaviour?)\n\nYes.\n\nAnd this is still an interval, not a timestamp:\n\nhannu=# select '4500h 15m'::interval - '10h 2m 3s'::interval ;\n ?column? \n-------------------\n 187 days 02:12:57\n(1 row)\n\n----------------------------------\nHannu\n\n",
"msg_date": "11 Jun 2002 13:47:13 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "> > fduch=> SELECT to_char('100days'::interval, 'YYYY-MM-DD HH24:MI:SS');\n> > ---------------------\n> > 0000-00-10 00:00:00\n> I already said it. The to_char() is 'tm' struct interpreter and use\n> standard internal PG routines for interval to 'tm' conversion. We can\n> talk about why 100days is converted to '10' days and months aren't\n> used. I agree this example seems strange. Thomas?\n\nNot sure why 100 is becoming 10, except that the formatting string is\nspecifying a field width of two characters (right?). And for intervals,\nyears and months are not interchangable with days so values do not\noverflow from days to months fields.\n\nI played around with to_char(interval,text) but don't understand the\nbehavior either.\n\n - Thomas\n",
"msg_date": "Tue, 11 Jun 2002 06:22:55 -0700",
"msg_from": "Thomas Lockhart <thomas@pgsql.com>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "> > I already said it. The to_char() is 'tm' struct interpreter and use\n> > standard internal PG routines for interval to 'tm' conversion.\n> The point is it should _not_ do that for interval.\n\nI use the tm structure to hold this structured information. I *think*\nthat Karel's usage is just what is intended by my support routines,\nthough I haven't looked at it in quite some time. Let me know if you\nwant me to look Karel...\n\n - Thomas\n",
"msg_date": "Tue, 11 Jun 2002 06:49:18 -0700",
"msg_from": "Thomas Lockhart <thomas@pgsql.com>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Tue, Jun 11, 2002 at 06:22:55AM -0700, Thomas Lockhart wrote:\n> > > fduch=> SELECT to_char('100days'::interval, 'YYYY-MM-DD HH24:MI:SS');\n> > > ---------------------\n> > > 0000-00-10 00:00:00\n> > I already said it. The to_char() is 'tm' struct interpreter and use\n> > standard internal PG routines for interval to 'tm' conversion. We can\n> > talk about why 100days is converted to '10' days and months aren't\n> > used. I agree this example seems strange. Thomas?\n> \n> Not sure why 100 is becoming 10, except that the formatting string is\n> specifying a field width of two characters (right?). And for intervals,\n\n Oops. Yes, you are right it's %02d. I forgot it. Sorry :-)\n\n> years and months are not interchangable with days so values do not\n> overflow from days to months fields.\n> \n> I played around with to_char(interval,text) but don't understand the\n> behavior either.\n\n OK. And what is wanted behavior?\n\n DD = day\n ## = error\n\n 1) '30h 10m 15s' 'HH MI SS' ---> '06 10 15'\n '30h 10m 15s' 'HH MI SS DD' ---> '06 10 15 1'\n\n 2) '30h 10m 15s' 'HH MI SS' ---> '30 10 15'\n '30h 10m 15s' 'HH MI SS DD' ---> '30 10 15 ##'\n\n 3) '30h 10m 15s' 'HH MI SS' ---> '30 10 15'\n '30h 10m 15s' 'HH MI SS DD' ---> '06 10 15 1'\n\n 4) use both 1) and 2) but with different marks like\n 'HH' and '#HH' (or other special prefix)\n\n 5) '2week' 'DD' ---> '14'\n \n 6) '2week' 'HH' ---> '00'\n\n 7) '2week' 'HH' ---> '336'\n\n 8) '2week' 'DD HH' ---> '14 00'\n\n 9) ???\n\n I unsure what is best, Please, mark right outputs or write examples.\n\n -- for all is probably right idea use '####' in output \n if input is not possible convert to wanted format (like current \n float to_char() behavior).\n\n BTW:\n\ntest=# select date_part('hour', '30h 10m 15s'::interval);\n date_part \n-----------\n 6\n \ntest=# select date_part('day', '30h 10m 15s'::interval);\n date_part \n-----------\n 1\n\n\n Karel\n \n-- \n Karel Zak <zakkr@zf.jcu.cz>\n 
http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 11 Jun 2002 17:02:44 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": " Karel,\n\n> The to_interval() will have another (you wanted) behaviour.\n\nPlease, please, please do not use to_interval for text formatting of \nintervals. It's very inconsistent with the naming of other conversion \nfunctions, and will confuse the heck out of a lot of users. As well as \nmessing up my databases, which have to_interval as a replacement for the \nproblematically named \"interval\" function.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \tjosh@agliodbs.com\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n",
"msg_date": "Tue, 11 Jun 2002 09:36:39 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": true,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Tue, Jun 11, 2002 at 09:36:39AM -0700, Josh Berkus wrote:\n> Karel,\n> \n> > The to_interval() will have another (you wanted) behaviour.\n> \n> Please, please, please do not use to_interval for text formatting of \n> intervals. It's very inconsistent with the naming of other conversion \n> functions, and will confuse the heck out of a lot of users. As well as \n> messing up my databases, which have to_interval as a replacement for the \n> problematically named \"interval\" function.\n\n Yes, agree. It wasn't well-advised.\n \n It will probably be to_char() with special 'interval' behaviour or \n format marks. But I still don't know which behaviour is right.\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 11 Jun 2002 18:45:31 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
},
{
"msg_contents": "On Tue, 2002-06-11 at 18:36, Josh Berkus wrote:\n> Karel,\n> \n> > The to_interval() will have another (you wanted) behaviour.\n> \n> Please, please, please do not use to_interval for text formatting of \n> intervals.\n\nIf he meant what _I_ described then this was exactly that, i.e.\nconverting (string,format) to interval.\n\n----------------\nHannu\n\n",
"msg_date": "12 Jun 2002 12:41:45 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Timestamp/Interval proposals: Part 2"
}
] |
[
{
"msg_contents": "\nI received this question about buffer management. Can someone answer it?\n\n---------------------------------------------------------------------------\n\nnield@usol.com wrote:\n> \n> Dear Mr. Momjian:\n> \n> First let me thank you for the great work you have done on PostgreSQL.\n> This is a huge project, and as someone who is just starting to look at\n> the code I'm grateful for the effort that has gone into commenting and\n> clean module interfaces. \n> \n> Right now, I'm working on a draft proposal involving the WAL system,\n> in the perhaps vain hope of eliminating VACUUM and allowing\n> point-in-time recovery and play-forward file recovery from a saved\n> log-stream. I'm not ready yet to make the proposal, since I'm still\n> trying to figure out everything I need to know about PostgreSQL\n> internals, but I hope to have a Request for Comments I can post to the\n> pgsql-hackers list in the near future.\n> \n> The reason I'm writing you is because while looking at the buffer\n> manager, I had a question about locking and extending relations. There\n> is surely somthing I don't understand that explains why this scenerio\n> is not a problem, but if you could point out what I've missed it would\n> help me understand PostgreSQL better, and move me closer to being able\n> to contribute to the project.\n> \n> Please feel free to forward this to the pgsql-hackers list if you\n> don't have time to deal with it.\n> \n> Regards,\n> \n> J.R. Nield <nield@usol.com>\n> \n> \n> \n> \n> \n> \n> [see the scenerio at bottom first]\n> \n> Is there a race condition in ReadBufferInternal() ?\n> (From bufmgr.c,v 1.123 2002/04/15 23:47:12 momjian Exp)\n> \n> When blockNum argument to ReadBufferInternal is 'P_NEW', then\n> smgrnblocks() is called on the relation to get the last block in the\n> relation file. 
This is assigned to blockNum, and the function proceeds\n> as if ReadBufferInternal() were called with a blockNum equal to\n> smgrnblocks().\n> \n> Two things to note:\n> \n> A) The BufMgrLock is not held when smgrnblocks() is called. (unless\n> bufferLockHeld was true, which will not be the case when we're\n> called through ReadBuffer() )\n> \n> B) ReadBufferInternal() then gets an exclusive lock on BufMgrLock\n> and calls BufferAlloc().\n> \n> If between Time A and B another backend allocates a buffer with P_NEW,\n> then BufferAlloc() will find the buffer in the block cache.\n> \n> If so, it would seem like there is a problem, because later in the\n> function we will zero this block.\n> \n> On line 198, we check if the block was found (Yes), then if we were\n> expecting it (No) we return. If the buffer is local (It is not) we\n> do StartBufferIO().\n> \n> The next thing is at 224 where isExtend == true, so we zero the\n> block.\n> \n> *** Why is it OK to zero the block? ***\n> \n> Scenerio:\n> \n> PROC1 and PROC2 both call ReadBuffer(reln, P_NEW) -->\n> ReadBufferInternal(reld, P_NEW, false)\n> \n> PROC1 gets NOT FOUND from BufferAlloc, so it zero's the buffer and\n> calls smgrextend.\n> \n> PROC2 finds the buffer and waits in WaitIO() for PROC1 to complete.\n> \n> PROC1 finishes ReadBuffer and goes on to modify the buffer.\n> \n> PROC2 gets a FOUND buffer from BufferAlloc, but was expecting to\n> extend. It zero's the buffer and calls smgrextend, stomping on PROC1.\n> \n> \n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 7 Jun 2002 19:38:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Internals question about buffers"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Is there a race condition in ReadBufferInternal() ?\n\nNo.\n\nAs the comments in bufmgr.c point out, this is not bufmgr.c's problem:\n\n * ReadBuffer -- returns a buffer containing the requested\n *\t\tblock of the requested relation. If the blknum\n *\t\trequested is P_NEW, extend the relation file and\n *\t\tallocate a new block. (Caller is responsible for\n *\t\tensuring that only one backend tries to extend a\n *\t\trelation at the same time!)\n\nIn practice, the necessary locking is done by hio.c in the case of\nheap relations:\n\n *\tNote that we use LockPage(rel, 0) to lock relation for extension.\n\nand in the case of index relations the various index AMs have their own\napproaches.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 08 Jun 2002 00:47:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Internals question about buffers "
}
] |
[
{
"msg_contents": "Hi all!\n\nSorry for the length of this but I'm trying to get an idea of where my\ncompany can contribute to the best effect so I have a number of questions.\n\nTo begin with my team and I have some energy/time/$ over the coming months\nto put directly into PostgreSQL-related development work. Technical Pursuit,\nhas made a commitment to PostgreSQL for both our internal projects and our\ncustomer projects. We've been evangelizing PostgreSQL for a while now (I did\na talk on it at the Database Discovery cruise last June in Alaska -- a lone\nvoice in literally a sea of Oracle folks) and have started doing\nOracle-to-PostgreSQL conversions for customers wishing to transition away\nfrom Oracle. We're also getting ready to ship a beta release of our TIBET\nproduct that uses PG as the backend source code repository among other\nthings.\n\nAreas we have customer/business needs in include replication,\nbackup/recovery, monitoring/control, XML support, HTTP/HTTPS protocol\nsupport for postmaster, pl/pgperl, possible pl/jython, and possible\ncompile-time inclusion/configuration of time-travel (--with-time-travel ?).\n\nOn the process side, is there an IRC or other chat-based system in place for\nthe PG team to coordinate their efforts? If not, would an IRC system hosted\nby TPI be something folks would be interested in using? We'd be willing to\nstart hosting a set of IRC channels if that would assist the team and the\ncommunity in support issues etc.\n\nFor XML support I've contacted John Gray who did the current XML contrib but\nhas since ceased development and he's granted me permission to pick up where\nhe left off to improve XML support as it relates to his contrib module. Is\nthere any move underway to integrate XML with PG at any other level?\n\nIf we were to contribute to replication/backup/recovery solutions who would\nwe coordinate that effort with? 
If that's something the core team is on top\nof, can we get an idea of what is expected by August so we can advise our\ncustomers and plan accordingly?\n\nWhat is the planned status of Java support in the engine? Is there anyone\nworking on JVM integration at this stage and if not, how could we best\nintegrate with the team to take on this task? We're looking seriously at the\nidea of a pl/jython that would leverage the Jython language to provide\nscriptable Java procedures in the engine. Any feedback on that idea?\n\nFor several scenarios we see value in having the postmaster respond to\nhttp(s) protocols and relay to internal functions for processing of web\ntraffic. We've got a simple perl \"bridge\" that performs this task now and\nthe performance is more than adequate but we're sure it would be even better\nif we could avoid having the separate perl component. Is there any interest\nin this elsewhere? Any feedback on how/where we should start other than\nhacking postmaster? ;)\n\nWhat is the current thinking on re-introducing time-travel support, perhaps\nas a compile-time feature ala --with-time-travel? This is a feature our\ncustomers would get significant financial benefit from. I've seen recent\nnotes that this might be possible to do. We're *extremely* interested in\nthis direction given the potential for differentiation from other products\nin the marketplace. It would strengthen PG significantly in our mind.\n\nFinally, we're starting to do a lot of work in pl/pgperl(u) and are\nwondering whether that's an area we can again contribute in. If so, how do\nwe get involved? PG/Financials anyone?\n\nAgain, sorry for the length of this and the raft of questions. I hope you\nunderstand we're in a somewhat interesting position with some time and $ to\nfocus on PG, particularly as it relates to replication/recovery issues. 
Any\nguidance on how we can put that to work for the community would be\nappreciated.\n\nThanks.\n\nss\n\n\nScott Shattuck\nTechnical Pursuit Inc.\n\n\n",
"msg_date": "Fri, 7 Jun 2002 17:43:03 -0600",
"msg_from": "\"Scott Shattuck\" <ss@technicalpursuit.com>",
"msg_from_op": true,
"msg_subject": "How can we help?"
},
{
"msg_contents": "On Fri, 7 Jun 2002 17:43:03 -0600\n\"Scott Shattuck\" <ss@technicalpursuit.com> wrote:\n> On the process side, is there an IRC or other chat-based system in place for\n> the PG team to coordinate their efforts?\n\nThere's #postgresql on efnet and irc.openprojects.net, but it's mostly used\nfor user support.\n\n> If we were to contribute to replication/backup/recovery solutions who would\n> we coordinate that effort with?\n\nThere are a number of replication efforts, each attempting to provide\nvarious levels of functionality. The PGReplication project\n(http://gborg.postgresql.org/project/pgreplication/projdisplay.php) is one\nof the more ambitious ones; others include PGReplicator, DBMirror,\nrserv/erserv, and probably more that I'm not aware of.\n\nI can't speak to the backup/recovery efforts -- since there seems\nto be less activity in this area, perhaps this would be an appropriate\nplace for Technical Pursuit to focus on?\n\n> What is the planned status of Java support in the engine? Is there anyone\n> working on JVM integration at this stage and if not, how could we best\n> integrate with the team to take on this task?\n\nhttp://pljava.sourceforge.net/ is the only project that I'm aware of.\n\n> For several scenarios we see value in having the postmaster respond to\n> http(s) protocols and relay to internal functions for processing of web\n> traffic.\n\nPerhaps you could elaborate on this? It sounds like bloatware, but maybe\nI'm just cynical.\n\n> Finally, we're starting to do a lot of work in pl/pgperl(u) and are\n> wondering whether that's an area we can again contribute in.\n\nIf you mean plperl, then I don't see why not ; if you're talking about\na new procedural language based upon some unholy union of pl/pgsql and\nperl, I'd be skeptical :-)\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Fri, 7 Jun 2002 21:24:09 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: How can we help?"
},
{
"msg_contents": "First of all thanks for the feedback!\n\n> There's #postgresql on efnet and irc.openprojects.net, but it's mostly\nused\n> for user support.\n>\n\nThe offer for development coordination channel(s) stands if other folks are\ninterested.\n\n...snip...\n\n>\n> I can't speak to the backup/recovery efforts -- since there seems\n> to be less activity in this area, perhaps this would be an appropriate\n> place for Technical Pursuit to focus on?\n>\n\nWe'd be happy to if others agree. I'll post a separate message trying to\nsummarize what I understand of the current backup/recovery items on the TODO\nlist and looking for input.\n\n> > What is the planned status of Java support in the engine? Is there\nanyone\n> > working on JVM integration at this stage and if not, how could we best\n> > integrate with the team to take on this task?\n>\n> http://pljava.sourceforge.net/ is the only project that I'm aware of.\n>\n\nI hadn't found this. Thanks.\n\n> > For several scenarios we see value in having the postmaster respond to\n> > http(s) protocols and relay to internal functions for processing of web\n> > traffic.\n>\n> Perhaps you could elaborate on this? It sounds like bloatware, but maybe\n> I'm just cynical.\n>\n\nOK. But remember you asked for it :).\n\nGiven a market which seems bent on HTTP(S)-based XML/SOAP access to data and\nXSchema/XML output it seems natural to consider putting these into PG. The\nbuzzword seems to be XML Databases. I'm not a big subscriber to that concept\nso don't get me wrong, I'm not looking to go that route, but I do see value\nin unifying the protocols for data access so PG can be a fully qualified\nplayer in the game.\n\nIn one sense, we're trying to use PG as we think it was designed, not as a\ndatabase server so much, but as an application server. Smart databases don't\nneed app servers -- they are app servers. The problem is, web apps need\nHTTP(S) support. 
So, we're thinking we'd create new \"listeners\" for PG that\nadd alternative protocol support. We've done a simple HTTP listener in Perl\nthat hands off to the postmaster process and while I hesitate to publish any\nraw data at this point let's just say that even with the extra overhead of\nthe Perl the results are enlightening. Web servers aren't the only things\nbuilt to scale under load and our tests show that the team working on PG has\ndone a great job.\n\nOur business case is simple. We want to avoid having to ship a combination\nof Apache, Tomcat, and PostgreSQL to our customers. While a lot of products\nneed a database and web access do they really need to ship with a manual\nthat tells the customer to configure Apache, Tomcat, and PG and make sure\nthey all start up and stay up? We'd like to reduce that complexity.\n\nThe complexity of today's web designs is what I'd define as \"bloatware\".\nBut, rather than referring to a single product, my definition applies to the\ncombination of technologies currently required just so we can put a web face\non our database. Web server, servlet container, J2EE server, database.\nThat's bloat.\n\nWhy use PG at all if we're not going to use it for what it was designed from\nday one to do? Namely, support writing applications directly in the database\nitself. Given current web architecture I'm sure some might say I'm crazy to\nconsider such a thing and I have two words for them -- Oracle Financials.\n\nOracle has made billions off the design I'm talking about. Oracle Financials\nisn't written in Java and it doesn't need 3 servers and 500 jar files from\nthe Jakarta project (although with 9i who knows ;)). It's written in plsql.\nIf stored procs can do all that in a database that wasn't even designed to\nbe extended for application support they can certainly parse a GET/POST\nrequest. 
I just need the postmaster to listen for HTTP so it can figure out\nwhich proc to call on my way to replacing 5 years of web bloatware ;).\n\n> > Finally, we're starting to do a lot of work in pl/pgperl(u) and are\n> > wondering whether that's an area we can again contribute in.\n>\n> If you mean plperl, then I don't see why not ; if you're talking about\n> a new procedural language based upon some unholy union of pl/pgsql and\n> perl, I'd be skeptical :-)\n\nAllowing perl programmers to think in perl full time and use interfaces\nthey're familiar with is more our goal. Something like a DBI module that\nwould function as if you were external to PG even though you aren't. Write a\nstored proc as if it were going to run outside of the database. Install it\ninside when it makes sense. No code change. We should be able to say \"if you\nknow perl/DBI you know how to write stored procs for PG\". Same for\npl/python. Same for Jython. I don't know if we can get there from here but\nit's a goal we're going to work hard for.\n\n\nss\n\nScott Shattuck\nTechnical Pursuit Inc.\n\n\n\n",
"msg_date": "Fri, 7 Jun 2002 23:41:26 -0600",
"msg_from": "\"Scott Shattuck\" <ss@technicalpursuit.com>",
"msg_from_op": true,
"msg_subject": "Re: How can we help?"
},
{
"msg_contents": "Le Samedi 8 Juin 2002 01:43, Scott Shattuck a écrit :\n> What is the planned status of Java support in the engine? Is there anyone\n> working on JVM integration at this stage and if not, how could we best\n> integrate with the team to take on this task?\n\nYou may be interested in looking at PLjava on \nhttp://sourceforge.net/projects/pljava/\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Mon, 10 Jun 2002 09:41:15 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: How can we help?"
}
] |
[
{
"msg_contents": "By the way, a colleague just reproduced this problem on a 7.2.1 postgres.\n\n-----Original Message-----\nFrom: Rachit Siamwalla [mailto:rachit@ensim.com]\nSent: Friday, June 07, 2002 4:27 PM\nTo: pgsql-hackers; Paul Menage\nSubject: [HACKERS] Question whether this is a known problem in 7.1.2\n\n\n\nThis problem was discovered in 7.1.2. Was wondering whether this is a known problem or not; we plan to test this on the latest postgres sometime later.\n\nWe have a large table, let's call it A, millions of rows. And in the table is a field called time, which is TIMESTAMP type. We have an index on it.\n\nOftentimes we like to get the latest row inserted by time on a given constraint. So we do a:\n\n\nSELECT * FROM A WHERE someconstraint = somerandomnumber ORDER BY time desc limit 1;\n\nPostgres intelligently uses the index to scan through the table from the end forward.\n\nIf there are no items that fit the constraint, the query will take a long time (cause it has to scan the whole table).\n\nIf there are items (plural important here, read below) that fit the constraint, the database finds the first item, and returns it right away (fairly quickly if the item is near the end).\n\nHowever, if there is only ONE item, postgres still scans the whole database. Not sure why. We also find out that if:\n\nThere are 2 items that match the criteria, and you do a LIMIT 2, it scans the whole table as well. Limit 1 returns quickly. Basically it seems like postgres is looking for one more item than it needs to.\n\n-rchit\n\n---------------------------(end of broadcast)---------------------------\nTIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Fri, 7 Jun 2002 16:47:13 -0700 ",
"msg_from": "Rachit Siamwalla <rachit@ensim.com>",
"msg_from_op": true,
"msg_subject": "Re: Question whether this is a known problem in 7.1.2"
}
] |
[
{
"msg_contents": "After uncommenting this I receive errors about it not being a valid\noption. I assume this has been replaced by\n (server|client)_min_messages?\n\nIn which case it should be removed from what initdb installs.\n--\nRod\n\n",
"msg_date": "Fri, 7 Jun 2002 21:25:50 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "postgresql.conf -> debug_level"
},
{
"msg_contents": "Rod Taylor wrote:\n> After uncommenting this I receive errors about it not being a valid\n> option. I assume this has been replaced by\n> (server|client)_min_messages?\n> \n> In which case it should be removed from what initdb installs.\n\nThanks. Fixed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 8 Jun 2002 00:08:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: postgresql.conf -> debug_level"
}
] |
[
{
"msg_contents": "Russell, can you provide a test case, or at least explain the\ncircumstances, please. Please maintain the Cc list.\n\n-----Forwarded Message-----\n\nFrom: Tom Lane <tgl@sss.pgh.pa.us>\nTo: Oliver Elphick <olly@lfix.co.uk>\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] [Fwd: Bug#149056: postgresql: should not try in a busy loop when allocating resources]\nDate: 08 Jun 2002 22:03:33 -0400\n\nOliver Elphick <olly@lfix.co.uk> forwards:\n> When trying to create a semaphore Postgresql 7.2.1-3 will try 400,000 times=\n> per\n> second if it has problems.\n\nAFAICS it will try *once* and abort if it fails. Can you provide a\nreproducible test case for the above behavior?\n\n\t\t\tregards, tom lane\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Verily, verily, I say unto you, He that heareth my\n word, and believeth on him that sent me, hath\n everlasting life, and shall not come into\n condemnation; but is passed from death unto life.\" \n John 5:24",
"msg_date": "09 Jun 2002 03:53:24 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "[Fwd: Re: [Fwd: Bug#149056: postgresql: should not try in\n\ta busy loop when allocating resources]]"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 08 June 2002 22:48\n> To: Peter Eisentraut\n> Cc: PostgreSQL-development\n> Subject: Re: Roadmap for a Win32 port\n> \n> \n> > \n> > > Also, it seems Win32 doesn't need these scripts, except initdb.\n> > \n> > The utility of these programs is independent of the \n> platform. If we \n> > think pg_dumpall is not useful, then let's remove it.\n> \n> I think the first two targets for C-ification would be \n> pg_dumpall and initdb. The others have SQL equivalents. \n> Maybe pg_ctl too.\n\nI looked at this issue some time ago & came to the conclusion that the\nonly scripts that Win32 really needed were pg_dumpall, initdb &\ninitlocation.\n\nThe others have SQL equivalents as you say, apart from pg_ctl which\nunder Windows should probably (and generally is) be replaced by the SCM\n(Service Control Manager). The only thing that comes to mind that the\nSCM can't do is a reload.\n\nRegards, Dave.\n",
"msg_date": "Sun, 9 Jun 2002 11:38:26 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Roadmap for a Win32 port"
}
] |
[
{
"msg_contents": "Hi, \n\nBased on an entry in the mailing list from 30 Oct 2001 \nabout efficient deletes on subqueries, \nI've found two ways to do so (PostgreSQL 7.2.1): \n\n1.\nBEGIN ;\nEXPLAIN ANALYZE\nDELETE FROM onfvalue WHERE EXISTS(\nSELECT * FROM onfvalue j WHERE \nj.sid= 5 AND\nonfvalue.lid = j.lid AND \nonfvalue.mid = j.mid AND\nonfvalue.timepoint = j.timepoint AND \nonfvalue.entrancetime < j.entrancetime\n) ;\nROLLBACK ;\nQUERY PLAN:\n\nSeq Scan on onfvalue \n(cost=0.00..805528.05 rows=66669 width=6) \n(actual time=61.84..25361.82 rows=24 loops=1)\n SubPlan\n -> Index Scan using advncd_onfvalue_idx_stlme on onfvalue j \n (cost=0.00..6.02 rows=1 width=36) \n (actual time=0.14..0.14 rows=0 loops=133338)\nTotal runtime: 25364.76 msec\n\n2.\nBEGIN ;\nEXPLAIN ANALYZE\nINSERT INTO temprefentrancetime(timepoint,lid,mid,sid,entrancetime)\nSELECT o.timepoint,o.lid,o.mid,o.sid,o.entrancetime\nFROM onfvalue o join onfvalue j ON (\no.lid = j.lid AND \no.mid = j.mid AND\no.timepoint = j.timepoint AND \no.entrancetime < j.entrancetime\n) WHERE o.sid= 5 ;\nEXPLAIN ANALYZE\nDELETE FROM onfvalue WHERE\nonfvalue.timepoint = temprefentrancetime.timepoint AND\nonfvalue.mid = temprefentrancetime.mid AND\nonfvalue.lid = temprefentrancetime.lid AND\nonfvalue.sid = temprefentrancetime.sid AND\nonfvalue.entrancetime = temprefentrancetime.entrancetime ;\nDELETE FROM temprefentrancetime;\nROLLBACK ;\nQUERY PLAN:\n\nMerge Join \n(cost=16083.12..16418.36 rows=4 width=52) \n(actual time=17728.06..19325.02 rows=24 loops=1)\n -> Sort \n (cost=2152.53..2152.53 rows=667 width=28) \n (actual time=1937.70..2066.46 rows=16850 loops=1)\n -> Index Scan using advncd_onfvalue_idx_stlme on onfvalue o \n\t(cost=0.00..2121.26 rows=667 width=28) \n\t(actual time=0.57..709.89 rows=16850 loops=1)\n -> Sort \n (cost=13930.60..13930.60 rows=133338 width=24) \n (actual time=13986.07..14997.43 rows=133110 loops=1)\n -> Seq Scan on onfvalue j \n\t(cost=0.00..2580.38 rows=133338 width=24) \n\t(actual 
time=0.15..3301.06 rows=133338 loops=1)\nTotal runtime: 19487.49 msec\n\nQUERY PLAN:\n\nNested Loop \n(cost=0.00..6064.40 rows=1 width=62) \n(actual time=1.34..8.32 rows=24 loops=1)\n -> Seq Scan on temprefentrancetime \n (cost=0.00..20.00 rows=1000 width=28) \n (actual time=0.44..1.07 rows=24 loops=1)\n -> Index Scan using advncd_onfvalue_idx_stlme on onfvalue \n (cost=0.00..6.02 rows=1 width=34) \n (actual time=0.22..0.25 rows=1 loops=24)\nTotal runtime: 10.15 msec\n\nThe questions are: \nIs there a way to put the second form (more complicated, but faster) \nin one statement? \nOr is there even a third way to delete, which I cannot see? \nRegards, Christoph \n",
"msg_date": "Mon, 10 Jun 2002 13:42:10 METDST",
"msg_from": "Christoph Haller <ch@rodos.fzk.de>",
"msg_from_op": true,
"msg_subject": "Efficient DELETE Strategies "
},
{
"msg_contents": "Christoph Haller <ch@rodos.fzk.de> writes:\n> Based on an entry in the mailing list from 30 Oct 2001 \n> about efficient deletes on subqueries, \n> I've found two ways to do so (PostgreSQL 7.2.1): \n> ...\n> Is there a way to put the second form (more complicated, but faster) \n> in one statement? \n> Or is there even a third way to delete, which I cannot see? \n\nThe clean way to do this would be to allow extra FROM-list relations\nin DELETE. We already have a similar facility for UPDATE, so it's not\nclear to me why there's not one for DELETE. Then you could do, say,\n\nDELETE FROM onfvalue , onfvalue j WHERE\nj.sid= 5 AND\nonfvalue.lid = j.lid AND \nonfvalue.mid = j.mid AND\nonfvalue.timepoint = j.timepoint AND \nonfvalue.entrancetime < j.entrancetime ;\n\nIf you were using two separate tables you could force this to happen\nvia an implicit FROM-clause entry, much as you've done in your second\nalternative --- but there's no way to set up a self-join in a DELETE\nbecause of the lack of any place to put an alias declaration.\n\nAFAIK this extension would be utterly trivial to implement, since all\nthe machinery is there already --- for 99% of the backend, it doesn't\nmatter whether a FROM-item is implicit or explicit. We'd only need to\nargue out what the syntax should be. I could imagine\n\n\tDELETE FROM relation_expr [ , table_ref [ , ... ] ]\n\t[ WHERE bool_expr ]\n\nor\n\n\tDELETE FROM relation_expr [ FROM table_ref [ , ... ] ]\n\t[ WHERE bool_expr ]\n\nThe two FROMs in the second form look a little weird, but they help to\nmake a clear separation between the deletion target table and the\nmerely-referenced tables. 
Also, the first one might look to people\nlike they'd be allowed to write\n\n\tDELETE FROM foo FULL JOIN bar ...\n\nwhich is not any part of my intention (it's very unclear what it'd\nmean for the target table to be on the nullable side of an outer join).\nOTOH there'd be no harm in outer joins in a separate from-clause, eg\n\n\tDELETE FROM foo FROM (bar FULL JOIN baz ON ...) WHERE ...\n\nActually, either syntax above would support that; I guess what's really\nbothering me about the first syntax is that a comma suggests a list of\nthings that will all be treated similarly, while in reality the first\nitem will be treated much differently from the rest.\n\nDoes anyone know whether other systems that support the UPDATE extension\nfor multiple tables also support a DELETE extension for multiple tables?\nIf so, what's their syntax?\n\nA somewhat-related issue is that people keep expecting to be able to\nattach an alias to the target table name in UPDATE and DELETE; seems\nlike we get that question every couple months. While this is clearly\ndisallowed by the SQL spec, it's apparently supported by some other\nimplementations (else we'd not get the question so much). Should we\nadd that extension to our syntax? Or should we continue to resist it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Jun 2002 09:56:27 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Efficient DELETE Strategies "
},
{
"msg_contents": "Tom Lane wrote:\n> Christoph Haller <ch@rodos.fzk.de> writes:\n> \n> \tDELETE FROM relation_expr [ FROM table_ref [ , ... ] ]\n> \t[ WHERE bool_expr ]\n> \n> The two FROMs in the second form look a little weird, but they help to\n> make a clear separation between the deletion target table and the\n> merely-referenced tables. Also, the first one might look to people\n> like they'd be allowed to write\n> \n> \tDELETE FROM foo FULL JOIN bar ...\n> \n> which is not any part of my intention (it's very unclear what it'd\n> mean for the target table to be on the nullable side of an outer join).\n> OTOH there'd be no harm in outer joins in a separate from-clause, eg\n> \n> \tDELETE FROM foo FROM (bar FULL JOIN baz ON ...) WHERE ...\n> \n> Actually, either syntax above would support that; I guess what's really\n> bothering me about the first syntax is that a comma suggests a list of\n> things that will all be treated similarly, while in reality the first\n> item will be treated much differently from the rest.\n\nInteresting. We could allow an alias on the primary table:\n\n\tDELETE FROM foo f\n\tWHERE\n\nand allow the non-alias version of the table for the join. Of course,\nthat doesn't allow \"FULL JOIN\" and stuff like that. The FROM ... FROM\nlooks weird, and there is clearly confusion over the FROM t1, t2. I\nwish there was another option.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 Jun 2002 12:48:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Efficient DELETE Strategies"
},
{
"msg_contents": "On Mon, 2002-06-10 at 15:56, Tom Lane wrote:\n> Christoph Haller <ch@rodos.fzk.de> writes:\n> > Based on an entry in the mailing list from 30 Oct 2001 \n> > about efficient deletes on subqueries, \n> > I've found two ways to do so (PostgreSQL 7.2.1): \n> > ...\n> > Is there a way to put the second form (more complicated, but faster) \n> > in one statement? \n> > Or is there even a third way to delete, which I cannot see? \n\n...\n \n> AFAIK this extension would be utterly trivial to implement, since all\n> the machinery is there already --- for 99% of the backend, it doesn't\n> matter whether a FROM-item is implicit or explicit. We'd only need to\n> argue out what the syntax should be. I could imagine\n> \n> \tDELETE FROM relation_expr [ , table_ref [ , ... ] ]\n> \t[ WHERE bool_expr ]\n> \n> or\n> \n> \tDELETE FROM relation_expr [ FROM table_ref [ , ... ] ]\n> \t[ WHERE bool_expr ]\n\nWhat about\n\nDELETE relation_expr FROM relation_expr [ , table_ref [ , ... ] ]\n \t[ WHERE bool_expr ]\n\nor\n\nDELETE relation_expr.* FROM relation_expr [ , table_ref [ , ... ] ]\n \t[ WHERE bool_expr ]\n\n\n--------------\nHannu\n\n",
"msg_date": "10 Jun 2002 19:33:48 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Efficient DELETE Strategies"
},
{
"msg_contents": "Hannu Krosing wrote:\n> What about\n> \n> DELETE relation_expr FROM relation_expr [ , table_ref [ , ... ] ]\n> \t[ WHERE bool_expr ]\n> \n> or\n> \n> DELETE relation_expr.* FROM relation_expr [ , table_ref [ , ... ] ]\n> \t[ WHERE bool_expr ]\n\nSo make the initial FROM optional and allow the later FROM to be a list\nof relations? Seems kind of strange.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 Jun 2002 14:33:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Efficient DELETE Strategies"
},
{
"msg_contents": "On Mon, 10 Jun 2002 09:56:27 -0400, Tom Lane <tgl@sss.pgh.pa.us>\nwrote:\n>Does anyone know whether other systems that support the UPDATE extension\n>for multiple tables also support a DELETE extension for multiple tables?\n>If so, what's their syntax?\n\nMSSQL seems to guess what the user wants. All the following\nstatements do the same:\n\n(0) DELETE FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t1.i=t2.i)\n(1) DELETE t1 FROM t2 WHERE t1.i=t2.i\n(2a) DELETE t1 FROM t2, t1 WHERE t1.i=t2.i\n(2b) DELETE t1 FROM t2 INNER JOIN t1 ON t1.i=t2.i\n(3a) DELETE t1 FROM t2, t1 a WHERE a.i=t2.i\n(3b) DELETE t1 FROM t2 INNER JOIN t1 a ON a.i=t2.i\n(4a) DELETE a FROM t2, t1 a WHERE a.i=t2.i\n(4b) DELETE a FROM t2 INNER JOIN t1 a ON a.i=t2.i\n(5) DELETE t1 FROM t1 a\n WHERE EXISTS (SELECT * FROM t2 WHERE a.i=t2.i)\n(6) DELETE a FROM t1 a WHERE EXISTS (SELECT * FROM t2 WHERE a.i=t2.i)\n\n(0) is standard SQL and should always work. As an extension I'd like\n(1) or (2), but only one of them and forbid the other one. I'd also\nforbid (3), don't know what to think of (4), and don't see a reason\nwhy we would want (5) or (6). I'd rather have (7) or (8).\n\nThese don't work:\n(7) DELETE t1 a FROM t2 WHERE a.i = t2.i\n\"Incorrect syntax near 'a'.\"\n\n(8) DELETE FROM t1 a WHERE EXISTS (SELECT * FROM t2 WHERE a.i = t2.i)\n\"Incorrect syntax near 'a'.\"\n\nSelf joins:\n(2as) DELETE t1 FROM t1, t1 b WHERE 2*b.i=t1.i\n(4as) DELETE a FROM t1 a, t1 b WHERE 2*b.i=a.i\n(4bs) DELETE a FROM t1 a INNER JOIN t1 b on 2*b.i=a.i\n\nThese don't work:\nDELETE t1 FROM t1 b WHERE 2 * b.i = t1.i\n\"The column prefix 't1' does not match with a table name or alias name\nused in the query.\"\n\nDELETE t1 FROM t1 a, t1 b WHERE 2 * b.i = a.i\n\"The table 't1' is ambiguous.\"\n\nAnd as if there aren't enough ways yet, I just discovered that (1) to\n(6) just as much work with \"DELETE FROM\" where I wrote \"DELETE\" ...\n\nServus\n Manfred\n",
"msg_date": "Mon, 10 Jun 2002 22:23:38 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: Efficient DELETE Strategies "
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Hannu Krosing wrote:\n>> What about\n>> \n>> DELETE relation_expr FROM relation_expr [ , table_ref [ , ... ] ]\n>> [ WHERE bool_expr ]\n>> \n>> or\n>> \n>> DELETE relation_expr.* FROM relation_expr [ , table_ref [ , ... ] ]\n>> [ WHERE bool_expr ]\n\n> So make the initial FROM optional and allow the later FROM to be a list\n> of relations? Seems kind of strange.\n\nNo, I think he's suggesting that one be able to pick out any element of\nthe FROM-list and say that that is the deletion target. I really don't\nwant to get into that (unless there is precedent in Oracle or\nsomeplace); it seems way too confusing to me. It would also force us to\ndo error checking to eliminate cases that ought to just be syntactically\nimpossible: target table not present, target is a join or subselect\ninstead of a table, target is on wrong side of an outer join, etc.\n\n[ and in another message ]\n> The FROM ... FROM looks weird, and there is clearly confusion over the\n> FROM t1, t2. I wish there was another option.\n\nThe only other thing that's come to mind is to use a different keyword\n(ie, not FROM) for the list of auxiliary relations. WITH might work\nfrom a simple readability point of view:\n\tDELETE FROM target WITH other-tables WHERE ...\nBut we've already got FROM as the equivalent construct in UPDATE, so it\nseems weird to use something else in DELETE.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Jun 2002 16:34:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Efficient DELETE Strategies "
},
{
"msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n>> If so, what's their syntax?\n\n> MSSQL seems to guess what the user wants.\n\nGack. Nothing like treating mindless syntax variations as a \"feature\"\nlist...\n\n> All the following statements do the same:\n\n> (1) DELETE t1 FROM t2 WHERE t1.i=t2.i\n> (2a) DELETE t1 FROM t2, t1 WHERE t1.i=t2.i\n> (5) DELETE t1 FROM t1 a\n> WHERE EXISTS (SELECT * FROM t2 WHERE a.i=t2.i)\n> (6) DELETE a FROM t1 a WHERE EXISTS (SELECT * FROM t2 WHERE a.i=t2.i)\n\nSo in other words, MSSQL has no idea whether the name following DELETE\nis a real table name or an alias, and it's also unclear whether the name\nappears in the separate FROM clause or generates a FROM-item all by\nitself. This is why they have to punt on these cases:\n\n> These don't work:\n> DELETE t1 FROM t1 b WHERE 2 * b.i = t1.i\n> \"The column prefix 't1' does not match with a table name or alias name\n> used in the query.\"\n\n> DELETE t1 FROM t1 a, t1 b WHERE 2 * b.i = a.i\n> \"The table 't1' is ambiguous.\"\n\nThe ambiguity is entirely self-inflicted...\n\n> And as if there aren't enough ways yet, I just discovered that (1) to\n> (6) just as much work with \"DELETE FROM\" where I wrote \"DELETE\" ...\n\nHm. So (1) with the DELETE FROM corresponds exactly to what I was\nsuggesting:\n\tDELETE FROM t1 FROM t2 WHERE t1.i=t2.i\nexcept that I'd also allow an alias in there:\n\tDELETE FROM t1 a FROM t2 b WHERE a.i=b.i\n\nGiven the plethora of mutually incompatible interpretations that MSSQL\nevidently supports, though, I fear we can't use it as precedent for\nmaking any choices :-(.\n\nCan anyone check out other systems?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 10 Jun 2002 17:07:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Efficient DELETE Strategies "
},
{
"msg_contents": "\nTom,\n\n> >> If so, what's their syntax?\n> \n> > MSSQL seems to guess what the user wants.\n> \n> Gack. Nothing like treating mindless syntax variations as a \"feature\"\n> list...\n\nI vote that we stick to a strick SQL92 interpretation, here. \n1) It's standard\n2) Strict syntax on DELETE statements is better.\n\nPersonally, I would *not* want the database to \"guess what I want\" in a delete \nstatement; it might guess wrong and there go my records ...\n\nHeck, one of the things I need to research how to turn off in PostgreSQL is \nthe \"Add missing FROM-clause\" feature, which has tripped me up many times. \n\n-- \n-Josh Berkus\n\n",
"msg_date": "Mon, 10 Jun 2002 15:41:37 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: Efficient DELETE Strategies"
},
{
"msg_contents": "This\n\nHannu Krosing wrote:\n> DELETE relation_expr FROM relation_expr [ , table_ref [ , ... ] ]\n> \t[ WHERE bool_expr ]\n\n\nThis in some ways is similar to Oracle where the FROM is optional in a \nDELETE (ie. DELETE foo WHERE ...). By omitting the first FROM, the \nsyntax ends up mirroring the UPDATE case:\n\nDELETE foo FROM bar WHERE ...\n\nUPDATE foo FROM bar WHERE ...\n\nHowever I think the syntax should also support the first FROM as being \noptional (even though it looks confusing):\n\nDELETE FROM foo FROM bar WHERE ...\n\nthanks,\n--Barry\n\n",
"msg_date": "Mon, 10 Jun 2002 17:25:19 -0700",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Efficient DELETE Strategies"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Hannu Krosing wrote:\n> >> What about\n> >> \n> >> DELETE relation_expr FROM relation_expr [ , table_ref [ , ... ] ]\n> >> [ WHERE bool_expr ]\n> >> \n> >> or\n> >> \n> >> DELETE relation_expr.* FROM relation_expr [ , table_ref [ , ... ] ]\n> >> [ WHERE bool_expr ]\n> \n> > So make the initial FROM optional and allow the later FROM to be a list\n> > of relations? Seems kind of strange.\n> \n> No, I think he's suggesting that one be able to pick out any element of\n> the FROM-list and say that that is the deletion target. I really don't\n> want to get into that (unless there is precedent in Oracle or\n> someplace); it seems way too confusing to me. It would also force us to\n> do error checking to eliminate cases that ought to just be syntactically\n> impossible: target table not present, target is a join or subselect\n> instead of a table, target is on wrong side of an outer join, etc.\n\nYuck.\n\n> [ and in another message ]\n> > The FROM ... FROM looks weird, and there is clearly confusion over the\n> > FROM t1, t2. I wish there was another option.\n> \n> The only other thing that's come to mind is to use a different keyword\n> (ie, not FROM) for the list of auxiliary relations. WITH might work\n> from a simple readability point of view:\n> \tDELETE FROM target WITH other-tables WHERE ...\n> But we've already got FROM as the equivalent construct in UPDATE, so it\n> seems weird to use something else in DELETE.\n\nYes, another keyword is the only solution. Having FROM after DELETE\nmean something different from FROM after a tablename is just too weird. \nI know UPDATE uses FROM, and it is logical to use it here, but it is\njust too wierd when DELETE already has a FROM. Should we allow FROM and\nadd WITH to UPDATE as well, and document WITH but support FROM too? No\nidea. What if we support ADD FROM as the keywords for the new clause?\n\nClearly this is a TODO item. I will document it when we decide on a\ndirection.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 10 Jun 2002 22:53:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Efficient DELETE Strategies"
},
{
"msg_contents": "> Given the plethora of mutually incompatible interpretations that MSSQL\n> evidently supports, though, I fear we can't use it as precedent for\n> making any choices :-(.\n>\n> Can anyone check out other systems?\n\nMySQL:\n\n6.4.6 DELETE Syntax\n\nDELETE [LOW_PRIORITY | QUICK] FROM table_name\n [WHERE where_definition]\n [ORDER BY ...]\n [LIMIT rows]\n\nor\n\nDELETE [LOW_PRIORITY | QUICK] table_name[.*] [,table_name[.*] ...]\n FROM table-references\n [WHERE where_definition]\n\nor\n\nDELETE [LOW_PRIORITY | QUICK]\n FROM table_name[.*], [table_name[.*] ...]\n USING table-references\n [WHERE where_definition]\n\nDELETE deletes rows from table_name that satisfy the condition given by\nwhere_definition, and returns the number of records deleted.\n\nIf you issue a DELETE with no WHERE clause, all rows are deleted. If you do\nthis in AUTOCOMMIT mode, this works as TRUNCATE. See section 6.4.7 TRUNCATE\nSyntax. In MySQL 3.23, DELETE without a WHERE clause will return zero as the\nnumber of affected records.\n\nIf you really want to know how many records are deleted when you are\ndeleting all rows, and are willing to suffer a speed penalty, you can use a\nDELETE statement of this form:\n\nmysql> DELETE FROM table_name WHERE 1>0;\n\nNote that this is much slower than DELETE FROM table_name with no WHERE\nclause, because it deletes rows one at a time.\n\nIf you specify the keyword LOW_PRIORITY, execution of the DELETE is delayed\nuntil no other clients are reading from the table.\n\nIf you specify the word QUICK then the table handler will not merge index\nleaves during delete, which may speed up certain kind of deletes.\n\nIn MyISAM tables, deleted records are maintained in a linked list and\nsubsequent INSERT operations reuse old record positions. To reclaim unused\nspace and reduce file-sizes, use the OPTIMIZE TABLE statement or the\nmyisamchk utility to reorganise tables. OPTIMIZE TABLE is easier, but\nmyisamchk is faster. See section 4.5.1 OPTIMIZE TABLE Syntax and section\n4.4.6.10 Table Optimisation.\n\nThe first multi-table delete format is supported starting from MySQL 4.0.0.\nThe second multi-table delete format is supported starting from MySQL 4.0.2.\n\nThe idea is that only matching rows from the tables listed before the FROM\nor before the USING clause are deleted. The effect is that you can delete\nrows from many tables at the same time and also have additional tables that\nare used for searching.\n\nThe .* after the table names is there just to be compatible with Access:\n\nDELETE t1,t2 FROM t1,t2,t3 WHERE t1.id=t2.id AND t2.id=t3.id\n\nor\n\nDELETE FROM t1,t2 USING t1,t2,t3 WHERE t1.id=t2.id AND t2.id=t3.id\n\nIn the above case we delete matching rows just from tables t1 and t2.\n\nORDER BY and using multiple tables in the DELETE statement is supported in\nMySQL 4.0.\n\nIf an ORDER BY clause is used, the rows will be deleted in that order. This\nis really only useful in conjunction with LIMIT. For example:\n\nDELETE FROM somelog\nWHERE user = 'jcole'\nORDER BY timestamp\nLIMIT 1\n\nThis will delete the oldest entry (by timestamp) where the row matches the\nWHERE clause.\n\nThe MySQL-specific LIMIT rows option to DELETE tells the server the maximum\nnumber of rows to be deleted before control is returned to the client. This\ncan be used to ensure that a specific DELETE command doesn't take too much\ntime. You can simply repeat the DELETE command until the number of affected\nrows is less than the LIMIT value.\n\nChris\n\n",
"msg_date": "Tue, 11 Jun 2002 11:18:09 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Efficient DELETE Strategies "
},
{
"msg_contents": "On Tue, 2002-06-11 at 04:53, Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Hannu Krosing wrote:\n> > >> What about\n> > >> \n> > >> DELETE relation_expr FROM relation_expr [ , table_ref [ , ... ] ]\n> > >> [ WHERE bool_expr ]\n> > >> \n> > >> or\n> > >> \n> > >> DELETE relation_expr.* FROM relation_expr [ , table_ref [ , ... ] ]\n> > >> [ WHERE bool_expr ]\n> > \n> > > So make the initial FROM optional and allow the later FROM to be a list\n> > > of relations? Seems kind of strange.\n\nI was inspired by MS Access syntax that has optional relation_expr.* :\n\n DELETE [relation_expr.*] FROM relation_expr WHERE criteria\n\nit does not allow any other tablerefs in from \n\n> Clearly this is a TODO item. I will document it when we decide on a\n> direction.\n\nOr then we can just stick with standard syntax and teach people to do\n\nDELETE FROM t1 where t1.id1 in \n (select id2 from t2 where t2.id2 = t1.id1)\n\nand perhaps even teach our optimizer to add the t2.id2 = t1.id1 part\nitself to make it fast\n\nAFAIK this should be exactly the same as the proposed\n\nDELETE FROM t1 FROM t2\nWHERE t2.id2 = t1.id1\n\n--------------\nHannu\n\n",
"msg_date": "11 Jun 2002 12:02:49 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Efficient DELETE Strategies"
},
{
"msg_contents": "\nAdded to TODO:\n\n\t* Allow DELETE to handle table aliases for self-joins [delete]\n\n---------------------------------------------------------------------------\n\nManfred Koizar wrote:\n> On Mon, 10 Jun 2002 09:56:27 -0400, Tom Lane <tgl@sss.pgh.pa.us>\n> wrote:\n> >Does anyone know whether other systems that support the UPDATE extension\n> >for multiple tables also support a DELETE extension for multiple tables?\n> >If so, what's their syntax?\n> \n> MSSQL seems to guess what the user wants. All the following\n> statements do the same:\n> \n> (0) DELETE FROM t1 WHERE EXISTS (SELECT * FROM t2 WHERE t1.i=t2.i)\n> (1) DELETE t1 FROM t2 WHERE t1.i=t2.i\n> (2a) DELETE t1 FROM t2, t1 WHERE t1.i=t2.i\n> (2b) DELETE t1 FROM t2 INNER JOIN t1 ON t1.i=t2.i\n> (3a) DELETE t1 FROM t2, t1 a WHERE a.i=t2.i\n> (3b) DELETE t1 FROM t2 INNER JOIN t1 a ON a.i=t2.i\n> (4a) DELETE a FROM t2, t1 a WHERE a.i=t2.i\n> (4b) DELETE a FROM t2 INNER JOIN t1 a ON a.i=t2.i\n> (5) DELETE t1 FROM t1 a\n> WHERE EXISTS (SELECT * FROM t2 WHERE a.i=t2.i)\n> (6) DELETE a FROM t1 a WHERE EXISTS (SELECT * FROM t2 WHERE a.i=t2.i)\n> \n> (0) is standard SQL and should always work. As an extension I'd like\n> (1) or (2), but only one of them and forbid the other one. I'd also\n> forbid (3), don't know what to think of (4), and don't see a reason\n> why we would want (5) or (6). I'd rather have (7) or (8).\n> \n> These don't work:\n> (7) DELETE t1 a FROM t2 WHERE a.i = t2.i\n> \"Incorrect syntax near 'a'.\"\n> \n> (8) DELETE FROM t1 a WHERE EXISTS (SELECT * FROM t2 WHERE a.i = t2.i)\n> \"Incorrect syntax near 'a'.\"\n> \n> Self joins:\n> (2as) DELETE t1 FROM t1, t1 b WHERE 2*b.i=t1.i\n> (4as) DELETE a FROM t1 a, t1 b WHERE 2*b.i=a.i\n> (4bs) DELETE a FROM t1 a INNER JOIN t1 b on 2*b.i=a.i\n> \n> These don't work:\n> DELETE t1 FROM t1 b WHERE 2 * b.i = t1.i\n> \"The column prefix 't1' does not match with a table name or alias name\n> used in the query.\"\n> \n> DELETE t1 FROM t1 a, t1 b WHERE 2 * b.i = a.i\n> \"The table 't1' is ambiguous.\"\n> \n> And as if there aren't enough ways yet, I just discovered that (1) to\n> (6) just as much work with \"DELETE FROM\" where I wrote \"DELETE\" ...\n> \n> Servus\n> Manfred\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Mon, 26 Aug 2002 17:35:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Efficient DELETE Strategies"
}
] |
[
{
"msg_contents": "In a 7.3 dev test database, I have a table called msysconf in a schema\ncalled biblio. If I execute:\n\nALTER TABLE biblio.msysconf OWNER TO dpage\n\nI get:\n\nERROR: msysconf_idx is an index relation\n\nThere is an index with this name on the table.\n\nAny ideas?\n\nRegards, Dave.\n",
"msg_date": "Mon, 10 Jun 2002 15:10:31 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "ALTER TABLE... OWNER bugette"
}
] |
[
{
"msg_contents": "\nHello,\n is there any news about the Mac OS X shutdown issue?\nIt was discussed in a few April-May/2002 messages with the Subject\n\"Mac OS X: system shutdown prevents checkpoint\". In short, during a\nregular system shutdown on Mac OS X the postmaster is not terminated\ngracefully, leading to troubles at the successive startup.\nAll OS X release I know of, up to the latest one (10.1.5), are prone to\nthis inconvenient.\n\nThanks,\n David\n",
"msg_date": "Mon, 10 Jun 2002 18:47:55 +0200",
"msg_from": "David Santinoli <u235@libero.it>",
"msg_from_op": true,
"msg_subject": "Mac OS X shutdown"
},
{
"msg_contents": "We've got an OSX machine set up now, however we haven't had time to look\ninto the problem yet.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of David Santinoli\n> Sent: Tuesday, 11 June 2002 12:48 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] Mac OS X shutdown\n>\n>\n>\n> Hello,\n> is there any news about the Mac OS X shutdown issue?\n> It was discussed in a few April-May/2002 messages with the Subject\n> \"Mac OS X: system shutdown prevents checkpoint\". In short, during a\n> regular system shutdown on Mac OS X the postmaster is not terminated\n> gracefully, leading to troubles at the successive startup.\n> All OS X release I know of, up to the latest one (10.1.5), are prone to\n> this inconvenient.\n>\n> Thanks,\n> David\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Wed, 12 Jun 2002 10:56:40 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Mac OS X shutdown"
}
] |
[
{
"msg_contents": "Hi ,\n\nI am extremely new to PostGreSql. If any one can please answer \nthis question of mine. I want to insert/update records into the \npostgres database through C or perl code. The only condition is \nthat it should be efficient. Can anybody tell me the difference \nbetween ecpg and libpq and which one should I work on for solving \nmy problem.\n\nThanks in advance.\nVikas.\n\n_________________________________________________________\nClick below to visit monsterindia.com and review jobs in India or \nAbroad\nhttp://monsterindia.rediff.com/jobs\n\n",
"msg_date": "10 Jun 2002 20:09:57 -0000",
"msg_from": "\"vikas p verma\" <vvicky72@rediffmail.com>",
"msg_from_op": true,
"msg_subject": "PostGres Doubt"
},
{
"msg_contents": "On Mon, Jun 10, 2002 at 08:09:57PM -0000, vikas p verma wrote:\n> this question of mine. I want to insert/update records into the \n> postgres database through C or perl code. The only condition is \n> that it should be efficient. Can anybody tell me the difference \n> between ecpg and libpq and which one should I work on for solving \n> my problem.\n\nBoth will work and both will be efficient. ecpg internally uses libpq to\ndo the real work but gives you the standard SQL commands without having\nto learn any new library API.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 12 Jun 2002 14:35:38 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: vikas p verma [mailto:vvicky72@rediffmail.com]\n> Sent: Monday, June 10, 2002 1:10 PM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] PostGres Doubt\n> \n> \n> Hi ,\n> \n> I am extremely new to PostGreSql. If any one can please answer \n> this question of mine. I want to insert/update records into the \n> postgres database through C or perl code. The only condition is \n> that it should be efficient. Can anybody tell me the difference \n> between ecpg and libpq and which one should I work on for solving \n> my problem.\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\nECPG is single threading. Hence, tools written in ECPG are a pain in\nthe neck if you want multiple threads of execution. I recommend against\nusing it for any purpose except porting a single threading project that\nalready uses embedded SQL. The embedded SQL interface for PostgreSQL is\na disaster.\n\nThe libpq functions are reentrant. These will be useful for just about\nany project.\n\nIf you are populating empty tables, then use the bulk copy interface.\nIt is orders of magnitude faster.\nIf you are going to completely replace the data in a table, drop the\ntable, create the table, and use the bulk copy interface.\n",
"msg_date": "Mon, 10 Jun 2002 14:08:22 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "Is libpq/PQconnectdb() reentrant? I've tried repeatedly over time and \nit seems to incur segfaults every single time.\n\n-d\n\nDann Corbit wrote:\n\n>The libpq functions are reentrant. These will be useful for just about\n>any project.\n> \n>\n\n",
"msg_date": "Mon, 10 Jun 2002 21:16:23 -0400",
"msg_from": "David Ford <david+cert@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "On Mon, 10 Jun 2002, Dann Corbit wrote:\n\n> If you are going to completely replace the data in a table, drop the\n> table, create the table, and use the bulk copy interface.\n\nActually, that's a bad habit to get into. Views disappear, as do triggers \nor constraints. Better to 'truncate table' or 'delete from table'. I \nknow, I had a bear of a time with a nightly drop table;create table;copy \ndata in script that I forgot about and built a nice new app on views. \nworked fine, came in the next morning, app was down... \n\n",
"msg_date": "Tue, 11 Jun 2002 10:10:50 -0600 (MDT)",
"msg_from": "Scott Marlowe <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "On Mon, Jun 10, 2002 at 02:08:22PM -0700, Dann Corbit wrote:\n> ECPG is single threading. Hence, tools written in ECPG are a pain in\n> the neck if you want multiple threads of execution. I recommend against\n\nDid he say he wants to write a multi-threaded app?\n\n> using it for any purpose except porting a single threading project that\n> already uses embedded SQL. The embedded SQL interface for PostgreSQL is\n> a disaster.\n\nOh, that's what I call constructive critizism. I cannot remember you\nfiling any bug reports or asking for some special features. Wouldn't\nthat be the first step? And not calling other people's work a disaster.\n\n> The libpq functions are reentrant. These will be useful for just about\n> any project.\n\nWell if they are (I never checked myself) it shouldn't be too difficult\nto make ecpg reentrant too.\n\n> If you are going to completely replace the data in a table, drop the\n> table, create the table, and use the bulk copy interface.\n\nOh great! Talking about valuable comments. Ever bothered to even ask if\nthey are using triggers, constraints, etc. before coming with such a\nproposal?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 12 Jun 2002 14:40:45 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "Good points; not sure why I didn't pick up on this too.\n\nI should point out that I've seen code with heavy Oracle-isms brought\ninto PostgreSQL using ecpg with amazingly few changes. It is a great\npiece of code; any large complaints should perhaps be directed at the\nSQL standards themselves...\n\n - Thomas\n",
"msg_date": "Wed, 12 Jun 2002 07:38:32 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
}
] |
[
{
"msg_contents": "Hi,\n\nWe are moving to Postgres from Oracle. We have a few tables that have around\n8 to 10 millions of rows and their size increases very rapidly(deletions are\nvery less on these tables). How will Postgres hanlde very big tables like\nthis? or would it be very slow when compared to Oracle? Do you have any case\nstudies in this regd?\n\nAlso anyone know of any perticular documentation/links that talks\nspecifically about \"migrating to Postgres from Oracle\"?, Please let me know\nif you have kind of document that would be of great use to us.\n\nThanks\nYuva\nSr. Java Developer\nhttp://www.ebates.com\nmailto:ychandolu@ebates.com\n",
"msg_date": "Mon, 10 Jun 2002 14:42:33 -0700",
"msg_from": "Yuva Chandolu <ychandolu@ebates.com>",
"msg_from_op": true,
"msg_subject": "Will postgress handle too big tables?"
},
{
"msg_contents": "\nYuva,\n\n> Also anyone know of any perticular documentation/links that talks\n> specifically about \"migrating to Postgres from Oracle\"?, Please let me know\n> if you have kind of document that would be of great use to us.\n\nPlease see Techdocs ( http://techdocs.postgresql.org/ ) for performance \nwhitepapers and Oracle migration tips.\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \tjosh@agliodbs.com\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n",
"msg_date": "Mon, 10 Jun 2002 15:43:02 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: Will postgress handle too big tables?"
},
{
"msg_contents": "On Mon, 10 Jun 2002, Yuva Chandolu wrote:\n\n> We are moving to Postgres from Oracle. We have a few tables that have around\n> 8 to 10 millions of rows and their size increases very rapidly(deletions are\n> very less on these tables). How will Postgres hanlde very big tables like\n> this?\n\nUh...\"what big tables?\" :-)\n\nHave a look back through the archives. I'm mucking about quite\nhappily with 500 million row tables, without much difficulty.\n\nI've found that my main barrier is disk I/O. If you're doing it on a\nlittle dual-IDE disk system as I am, things just ain't so fast. I'm\nhoping that in the next couple of weeks I get the go-ahead to put\ntogether a system with ten or so disks (based around a 3ware Escalade\nIDE RAID controller) that will make trillion-row-tables quite practical.\n\n> or would it be very slow when compared to Oracle? Do you have any case\n> studies in this regd?\n\nIt all depends entirely on the application. Really. Some applications\nwill work just as well on Postgres as they will on Oracle; others\nwill be almost impossible with Postgres.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Tue, 11 Jun 2002 14:45:27 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: Will postgress handle too big tables?"
},
{
"msg_contents": "Do you know any attempts to write native OLE DB provider for PostgreSQL (it\nwould give broader support for VS Net). I would like to write such provider\nand I want to know if sombody tried it before. Could somebody help me with\nprotocol issues (I have read Backend/Frontend Protocol and studied ODBC\ndriver) Are there any other interesting issues which aren not covered with\nit. I would like to know how could I implement precompiled statements. Is\nthere any way to send it without parameters to able backend to chache it for\nfuture use or it is not necessary. Are there any problems with large objects\n?\n\n",
"msg_date": "Tue, 11 Jun 2002 10:01:56 +0200",
"msg_from": "\"Marek Mosiewicz\" <marekmosiewicz@poczta.onet.pl>",
"msg_from_op": false,
"msg_subject": "PostgreSQL OLE DB Provider"
},
{
"msg_contents": "also, remember that for the cost of a single CPU oracle license you can \nbuild a crankin' postgresql server... memory and I/O are way more \nimportant than CPU power btw.\n\n",
"msg_date": "Tue, 11 Jun 2002 10:14:50 -0600 (MDT)",
"msg_from": "Scott Marlowe <scott.marlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: Will postgress handle too big tables?"
}
] |
[
{
"msg_contents": "Should the following piece of code cause an:\nERROR: <unnamed> referential integrity violation - key referenced\n from b not found in a\nOr should it work because the check is deferred and in the\nend no violations are present?\n\ncreate table a(ia int primary key);\ncreate table b(ia int references a initially deferred);\ninsert into a values (7);\nbegin;\ninsert into b values (-7);\nupdate b set ia=-ia where ia<0;\ncommit;\ndrop table a;\ndrop table b;\n\n-- \nSincerely, srb@cuci.nl\n Stephen R. van den Berg (AKA BuGless).\n\n\"-- hit any user to continue\"\n",
"msg_date": "Mon, 10 Jun 2002 23:58:33 +0200",
"msg_from": "srb@cuci.nl (Stephen R. van den Berg)",
"msg_from_op": true,
"msg_subject": "Referential integrity problem postgresql 7.2 ?"
},
{
"msg_contents": " >From billy Tue Jun 11 13:38:51 2002\n Date: Tue, 11 Jun 2002 10:54:27 -0700 (PDT)\n From: Stephan Szabo <sszabo@megazone23.bigpanda.com>\n Cc: <pgsql-bugs@postgresql.org>\n Sender: pgsql-bugs-owner@postgresql.org\n\n\n On Mon, 10 Jun 2002, Stephen R. van den Berg wrote:\n\n > Should the following piece of code cause an:\n > ERROR: <unnamed> referential integrity violation - key referenced\n > from b not found in a\n > Or should it work because the check is deferred and in the\n > end no violations are present?\n\n It should work (and does in current sources). If you look in the archives\n you should be able to get info on how to patch 7.2 (it came up recently,\n I'm not sure which list, and Tom Lane sent the message in question).\n\nI've verified that it does work in the current CVS checkout.\n\n--\nBilly O'Connor\n",
"msg_date": "Tue, 11 Jun 2002 13:41:40 -0400 (EDT)",
"msg_from": "Billy O'Connor <billy@oconnoronline.net>",
"msg_from_op": false,
"msg_subject": "Re: Referential integrity problem postgresql 7.2 ?"
},
{
"msg_contents": "\nOn Mon, 10 Jun 2002, Stephen R. van den Berg wrote:\n\n> Should the following piece of code cause an:\n> ERROR: <unnamed> referential integrity violation - key referenced\n> from b not found in a\n> Or should it work because the check is deferred and in the\n> end no violations are present?\n\nIt should work (and does in current sources). If you look in the archives\nyou should be able to get info on how to patch 7.2 (it came up recently,\nI'm not sure which list, and Tom Lane sent the message in question).\n\n",
"msg_date": "Tue, 11 Jun 2002 10:54:27 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Referential integrity problem postgresql 7.2 ?"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> On Mon, 10 Jun 2002, Stephen R. van den Berg wrote:\n>> ERROR: <unnamed> referential integrity violation - key referenced\n>> from b not found in a\n>> Or should it work because the check is deferred and in the\n>> end no violations are present?\n\n> It should work (and does in current sources). If you look in the archives\n> you should be able to get info on how to patch 7.2 (it came up recently,\n> I'm not sure which list, and Tom Lane sent the message in question).\n\nBTW, should we back-patch that into 7.2.*? I was resistant to the idea\nbecause of concern about lack of testing, but seeing that we've gotten\nseveral complaints maybe we should do it anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Jun 2002 14:43:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Referential integrity problem postgresql 7.2 ? "
},
{
"msg_contents": "On Tue, 11 Jun 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > On Mon, 10 Jun 2002, Stephen R. van den Berg wrote:\n> >> ERROR: <unnamed> referential integrity violation - key referenced\n> >> from b not found in a\n> >> Or should it work because the check is deferred and in the\n> >> end no violations are present?\n>\n> > It should work (and does in current sources). If you look in the archives\n> > you should be able to get info on how to patch 7.2 (it came up recently,\n> > I'm not sure which list, and Tom Lane sent the message in question).\n>\n> BTW, should we back-patch that into 7.2.*? I was resistant to the idea\n> because of concern about lack of testing, but seeing that we've gotten\n> several complaints maybe we should do it anyway.\n\nIf we're doing a 7.2.2, it may be worth it. I think that part of the patch\n(minus concerns about variables possibly not being reset, etc) is\nreasonably safe (and that part could be reasonably looked at again\nquickly) and did have some limited testing due to a couple of people\ngetting the patch back during 7.2's development.\n\nAs a related side note. The other part of the original patch (the NOT\nEXISTS in the upd/del no action trigger) was rejected. For match\nfull and match unspecified the same result can be reached by doing another\nquery which may be better than the subquery. Do you think that'd be\nbetter? I'd like to get the other side of this bug fixed so that at least\nthe no action cases work reasonably correctly. :)\n\n\n",
"msg_date": "Tue, 11 Jun 2002 12:05:42 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Referential integrity problem postgresql 7.2 ? "
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> As a related side note. The other part of the original patch (the NOT\n> EXISTS in the upd/del no action trigger) was rejected. For match\n> full and match unspecified the same result can be reached by doing another\n> query which may be better than the subquery. Do you think that'd be\n> better?\n\nNo opinion offhand; can you show examples of the alternatives you have\nin mind?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Jun 2002 15:27:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Referential integrity problem postgresql 7.2 ? "
},
{
"msg_contents": "On 2002.06.11 at 14:43:17 -0400, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > It should work (and does in current sources). If you look in the archives\n> > you should be able to get info on how to patch 7.2 (it came up recently,\n> > I'm not sure which list, and Tom Lane sent the message in question).\n> \n> BTW, should we back-patch that into 7.2.*? I was resistant to the idea\n\nI would appreciate this.\n\nI doubt that it would fix the problem with \n\nupdate sometable set a=a+1\n\nwhere there exists a unique index on sometable(a), but it would make\nPostgreSQL's behavior closer to standard SQL. \n\nFrom my (user) point of view, it is obviously a bugfix rather than an added\nfeature, so it has a right to appear in a 7.2.x release.\n\n> because of concern about lack of testing, but seeing that we've gotten\n> several complaints maybe we should do it anyway.\n-- \nVictor Wagner\t\t\tvitus@ice.ru\nChief Technical Officer\t\tOffice:7-(095)-748-53-88\nCommuniware.Net \t\tHome: 7-(095)-135-46-61\nhttp://www.communiware.net http://www.ice.ru/~vitus\n",
"msg_date": "Wed, 12 Jun 2002 00:13:11 +0400",
"msg_from": "Victor Wagner <vitus@ice.ru>",
"msg_from_op": false,
"msg_subject": "Re: Referential integrity problem postgresql 7.2 ?"
},
{
"msg_contents": "On Tue, 11 Jun 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > As a related side note. The other part of the original patch (the NOT\n> > EXISTS in the upd/del no action trigger) was rejected. For match\n> > full and match unspecified the same result can be reached by doing another\n> > query which may be better than the subquery. Do you think that'd be\n> > better?\n>\n> No opinion offhand; can you show examples of the alternatives you have\n> in mind?\n\n[guessing that -bugs is probably not appropriate anymore, moving to\n-hackers]\n\nAn additional query of the form...\nSELECT 1 FROM ONLY <pktable> WHERE pkatt=<keyval1> [AND ...]\n\nto the upd/del no action triggers. Right now in either deferred\nconstraints or when multiple statements are run in a function\nwe can sometimes raise an error where there shouldn't be one\nif a pk row is modified and a new pk row that has the old values\nis added. The above should catch this (and in fact the first versions\nof the patch that I did which were only sent to a couple of people\nwho were having problems did exactly that). When I did the\nlater patch, I changed it to a NOT EXISTS() subquery because\nfor match partial, the new row might not need to exactly match,\nbut the details of how it needs to match are based on what\nmatching rows there are in the fk table. I'm not sure in general\nhow else (apart from doing a lower level scan of the table) how\nto tell if another unrelated row with the same values has been\nadded to the table between the point of the action that caused\nthis trigger to be added to the queue and the point the trigger\nruns.\n\n",
"msg_date": "Tue, 11 Jun 2002 13:52:21 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Referential integrity problem postgresql 7.2 ? "
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Josh Berkus [mailto:josh@agliodbs.com]\n> Sent: Monday, June 10, 2002 3:42 PM\n> To: Tom Lane; Manfred Koizar\n> Cc: Christoph Haller; pgsql-sql@postgresql.org;\n> pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] [SQL] Efficient DELETE Strategies\n> \n> Tom,\n> \n> > >> If so, what's their syntax?\n> > \n> > > MSSQL seems to guess what the user wants.\n> > \n> > Gack. Nothing like treating mindless syntax variations as \n> a \"feature\"\n> > list...\n> \n> I vote that we stick to a strick SQL92 interpretation, here. \n> 1) It's standard\n> 2) Strict syntax on DELETE statements is better.\n> \n> Personally, I would *not* want the database to \"guess what I \n> want\" in a delete \n> statement; it might guess wrong and there go my records ...\n> \n> Heck, one of the things I need to research how to turn off in \n> PostgreSQL is \n> the \"Add missing FROM-clause\" feature, which has tripped me \n> up many times. \n\nAgree strongly.\n\nI would be very annoyed at any database system that guesses about what I\nmight want. It might guess wrong and cause enormous damage. It does\nnot have to be an update or delete for this damage to occur. It could\nbe a report that financial decisions were based upon. If someone does\nget the PostgreSQL group to alter incoming statements, surely this\ndeserves *AT LEAST* a powerful warning message.\n",
"msg_date": "Mon, 10 Jun 2002 16:08:03 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Efficient DELETE Strategies"
}
] |
[
{
"msg_contents": "OK, I know this has been long delayed, but I've finished some work on the \nabove. The coster is actually doing a fairly good job. I only received \none submission from someone with data that replicated the problem, and \nwas myself hard pressed to replicate the situation. It's more-or-less a \nfencepost error. I don't have the expertise to figure out how to make \nthe coster more determinate in these types of situations. However, as \nsome suggested, the practice of storing actual run data from query plans \n(esp. when using precompiled and/or stored queries) would probably help \neliminate these by adding another weight factor (i.e. last time we did \nthis it took X amount of time, and we estimated Y, so let's try it this \nway instead).\n\nUnfortunately I'm a bit too pressed for time looking for a job to \ncontinue pursuing this research any further.\n\nMichael Loftis\n\n",
"msg_date": "Mon, 10 Jun 2002 20:00:33 -0700",
"msg_from": "Michael Loftis <mloftis@wgops.com>",
"msg_from_op": true,
"msg_subject": "PG Index<->seqscan problems..."
}
] |
[
{
"msg_contents": "Are you using crypt on the connection?\n\nUnfortunately, crypt is not reentrant.\n\n> -----Original Message-----\n> From: David Ford [mailto:david+cert@blue-labs.org]\n> Sent: Monday, June 10, 2002 6:16 PM\n> To: Dann Corbit\n> Cc: vikas p verma; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] PostGres Doubt\n> \n> \n> Is libpq/PQconnectdb() reentrant? I've tried repeatedly over \n> time and \n> it seems to incur segfaults every single time.\n> \n> -d\n> \n> Dann Corbit wrote:\n> \n> >The libpq functions are reentrant. These will be useful for \n> just about\n> >any project.\n> > \n> >\n> \n> \n",
"msg_date": "Tue, 11 Jun 2002 00:59:24 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "I.e. \"md5\" in pg_hba.conf? This is rather disappointing.\n\nHere are a few references:\n\nhttp://lists.initd.org/pipermail/psycopg/2002-January/000673.html\nhttp://lists.initd.org/pipermail/psycopg/2002-January/000674.html\nhttp://archives.postgresql.org/pgsql-bugs/2002-03/msg00048.php\n\nAnd finally..\nhttp://archives.postgresql.org/pgsql-bugs/2002-01/msg00153.php\n\nSo reentrancy in libpq basically is put on hold until 7.3.\n\nDavid\n\nDann Corbit wrote:\n\n>Are you using crypt on the connection?\n>\n>Unfortunately, crypt is not reentrant.\n>\n> \n>\n>>-----Original Message-----\n>>From: David Ford [mailto:david+cert@blue-labs.org]\n>>Sent: Monday, June 10, 2002 6:16 PM\n>>To: Dann Corbit\n>>Cc: vikas p verma; pgsql-hackers@postgresql.org\n>>Subject: Re: [HACKERS] PostGres Doubt\n>>\n>>\n>>Is libpq/PQconnectdb() reentrant? I've tried repeatedly over \n>>time and \n>>it seems to incur segfaults every single time.\n>>\n>>-d\n>>\n>>Dann Corbit wrote:\n>>\n>> \n>>\n>>>The libpq functions are reentrant. These will be useful for \n>>> \n>>>\n>>just about\n>> \n>>\n>>>any project.\n>>> \n>>>\n>>> \n>>>\n>> \n>>\n\n",
"msg_date": "Wed, 12 Jun 2002 12:21:48 -0400",
"msg_from": "David Ford <david+cert@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "David Ford <david+cert@blue-labs.org> writes:\n> So reentrancy in libpq basically is put on hold until 7.3.\n\nOnly if you insist on using \"crypt\", which is deprecated anyway.\nmd5 is the preferred encryption method.\n\nMy feeling about the proposed patch was that crypt is now a legacy auth\nmethod, and it's not clear that we should create platform/library\ndependencies just to support making multiple connections simultaneously\nunder crypt auth. (Note that *using* connections concurrently is not\nat issue, only whether you can execute the authentication phase of\nstartup concurrently.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jun 2002 13:38:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt "
},
{
"msg_contents": "On Wed, 2002-06-12 at 19:38, Tom Lane wrote:\n> David Ford <david+cert@blue-labs.org> writes:\n> > So reentrancy in libpq basically is put on hold until 7.3.\n> \n> Only if you insist on using \"crypt\", which is deprecated anyway.\n> md5 is the preferred encryption method.\n> \n> My feeling about the proposed patch was that crypt is now a legacy auth\n> method, and it's not clear that we should create platform/library\n> dependencies just to support making multiple connections simultaneously\n> under crypt auth. (Note that *using* connections concurrently is not\n> at issue, only whether you can execute the authentication phase of\n> startup concurrently.)\n\ncan't this be solved by simple locking ?\n\nI know that postgres team can do locking properly ;)\n\n--------------\nHannu\n\n",
"msg_date": "13 Jun 2002 12:20:18 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "I'm using md5 in pg_hba.conf. That is the method, no?\n\nI'm writing a milter application which instantiates a private resource \nfor each thread upon thread startup. I have priv->conn which I \nestablish as priv->conn=PQconnectdb(connstr), connstr is const char \n*connstr=\"host=10.0.0.5 dbname=bmilter user=username password=password\";\n\nIt segfaults depending on its mood but it tends to happen about 50-70% \nof the time. I switched to PQsetdbLogin() which has worked perfectly. \n I don't really want to use that however, I would much prefer using my \nconnstr.\n\nAm I missing something?\n\nThanks,\nDavid\n\nTom Lane wrote:\n\n>David Ford <david+cert@blue-labs.org> writes:\n> \n>\n>>So reentrancy in libpq basically is put on hold until 7.3.\n>> \n>>\n>\n>Only if you insist on using \"crypt\", which is deprecated anyway.\n>md5 is the preferred encryption method.\n>\n>My feeling about the proposed patch was that crypt is now a legacy auth\n>method, and it's not clear that we should create platform/library\n>dependencies just to support making multiple connections simultaneously\n>under crypt auth. (Note that *using* connections concurrently is not\n>at issue, only whether you can execute the authentication phase of\n>startup concurrently.)\n>\n>\t\t\tregards, tom lane\n> \n>\n\n",
"msg_date": "Thu, 13 Jun 2002 19:46:16 -0400",
"msg_from": "David Ford <david+cert@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "David Ford wrote:\n> I'm using md5 in pg_hba.conf. That is the method, no?\n> \n> I'm writing a milter application which instantiates a private resource \n> for each thread upon thread startup. I have priv->conn which I \n> establish as priv->conn=PQconnectdb(connstr), connstr is const char \n> *connstr=\"host=10.0.0.5 dbname=bmilter user=username password=password\";\n> \n> It segfaults depending on it's mood but it tends to happen about 50-70% \n> of the time. I switched to PQsetdbLogin() which has worked perfectly. \n> I don't really want to use that however, I would much prefer using my \n> connstr.\n\nWow, I am confused. md5 should be fine. Certainly sounds like there is\na thread problem with PQconnectdb(). Are you using 7.2.X?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jun 2002 22:45:25 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "David Ford <david+cert@blue-labs.org> writes:\n> I'm using md5 in pg_hba.conf. That is the method, no?\n> I'm writing a milter application which instantiates a private resource \n> for each thread upon thread startup. I have priv->conn which I \n> establish as priv->conn=PQconnectdb(connstr), connstr is const char \n> *connstr=\"host=10.0.0.5 dbname=bmilter user=username password=password\";\n\n> It segfaults depending on it's mood but it tends to happen about 50-70% \n> of the time.\n\nCould you dig out ye olde gdb and figure out *why* it's segfaulting?\nAt the very least, give us a stack backtrace from a debug-enabled build.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jun 2002 23:17:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt "
},
{
"msg_contents": "pg_auth=# select version();\n version \n------------------------------------------------------------\n PostgreSQL 7.2 on i686-pc-linux-gnu, compiled by GCC 3.0.2\n\nWhich btw has a curious grant/revoke bug. create foo; grant select on \nfoo to bar; results in all rights being granted. You must revoke and \ngrant again in order to get the correct rights set.\n\nIf this rights bug has been fixed, I'll upgrade, but I don't consider it \na big problem since I am well aware of the bug.\n\nDavid\n\nBruce Momjian wrote:\n\n>David Ford wrote:\n> \n>\n>>I'm using md5 in pg_hba.conf. That is the method, no?\n>>\n>>I'm writing a milter application which instantiates a private resource \n>>for each thread upon thread startup. I have priv->conn which I \n>>establish as priv->conn=PQconnectdb(connstr), connstr is const char \n>>*connstr=\"host=10.0.0.5 dbname=bmilter user=username password=password\";\n>>\n>>It segfaults depending on it's mood but it tends to happen about 50-70% \n>>of the time. I switched to PQsetdbLogin() which has worked perfectly. \n>> I don't really want to use that however, I would much prefer using my \n>>connstr.\n>> \n>>\n>\n>Wow, I am confused. md5 should be fine. Certainly sounds like there is\n>a thread problem with PQconnectdb(). Are you using 7.2.X?\n>\n> \n>\n\n",
"msg_date": "Fri, 14 Jun 2002 09:19:03 -0400",
"msg_from": "David Ford <david+cert@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "David Ford <david+cert@blue-labs.org> writes:\n> Which btw has a curious grant/revoke bug. create foo; grant select on \n> foo to bar; results in all rights being granted. You must revoke and \n> grant again in order to get the correct rights set.\n\nI see no bug.\n\ntest72=# select version();\n version\n---------------------------------------------------------------\n PostgreSQL 7.2.1 on hppa-hp-hpux10.20, compiled by GCC 2.95.3\n(1 row)\n\ntest72=# create user bar;\nCREATE USER\ntest72=# create table foo (f1 int);\nCREATE\ntest72=# grant select on foo to bar;\nGRANT\ntest72=# \\z foo\nAccess privileges for database \"test72\"\n Table | Access privileges\n-------+----------------------------\n foo | {=,postgres=arwdRxt,bar=r}\n(1 row)\n\ntest72=#\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jun 2002 09:55:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt "
},
{
"msg_contents": "My apologies, I was too brief in my example:\n\nheakin=> create table interviewers ( interviewer varchar );\nCREATE\nheakin=> insert into interviewers values ('Ryan');\nINSERT 932846 1\nheakin=> select * from interviewers ;\n interviewer\n-------------\n Ryan\n(1 row)\n\nheakin=> \\z\nAccess privileges for database \"heakin\"\n Table | Access privileges\n-------------------+-------------------\n clients | {=,heakin=arwd}\n completed_surveys | {=,heakin=arwd}\n interviewers |\n respondents | {=,heakin=arwd}\n users | {=,heakin=ar}\n(5 rows)\n\nheakin=> grant select,insert,update on interviewers to heakin;\nGRANT\nheakin=> \\z\nAccess privileges for database \"heakin\"\n Table | Access privileges \n-------------------+--------------------\n clients | {=,heakin=arwd}\n completed_surveys | {=,heakin=arwd}\n interviewers | {=,heakin=arwdRxt}\n respondents | {=,heakin=arwd}\n users | {=,heakin=ar}\n(5 rows)\n\nheakin=> revoke all on interviewers from heakin;\nREVOKE\nheakin=> \\z\nAccess privileges for database \"heakin\"\n Table | Access privileges\n-------------------+-------------------\n clients | {=,heakin=arwd}\n completed_surveys | {=,heakin=arwd}\n interviewers | {=}\n respondents | {=,heakin=arwd}\n users | {=,heakin=ar}\n(5 rows)\n\nheakin=> grant select,insert,update on interviewers to heakin;\nGRANT\nheakin=> \\z\nAccess privileges for database \"heakin\"\n Table | Access privileges\n-------------------+-------------------\n clients | {=,heakin=arwd}\n completed_surveys | {=,heakin=arwd}\n interviewers | {=,heakin=arw}\n respondents | {=,heakin=arwd}\n users | {=,heakin=ar}\n(5 rows)\n\nDavid\n\nTom Lane wrote:\n\n>David Ford <david+cert@blue-labs.org> writes:\n> \n>\n>>Which btw has a curious grant/revoke bug. create foo; grant select on \n>>foo to bar; results in all rights being granted. You must revoke and \n>>grant again in order to get the correct rights set.\n>> \n>>\n>\n>I see no bug.\n>\n>test72=# select version();\n> version\n>---------------------------------------------------------------\n> PostgreSQL 7.2.1 on hppa-hp-hpux10.20, compiled by GCC 2.95.3\n>(1 row)\n>\n>test72=# create user bar;\n>CREATE USER\n>test72=# create table foo (f1 int);\n>CREATE\n>test72=# grant select on foo to bar;\n>GRANT\n>test72=# \\z foo\n>Access privileges for database \"test72\"\n> Table | Access privileges\n>-------+----------------------------\n> foo | {=,postgres=arwdRxt,bar=r}\n>(1 row)\n>\n>test72=#\n>\n>\t\t\tregards, tom lane\n> \n>\n\n",
"msg_date": "Mon, 17 Jun 2002 13:30:06 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "My apologies, I was too brief in my example:\n\nheakin=> create table interviewers ( interviewer varchar );\nCREATE\nheakin=> insert into interviewers values ('Ryan');\nINSERT 932846 1\nheakin=> select * from interviewers ;\n interviewer\n-------------\n Ryan\n(1 row)\n\nheakin=> \\z\nAccess privileges for database \"heakin\"\n Table | Access privileges\n-------------------+-------------------\n clients | {=,heakin=arwd}\n completed_surveys | {=,heakin=arwd}\n interviewers |\n respondents | {=,heakin=arwd}\n users | {=,heakin=ar}\n(5 rows)\n\nheakin=> grant select,insert,update on interviewers to heakin;\nGRANT\nheakin=> \\z\nAccess privileges for database \"heakin\"\n Table | Access privileges\n-------------------+--------------------\n clients | {=,heakin=arwd}\n completed_surveys | {=,heakin=arwd}\n interviewers | {=,heakin=arwdRxt}\n respondents | {=,heakin=arwd}\n users | {=,heakin=ar}\n(5 rows)\n\nheakin=> revoke all on interviewers from heakin;\nREVOKE\nheakin=> \\z\nAccess privileges for database \"heakin\"\n Table | Access privileges\n-------------------+-------------------\n clients | {=,heakin=arwd}\n completed_surveys | {=,heakin=arwd}\n interviewers | {=}\n respondents | {=,heakin=arwd}\n users | {=,heakin=ar}\n(5 rows)\n\nheakin=> grant select,insert,update on interviewers to heakin;\nGRANT\nheakin=> \\z\nAccess privileges for database \"heakin\"\n Table | Access privileges\n-------------------+-------------------\n clients | {=,heakin=arwd}\n completed_surveys | {=,heakin=arwd}\n interviewers | {=,heakin=arw}\n respondents | {=,heakin=arwd}\n users | {=,heakin=ar}\n(5 rows)\n\nDavid\n\nTom Lane wrote:\n\n >David Ford <david+cert@blue-labs.org> writes:\n >\n >\n >>Which btw has a curious grant/revoke bug. create foo; grant select on\n >>foo to bar; results in all rights being granted. You must revoke and\n >>grant again in order to get the correct rights set.\n >>\n >>\n >\n >I see no bug.\n >\n >test72=# select version();\n > version\n >---------------------------------------------------------------\n > PostgreSQL 7.2.1 on hppa-hp-hpux10.20, compiled by GCC 2.95.3\n >(1 row)\n >\n >test72=# create user bar;\n >CREATE USER\n >test72=# create table foo (f1 int);\n >CREATE\n >test72=# grant select on foo to bar;\n >GRANT\n >test72=# \\z foo\n >Access privileges for database \"test72\"\n > Table | Access privileges\n >-------+----------------------------\n > foo | {=,postgres=arwdRxt,bar=r}\n >(1 row)\n >\n >test72=#\n >\n > \t\t\tregards, tom lane\n >\n >\n\n\n",
"msg_date": "Tue, 18 Jun 2002 00:38:27 -0400",
"msg_from": "David Ford <david+cert@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "David Ford <david@blue-labs.org> writes:\n> heakin=> \\z\n> Access privileges for database \"heakin\"\n> Table | Access privileges\n> -------------------+-------------------\n> interviewers |\n\n> heakin=> grant select,insert,update on interviewers to heakin;\n> GRANT\n> heakin=> \\z\n> Access privileges for database \"heakin\"\n> Table | Access privileges \n> -------------------+--------------------\n> interviewers | {=,heakin=arwdRxt}\n\nI take it heakin is the owner of the table in question. As such,\nhe implicitly has all privileges --- the initial null privilege list\nis a shorthand for what you see explicitly in the second case.\n\nThe GRANT man page in current development sources has an example about\nthis; see the Notes section of\nhttp://developer.postgresql.org/docs/postgres/sql-grant.html\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 10:39:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt "
},
{
"msg_contents": "Gotcha. 'twas the first time I encountered it, I wasn't expecting it.\n\nThank you for the clarification. I hadn't paid attention to that \nparagraph when I read over it.\n\nDavid\n\nTom Lane wrote:\n\n>David Ford <david@blue-labs.org> writes:\n> \n>\n>>heakin=> \\z\n>>Access privileges for database \"heakin\"\n>> Table | Access privileges\n>>-------------------+-------------------\n>> interviewers |\n>> \n>>\n>\n> \n>\n>>heakin=> grant select,insert,update on interviewers to heakin;\n>>GRANT\n>>heakin=> \\z\n>>Access privileges for database \"heakin\"\n>> Table | Access privileges \n>>-------------------+--------------------\n>> interviewers | {=,heakin=arwdRxt}\n>> \n>>\n>\n>I take it heakin is the owner of the table in question. As such,\n>he implicitly has all privileges --- the initial null privilege list\n>is a shorthand for what you see explicitly in the second case.\n>\n>The GRANT man page in current development sources has an example about\n>this; see the Notes section of\n>http://developer.postgresql.org/docs/postgres/sql-grant.html\n>\n>\t\t\tregards, tom lane\n> \n>\n\n",
"msg_date": "Tue, 18 Jun 2002 11:58:53 -0400",
"msg_from": "David Ford <david+cert@blue-labs.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
}
] |
[
{
"msg_contents": "Bruce, this error and the one in your earlier post are not indicative\nof the bug, but rather of the connection failing - looking at the\ncreated ecpgdebug file should confirm this.\n\nI have since compiled 7.3 with the patch locally and cannot recreate\nthe bug (after messing around with the HBA cfg file - I was getting\nthe same error as you).\n\nMy command line (with 7.3 sitting in /database/pgsql-test on port 5433\nand LD_LIBRARY_PATH setup):\n\n /database/pgsql-test/bin/ecpg insert-float.pgc\n gcc insert-float.c -I/database/pgsql-test/include -L/database/pgsql-test/lib -lecpg -lpq\n ./a.out floattest@localhost:5433\n\nRegards, Lee Kindness.\n\nBruce Momjian writes:\n > I am now getting this error:\n > \t#$ ./a.out floattest\n > \tcol1: -0.000006\n > \t*!*!* Error -220: No such connection NULL in line 21.\n > I will wait for Michael to comment on this.\n > \n > ---------------------------------------------------------------------------\n > \n > Lee Kindness wrote:\n > > Lee Kindness writes:\n > > > and the NULL goes... bang! I guess the '-' wasn't factored in and 21\n > > > bytes would be enough. Patch against current CVS (but untested):\n > > \n > > Ooops, a context diff is below...\n > > \n > > Index: src/interfaces/ecpg/lib/execute.c\n > > ===================================================================\n > > RCS file: /projects/cvsroot/pgsql/src/interfaces/ecpg/lib/execute.c,v\n > > retrieving revision 1.36\n > > diff -c -r1.36 execute.c\n > > *** src/interfaces/ecpg/lib/execute.c\t2002/01/13 08:52:08\t1.36\n > > --- src/interfaces/ecpg/lib/execute.c\t2002/06/11 11:45:35\n > > ***************\n > > *** 700,706 ****\n > > \t\t\t\tbreak;\n > > #endif /* HAVE_LONG_LONG_INT_64 */\n > > \t\t\tcase ECPGt_float:\n > > ! \t\t\t\tif (!(mallocedval = ECPGalloc(var->arrsize * 20, stmt->lineno)))\n > > \t\t\t\t\treturn false;\n > > \n > > \t\t\t\tif (var->arrsize > 1)\n > > --- 700,706 ----\n > > \t\t\t\tbreak;\n > > #endif /* HAVE_LONG_LONG_INT_64 */\n > > \t\t\tcase ECPGt_float:\n > > ! \t\t\t\tif (!(mallocedval = ECPGalloc(var->arrsize * 21, stmt->lineno)))\n > > \t\t\t\t\treturn false;\n > > \n > > \t\t\t\tif (var->arrsize > 1)\n > > ***************\n > > *** 720,726 ****\n > > \t\t\t\tbreak;\n > > \n > > \t\t\tcase ECPGt_double:\n > > ! \t\t\t\tif (!(mallocedval = ECPGalloc(var->arrsize * 20, stmt->lineno)))\n > > \t\t\t\t\treturn false;\n > > \n > > \t\t\t\tif (var->arrsize > 1)\n > > --- 720,726 ----\n > > \t\t\t\tbreak;\n > > \n > > \t\t\tcase ECPGt_double:\n > > ! \t\t\t\tif (!(mallocedval = ECPGalloc(var->arrsize * 21, stmt->lineno)))\n > > \t\t\t\t\treturn false;\n > > \n > > \t\t\t\tif (var->arrsize > 1)\n",
"msg_date": "Tue, 11 Jun 2002 14:30:05 +0100",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: [BUGS] Bug #640: ECPG: inserting float numbers"
}
] |
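An aside on the buffer arithmetic in the ECPG thread above: the 20-vs-21 sizing is easy to check outside of C. Assuming ECPG renders doubles with a `%.14g`-style conversion (an assumption; the thread only shows the allocation sizes, not the format string), the most negative double needs exactly 21 characters, which is why the leading minus sign overflowed the old `* 20` allocation:

```python
import sys

def formatted_len(x):
    # Length of a double rendered with 14 significant digits, the
    # "%.14g"-style conversion assumed here for ECPG's float/double cases.
    return len("%.14g" % x)

# 14 significant digits -> "1.7976931348623" (15 chars) + "e+308" (5 chars)
# + the leading "-" = 21 characters, one more than the old allocation.
most_negative = -sys.float_info.max   # -1.7976931348623157e+308
print(formatted_len(most_negative))   # 21
```

The positive extreme fits in 20 characters; only the sign pushes it to 21, matching Lee's observation that "the '-' wasn't factored in".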
[
{
"msg_contents": "Hello together\n\ni've seen a lot of discussion about a native win32/OS2/BEOS port of\nPostgreSQL.\n\nDuring the last months i've ported PostgreSQL over to Novell NetWare\nand i've\nchanged the code so that I use pthreads instead of fork() now.\n\nI had a lot of work with the variables and cleanup but major parts are\ndone.\n\nI would appreciate it if we could combine this work.\n\nMy plan was to finish this port, discuss the port with other people and\noffer all the work\nto the PostgreSQL source tree, but now i'm jumping in here because of\nall the discussions.\n\nWhat i've done in detail:\n- i've defined #USE_PTHREADS in pg_config.h to differentiate between\nthe forked and the\nthreaded backend.\n- I've added several parts in postmaster.c so all functions are based\non pthreads now.\n- I've changed the signal handling because signals are process based\n- I've changed code in ipc.c to have a clean shutdown of threads\n- I've written some functions to switch the global variables. The\nglobals are controlled with\nPOSIX semaphores.\n- I've written a new implementation of shared memory and semaphores.\nWith pthreads I don't\nneed real shared memory any more and i'm using POSIX semaphores now.\n- Several minor changes.\n\nThere is still some more work to do like fixing memory leaks or\nhandling bad situations, but in general it's\nfunctional on NetWare.\n\nBTW: Is it possible to add some lines on the PostgreSQL webpage that\nthere is a first beta of\nPostgreSQL for NetWare available and to offer a binary download for the\nNetWare version?\n\nUlrich Neumann\n\n\n----------------------------------\n This e-mail is virus scanned\n Diese e-mail ist virusgeprueft\n\n",
"msg_date": "Tue, 11 Jun 2002 16:19:21 +0200",
"msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "Native Win32/OS2/BeOS/NetWare ports"
},
{
"msg_contents": "> Hello together\n>\n> i've seen a lot of discussion about a native win32/OS2/BEOS port of\n> PostgreSQL.\n>\n> During the last months i've ported PostgreSQL over to Novell NetWare\n> and i've\n> changed the code so that I use pthreads instead of fork() now.\n>\n> I had a lot of work with the variables and cleanup but major parts are\n> done.\n>\n> I would appreciate it if we could combine this work.\n\nVery nice... I have patches for QNX6 which also involved redoing shared\nmemory and semaphores stuff. It would make very good sense to integrate,\nespecially since you managed to do something very close to what I wanted :)\n\n> My plan was to finish this port, discuss the port with other people and\n> offer all the work\n> to the PostgreSQL source tree, but now i'm jumping in here because of\n> all the discussions.\n>\n> What i've done in detail:\n> - i've defined #USE_PTHREADS in pg_config.h to differentiate between\n> the forked and the\n> threaded backend.\n> - I've added several parts in postmaster.c so all functions are based\n> on pthreads now.\n> - I've changed the signal handling because signals are process based\n\nCareful here. On certain systems (on many, I suspect) POSIX semantics for\nsignals is NOT the default. Enforcing POSIX semantics requires certain compile-time\nswitches which will also change the behavior of various functions.\n\n> - I've changed code in ipc.c to have a clean shutdown of threads\n> - I've written some functions to switch the global variables. The\n> globals are controlled with\n> POSIX semaphores.\n> - I've written a new implementation of shared memory and semaphores.\n> With pthreads I don't\n> need real shared memory any more and i'm using POSIX semaphores now\n\nPOSIX semaphores for what? I assume by the context that you're talking about\nreplacing SysV semaphores which are used to control access to shared memory.\nIf that is the case, POSIX semaphores are not the best choice really. 
POSIX\nmutexes would be okay, but on SMP systems spinlocks (hardware TAS-based\nmacros or POSIX spinlocks) would probably be better anyway. Note that on\nmost platforms spinlocks are used for that and SysV semaphores were just a\n'last resort' which had unacceptable performance and so I guess it was not\nused at all.\n\nDo you have your patch somewhere online?\n\n-- igor\n\n\n",
"msg_date": "Tue, 11 Jun 2002 13:14:58 -0500",
"msg_from": "\"Igor Kovalenko\" <Igor.Kovalenko@motorola.com>",
"msg_from_op": false,
"msg_subject": "Re: Native Win32/OS2/BeOS/NetWare ports"
}
] |
[
{
"msg_contents": "If you double-alias a column in a query (yeah, stupid, I know, but I did \nit by mistake and others will too!), then the dreaded \"fmgr_info: \nfunction <number>: cache lookup failed\" message is kicked out. For example:\n\n select * from company c, references r where r.company_id=c.company.id;\n\nNote that c.company.id references column id in table company twice!\n\nHope that this finds someone looking at the error handling in the \nparser! Should be chucked out as a syntax error.\n\nBrad\n\n",
"msg_date": "Tue, 11 Jun 2002 16:36:45 +0100",
"msg_from": "Bradley Kieser <brad@kieser.net>",
"msg_from_op": true,
"msg_subject": "Bug found: fmgr_info: function <number>: cache lookup failed"
},
{
"msg_contents": "Bradley Kieser <brad@kieser.net> writes:\n> If you double-alias a column in a query (yeah, stupid, I know, but I did \n> it by mistake and others will too!), then the dreaded \"fmgr_info: \n> function <number>: cache lookup failed\" message is kicked out. For example:\n\n> select * from company c, references r where r.company_id=c.company.id;\n\nCan you provide a *complete* example? Also, what version are you using?\nI tried this in 7.2.1:\n\ntest72=# create table company (id int);\nCREATE\ntest72=# create table refs(company_id int);\nCREATE\ntest72=# select * from company c, refs r where r.company_id=c.company.id;\nERROR: No such attribute or function 'company'\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Jun 2002 12:00:56 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug found: fmgr_info: function <number>: cache lookup failed "
}
] |
[
{
"msg_contents": "I've just committed changes which implement three SQL99 functions and\noperators. OVERLAY() allows substituting a string into another string,\nSIMILAR TO is an operator for pattern matching, and a new variant of\nSUBSTRING() accepts a pattern to match.\n\nRegression tests have been augmented and pass. Docs have been updated.\nThe system catalogs were updated, so it is initdb time. Details from the\ncvs log below...\n\n - Thomas\n\nImplement SQL99 OVERLAY(). Allows substitution of a substring in a\nstring.\nImplement SQL99 SIMILAR TO as a synonym for our existing operator \"~\".\nImplement SQL99 regular expression SUBSTRING(string FROM pat FOR\nescape).\n Extend the definition to make the FOR clause optional.\n Define textregexsubstr() to actually implement this feature.\nUpdate the regression test to include these new string features.\n All tests pass.\nRename the regular expression support routines from \"pg95_xxx\" to\n\"pg_xxx\".\nDefine CREATE CHARACTER SET in the parser per SQL99. No implementation\nyet.\n",
"msg_date": "Tue, 11 Jun 2002 08:49:49 -0700",
"msg_from": "Thomas Lockhart <thomas@pgsql.com>",
"msg_from_op": true,
"msg_subject": "New string functions; initdb required"
},
{
"msg_contents": "Thomas,\n\n> I've just committed changes which implement three SQL99 functions and\n> operators. OVERLAY() allows substituting a string into another string,\n> SIMILAR TO is an operator for pattern matching, and a new variant of\n> SUBSTRING() accepts a pattern to match.\n\nWay cool! Thank you ... this replaces several of my custom PL/pgSQL \nfunctions.\n\nHow is SIMILAR TO different from ~ ?\n\n-- \n-Josh Berkus\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology \tjosh@agliodbs.com\n and data management solutions \t(415) 565-7293\n for law firms, small businesses \t fax 621-2533\n and non-profit organizations. \tSan Francisco\n\n",
"msg_date": "Tue, 11 Jun 2002 11:08:11 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: New string functions; initdb required"
},
{
"msg_contents": "On Tue, Jun 11, 2002 at 11:08:11AM -0700, Josh Berkus wrote:\n> Thomas,\n> \n> > I've just committed changes which implement three SQL99 functions and\n> > operators. OVERLAY() allows substituting a string into another string,\n> > SIMILAR TO is an operator for pattern matching, and a new variant of\n> > SUBSTRING() accepts a pattern to match.\n> \n> Way cool! Thank you ... this replaces several of my custom PL/pgSQL \n> functions.\n> \n> How is SIMILAR TO different from ~ ?\n\n From the part of Thomas's email you snipped:\n\n Implement SQL99 SIMILAR TO as a synonym for our existing operator \"~\".\n\nSo the answer is \"not at all\"\n\nRoss\n",
"msg_date": "Tue, 11 Jun 2002 16:01:46 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: New string functions; initdb required"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> I've just committed changes which implement three SQL99 functions and\n> operators. OVERLAY() allows substituting a string into another string,\n> SIMILAR TO is an operator for pattern matching, and a new variant of\n\nTODO item marked as done:\n\n\t* -Add SIMILAR TO to allow character classes, 'pg_[a-c]%'\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 Jun 2002 17:27:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New string functions; initdb required"
},
{
"msg_contents": "> TODO item marked as done:\n> * -Add SIMILAR TO to allow character classes, 'pg_[a-c]%'\n\nDarn. Will have to be more careful next time ;)\n\n - Thomas\n",
"msg_date": "Tue, 11 Jun 2002 14:51:16 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: New string functions; initdb required"
},
{
"msg_contents": "> > How is SIMILAR TO different from ~ ?\n> >From the part of Thomas's email you snipped:\n> Implement SQL99 SIMILAR TO as a synonym for our existing operator \"~\".\n> So the answer is \"not at all\"\n\nRight. I'm not certain about the regex syntax defined by SQL99; I used\nthe syntax that we already have enabled and it looks like we have a\ncouple of other variants available if we need them. If someone wants to\nresearch the *actual* syntax specified by SQL99 that would be good...\n\n - Thomas\n",
"msg_date": "Tue, 11 Jun 2002 14:53:14 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: New string functions; initdb required"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> Right. I'm not certain about the regex syntax defined by SQL99; I used\n> the syntax that we already have enabled and it looks like we have a\n> couple of other variants available if we need them. If someone wants to\n> research the *actual* syntax specified by SQL99 that would be good...\n\n As usual: ( ) + * [ ] |\n Instead of dot . there is underscore _\n There is % to mean .* just like LIKE\n There is no ? or ^ or $\n\n Regular expressions match the whole string, as if there were an\n implicit ^ before and $ after the pattern. You have to add % if\n you want to match anywhere in a string.\n\n As far as I can tell, there is no default escape character like \\\n but you can specify one.\n\n\n\n8.6 Similar predicate\nFunction\nSpecify a character string similarity by means of a regular expression.\n\n\nFormat\n\n <similar predicate> ::=\n <character match value> [ NOT ] SIMILAR TO <similar pattern>\n [ ESCAPE <escape character> ]\n \n <similar pattern> ::= <character value expression>\n\n <regular expression> ::=\n <regular term>\n | <regular expression> <vertical bar> <regular term>\n\n <regular term> ::=\n <regular factor>\n | <regular term> <regular factor>\n\n <regular factor> ::=\n <regular primary>\n | <regular primary> <asterisk>\n | <regular primary> <plus sign>\n\n <regular primary> ::=\n <character specifier>\n | <percent>\n | <regular character set>\n | <left paren> <regular expression> <right paren>\n\n <character specifier> ::=\n <non-escaped character>\n | <escaped character>\n\n <non-escaped character> ::= !! See the Syntax Rules\n <escaped character> ::= !! See the Syntax Rules\n\n <regular character set> ::=\n <underscore>\n | <left bracket> <character enumeration>... <right bracket>\n | <left bracket> <circumflex> \n <character enumeration>... 
<right bracket>\n | <left bracket> <colon> <regular character set identifier>\n <colon> <right bracket>\n\n <character enumeration> ::=\n <character specifier>\n | <character specifier> <minus sign> <character specifier>\n\n <regular character set identifier> ::= <identifier>\n\n\n\n*stuff omitted*\n\n3) The value of the <identifier> that is a <regular character set\nidentifier> shall be either ALPHA, UPPER, LOWER, DIGIT, or ALNUM.\n\n*collating stuff omitted*\n\n5) A <non-escaped character> is any single character from the\ncharacter set of the <similar pattern> that is not a <left bracket>,\n<right bracket>, <left paren>, <right paren>, <vertical bar>,\n<circumflex>, <minus sign>, <plus sign>, <asterisk>, <underscore>,\n<percent>, or the character specified by the result of the <character\nvalue expression> of <escape character>. A <character specifier> that\nis a <non-escaped character> represents itself.\n\n6) An <escaped character> is a sequence of two characters: the\ncharacter specified by the result of the <character value expression>\nof <escape character>, followed by a second character that is a <left\nbracket>, <right bracket>, <left paren>, <right paren>, <vertical\nbar>, <circumflex>, <minus sign>, <plus sign>, <asterisk>,\n<underscore>, <percent>, or the character specified by the result of\nthe <character value expression> of <escape character>. A <character\nspecifier> that is an <escaped character> represents its second\ncharacter.\n\n\n\n",
"msg_date": "Wed, 12 Jun 2002 19:02:32 -0400",
"msg_from": "\"Ken Hirsch\" <kenhirsch@myself.com>",
"msg_from_op": false,
"msg_subject": "Re: New string functions; initdb required"
},
{
"msg_contents": "Thanks for the info! I have a question...\n\n> As usual: ( ) + * [ ] |\n> Instead of dot . there is underscore _\n> There is % to mean .* just like LIKE\n> There is no ? or ^ or $\n> Regular expressions match the whole string, as if there were an\n> implicit ^ before and $ after the pattern. You have to add % if\n> you want to match anywhere in a string.\n\nHmm. So if there are no explicit anchors then there must be a slightly\ndifferent syntax for the regular-expression version of the substring()\nfunction? Otherwise, substrings would always have to start from the\nfirst character, right?\n\nPercents and underscores carried over from LIKE are really annoying.\nI'll think about implementing an expression rewriter to convert SQL99 to\nour modern regexp syntax.\n\n - Thomas\n",
"msg_date": "Thu, 13 Jun 2002 06:01:15 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: New string functions; initdb required"
}
] |
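Thomas's closing idea in the thread above, an expression rewriter that converts the SQL99 pattern language into the regexp syntax the backend already supports, is easy to prototype. The sketch below (Python rather than backend C, and skipping the `[:ALPHA:]`-style character classes from Ken's excerpt) covers `%`, `_`, the optional escape character, and the implicit whole-string anchoring:

```python
import re

def similar_to_regex(pattern, escape=None):
    """Translate a SQL99 SIMILAR TO pattern into a POSIX-style regex.

    A sketch only: % becomes .*, _ becomes ., an escaped character
    represents itself (SQL99 8.6, syntax rule 6), and the metacharacters
    SQL99 shares with regexes ( ) [ ] | + * pass through unchanged.
    The [:ALPHA:]-style character classes are not handled.
    """
    out = []
    i = 0
    while i < len(pattern):
        c = pattern[i]
        if escape is not None and c == escape and i + 1 < len(pattern):
            # The escaped character matches itself literally.
            out.append(re.escape(pattern[i + 1]))
            i += 2
            continue
        if c == '%':
            out.append('.*')
        elif c == '_':
            out.append('.')
        else:
            out.append(c)
        i += 1
    # SQL99 patterns match the whole string: add the implicit anchors.
    return '^' + ''.join(out) + '$'
```

For the TODO-list example, `similar_to_regex('pg_[a-c]%')` yields `'^pg.[a-c].*$'`, i.e. the anchors are supplied by the rewriter rather than by the user, exactly the difference Thomas notes between SQL99 patterns and our `~` operator.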
[
{
"msg_contents": "I've just committed changes which implement three SQL99 functions and\noperators. OVERLAY() allows substituting a string into another string,\nSIMILAR TO is an operator for pattern matching, and a new variant of\nSUBSTRING() accepts a pattern to match.\n\nRegression tests have been augmented and pass. Docs have been updated.\nThe system catalogs were updated, so it is initdb time. Details from the\ncvs log below...\n\n - Thomas\n\nImplement SQL99 OVERLAY(). Allows substitution of a substring in a\nstring.\nImplement SQL99 SIMILAR TO as a synonym for our existing operator \"~\".\nImplement SQL99 regular expression SUBSTRING(string FROM pat FOR\nescape).\n Extend the definition to make the FOR clause optional.\n Define textregexsubstr() to actually implement this feature.\nUpdate the regression test to include these new string features.\n All tests pass.\nRename the regular expression support routines from \"pg95_xxx\" to\n\"pg_xxx\".\nDefine CREATE CHARACTER SET in the parser per SQL99. No implementation\nyet.\n",
"msg_date": "Tue, 11 Jun 2002 08:58:00 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "New string functions; initdb required"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I've just committed changes which implement three SQL99 functions and\n> operators.\n\nI'm getting\n\ngcc -O1 -Wall -Wmissing-prototypes -Wmissing-declarations -g -I../../../../src/include -c -o regexp.o regexp.c\nregexp.c: In function `textregexsubstr':\nregexp.c:314: warning: unused variable `result'\n\nThe code seems to be rather undecided about whether it intends to return\nNULL or an empty string --- would you make up your mind and remove the\nother case entirely?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Jun 2002 12:28:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New string functions; initdb required "
},
{
"msg_contents": "Also, you neglected to add PLACING to the gram.y keyword category lists.\n\n(Perhaps someone should whip up a cross-checking script to verify that\neverything known to keywords.c is listed exactly once in those gram.y\nlists.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Jun 2002 12:34:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: New string functions; initdb required "
},
{
"msg_contents": "> Also, you neglected to add PLACING to the gram.y keyword category lists.\n\nOK. I'm also tracking down what seems to be funny business in the regex\npattern caching logic, so will have a couple of things to fix sometime\nsoon.\n\n - Thomas\n",
"msg_date": "Tue, 11 Jun 2002 10:16:42 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: New string functions; initdb required"
},
{
"msg_contents": "> Also, you neglected to add PLACING to the gram.y keyword category lists.\n\nI just now added and committed it as a reserved word.\n\n - Thomas\n",
"msg_date": "Thu, 13 Jun 2002 07:20:42 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: New string functions; initdb required"
}
] |
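Tom's parenthetical suggestion above, a script verifying that everything known to keywords.c is listed exactly once in gram.y's keyword category lists, can be sketched in a few lines. The file-layout details here are assumptions (entries of the form `{"word", TOKEN}` in keywords.c, and `:`/`|` separated token alternatives in the gram.y lists), so treat it as an illustration rather than a drop-in checker:

```python
import re
from collections import Counter

def keyword_list_problems(keywords_c, gram_y_lists):
    """Return (missing, duplicated): tokens from keywords.c that appear
    zero times, or more than once, in gram.y's keyword category lists.

    Inputs are raw file contents; the regexes assume {"word", TOKEN}
    entries in keywords.c and ':' / '|' separated alternatives in the
    unreserved/reserved keyword lists of gram.y.
    """
    tokens = set(re.findall(r'\{\s*"\w+"\s*,\s*(\w+)\s*\}', keywords_c))
    counts = Counter(re.findall(r'[|:]\s*(\w+)', gram_y_lists))
    missing = sorted(t for t in tokens if counts[t] == 0)
    duplicated = sorted(t for t in tokens if counts[t] > 1)
    return missing, duplicated
```

Run over the real tree this would be fed `open('keywords.c').read()` and the keyword-list section of gram.y; on a toy input it flags exactly the PLACING-style omission discussed above, e.g. a keywords.c containing PLACING but a gram.y list that omits it reports it as missing.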
[
{
"msg_contents": "OK, I *really* need to get my majordomo account fixed up to keep from\nstalling posts from my various accounts to the various lists. \n\nI think that I can enter some aliases etc to allow this; where do I find\nout how? Searching the -hackers archives brought no joy since the\nobvious keywords show up in every stinkin' mail message ever run through\nthe mailing list :/\n\nAny help would be appreciated...\n\n - Thomas\n",
"msg_date": "Tue, 11 Jun 2002 09:03:23 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Majordomo aliases"
},
{
"msg_contents": "On Tue, 11 Jun 2002, Thomas Lockhart wrote:\n\n> OK, I *really* need to get my majordomo account fixed up to keep from\n> stalling posts from my various accounts to the various lists.\n>\n> I think that I can enter some aliases etc to allow this; where do I find\n> out how? Searching the -hackers archives brought no joy since the\n> obvious keywords show up in every stinkin' mail message ever run through\n> the mailing list :/\n>\n> Any help would be appreciated...\n\nYou can always subscribe to a list and do a set nomail\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 11 Jun 2002 12:10:38 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Majordomo aliases"
},
{
"msg_contents": "There's fairly extensive help available from the list 'bot itself.\nTry sending a message with\n\thelp\n\thelp set\nto majordomo@postgresql.org. (There are a bunch of other help topics\nbut I'm guessing \"set\" is most likely the command you need.)\n\nA low-tech solution would be to subscribe all your addresses and then\nset all but one to \"nomail\". Not sure if there's a better way.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 11 Jun 2002 12:14:26 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Majordomo aliases "
},
{
"msg_contents": "On Tue, 11 Jun 2002, Tom Lane wrote:\n\n> There's fairly extensive help available from the list 'bot itself.\n> Try sending a message with\n> \thelp\n> \thelp set\n> to majordomo@postgresql.org. (There are a bunch of other help topics\n> but I'm guessing \"set\" is most likely the command you need.)\n>\n> A low-tech solution would be to subscribe all your addresses and then\n> set all but one to \"nomail\". Not sure if there's a better way.\n\nThe better way *was* loophole, but it's gone.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 11 Jun 2002 12:20:20 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Majordomo aliases "
}
] |
[
{
"msg_contents": "> As part of createdb, the new database will have to have it's public\n> schema changed to world-writable.\n\nI have to admit that much of the schema related discussion has been over my\nhead, but I think what I understand you to be saying here is that the\ndefault would be to allow anybody to create tables in any database that they\nconnect to, in the same way that they currently can (with pg <= 7.2.1).\n\n(If that's not the case, you can ignore the rest of the message.)\n\nWhat value do users get from being able to create temp tables in any\ndatabase?\n\nDon't _most_ people expect databases (from any vendor) to be writable only\nby the owner? I have to confess that I was surprised when I discovered that\nothers could create tables in my PG database (although I don't have much\nexposure to other flavors of databases).\n\nISTM that the best default is to have it not world writable, but that will\ntend to cause some consternation when people transition to 7.3 and discover\n(as I did) that the current pg_restore may hit snags on a non-world writable\nDB in certain circumstances.\n\nIf I put data into a database and want to allow anybody to read it and don't\nwant to worry about administering accounts for hundreds of users, I might\ncreate an account that anybody can use to connect. I would be unhappy if\nsomeone was able to expand that permission into something like creating\ntables and filling them so much that it causes problems for me.\n\n(As I said, this is all predicated on my understanding at the beginning, so\nif I've misunderstood this issue then perhaps this wouldn't be a problem for\nme.)\n\n-ron\n\n\n\n\n\n\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, \n> Pennsylvania 19026\n",
"msg_date": "Tue, 11 Jun 2002 16:12:09 -0700",
"msg_from": "Ron Snyder <snyder@roguewave.com>",
"msg_from_op": true,
"msg_subject": "Re: Schemas and template1"
},
{
"msg_contents": "Ron Snyder wrote:\n> > As part of createdb, the new database will have to have it's public\n> > schema changed to world-writable.\n> \n> I have to admit that much of the schema related discussion has been over my\n> head, but I think what I understand you to be saying here is that the\n> default would be to allow anybody to create tables in any database that they\n> connect to, in the same way that they currently can (with pg <= 7.2.1).\n> \n> (If that's not the case, you can ignore the rest of the message.)\n\nThe issue I was raising is the creation of tables in the default\n'public' schema, which is the one used by users who don't have a schema\nmatching their name. I was saying that template1 should prevent\ncreation of tables by anyone but the superuser.\n\nAs far as temp tables, I think we should enable that for all\nnon-template1 databases.\n\n(In fact, what happens if you create a database while a temp table\nexists in template1? Seems it would not be cleaned up in the new\ndatabase.)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 11 Jun 2002 19:36:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Schemas and template1"
}
] |
[
{
"msg_contents": "Hi Igor,\n\nThanks for your information.\n\nI was aware of the \"signal\" problems and i've done it with thread based\nsignals\nThis part is functional on my platform but it isn't fully cooked.\nAnother problem\nis to make this part portable.\n\nYour assumption to replace SysV semaphores with POSIX semaphores is\ncorrect.\nMy first guess was to use mutexes instead of semaphores at all because\nthe\nway semaphores are used in Postgres is more something like a \"mutex\",\nbut only semaphores worked for me at this time because the underlying\nC Library had some problems with mutexes and spinlocks. (I'm also\nworking on a new C Library for a future OS).\n\nActually I don't have my code downloadable somewhere because the code\ndoesn't look very nice in some parts. There is also temporary debug\ncode\nin it right now. The best I think is to send it to you via email. If\nthis is OK\nplease give me a short notice or send an email to me and I'll send you\na\ncopy.\n\nUlrich\n\n>>> \"Igor Kovalenko\" <Igor.Kovalenko@motorola.com> 11.06.2002 20:14:58\n>>>\n> Hello together\n>\n> i've seen a lot of discussion about a native win32/OS2/BEOS port of\n> PostgreSQL.\n>\n> During the last months i've ported PostgreSQL over to Novell NetWare\n> and i've\n> changed the code that I use pthreads instead of fork() now.\n>\n> I had a lot of work with the variables and cleanup but mayor parts\nare\n> done.\n>\n> I would appreciate if we could combine this work.\n\nVery nice... I have patches for QNX6 which also involved redoing\nshared\nmemory and sempahores stuff. 
It would make very good sense to\nintergate,\nespecially since you managed to do something very close to what I\nwanted :)\n\n> My plan was to finish this port, discuss the port with other people\nand\n> offer all the work\n> to the PostgreSQL source tree, but now i'm jumping in here because\nof\n> all the discussions.\n>\n> What i've done in detail:\n> - i've defined #USE_PTHREADS in pg_config.h to differentiate between\n> the forked and the\n> threaded backend.\n> - I've added several parts in postmaster.c so all functions are\nbased\n> on pthreads now.\n> - I've changed the signal handling because signals are process based\n\nCareful here. On certain systems (on many, I suspect) POSIX semantics\nfor\nsignals is NOT default. Enforcing POSIX semantics requires certain\ncompile\ntime switches which will also change behavior of various functions.\n\n> - I've changed code in ipc.c to have a clean shutdown of threads\n> - I've written some functions to switch the global variables. The\n> globals are controled with\n> POSIX semaphores.\n> - I've written a new implementation of shared memory and semaphores-\n> With pthreads I don't\n> need real shared memory any more and i'm using POSIX semaphores now\n\nPOSIX semaphores for what? I assume by the conext that you're talking\nabout\nreplacing SysV semaphores which are used to control access to shared\nmemory.\nIf that is the case, POSIX semaphores are not the best choice really.\nPOSIX\nmutexes would be okay, but on SMP systems spinlocks (hardware TAS\nbased\nmacros or POSIX spinlocks) would probably be better anyway. 
Note that\non\nmost platforms spinlocks are used for that and SysV semaphores were\njust a\n'last resort' which had unacceptable performance and so I guess it was\nnot\nused at all.\n\nDo you have your patch somewhere online?\n\n-- igor\n",
"msg_date": "Wed, 12 Jun 2002 10:35:24 +0200",
"msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "Antw: Re: Native Win32/OS2/BeOS/NetWare ports"
}
] |
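Ulrich's "functions to switch the global variables" point at the central problem of any fork-to-threads port: every process-wide backend global must become per-thread state. A C port would typically use `pthread_key_create()`/`pthread_setspecific()` for this rather than semaphore-guarded swapping; the Python sketch below uses thread-local storage as a stand-in for that API, purely to illustrate the isolation each "backend" thread needs (the names here are hypothetical, not ECPG or backend symbols):

```python
import threading

# Per-thread stand-in for a backend global such as the current database
# name.  threading.local() plays the role of pthread_getspecific() /
# pthread_setspecific() in a C port: each thread sees only its own
# value, and no semaphore is needed for reads or writes.
_backend_state = threading.local()

def set_current_database(name):
    _backend_state.database = name

def current_database():
    return getattr(_backend_state, "database", None)

def run_backend(name, results):
    # Each "backend" thread sets and later reads its own copy.
    set_current_database(name)
    results[name] = current_database()

def demo():
    results = {}
    threads = [threading.Thread(target=run_backend, args=(n, results))
               for n in ("template1", "regression")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Running `demo()` shows each thread keeping its own value, and the main thread, which never called the setter, still sees none; that isolation is what the NetWare port has to recreate for every global the forked backend used to inherit.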
[
{
"msg_contents": "Hi,\n\nthere were several people at my talk asking if PostgreSQL is available\non Novell Netware. I know I read about someone working on this, but\ndidn't follow the thread at all since I do not have any Netware servers.\n\nDoes anyone out there know more?\n\nThanks in advance.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 12 Jun 2002 16:49:11 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "PostgreSQL and Novell Netware"
},
{
"msg_contents": "Michael Meskes wrote:\n> Hi,\n> \n> there were several people at my talk asking if PostgreSQL is available\n> on Novell Netware. I know I read about someone working on this, but\n> didn't follow the thread at all since I do not have any Netware servers.\n\nAttached is an email message from someone who is working on it. This is\nthe first I have heard of a Netware port.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026",
"msg_date": "Wed, 12 Jun 2002 13:52:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL and Novell Netware"
}
] |
[
{
"msg_contents": "\nHi,\n\nI'm running a transaction with about 1600 INSERTs.\nEach INSERT involves a subselect.\n\nI've noticed that if one of the INSERTs fails, the remaining INSERTs run in about\n1/2 the time expected.\n\nIs postgresql optimising the inserts, knowing that it will rollback at the end ?\n\nIf not, why do the queries run faster after the failure ?\n\nThanks\nJohnT\n",
"msg_date": "Wed, 12 Jun 2002 16:07:26 +0100",
"msg_from": "John Taylor <postgres@jtresponse.co.uk>",
"msg_from_op": true,
"msg_subject": "Optimising inside transactions"
},
{
"msg_contents": "John Taylor <postgres@jtresponse.co.uk> writes:\n> I'm running a transaction with about 1600 INSERTs.\n> Each INSERT involves a subselect.\n\n> I've noticed that if one of the INSERTs fails, the remaining INSERTs run in about\n> 1/2 the time expected.\n\n> Is postgresql optimising the inserts, knowing that it will rollback at the end ?\n\n> If not, why do the queries run faster after the failure ?\n\nQueries after the failure aren't run at all; they're only passed through\nthe parser's grammar so it can look for a COMMIT or ROLLBACK command.\nNormal processing resumes after ROLLBACK. If you were paying attention\nto the return codes you'd notice complaints like\n\nregression=# begin;\nBEGIN\nregression=# select 1/0;\nERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero\n-- subsequent queries will be rejected like so:\nregression=# select 1/0;\nWARNING: current transaction is aborted, queries ignored until end of transaction block\n*ABORT STATE*\n\nI'd actually expect much more than a 2:1 speed differential, because the\ngrammar is not a significant part of the runtime AFAICT. Perhaps you\nare including some large amount of communication overhead in that\ncomparison?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jun 2002 11:36:30 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Optimising inside transactions "
},
{
"msg_contents": "On Wednesday 12 June 2002 16:36, Tom Lane wrote:\n> John Taylor <postgres@jtresponse.co.uk> writes:\n> > I'm running a transaction with about 1600 INSERTs.\n> > Each INSERT involves a subselect.\n> \n> > I've noticed that if one of the INSERTs fails, the remaining INSERTs run in about\n> > 1/2 the time expected.\n> \n> > Is postgresql optimising the inserts, knowing that it will rollback at the end ?\n> \n> > If not, why do the queries run faster after the failure ?\n> \n> Queries after the failure aren't run at all; they're only passed through\n> the parser's grammar so it can look for a COMMIT or ROLLBACK command.\n> Normal processing resumes after ROLLBACK. If you were paying attention\n> to the return codes you'd notice complaints like\n> \n> regression=# begin;\n> BEGIN\n> regression=# select 1/0;\n> ERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero\n> -- subsequent queries will be rejected like so:\n> regression=# select 1/0;\n> WARNING: current transaction is aborted, queries ignored until end of transaction block\n> *ABORT STATE*\n\nWell, I'm using JDBC, and it isn't throwing any exceptions, so I assumed it was working :-/\n\n> \n> I'd actually expect much more than a 2:1 speed differential, because the\n> grammar is not a significant part of the runtime AFAICT. Perhaps you\n> are including some large amount of communication overhead in that\n> comparison?\n> \n\nYes, now that I think about it - I am getting a bigger differential\nI'm actually running queries to update two slightly different databases in parallel,\nso the failing one is taking almost no time at all.\n\nThanks\nJohnT\n",
"msg_date": "Wed, 12 Jun 2002 16:42:46 +0100",
"msg_from": "John Taylor <postgres@jtresponse.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Optimising inside transactions"
},
{
"msg_contents": "John Taylor <postgres@jtresponse.co.uk> writes:\n> On Wednesday 12 June 2002 16:36, Tom Lane wrote:\n>> Queries after the failure aren't run at all; they're only passed through\n>> the parser's grammar so it can look for a COMMIT or ROLLBACK command.\n>> Normal processing resumes after ROLLBACK. If you were paying attention\n>> to the return codes you'd notice complaints like\n>> \n>> regression=# begin;\n>> BEGIN\n>> regression=# select 1/0;\n>> ERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero\n>> -- subsequent queries will be rejected like so:\n>> regression=# select 1/0;\n>> WARNING: current transaction is aborted, queries ignored until end of transaction block\n>> *ABORT STATE*\n\n> Well, I'm using JDBC, and it isn't throwing any exceptions, so I\n> assumed it was working :-/ \n\nThis brings up a point that's bothered me in the past. Why is the\n\"queries ignored\" response treated as a NOTICE and not an ERROR?\nA client that is not paying close attention to the command result code\n(as JDBC is evidently not doing :-() might think that its command had\nbeen executed.\n\nIt seems to me the right behavior is\n\nregression=# select 1/0;\nERROR: current transaction is aborted, queries ignored until end of transaction block\nregression=# \n\nI think the reason why it's been done with a NOTICE is that if we\nelog(ERROR) on the first command of a query string, we'll not be able to\nprocess a ROLLBACK appearing later in the same string --- but that\nbehavior does not seem nearly as helpful as throwing an error.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jun 2002 12:12:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Shouldn't \"aborted transaction\" be an ERROR? (was Re: [NOVICE]\n\tOptimising inside transactions)"
},
{
"msg_contents": "On Wed, 12 Jun 2002 16:07:26 +0100, John Taylor\n<postgres@jtresponse.co.uk> wrote:\n>\n>Hi,\n>\n>I'm running a transaction with about 1600 INSERTs.\n>Each INSERT involves a subselect.\n>\n>I've noticed that if one of the INSERTs fails, the remaining INSERTs run in about\n>1/2 the time expected.\n>\n>Is postgresql optimising the inserts, knowing that it will rollback at the end ?\n>\nISTM \"optimising\" is not the right word, it doesn't even try to\nexecute them.\n\nfred=# BEGIN;\nBEGIN\nfred=# INSERT INTO a VALUES (1, 'x');\nINSERT 174658 1\nfred=# blabla;\nERROR: parser: parse error at or near \"blabla\"\nfred=# INSERT INTO a VALUES (2, 'y');\nNOTICE: current transaction is aborted, queries ignored until end of\ntransaction block\n*ABORT STATE*\nfred=# ROLLBACK;\nROLLBACK\n\nServus\n Manfred\n",
"msg_date": "Wed, 12 Jun 2002 20:14:13 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": false,
"msg_subject": "Re: Optimising inside transactions"
},
{
"msg_contents": "I have just tested this on the latest code using the following\n\n Connection con = JDBC2Tests.openDB();\n\t\ttry\n\t\t{\n\n // transaction mode\n con.setAutoCommit(false);\n Statement stmt = con.createStatement();\n stmt.execute(\"select 1/0\");\n\t\t\tfail( \"Should not execute this, as a SQLException s/b thrown\" );\n con.commit();\n\t\t}\n\t\tcatch ( Exception ex )\n\t\t{\n\t\t}\n try\n {\n con.commit();\n con.close();\n }catch ( Exception ex) {}\n }\n\nand it executes as expected. It throws the SQLException and does not\nexecute the fail statement\n\nThanks,\n\nDave\n\nOn Wed, 2002-06-12 at 12:12, Tom Lane wrote:\n> John Taylor <postgres@jtresponse.co.uk> writes:\n> > On Wednesday 12 June 2002 16:36, Tom Lane wrote:\n> >> Queries after the failure aren't run at all; they're only passed through\n> >> the parser's grammar so it can look for a COMMIT or ROLLBACK command.\n> >> Normal processing resumes after ROLLBACK. If you were paying attention\n> >> to the return codes you'd notice complaints like\n> >> \n> >> regression=# begin;\n> >> BEGIN\n> >> regression=# select 1/0;\n> >> ERROR: floating point exception! The last floating point operation either exceeded legal ranges or was a divide by zero\n> >> -- subsequent queries will be rejected like so:\n> >> regression=# select 1/0;\n> >> WARNING: current transaction is aborted, queries ignored until end of transaction block\n> >> *ABORT STATE*\n> \n> > Well, I'm using JDBC, and it isn't throwing any exceptions, so I\n> > assumed it was working :-/ \n> \n> This brings up a point that's bothered me in the past. 
Why is the\n> \"queries ignored\" response treated as a NOTICE and not an ERROR?\n> A client that is not paying close attention to the command result code\n> (as JDBC is evidently not doing :-() might think that its command had\n> been executed.\n> \n> It seems to me the right behavior is\n> \n> regression=# select 1/0;\n> ERROR: current transaction is aborted, queries ignored until end of transaction block\n> regression=# \n> \n> I think the reason why it's been done with a NOTICE is that if we\n> elog(ERROR) on the first command of a query string, we'll not be able to\n> process a ROLLBACK appearing later in the same string --- but that\n> behavior does not seem nearly as helpful as throwing an error.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n\n",
"msg_date": "13 Jun 2002 10:44:46 -0400",
"msg_from": "Dave Cramer <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "Re: Shouldn't \"aborted transaction\" be an ERROR? (was Re:"
}
] |
[
{
"msg_contents": "Hi Michael,\n\nI know all about the story of PostgreSQL for NetWare.\n\nI will tell you the story now:\nFirst I want to introduce myself:\nI'm a Novell DeveloperNet SysOp and i'm working with the database\nteam and the C Library team at Novell in Provo/Utah.\n\nThe story starts last year in may at Novell's BrainShare in\nNice/France.\nI've spoken with several people about the open source community and\nwhat's missing on NetWare and how this can be combined with Novells\nDirectory Service to manage all this stuff. During this time only\nApache 1.3\nand Tomcat 3.2 was available as open source on NetWare.\n\nThe Database area was a totally vacuum and the need for an alternative\nfor Oracle was huge.\n\nThe result was that I've started to analyze a lot of databases and\nafter some\ndiscussions i've decided to port PostgreSQL because of the features\nand\nstability. Nearly at the same time Novell was working on a new C\nLibrary\nto offer a C/C++ environment that is as much POSIX/ANSI C/BSD/UNIX98\ncompliant as possible. The first step was to enhance this C Library to\nhave\nall the functionality I need for PostgreSQL. In November i've been in\nUtah\nto discuss the open source database PostgreSQL and how it's going. The\nresult was that there is a small team now working on MySQL and\nPostgreSQL.\n\nI'm the man who drives PostgreSQL and another guy, R.Lyon drives\nMySQL.\nThere are several other people supporting us.\n\nAdditonal PHP4, mod_perl, JBoss, Motif and several other open source\ninitiatives are in the works at Novell. We are all working together to\nhave a\nunique solution. Everything is planned to be pushed back to the open\nsource\ncommunity. I think that's a great benefit for all these projects.\n\nIn march at Novell's Brainshare in Salt Lake City some of these\nprojects have\nbeen demonstrated and nearly everything was functional. 
Now there is\nmuch work in stability, performance and in directory integration\ntogether\nwith one unified management tool across all of these open source and\nthe commercial projects. At the end all open source projects will be\nstress\ntested in Novell's superlab, the biggest testlab on earth. It's\npossible to\ntest on hundreds of different servers and thousands of client\nmachines.\n\nThe first Beta's of PostgreSQL for NetWare have been sent out last\nweek\nto some selected customers and next week a first beta refresh will be\nsent\nout.\n\nIt is planned to ship all these open source projects during this\nsummer,\nand the new management tool in the 1st half next year.\nApache2 was the first in that line.\n\nUlrich Neumann\n\n\n>>> Michael Meskes <meskes@postgresql.org> 12.06.2002 16:49:11 >>>\nHi,\n\nthere were several people at my talk asking if PostgreSQL is available\non Novell Netware. I know I read about someone working on this, but\ndidn't follow the thread at all since I do not have any Netware\nservers.\n\nDoes anyone out there know more?\n\nThanks in advance.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De \nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n\n---------------------------(end of\nbroadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n----------------------------------\n This e-mail is virus scanned\n Diese e-mail ist virusgeprueft\n\n",
"msg_date": "Wed, 12 Jun 2002 18:08:38 +0200",
"msg_from": "\"Ulrich Neumann\" <U_Neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "Antw: PostgreSQL and Novell Netware"
},
{
"msg_contents": "\nI am going to take all of your patches, make one mega-patch, then send\nit to you for review.\n\n---------------------------------------------------------------------------\n\nUlrich Neumann wrote:\n> Hi Michael,\n> \n> I know all about the story of PostgreSQL for NetWare.\n> \n> I will tell you the story now:\n> First I want to introduce myself:\n> I'm a Novell DeveloperNet SysOp and i'm working with the database\n> team and the C Library team at Novell in Provo/Utah.\n> \n> The story starts last year in may at Novell's BrainShare in\n> Nice/France.\n> I've spoken with several people about the open source community and\n> what's missing on NetWare and how this can be combined with Novells\n> Directory Service to manage all this stuff. During this time only\n> Apache 1.3\n> and Tomcat 3.2 was available as open source on NetWare.\n> \n> The Database area was a totally vacuum and the need for an alternative\n> for Oracle was huge.\n> \n> The result was that I've started to analyze a lot of databases and\n> after some\n> discussions i've decided to port PostgreSQL because of the features\n> and\n> stability. Nearly at the same time Novell was working on a new C\n> Library\n> to offer a C/C++ environment that is as much POSIX/ANSI C/BSD/UNIX98\n> compliant as possible. The first step was to enhance this C Library to\n> have\n> all the functionality I need for PostgreSQL. In November i've been in\n> Utah\n> to discuss the open source database PostgreSQL and how it's going. The\n> result was that there is a small team now working on MySQL and\n> PostgreSQL.\n> \n> I'm the man who drives PostgreSQL and another guy, R.Lyon drives\n> MySQL.\n> There are several other people supporting us.\n> \n> Additonal PHP4, mod_perl, JBoss, Motif and several other open source\n> initiatives are in the works at Novell. We are all working together to\n> have a\n> unique solution. Everything is planned to be pushed back to the open\n> source\n> community. 
I think that's a great benefit for all these projects.\n> \n> In march at Novell's Brainshare in Salt Lake City some of these\n> projects have\n> been demonstrated and nearly everything was functional. Now there is\n> much work in stability, performance and in directory integration\n> together\n> with one unified management tool across all of these open source and\n> the commercial projects. At the end all open source projects will be\n> stress\n> tested in Novell's superlab, the biggest testlab on earth. It's\n> possible to\n> test on hundreds of different servers and thousands of client\n> machines.\n> \n> The first Beta's of PostgreSQL for NetWare have been sent out last\n> week\n> to some selected customers and next week a first beta refresh will be\n> sent\n> out.\n> \n> It is planned to ship all these open source projects during this\n> summer,\n> and the new management tool in the 1st half next year.\n> Apache2 was the first in that line.\n> \n> Ulrich Neumann\n> \n> \n> >>> Michael Meskes <meskes@postgresql.org> 12.06.2002 16:49:11 >>>\n> Hi,\n> \n> there were several people at my talk asking if PostgreSQL is available\n> on Novell Netware. I know I read about someone working on this, but\n> didn't follow the thread at all since I do not have any Netware\n> servers.\n> \n> Does anyone out there know more?\n> \n> Thanks in advance.\n> \n> Michael\n> -- \n> Michael Meskes\n> Michael@Fam-Meskes.De \n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! 
Use PostgreSQL!\n> \n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to\n> majordomo@postgresql.org)\n> ----------------------------------\n> This e-mail is virus scanned\n> Diese e-mail ist virusgeprueft\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 00:59:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Antw: PostgreSQL and Novell Netware"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Michael Meskes [mailto:meskes@postgresql.org]\n> Sent: Wednesday, June 12, 2002 5:41 AM\n> To: Dann Corbit\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] PostGres Doubt\n> \n> \n> On Mon, Jun 10, 2002 at 02:08:22PM -0700, Dann Corbit wrote:\n> > ECPG is single threading. Hence, tools written in ECPG are \n> a pain in\n> > the neck if you want multiple threads of execution. I \n> recommend against\n> \n> Did he say he wants to write a multi-threaded app?\n\nOr run concurrent queries queries at the same time? Or later discover\nthe need to do so?\n \n> > using it for any purpose except porting a single threading \n> project that\n> > already uses embedded SQL. The embedded SQL interface for \n> PostgreSQL is\n> > a disaster.\n> \n> Oh, that's what I call constructive critizism. I cannot remember you\n> filing any bug reports or asking for some special features. Wouldn't\n> that be the first step? And not calling other people's work a \n> disaster.\n\nI posted the problems to this list long ago. I wanted to use ECPG and\ndiscovered it was a joke. Do a search through the list and you will\nfind a half dozen complaints.\n \n> > The libpq functions are reentrant. These will be useful \n> for just about\n> > any project.\n> \n> Well if they are (I never checked myself) it shouldn't be too \n> difficult\n> to make ecpg reentrant too.\n\nThen why not do it. I looked at doing it myself, but the implementation\nof embedded SQL is totally nonstandard and uses global structures and\nfails to use the SQLCA and SQLDA structures properly. It would be a\nnightmare to try and fix it.\n \n> > If you are going to completely replace the data in a table, drop the\n> > table, create the table, and use the bulk copy interface.\n> \n> Oh great! Talking about valuable comments. Ever bothered to \n> even ask if\n> they are using triggers, constraints, etc. 
before coming with such a\n> proposal?\n\nI would assume that they would use their brain. \n",
"msg_date": "Wed, 12 Jun 2002 11:00:26 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "...\n> I would assume that they would use their brain.\n\nWay uncalled for. You must have some other underlying issues to get this\nbad 'tude, but please note that ad hominum attacks are *never* welcome\non this or any other PostgreSQL mailing list.\n\nRegards.\n\n - Thomas\n",
"msg_date": "Wed, 12 Jun 2002 12:13:50 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "Dann Corbit wrote:\n> > -----Original Message-----\n> > From: Michael Meskes [mailto:meskes@postgresql.org]\n> > Sent: Wednesday, June 12, 2002 5:41 AM\n> > To: Dann Corbit\n> > Cc: pgsql-hackers@postgresql.org\n> > Subject: Re: [HACKERS] PostGres Doubt\n> > \n> > \n> > On Mon, Jun 10, 2002 at 02:08:22PM -0700, Dann Corbit wrote:\n> > > ECPG is single threading. Hence, tools written in ECPG are \n> > a pain in\n> > > the neck if you want multiple threads of execution. I \n> > recommend against\n> > \n> > Did he say he wants to write a multi-threaded app?\n> \n> Or run concurrent queries queries at the same time? Or later discover\n> the need to do so?\n...\n> \n> I posted the problems to this list long ago. I wanted to use ECPG and\n> discovered it was a joke. Do a search through the list and you will\n> find a half dozen complaints.\n,..\n> \n> Then why not do it. I looked at doing it myself, but the implementation\n> of embedded SQL is totally nonstandard and uses global structures and\n> fails to use the SQLCA and SQLDA structures properly. It would be a\n> nightmare to try and fix it.\n> \n> > > If you are going to completely replace the data in a table, drop the\n> > > table, create the table, and use the bulk copy interface.\n> > \n> > Oh great! Talking about valuable comments. Ever bothered to \n> > even ask if\n> > they are using triggers, constraints, etc. before coming with such a\n> > proposal?\n> \n> I would assume that they would use their brain. \n\nIf you think ecpg is a joke, I think you will find PostgreSQL is a joke\ntoo. I suggest you find another database.\n\nIn fact, you may find all other databases to be a joke. I suggest you\nwrite your own.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jun 2002 17:33:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "On Wed, Jun 12, 2002 at 11:00:26AM -0700, Dann Corbit wrote:\n> Or run concurrent queries queries at the same time? Or later discover\n> the need to do so?\n\nI didn't say multi-threading is bad. I just don't think your answer\nhelped him much.\n\n> I posted the problems to this list long ago. I wanted to use ECPG and\n> discovered it was a joke. Do a search through the list and you will\n> find a half dozen complaints.\n\nIf you use this kind of language I wonder if anyone ever reacted on any\ncomplaint you send.\n\n> Then why not do it. I looked at doing it myself, but the implementation\n\nGotta like that attidude. Did you read aynthing about us not wanting to\nmake ecpg multi-threaded?\n\n> of embedded SQL is totally nonstandard and uses global structures and\n\nWhat's that about? Our parser is nonstandard? Please if you expect any\nmore answers, how about adding some facts and not just talking badly\nabout people.\n\n> > Oh great! Talking about valuable comments. Ever bothered to \n> > even ask if\n> > they are using triggers, constraints, etc. before coming with such a\n> > proposal?\n> \n> I would assume that they would use their brain. \n\nWhy? You don't use it either. I'm sorry, but I cannot stand this kind of\nbehaviour.\n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 13 Jun 2002 12:01:14 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
}
] |
[
{
"msg_contents": "I think libpqxx, the alternative to libpq++, is just about ready for\nprime time. That means integrating it with the main source tree, I\nsuppose, but I have no idea where to start--particularly because libpqxx\nhas its own configure setup.\n\nAnyone who can help me with this?\n\n\nJeroen\n\nPS: find libpqxx source & description at\n\thttp://members.ams.chello.nl/j.vermeulen31/\n\n\n",
"msg_date": "Wed, 12 Jun 2002 20:29:21 +0200",
"msg_from": "\"Jeroen T. Vermeulen\" <jtv@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Integrating libpqxx"
},
{
"msg_contents": "On Wed, 12 Jun 2002 20:29:21 +0200\n\"Jeroen T. Vermeulen\" <jtv@xs4all.nl> wrote:\n> I think libpqxx, the alternative to libpq++, is just about ready for\n> prime time.\n\nGreat -- I like libpqxx a lot, and I'd like to see it in 7.3. We should\nalso probably keep libpq++ around for backward compatibility, but I\nsuppose we can stop distributing it eventually.\n\n> That means integrating it with the main source tree, I\n> suppose, but I have no idea where to start--particularly because libpqxx\n> has its own configure setup.\n\nI took a brief look at libpqxx's configure setup and ISTM that you won't\nneed to do a lot of work to integrate it into the PostgreSQL build system.\nUsers won't need to specify '--with-postgres' anymore, and the rest of the\nconfigure options look pretty standard (gnu-ld, pic, etc.)\n\nIs there a reason for keeping '--enable-postgres-dialect', when libpqxx\nis distributed with PostgreSQL?\n\nOtherwise, if you put the code into src/interfaces/libpqxx and modify\nthe PostgreSQL build system to be aware of it (as well as removing\nlibpqxx's autoconf stuff), it shouldn't be too difficult.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Wed, 12 Jun 2002 16:04:36 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "On Wed, Jun 12, 2002 at 04:04:36PM -0400, Neil Conway wrote:\n>\n> Otherwise, if you put the code into src/interfaces/libpqxx and modify\n> the PostgreSQL build system to be aware of it (as well as removing\n> libpqxx's autoconf stuff), it shouldn't be too difficult.\n\nOne concern I have on this point is that not all platforms are going to\nbe able to build libpqxx. Also, there'd have to be a lot of C++ stuff\nin the existing config.h which I guess was meant to be C. \n\nAnyway, I found I'm not much good with automake and so on. I'm trying\nto merge the two configure.ins, but I feel I must be missing a lot of\ndetails.\n\n\nJeroen\n\n",
"msg_date": "Wed, 12 Jun 2002 23:01:38 +0200",
"msg_from": "\"Jeroen T. Vermeulen\" <jtv@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "Jeroen T. Vermeulen wrote:\n> On Wed, Jun 12, 2002 at 04:04:36PM -0400, Neil Conway wrote:\n> >\n> > Otherwise, if you put the code into src/interfaces/libpqxx and modify\n> > the PostgreSQL build system to be aware of it (as well as removing\n> > libpqxx's autoconf stuff), it shouldn't be too difficult.\n> \n> One concern I have on this point is that not all platforms are going to\n> be able to build libpqxx. Also, there'd have to be a lot of C++ stuff\n> in the existing config.h which I guess was meant to be C. \n> \n> Anyway, I found I'm not much good with automake and so on. I'm trying\n> to merge the two configure.ins, but I feel I must be missing a lot of\n> details.\n\nI can add it to CVS as interfaces/libpqxx and we can then let others\nmerge your configure tests into our main configure. Let me know when\nyou want it dumped into CVS.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jun 2002 17:48:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "On Wed, Jun 12, 2002 at 05:48:46PM -0400, Bruce Momjian wrote:\n> \n> I can add it to CVS as interfaces/libpqxx and we can then let others\n> merge your configure tests into our main configure. Let me know when\n> you want it dumped into CVS.\n\nMight as well do it right now, with 0.5.2. We'll call that 1.0, and \nleave the more radical future plans for 2.0. \n\nThere are some things I'd like to do in future 1.x releases that will \naffect the interface:\n - nonblocking operation, probably as a latency-hiding tuple stream;\n - change the way you select the quality of service for your transactor;\n - allow notice processors to have C++ linkage;\n - addtional bits & bobs like field and column iterators.\n\nOTOH there's no point in delaying 1.0 forever I guess.\n\nFWIW, I'm thinking of doing at least one of the following in 2.0:\n - an easy-to-use but intrusive object persistence layer; \n - offload some of the work to BOOST if possible;\n - adapt the interface to be more database-portable.\n\nBut back to 1.0... Would it be a useful idea to also integrate my own\nCVS history into the main tree? Or should I just keep developing in\nmy local tree and submit from there?\n\n\nJeroen\n\n",
"msg_date": "Thu, 13 Jun 2002 00:25:41 +0200",
"msg_from": "\"Jeroen T. Vermeulen\" <jtv@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "Jeroen T. Vermeulen wrote:\n> On Wed, Jun 12, 2002 at 05:48:46PM -0400, Bruce Momjian wrote:\n> > \n> > I can add it to CVS as interfaces/libpqxx and we can then let others\n> > merge your configure tests into our main configure. Let me know when\n> > you want it dumped into CVS.\n> \n> Might as well do it right now, with 0.5.2. We'll call that 1.0, and \n> leave the more radical future plans for 2.0. \n> \n> There are some things I'd like to do in future 1.x releases that will \n> affect the interface:\n> - nonblocking operation, probably as a latency-hiding tuple stream;\n> - change the way you select the quality of service for your transactor;\n> - allow notice processors to have C++ linkage;\n> - addtional bits & bobs like field and column iterators.\n> \n> OTOH there's no point in delaying 1.0 forever I guess.\n> \n> FWIW, I'm thinking of doing at least one of the following in 2.0:\n> - an easy-to-use but intrusive object persistence layer; \n> - offload some of the work to BOOST if possible;\n> - adapt the interface to be more database-portable.\n> \n> But back to 1.0... Would it be a useful idea to also integrate my own\n> CVS history into the main tree? Or should I just keep developing in\n> my local tree and submit from there?\n\nI think we will just give you CVS access. Not sure how to get the CVS\nhistory. I think if you send me the CVS root I can use CVS import to\nload it.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jun 2002 18:30:06 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "On Wed, 2002-06-12 at 17:30, Bruce Momjian wrote:\n> Jeroen T. Vermeulen wrote:\n> > On Wed, Jun 12, 2002 at 05:48:46PM -0400, Bruce Momjian wrote:\n> > > \n> > > I can add it to CVS as interfaces/libpqxx and we can then let others\n> > > merge your configure tests into our main configure. Let me know when\n> > > you want it dumped into CVS.\n> > \n> > Might as well do it right now, with 0.5.2. We'll call that 1.0, and \n> > leave the more radical future plans for 2.0. \n> > \n> > There are some things I'd like to do in future 1.x releases that will \n> > affect the interface:\n> > - nonblocking operation, probably as a latency-hiding tuple stream;\n> > - change the way you select the quality of service for your transactor;\n> > - allow notice processors to have C++ linkage;\n> > - addtional bits & bobs like field and column iterators.\n> > \n> > OTOH there's no point in delaying 1.0 forever I guess.\n> > \n> > FWIW, I'm thinking of doing at least one of the following in 2.0:\n> > - an easy-to-use but intrusive object persistence layer; \n> > - offload some of the work to BOOST if possible;\n> > - adapt the interface to be more database-portable.\n> > \n> > But back to 1.0... Would it be a useful idea to also integrate my own\n> > CVS history into the main tree? Or should I just keep developing in\n> > my local tree and submit from there?\n> \n> I think we will just give you CVS access. Not sure how to get the CVS\n> history. I think if you send me the CVS root I can use CVS import to\n> load it.\nIf you \"Repocopy\" the files, the history will stay intact. Basically\nmove his CVS/ files to your repository, and add appropriate entries\nstuff. \n\nLER\n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "12 Jun 2002 19:40:14 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "Larry Rosenman wrote:\n> > \n> > I think we will just give you CVS access. Not sure how to get the CVS\n> > history. I think if you send me the CVS root I can use CVS import to\n> > load it.\n> If you \"Repocopy\" the files, the history will stay intact. Basically\n> move his CVS/ files to your repository, and add appropriate entries\n> stuff. \n\nEwe, appropriate entries?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jun 2002 20:41:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "On Wed, 2002-06-12 at 19:41, Bruce Momjian wrote:\n> Larry Rosenman wrote:\n> > > \n> > > I think we will just give you CVS access. Not sure how to get the CVS\n> > > history. I think if you send me the CVS root I can use CVS import to\n> > > load it.\n> > If you \"Repocopy\" the files, the history will stay intact. Basically\n> > move his CVS/ files to your repository, and add appropriate entries\n> > stuff. \n> \n> Ewe, appropriate entries?\nWhat I did on a RANCID install was to just add the CVS/ stuff, but I'm\nnot sure with your scripts and stuff what else needs done. You might\nask Marc Fournier as I think he knows how the FreeBSD folks do\nRepoCopies. \n\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "12 Jun 2002 19:44:17 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "> On Wed, 2002-06-12 at 19:41, Bruce Momjian wrote:\n>> Ewe, appropriate entries?\n\nI'm thinking we should just import the current state of the files\nand not worry about preserving their change history.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 12 Jun 2002 22:41:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx "
},
{
"msg_contents": "On Wed, Jun 12, 2002 at 10:41:32PM -0400, Tom Lane wrote:\n> \n> I'm thinking we should just import the current state of the files\n> and not worry about preserving their change history.\n\nFine with me, if that's easier. I just thought it might be \"nice to have\"\nbut I can't think of any compelling reason to go to any trouble. \n\n\nJeroen\n\n",
"msg_date": "Thu, 13 Jun 2002 13:49:55 +0200",
"msg_from": "\"Jeroen T. Vermeulen\" <jtv@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "On Thu, 13 Jun 2002, Jeroen T. Vermeulen wrote:\n\n> On Wed, Jun 12, 2002 at 10:41:32PM -0400, Tom Lane wrote:\n> >\n> > I'm thinking we should just import the current state of the files\n> > and not worry about preserving their change history.\n>\n> Fine with me, if that's easier. I just thought it might be \"nice to have\"\n> but I can't think of any compelling reason to go to any trouble.\n\nJeroen ... can you send me a copy of the CVSROOT for this? Email will\nwork ... if we can, I would like to save the development history, and I\n*think* I can ...\n\n\n",
"msg_date": "Thu, 13 Jun 2002 09:15:05 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "On Thu, Jun 13, 2002 at 09:15:05AM -0300, Marc G. Fournier wrote:\n> \n> Jeroen ... can you send me a copy of the CVSROOT for this? Email will\n> work ... if we can, I would like to save the development history, and I\n> *think* I can ...\n\nI already sent one to Bruce last night, IIRC.\n\n\nJeroen\n\n",
"msg_date": "Thu, 13 Jun 2002 16:54:24 +0200",
"msg_from": "\"Jeroen T. Vermeulen\" <jtv@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "Jeroen T. Vermeulen wrote:\n> On Thu, Jun 13, 2002 at 09:15:05AM -0300, Marc G. Fournier wrote:\n> > \n> > Jeroen ... can you send me a copy of the CVSROOT for this? Email will\n> > work ... if we can, I would like to save the development history, and I\n> > *think* I can ...\n> \n> I already sent one to Bruce last night, IIRC.\n\nI just bounced it over to Marc.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jun 2002 12:19:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "\ngot it ... will try and incorporate it and see what I can come up with ...\nthanks :)\n\n\nOn Thu, 13 Jun 2002, Bruce Momjian wrote:\n\n> Jeroen T. Vermeulen wrote:\n> > On Thu, Jun 13, 2002 at 09:15:05AM -0300, Marc G. Fournier wrote:\n> > >\n> > > Jeroen ... can you send me a copy of the CVSROOT for this? Email will\n> > > work ... if we can, I would like to save the development history, and I\n> > > *think* I can ...\n> >\n> > I already sent one to Bruce last night, IIRC.\n>\n> I just bounced it over to Marc.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Thu, 13 Jun 2002 13:21:48 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "If on one is has outstanding libpq++ patches, I will run libpq++ through\nmy new tools src/tools/pgindent/pgcppindent. It uses astyle. I can\nalso wait for 7.3 beta and run it then.\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> On Wed, 12 Jun 2002 20:29:21 +0200\n> \"Jeroen T. Vermeulen\" <jtv@xs4all.nl> wrote:\n> > I think libpqxx, the alternative to libpq++, is just about ready for\n> > prime time.\n> \n> Great -- I like libpqxx a lot, and I'd like to see it in 7.3. We should\n> also probably keep libpq++ around for backward compatibility, but I\n> suppose we can stop distributing it eventually.\n> \n> > That means integrating it with the main source tree, I\n> > suppose, but I have no idea where to start--particularly because libpqxx\n> > has its own configure setup.\n> \n> I took a brief look at libpqxx's configure setup and ISTM that you won't\n> need to do a lot of work to integrate it into the PostgreSQL build system.\n> Users won't need to specify '--with-postgres' anymore, and the rest of the\n> configure options look pretty standard (gnu-ld, pic, etc.)\n> \n> Is there a reason for keeping '--enable-postgres-dialect', when libpqxx\n> is distributed with PostgreSQL?\n> \n> Otherwise, if you put the code into src/interfaces/libpqxx and modify\n> the PostgreSQL build system to be aware of it (as well as removing\n> libpqxx's autoconf stuff), it shouldn't be too difficult.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n\nIndex: pgconnection.cc\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/libpq++/pgconnection.cc,v\nretrieving revision 1.14\ndiff -c -r1.14 pgconnection.cc\n*** pgconnection.cc\t15 Jun 2002 18:49:29 -0000\t1.14\n--- pgconnection.cc\t15 Jun 2002 19:05:11 -0000\n***************\n*** 1,19 ****\n /*-------------------------------------------------------------------------\n! *\n! * FILE\n! *\tpgconnection.cc\n! *\n! * DESCRIPTION\n! * implementation of the PgConnection class.\n! * PgConnection encapsulates a frontend to backend connection\n! *\n! * Copyright (c) 1994, Regents of the University of California\n! *\n! * IDENTIFICATION\n! *\t $Header: /cvsroot/pgsql/src/interfaces/libpq++/pgconnection.cc,v 1.14 2002/06/15 18:49:29 momjian Exp $\n! *\n! *-------------------------------------------------------------------------\n! */\n \n #include \"pgconnection.h\"\n \n--- 1,19 ----\n /*-------------------------------------------------------------------------\n! *\n! *\tFILE\n! *\tpgconnection.cc\n! *\n! *\tDESCRIPTION\n! *\t implementation of the PgConnection class.\n! *\tPgConnection encapsulates a frontend to backend connection\n! *\n! * Copyright (c) 1994, Regents of the University of California\n! *\n! * IDENTIFICATION\n! *\t $Header: /cvsroot/pgsql/src/interfaces/libpq++/pgconnection.cc,v 1.14 2002/06/15 18:49:29 momjian Exp $\n! *\n! *-------------------------------------------------------------------------\n! */\n \n #include \"pgconnection.h\"\n \n***************\n*** 28,71 ****\n // ****************************************************************\n // default constructor -- initialize everything\n PgConnection::PgConnection()\n! \t: pgConn(NULL), pgResult(NULL), pgCloseConnection(false)\n {}\n \n \n // constructor -- checks environment variable for database name\n // Now uses PQconnectdb\n PgConnection::PgConnection(const char* conninfo)\n! 
\t: pgConn(NULL), pgResult(NULL), pgCloseConnection(true)\n {\n! // Connect to the database\n! Connect(conninfo);\n }\n \n \n // destructor - closes down the connection and cleanup\n PgConnection::~PgConnection()\n {\n! // Close the connection only if needed\n! // This feature will most probably be used by the derived classes that\n! // need not close the connection after they are destructed.\n! CloseConnection();\n }\n \n \n // PgConnection::CloseConnection()\n // close down the connection if there is one\n! void PgConnection::CloseConnection() \n {\n! // if the connection is open, close it first\n! if (pgCloseConnection) { \n! if (pgResult)\n! \t\t PQclear(pgResult);\n! pgResult = NULL;\n! if (pgConn)\n! \t\t PQfinish(pgConn);\n! pgConn = NULL;\n! pgCloseConnection = false;\n! }\n }\n \n \n--- 28,73 ----\n // ****************************************************************\n // default constructor -- initialize everything\n PgConnection::PgConnection()\n! \t\t: pgConn(NULL), pgResult(NULL), pgCloseConnection(false)\n {}\n \n \n // constructor -- checks environment variable for database name\n // Now uses PQconnectdb\n+ \n PgConnection::PgConnection(const char* conninfo)\n! \t\t: pgConn(NULL), pgResult(NULL), pgCloseConnection(true)\n {\n! \t// Connect to the database\n! \tConnect(conninfo);\n }\n \n \n // destructor - closes down the connection and cleanup\n PgConnection::~PgConnection()\n {\n! \t// Close the connection only if needed\n! \t// This feature will most probably be used by the derived classes that\n! \t// need not close the connection after they are destructed.\n! \tCloseConnection();\n }\n \n \n // PgConnection::CloseConnection()\n // close down the connection if there is one\n! void PgConnection::CloseConnection()\n {\n! \t// if the connection is open, close it first\n! \tif (pgCloseConnection)\n! \t{\n! \t\tif (pgResult)\n! \t\t\tPQclear(pgResult);\n! \t\tpgResult = NULL;\n! \t\tif (pgConn)\n! \t\t\tPQfinish(pgConn);\n! \t\tpgConn = NULL;\n! 
\t\tpgCloseConnection = false;\n! \t}\n }\n \n \n***************\n*** 73,112 ****\n // establish a connection to a backend\n ConnStatusType PgConnection::Connect(const char conninfo[])\n {\n! // if the connection is open, close it first\n! CloseConnection();\n \n! // Connect to the database\n! pgConn = PQconnectdb(conninfo);\n \n! // Now we have a connection we must close (even if it's bad!)\n! pgCloseConnection = true;\n! \n! // Status will return either CONNECTION_OK or CONNECTION_BAD\n! return Status();\n }\n \n // PgConnection::status -- return connection or result status\n ConnStatusType PgConnection::Status() const\n {\n! return PQstatus(pgConn);\n }\n \n // PgConnection::exec -- send a query to the backend\n ExecStatusType PgConnection::Exec(const char* query)\n {\n! // Clear the result stucture if needed\n! if (pgResult)\n! PQclear(pgResult); \n! \n! // Execute the given query\n! pgResult = PQexec(pgConn, query);\n! \n! // Return the status\n! if (pgResult)\n! \treturn PQresultStatus(pgResult);\n! else \n! \treturn PGRES_FATAL_ERROR;\n }\n \n // Return true if the Postgres command was executed OK\n--- 75,114 ----\n // establish a connection to a backend\n ConnStatusType PgConnection::Connect(const char conninfo[])\n {\n! \t// if the connection is open, close it first\n! \tCloseConnection();\n! \n! \t// Connect to the database\n! \tpgConn = PQconnectdb(conninfo);\n \n! \t// Now we have a connection we must close (even if it's bad!)\n! \tpgCloseConnection = true;\n \n! \t// Status will return either CONNECTION_OK or CONNECTION_BAD\n! \treturn Status();\n }\n \n // PgConnection::status -- return connection or result status\n ConnStatusType PgConnection::Status() const\n {\n! \treturn PQstatus(pgConn);\n }\n \n // PgConnection::exec -- send a query to the backend\n ExecStatusType PgConnection::Exec(const char* query)\n {\n! \t// Clear the result stucture if needed\n! \tif (pgResult)\n! \t\tPQclear(pgResult);\n! \n! \t// Execute the given query\n! 
\tpgResult = PQexec(pgConn, query);\n! \n! \t// Return the status\n! \tif (pgResult)\n! \t\treturn PQresultStatus(pgResult);\n! \telse\n! \t\treturn PGRES_FATAL_ERROR;\n }\n \n // Return true if the Postgres command was executed OK\n***************\n*** 125,158 ****\n // PgConnection::notifies() -- returns a notification from a list of unhandled notifications\n PGnotify* PgConnection::Notifies()\n {\n! return PQnotifies(pgConn);\n }\n \n // From Integer To String Conversion Function\n string PgConnection::IntToString(int n)\n {\n! char buffer [4*sizeof(n) + 2];\n! sprintf(buffer, \"%d\", n);\n! return buffer;\n }\n \n bool PgConnection::ConnectionBad() const\n! { \n! return Status() == CONNECTION_BAD; \n }\n \n const char* PgConnection::ErrorMessage() const\n! { \n! return (const char *)PQerrorMessage(pgConn); \n }\n! \n const char* PgConnection::DBName() const\n! { \n! return (const char *)PQdb(pgConn); \n }\n \n PQnoticeProcessor PgConnection::SetNoticeProcessor(PQnoticeProcessor proc, void *arg)\n {\n! return PQsetNoticeProcessor(pgConn, proc, arg);\n }\n \n--- 127,160 ----\n // PgConnection::notifies() -- returns a notification from a list of unhandled notifications\n PGnotify* PgConnection::Notifies()\n {\n! \treturn PQnotifies(pgConn);\n }\n \n // From Integer To String Conversion Function\n string PgConnection::IntToString(int n)\n {\n! \tchar buffer [4*sizeof(n) + 2];\n! \tsprintf(buffer, \"%d\", n);\n! \treturn buffer;\n }\n \n bool PgConnection::ConnectionBad() const\n! {\n! \treturn Status() == CONNECTION_BAD;\n }\n \n const char* PgConnection::ErrorMessage() const\n! {\n! \treturn (const char *)PQerrorMessage(pgConn);\n }\n! \n const char* PgConnection::DBName() const\n! {\n! \treturn (const char *)PQdb(pgConn);\n }\n \n PQnoticeProcessor PgConnection::SetNoticeProcessor(PQnoticeProcessor proc, void *arg)\n {\n! \treturn PQsetNoticeProcessor(pgConn, proc, arg);\n }",
"msg_date": "Sat, 15 Jun 2002 15:13:59 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "\nOK, I have added this to our CVS under interfaces/libpqxx. I have not\nmigrated over the CVS history. If we have questions about the code, we\nknow who to ask. ;-)\n\nLibpqxx still needs to be integrated:\n\t\n\tThe 'configure' tests need to be merged into our main configure\n\tThe documentation needs to be merged into our SGML docs.\n\tThe makefile structure needs to be merged into /interfaces.\n\nJeroen, do you have PostgreSQL CVS access yet? If not, we need to get\nyou that.\n\n---------------------------------------------------------------------------\n\nJeroen T. Vermeulen wrote:\n> On Wed, Jun 12, 2002 at 04:04:36PM -0400, Neil Conway wrote:\n> >\n> > Otherwise, if you put the code into src/interfaces/libpqxx and modify\n> > the PostgreSQL build system to be aware of it (as well as removing\n> > libpqxx's autoconf stuff), it shouldn't be too difficult.\n> \n> One concern I have on this point is that not all platforms are going to\n> be able to build libpqxx. Also, there'd have to be a lot of C++ stuff\n> in the existing config.h which I guess was meant to be C. \n> \n> Anyway, I found I'm not much good with automake and so on. I'm trying\n> to merge the two configure.ins, but I feel I must be missing a lot of\n> details.\n> \n> \n> Jeroen\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Tue, 2 Jul 2002 14:05:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "On Tue, Jul 02, 2002 at 02:05:57PM -0400, Bruce Momjian wrote:\n> \n> Jeroen, do you have PostgreSQL CVS access yet? If not, we need to get\n> you that.\n\nDon't have it yet, so please do!\n\n\nJeroen\n\n\n\n",
"msg_date": "Wed, 3 Jul 2002 00:20:51 +0200",
"msg_from": "\"Jeroen T. Vermeulen\" <jtv@xs4all.nl>",
"msg_from_op": true,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "Is it included now in the main build process? If so, I'll test it on\nFreeBSD/Alpha.\n\n> Libpqxx still needs to be integrated:\n>\n> \tThe 'configure' tests need to be merged into our main configure\n> \tThe documentation needs to be merged into our SGML docs.\n> \tThe makefile structure needs to be merged into /interfaces.\n\nChris\n\n\n\n",
"msg_date": "Wed, 3 Jul 2002 14:08:29 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> Is it included now in the main build process? If so, I'll test it on\n> FreeBSD/Alpha.\n> \n> > Libpqxx still needs to be integrated:\n> >\n> > \tThe 'configure' tests need to be merged into our main configure\n> > \tThe documentation needs to be merged into our SGML docs.\n> > \tThe makefile structure needs to be merged into /interfaces.\n> \n\nNo, currently disabled in the build. You can go into libpqxx and run\nconfigure and make and that should work.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Wed, 3 Jul 2002 11:32:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Integrating libpqxx"
}
] |
[
{
"msg_contents": " Deletion of data from a PostgreSQL table is very slow.\n\n It would be nice to have a very fast delete like \"truncate table.\"\n\n Now, truncate is a very dangerous command because it is not logged (but\n the same is true for other operations like bulk copy and select into).\n So one needs to be careful how this command is granted. The same damage\n (accidental deletion of all data) can be done by drop table just as\n easily.\n\n I frequently have to do this right now in PostgreSQL, but I simply\n emulate it by drop table/create table.\n\nWhat is a TRUNCATE TABLE but a drop create anyway? Is there some\ntechnical difference?\n\n--\nBilly O'Connor\n",
"msg_date": "Wed, 12 Jun 2002 14:37:09 -0400 (EDT)",
"msg_from": "Billy O'Connor <billy@oconnoronline.net>",
"msg_from_op": true,
"msg_subject": "Re: Feature request: Truncate table"
},
{
"msg_contents": "Deletion of data from a PostgreSQL table is very slow.\n\nIt would be nice to have a very fast delete like \"truncate table.\"\n\nNow, truncate is a very dangerous command because it is not logged (but\nthe same is true for other operations like bulk copy and select into).\nSo one needs to be careful how this command is granted. The same damage\n(accidental deletion of all data) can be done by drop table just as\neasily.\n\nI frequently have to do this right now in PostgreSQL, but I simply\nemulate it by drop table/create table.\n",
"msg_date": "Wed, 12 Jun 2002 12:32:51 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": false,
"msg_subject": "Feature request: Truncate table"
},
{
"msg_contents": "On Wed, 2002-06-12 at 14:32, Dann Corbit wrote:\n> Deletion of data from a PostgreSQL table is very slow.\n> \n> It would be nice to have a very fast delete like \"truncate table.\"\n> \n> Now, truncate is a very dangerous command because it is not logged (but\n> the same is true for other operations like bulk copy and select into).\n> So one needs to be careful how this command is granted. The same damage\n> (accidental deletion of all data) can be done by drop table just as\n> easily.\n> \n> I frequently have to do this right now in PostgreSQL, but I simply\n> emulate it by drop table/create table.\nIt's there:\n$ psql\nWelcome to psql, the PostgreSQL interactive terminal.\n\nType: \\copyright for distribution terms\n \\h for help with SQL commands\n \\? for help on internal slash commands\n \\g or terminate with semicolon to execute query\n \\q to quit\n\nler=# select version();\n version \n---------------------------------------------------------------------\n PostgreSQL 7.2.1 on i386-portbld-freebsd4.6, compiled by GCC 2.95.3\n(1 row)\n\nler=# \\h truncate\nCommand: TRUNCATE\nDescription: empty a table\nSyntax:\nTRUNCATE [ TABLE ] name\n\nler=# \n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "12 Jun 2002 14:35:41 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Truncate table"
},
{
"msg_contents": "On Wed, 2002-06-12 at 13:37, Billy O'Connor wrote:\n> Deletion of data from a PostgreSQL table is very slow.\n> \n> It would be nice to have a very fast delete like \"truncate table.\"\n> \n> Now, truncate is a very dangerous command because it is not logged (but\n> the same is true for other operations like bulk copy and select into).\n> So one needs to be careful how this command is granted. The same damage\n> (accidental deletion of all data) can be done by drop table just as\n> easily.\n> \n> I frequently have to do this right now in PostgreSQL, but I simply\n> emulate it by drop table/create table.\n> \n> What is a TRUNCATE TABLE but a drop create anyway? Is there some\n> technical difference?\n> \nIt doesn't kill indexes/triggers/constraints/Foreign Key Stuff, etc. \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "12 Jun 2002 14:44:56 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Truncate table"
},
{
"msg_contents": "Well in Ingres there is a WORLD of difference! For a start, you don't \nlock out the system catalog. Secondly it is an unlogged event, so it \nbeats \"delete from table_name\" hands down! Then, of course, it preserves \nall permissions, you keep the same OID, so views, et al, can remain in \ntact, as with other objects that referece it.\n\nThese are very important considerations in real-world applications esp. \nwhen a large number of objects may reference the table.\n\n\nWhich brings me to another point - I would dearly love to see a \n\"refresh\" option based on object name added to the system. This would \ncheck all references to a dropped object, by name, and repoint them to \nthe new instance of that object (i.e. if you do a drop/create, it \ndoesn't mess up your entire system if you forgot about a view or three!).\n\n\nMaybe a special \"drop\" and \"create\" can be added. Like \"drop to create\" \nor maybe simply \"recreate\", which tells PG that the object should be \ntreated as if it is dropped then recreated, but updating all the \nreferences to it or perhaps even reusing the OID?\n\nThe point being that alter table doesn't quite fill the hole (it comes \nclose though) and truncate isn't a schema-changing facility, merely a \ndata cropping one.\n\nWho knows? PG may even be credited with a seriously useful extension to \nSQL that may find its way into the standard at some time!\n\nBrad\n\n\nBilly O'Connor wrote:\n\n> Deletion of data from a PostgreSQL table is very slow.\n> \n> It would be nice to have a very fast delete like \"truncate table.\"\n> \n> Now, truncate is a very dangerous command because it is not logged (but\n> the same is true for other operations like bulk copy and select into).\n> So one needs to be careful how this command is granted. 
The same damage\n> (accidental deletion of all data) can be done by drop table just as\n> easily.\n> \n> I frequently have to do this right now in PostgreSQL, but I simply\n> emulate it by drop table/create table.\n> \n> What is a TRUNCATE TABLE but a drop create anyway? Is there some\n> technical difference?\n> \n> --\n> Billy O'Connor\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n\n",
"msg_date": "Wed, 12 Jun 2002 22:55:40 +0100",
"msg_from": "Bradley Kieser <brad@kieser.net>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Truncate table"
},
{
"msg_contents": "Bradley Kieser wrote:\n> Well in Ingres there is a WORLD of difference! For a start, you don't \n> lock out the system catalog. Secondly it is an unlogged event, so it \n> beats \"delete from table_name\" hands down! Then, of course, it preserves \n> all permissions, you keep the same OID, so views, et al, can remain in \n> tact, as with other objects that referece it.\n> \n> These are very important considerations in real-world applications esp. \n> when a large number of objects may reference the table.\n> \n> \n> Which brings me to another point - I would dearly love to see a \n> \"refresh\" option based on object name added to the system. This would \n> check all references to a dropped object, by name, and repoint them to \n> the new instance of that object (i.e. if you do a drop/create, it \n> doesn't mess up your entire system if you forgot about a view or three!).\n\nWe have actually be moving away from name-based linking so you can\nrename tables and things still work. I can see value in a relinking\nsystem, but we would have to know the old oid and new name, I guess.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jun 2002 18:26:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Truncate table"
},
{
"msg_contents": "> > What is a TRUNCATE TABLE but a drop create anyway? Is there some\n> > technical difference?\n> > \n> It doesn't kill indexes/triggers/constraints/Foreign Key Stuff, etc. \n\nHrm - last time I checked it did...\n\nChris\n\n",
"msg_date": "Thu, 13 Jun 2002 09:47:43 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Truncate table"
},
{
"msg_contents": "> > Hrm - last time I checked it did...\n>\n> Two questions :\n>\n> When was the last time ?\n\n7.1\n\n> It did what ?\n\nDrops triggers and stuff.\n\nOK, I did a check and it looks like it's fixed in 7.2 at least. Sorry for\nthe false alarm...\n\nChris\n\n",
"msg_date": "Thu, 13 Jun 2002 17:32:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Truncate table"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > > Hrm - last time I checked it did...\n> >\n> > Two questions :\n> >\n> > When was the last time ?\n> \n> 7.1\n> \n> > It did what ?\n> \n> Drops triggers and stuff.\n> \n> OK, I did a check and it looks like it's fixed in 7.2 at least. Sorry for\n> the false alarm...\n\nIt has never \"dropped triggers and stuff\", so there was nothing to fix.\nAll TRUNCATE TABLE has ever done, since the patch was submitted, was to\ntruncate the underlying relation file and the associated index files,\nand reinitialize the indexes. It has been changed to be disallowed in\ntransactions involving tables not created in the same transaction, but\nthat's about it. People have argued that if there are *RI* triggers on a\ntable, that TRUNCATE should be disallowed, as in Oracle. But TRUNCATE\nfrom inception to date has never dropped triggers...\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Thu, 13 Jun 2002 05:52:09 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Truncate table"
},
{
"msg_contents": "On Thu, 2002-06-13 at 03:47, Christopher Kings-Lynne wrote:\n> > > What is a TRUNCATE TABLE but a drop create anyway? Is there some\n> > > technical difference?\n> > > \n> > It doesn't kill indexes/triggers/constraints/Foreign Key Stuff, etc. \n> \n> Hrm - last time I checked it did...\n\nTwo questions :\n\nWhen was the last time ?\n\nIt did what ?\n\n-------------\nHannu\n\n",
"msg_date": "13 Jun 2002 12:26:11 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Feature request: Truncate table"
}
] |
[
{
"msg_contents": "I should apologize for being rather harsh about embedded SQL for\nPostgreSQL.\n\nTo be fair, it does function and it certainly isn't trivial to\nimplement. I am sure that those who have worked on this project have\ninvested very many hours of blood, sweat and tears making it work.\n\nI actually spent a great deal of effort trying to write some tools using\nthe PostgreSQL version of ECPG, and found fatal flaws that threw away a\ncouple weeks of work. I think that is why my responses were so\noverwhelmingly negative.\n\nHere is what I would like to see (consider a gentle suggestion):\n\nA reentrant version of ECPG that uses SQLCA and SQLDA like Oracle or Rdb\nor DB/2 or any of the professional database systems.\n",
"msg_date": "Wed, 12 Jun 2002 11:46:47 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "Dann Corbit wrote:\n> I should apologize for being rather harsh about embedded SQL for\n> PostgreSQL.\n> \n> To be fair, it does function and it certainly isn't trivial to\n> implement. I am sure that those who have worked on this project have\n> invested very many hours of blood, sweat and tears making it work.\n\nOh, OK. Forget what I said earlier about you writing your own database. :-)\n\n> I actually spent a great deal of effort trying to write some tools using\n> the PostgreSQL version of ECPG, and found fatal flaws that threw away a\n> couple weeks of work. I think that is why my responses were so\n> overwhelmingly negative.\n\nI assume this is because you wrote your code assuming a feature was in\necpg, but it wasn't, right?\n\n> Here is what I would like to see (consider a gentle suggestion):\n> \n> A reentrant version of ECPG that uses SQLCA and SQLDA like Oracle or Rdb\n> or DB/2 or any of the professional database systems.\n\nI see on the TODO list under ECPG:\n\n o Implement SQLDA\n o Add SQLSTATE\n\nAre these related to your problem? I see SQLCA in the ecpg code\nalready. Is it implemented incorrectly? If so, I could use items to\nadd to the TODO list.\n\nYou are actually the first person to complain about this, as far as I\ncan remember.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jun 2002 17:42:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "On Wed, Jun 12, 2002 at 11:46:47AM -0700, Dann Corbit wrote:\n> I should apologize for being rather harsh about embedded SQL for\n> PostgreSQL.\n\nAlso about being harsh about the people? Okay, apologies accepted.\n\n> I actually spent a great deal of effort trying to write some tools using\n> the PostgreSQL version of ECPG, and found fatal flaws that threw away a\n\nWhich ones? If it's just SQLDA, this is pretty well documented. Yes, the\nfeature is missing, but we all have only limited time for postgresql\nwork.\n\n> A reentrant version of ECPG that uses SQLCA and SQLDA like Oracle or Rdb\n> or DB/2 or any of the professional database systems.\n\nThe last time I used Oracle it used SQLCA in a very similar way as ECPG\ndoes. \n\nMichael\n\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 13 Jun 2002 12:05:43 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "On Wed, Jun 12, 2002 at 05:42:24PM -0400, Bruce Momjian wrote:\n> You are actually the first person to complain about this, as far as I\n> can remember.\n\nYup. I cannot remember any other person either. And since nobody\ncomplained, nobody worked on this. :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 13 Jun 2002 12:06:28 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Larry Rosenman [mailto:ler@lerctr.org]\n> Sent: Wednesday, June 12, 2002 12:36 PM\n> To: Dann Corbit\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Feature request: Truncate table\n> \n> \n> On Wed, 2002-06-12 at 14:32, Dann Corbit wrote:\n> > Deletion of data from a PostgreSQL table is very slow.\n> > \n> > It would be nice to have a very fast delete like \"truncate table.\"\n> > \n> > Now, truncate is a very dangerous command because it is not \n> logged (but\n> > the same is true for other operations like bulk copy and \n> select into).\n> > So one needs to be careful how this command is granted. \n> The same damage\n> > (accidental deletion of all data) can be done by drop table just as\n> > easily.\n> > \n> > I frequently have to do this right now in PostgreSQL, but I simply\n> > emulate it by drop table/create table.\n> It's there:\n> $ psql\n> Welcome to psql, the PostgreSQL interactive terminal.\n> \n> Type: \\copyright for distribution terms\n> \\h for help with SQL commands\n> \\? for help on internal slash commands\n> \\g or terminate with semicolon to execute query\n> \\q to quit\n> \n> ler=# select version();\n> version \n> ---------------------------------------------------------------------\n> PostgreSQL 7.2.1 on i386-portbld-freebsd4.6, compiled by GCC 2.95.3\n> (1 row)\n> \n> ler=# \\h truncate\n> Command: TRUNCATE\n> Description: empty a table\n> Syntax:\n> TRUNCATE [ TABLE ] name\n> \n> ler=# \n\nWell bust my buttons! Now that's service!\n;-)\n\nI am busily doing a Win32 port of PostgreSQL 7.2.1 right now, so that is\nwonderful news.\n",
"msg_date": "Wed, 12 Jun 2002 12:44:57 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Feature request: Truncate table"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Wednesday, June 12, 2002 2:42 PM\n> To: Dann Corbit\n> Cc: Michael Meskes; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] PostGres Doubt\n> \n> \n> Dann Corbit wrote:\n> > I should apologize for being rather harsh about embedded SQL for\n> > PostgreSQL.\n> > \n> > To be fair, it does function and it certainly isn't trivial to\n> > implement. I am sure that those who have worked on this \n> project have\n> > invested very many hours of blood, sweat and tears making it work.\n> \n> Oh, OK. Forget what I said earlier about you writing your \n> own database. :-)\n> \n> > I actually spent a great deal of effort trying to write \n> some tools using\n> > the PostgreSQL version of ECPG, and found fatal flaws that \n> threw away a\n> > couple weeks of work. I think that is why my responses were so\n> > overwhelmingly negative.\n> \n> I assume this is because you wrote your code assuming a feature was in\n> ecpg, but it wasn't, right?\n\nI have written lots of programs that use embedded SQL. I have (for\ninstance) several ODBC drivers that use embedded SQL and C++ as part of\nan ODBC driver system. I merrily coded away some stuff to do the same\nthing in PostgreSQL. After all, I had already done it for several other\nsystems and they all worked just about the same and the effort was\nminimal to change from one system to another.\n\nSo now, I started getting down to the details. One global structure...\nI started a major rewrite to repair it. Then (to my abject horror) I\ndiscovered there is no SQLCA at all. Project abandoned (actually, just\nswitched to libpq and everything was OK).\n\nYes, you are right -- I should have checked a lot more carefully before\nI dove in. 
I would have avoided getting my bun in a knot.\n \n> > Here is what I would like to see (consider a gentle suggestion):\n> > \n> > A reentrant version of ECPG that uses SQLCA and SQLDA like \n> Oracle or Rdb\n> > or DB/2 or any of the professional database systems.\n> \n> I see on the TODO list under ECPG:\n> \n> o Implement SQLDA\n> o Add SQLSTATE\n> \n> Are these related to your problem? I see SQLCA in the ecpg code\n> already. Is it implemented incorrectly? If so, I could use items to\n> add to the TODO list.\n> \n> You are actually the first person to complain about this, as far as I\n> can remember.\n\nI doubt if many people are using it then. There is a NIST SQL suite\nwhich should be run against it. Have you heard of it? It is a\nstandardization for embedded SQL [and other facets of the SQL language].\nI think it would be very nice if the PostgreSQL team would try to\nincorporate the whole thing as part of their validation suite. The\nproject that uses embedded SQL is in the folder /pc under the nist main\nfolder. Here is an example from that project that uses sqlca:\n\n/* EMBEDDED C (file \"XOP710.PC\") */\n\n/* Copyright 1994, 1995 X/Open Company Limited */\n\n/* All rights reserved. */\n/* */\n/* DISCLAIMER: */\n/* This program was reviewed by employees of NIST for */\n/* conformance to the SQL standards. */\n/* NIST assumes no responsibility for any party's use of */\n/* this program. */\n\n/* X/Open and the 'X' symbol are registered trademarks of X/Open\nCompany */\n/* Limited in the UK and other countries.\n*/\n\n\n/*****************************************************************/\n/* */\n/* COMMENT SECTION */\n/* */\n/* DATE 1994/05/13 EMBEDDED C LANGUAGE */\n/* X/Open SQL VALIDATION TEST SUITE V6.0 */\n/* */\n/* xop710.pc */\n/* WRITTEN BY: Colin O'Driscoll */\n/* */\n/* Acceptance of correctly placed SQLCA */\n/* */\n/* REFERENCES */\n/* X/Open CAE SQL Specification. 
*/\n/* Section 8.1.1 */\n/* */\n/* <embedded SQL C program> */\n/* */\n/* DATE PROGRAM LAST CHANGED 02/11/94 */\n/* */\n/*****************************************************************/\n\n#include <stdio.h>\n#include <time.h>\n#include <string.h>\n#include <stdlib.h>\nEXEC SQL BEGIN DECLARE SECTION;\n\n/* this line may be needed for some preprocessors */\nlong SQLCODE;\n\nchar SQLSTATE[6];\nchar uid[19];\nchar uidx[19];\nEXEC SQL END DECLARE SECTION;\nextern int AUTHID();\n\n/* INCLUDE SQLCA placed correctly */\nEXEC SQL INCLUDE sqlca;\n\n/* variables for NOSUBCLASS() */\nlong norm1;\nlong norm2;\nchar ALPNUM[37];\nchar NORMSQ[6];\n\nint errcnt;\n/* date_time declaration */\ntime_t cal;\n long errflg;\n\nCHCKOK ()\n{\nSQLSTATE[5] = '\\0';\nprintf (\"SQLSTATE should be 00000; its value is %s\\n\", SQLSTATE);\n\nNOSUBCLASS();\nif (strncmp (NORMSQ, \"00000\", 5) == 0 &&\nstrncmp (NORMSQ, SQLSTATE, 5) != 0)\nprintf (\"Valid implementation defined SQLSTATE accepted.\\n\");\n}\n\nmain()\n{\n\n\n strcpy(uid,\"XOPEN1\");\n AUTHID(uid);\nstrcpy(uidx,\"not logged in, not\");\nEXEC SQL SELECT USER INTO :uidx FROM XOPEN1.ECCO;\nif (strncmp(uid,uidx,6) != 0)\n {\n printf(\"ERROR: User %s expected. 
User %s connected\\n\",uid,uidx);\n exit(99);\n }\nerrcnt = 0;\nerrflg = 0;\nprintf(\"X/OPEN Extensions SQL Test Suite, V6.0, Embedded C,\nxop710.pc\\n\");\nprintf(\"59-byte ID\\n\");\nprintf(\"TEd Version #\\n\");\n/* date_time print */\ntime (&cal);\nprintf (\"\\n Time Run: %s\\n\", ctime (&cal));\n\nstrcpy(ALPNUM, \"01234ABCDEFGH56789IJKLMNOPQRSTUVWXYZ\");\n\n/******************** BEGIN TEST0710 ********************/\n\n printf(\"\\n TEST0710 \\n\");\n printf(\" X/O,Acceptance of correctly placed SQLCA\\n\");\n printf(\" X/OPEN SQL CAE Spec Section 8.1.1\\n\");\n printf(\" - - - - - - - - - - - - - - - - - - -\\n\\n\");\n printf(\" ### INSERT INTO WARNING VALUES('DDDDDD',5);\\n\");\n printf(\"\\n\\n=================================================\\n\");\n\n EXEC SQL DELETE FROM WARNING;\n/* initialise variables */\n strcpy(SQLSTATE,\"x\");\n sqlca.sqlcode = 5;\n\n EXEC SQL INSERT INTO WARNING VALUES('DDDDDD',5);\n\n printf(\"sqlca.sqlcode should be 0 \\n\");\n printf(\"sqlca.sqlcode is %ld\\n\", sqlca.sqlcode);\n\n CHCKOK();\n if ((sqlca.sqlcode != 0) && (strncmp(NORMSQ,\"00000\",5) != 0))\n {\n printf (\"*** Problem found in TEST STEP NUMBER 1 *** \\n\");\n errflg = errflg + 1;\n }\n\n EXEC SQL ROLLBACK WORK;\n printf(\"\\n\\n=================================================\\n\");\n\n if (errflg == 0)\n {\n\n EXEC SQL INSERT INTO XOPEN1.TESTREPORT\nVALUES('0710','pass','PC');\n printf(\"\\n\\n xop710.pc *** pass *** \");\n }\n\n else\n {\n EXEC SQL INSERT INTO XOPEN1.TESTREPORT\nVALUES('0710','fail','PC');\n errcnt = errcnt + 1;\n printf(\"\\n\\n xop710.pc *** fail *** \");\n }\n\n printf(\"\\n\\n=================================================\\n\");\n printf(\"\\n\\n\\n\\n\");\n \n EXEC SQL COMMIT WORK;\n\n/******************** END TEST0710 ********************/\n\n exit(errcnt);\n\n}\n\nNOSUBCLASS()\n{\n/* This routine replaces valid implementation deifined */\n/* subclasses with 000. 
This replacement equates valid */\n/* implementation-defined subclasses with the 000 value */\n/* expected by the test case; otherwise the test will */\n/* fail. After calling NOSUBCLASS, NORMSQ will be tested */\n/* SQLSTATE will be printed */\n\nstrcpy (NORMSQ, SQLSTATE);\n\nnorm1 = 2;\n/* subclass begins in position 3 of char array NORMSQ */\nfor (norm2 = 13; norm2 < 37; norm2++)\n/* valid subclasses begin with 5/9, I-Z, end of ALPNUM table */\n {\n if (NORMSQ[norm1] == ALPNUM[norm2])\n NORMSQ[norm1] = '0';\n }\n\nif (strncmp (NORMSQ, SQLSTATE, 5) == 0)\n goto P213;\n/* Quit if NORMSQ is unchanged. Subclass is not impl.def */\n/* Changed NORMSQ means implementation-defined subclass, */\n/* so proceed to zero it out, if valid (0-9, A-Z) */\n\nnorm1 = 3;\n/* examining position 4 of char array NORMSQ */\nfor (norm2 = 0; norm2 < 37; norm2++)\n/* valid characters are 0-9 A-Z */\n {\n if (NORMSQ[norm1] == ALPNUM[norm2])\n NORMSQ[norm1] = '0';\n }\n\nnorm1 = 4;\n/* examining position 5 of char array NORMSQ */\nfor (norm2 = 0; norm2 < 37; norm2++)\n/* valid characters are 0-9 A-z */\n {\n if (NORMSQ[norm1] == ALPNUM[norm2])\n NORMSQ[norm1] = '0';\n }\n\n/* implementation-defined subclasses are allowed for warnings */\n/* (class = 01). These equate to successful completion */\n/* SQLSTATE values of 00000. */\n/* reference SQL-92. 4.28 SQL-transactions, paragraph 2 */\n\nif (NORMSQ[0] == '0' && NORMSQ[1] == '1')\n NORMSQ[1] = '0';\nP213:\n return;\n\n}\n",
"msg_date": "Wed, 12 Jun 2002 15:01:31 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "Dann Corbit \n> > I assume this is because you wrote your code assuming a feature was in\n> > ecpg, but it wasn't, right?\n> \n> I have written lots of programs that use embedded SQL. I have (for\n> instance) several ODBC drivers that use embedded SQL and C++ as part of\n> an ODBC driver system. I merrily coded away some stuff to do the same\n> thing in PostgreSQL. After all, I had already done it for several other\n> systems and they all worked just about the same and the effort was\n> minimal to change from one system to another.\n> \n> So now, I started getting down to the details. One global structure...\n> I started a major rewrite to repair it. Then (to my abject horror) I\n> discovered there is no SQLCA at all. Project abandoned (actually, just\n> switched to libpq and everything was OK).\n\n\nI see SQLCA mentioned in the ecpg code. What am I not understanding?\n\n> > > Here is what I would like to see (consider a gentle suggestion):\n> > > \n> > > A reentrant version of ECPG that uses SQLCA and SQLDA like \n> > Oracle or Rdb\n> > > or DB/2 or any of the professional database systems.\n> > \n> > I see on the TODO list under ECPG:\n> > \n> > o Implement SQLDA\n> > o Add SQLSTATE\n> > \n> > Are these related to your problem? I see SQLCA in the ecpg code\n> > already. Is it implemented incorrectly? If so, I could use items to\n> > add to the TODO list.\n> > \n> > You are actually the first person to complain about this, as far as I\n> > can remember.\n> \n> I doubt if many people are using it then. There is a NIST SQL suite\n> which should be run against it. Have you heard of it? It is a\n> standardization for embedded SQL [and other facets of the SQL langauge].\n> I think it would be very nice if the PostgreSQL team should try to\n> incorporte the whole thing as part of their validation suite. The\n> project the uses embedded sql is in the folder /pc under the nist main\n> folder. 
Here is an example from that project that use sqlca:\n\nOh, that seems easy. I know Michael will know the answer.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jun 2002 18:19:57 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "On Wed, Jun 12, 2002 at 06:19:57PM -0400, Bruce Momjian wrote:\n> > I doubt if many people are using it then. There is a NIST SQL suite\n> > which should be run against it. Have you heard of it? It is a\n> > standardization for embedded SQL [and other facets of the SQL langauge].\n> > I think it would be very nice if the PostgreSQL team should try to\n> > incorporte the whole thing as part of their validation suite. The\n> > project the uses embedded sql is in the folder /pc under the nist main\n> > folder. Here is an example from that project that use sqlca:\n> \n> Oh, that seems easy. I know Michael will know the answer.\n\nActually I didn't know that test suite. But I will surely look at it.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 13 Jun 2002 12:09:12 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "On Wed, Jun 12, 2002 at 03:01:31PM -0700, Dann Corbit wrote:\n> project the uses embedded sql is in the folder /pc under the nist main\n> folder. Here is an example from that project that use sqlca:\n\nOf course this file alone won't run very well, but I added enough stuff\nand created a database to get it running and here's my result:\n\n1) SQLSTATE does not work, which of course is not surprising.\n2) It triggered one bug in parsing octal number in single quotes. The\npatch was just committed.\n3) I had to remove the SQLCODE definition as SQLCODE at the moment is a\n#define. Maybe we should change that.\n4) The SQLCA test runs through just fine.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Mon, 17 Jun 2002 15:27:37 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Wednesday, June 12, 2002 3:20 PM\n> To: Dann Corbit\n> Cc: Michael Meskes; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] PostGres Doubt\n> \n> \n> Dann Corbit \n> > > I assume this is because you wrote your code assuming a \n> feature was in\n> > > ecpg, but it wasn't, right?\n> > \n> > I have written lots of programs that use embedded SQL. I have (for\n> > instance) several ODBC drivers that use embedded SQL and \n> C++ as part of\n> > an ODBC driver system. I merrily coded away some stuff to \n> do the same\n> > thing in PostgreSQL. After all, I had already done it for \n> several other\n> > systems and they all worked just about the same and the effort was\n> > minimal to change from one system to another.\n> > \n> > So now, I started getting down to the details. One global \n> structure...\n> > I started a major rewrite to repair it. Then (to my abject \n> horror) I\n> > discovered there is no SQLCA at all. Project abandoned \n> (actually, just\n> > switched to libpq and everything was OK).\n> \n> \n> I see SQLCA mentioned in the ecpg code. What am I not understanding?\n\nI meant to say no SQLDA (the SQLCA only has the problem of scope).\n \n> > > > Here is what I would like to see (consider a gentle suggestion):\n> > > > \n> > > > A reentrant version of ECPG that uses SQLCA and SQLDA like \n> > > Oracle or Rdb\n> > > > or DB/2 or any of the professional database systems.\n> > > \n> > > I see on the TODO list under ECPG:\n> > > \n> > > o Implement SQLDA\n> > > o Add SQLSTATE\n> > > \n> > > Are these related to your problem? I see SQLCA in the ecpg code\n> > > already. Is it implemented incorrectly? 
If so, I could \n> use items to\n> > > add to the TODO list.\n\nThose are precisely the missing items (along with the implementation of\nSQLCA -- it should not be a global object, but rather be declared and a\nnew instance gets created).\n\n> > > You are actually the first person to complain about this, \n> as far as I\n> > > can remember.\n> > \n> > I doubt if many people are using it then. There is a NIST SQL suite\n> > which should be run against it. Have you heard of it? It is a\n> > standardization for embedded SQL [and other facets of the \n> SQL langauge].\n> > I think it would be very nice if the PostgreSQL team should try to\n> > incorporte the whole thing as part of their validation suite. The\n> > project the uses embedded sql is in the folder /pc under \n> the nist main\n> > folder. Here is an example from that project that use sqlca:\n> \n> Oh, that seems easy. I know Michael will know the answer.\n\nEmbedded SQL is subject to standard \"X/Open DR\":\nhttp://www.opengroup.org/sib.htm\n\nIf it can pass all of the tests in the NIST validation suite, then I\nthink that would be a great start.\n\nIt is also an excellent test for all the other facets of SQL -- it tests\nSQL/CLI (ODBC), transact SQL, etc.\n\nMany government contracts cannot be fulfilled by products which have not\nbeen certified to pass this suite.\nhttp://www.opengroup.org/public/prods/drm4.htm\nAt least, that used to be the case. 
If need be, I can supply a copy of\nthe test suite (I can't seem to find the download link any more).\n\nHere are some other implementations:\nXdb:\nhttp://www.va.pubnix.com/man/xdb/sqlref/SQLDAStructureForC_516.html\n\nProgress:\nhttp://www.progress.com/support/downloads/v91c_release_notes/esql_92.pdf\n\nSybase:\nhttp://manuals.sybase.com/onlinebooks/group-or/org0400e/osrsp32/@Generic\n__BookTextView/7062\n\nDB/2:\nhttp://publib.boulder.ibm.com/html/as400/v4r5/ic2924/index.htm?info/db2/\nrbafymst222.htm\n\nMicrosoft SQL*Server\nhttp://msdn.microsoft.com/library/default.asp?url=/library/en-us/esqlfor\nc/ec_6_erf_03_8ag5.asp\n\nInformix:\nhttp://www.informix.com/answers/english/docs/dbdk/infoshelf/esqlc/15.fm1\n.html\n\nSQL/Anywhere:\nhttp://bonsai.ucdmc.ucdavis.edu/SQLHelp/00000268.htm\n\nAdabas (This one for Adabas is very nice, it has a formal grammar!):\nhttp://www.softwareag.com/adabasd/documentation/docuen/html/prceng9.htm\n\nOracle:\nhttp://download-west.oracle.com/otndoc/oracle9i/901_doc/appdev.901/a8986\n1/pc_15ody.htm#4581\n\nGeneral case:\nhttp://www.fh-sbg.ac.at/~ulamec/sql/sqlda.htm\n\nAn overview:\nhttp://www.cs.purdue.edu/homes/mcclure/cs448/info/oraproc.ppt\n",
"msg_date": "Wed, 12 Jun 2002 16:09:52 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "Dann Corbit wrote:\n> > > So now, I started getting down to the details. One global \n> > structure...\n> > > I started a major rewrite to repair it. Then (to my abject \n> > horror) I\n> > > discovered there is no SQLCA at all. Project abandoned \n> > (actually, just\n> > > switched to libpq and everything was OK).\n> > \n> > \n> > I see SQLCA mentioned in the ecpg code. What am I not understanding?\n> \n> I meant to say no SQLDA (the SQLCA only has the problem of scope).\n\nI have updated the TODO with:\n\n\to Allow multi-threaded use of SQLCA\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 12 Jun 2002 19:34:54 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "On Wed, Jun 12, 2002 at 04:09:52PM -0700, Dann Corbit wrote:\n> Embedded SQL is subject to standard \"X/Open DR\":\n> http://www.opengroup.org/sib.htm\n> \n> If it can pass all of the tests in the NIST validation suite, then I\n> think that would be a great start.\n\nSomehow I didn't find a link to download the NIST suite. Could you tell\nme where to find it?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Tue, 18 Jun 2002 08:16:14 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
}
] |
[
{
"msg_contents": "Just so you know, current CVS HEAD passes all tests on FreeBSD/Alpha (a\n64bit machine) with this configure:\n\n./configure --prefix=/home/chriskl/local --enable-integer-datetimes --enable\n-debug --enable-depend --enable-cassert --with-pam --with-openssl --with-CXX\n\nChris\n\n",
"msg_date": "Thu, 13 Jun 2002 12:00:52 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Regression Test Report"
}
] |
[
{
"msg_contents": "Since we just had the discussion about index size, I have written a\nsection for our administration manual that shows ways of finding the\ndisk usage for specific tables and databases:\n\n\thttp://candle.pha.pa.us/main/writings/pgsql/sgml/diskusage.html\n\nI also updated README.oid2name to show a sample session that analyzes\ndisk usage.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jun 2002 02:05:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Documentation on disk usage"
},
{
"msg_contents": "It's much easier to use the routines in contrib/dbsize.\n\n> Since we just had the discussion about index size, I have written a\n> section for our administration manual that shows ways of finding the\n> disk usage for specific tables and databases:\n>\n> \thttp://candle.pha.pa.us/main/writings/pgsql/sgml/diskusage.html\n>\n> I also updated README.oid2name to show a sample session that analyzes\n> disk usage.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 17 Jun 2002 23:22:08 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Documentation on disk usage"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> It's much easier to use the routines in contrib/dbsize.\n\nDone, though dbsize does not address indexes or TOAST tables. I have\nadded documentation to SGML to show how to find these values.\n\n\n> \n> > Since we just had the discussion about index size, I have written a\n> > section for our administration manual that shows ways of finding the\n> > disk usage for specific tables and databases:\n> >\n> > \thttp://candle.pha.pa.us/main/writings/pgsql/sgml/diskusage.html\n> >\n> > I also updated README.oid2name to show a sample session that analyzes\n> > disk usage.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Tue, 25 Jun 2002 13:41:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Documentation on disk usage"
}
] |
[
{
"msg_contents": "\n[pgsql-announce removed and pgsql-hackers added]\n\nI have applied your patches with small corrections. Please grab the\nlatest source and let me know if something is wrong.\n\nThanks.\n--\nTatsuo Ishii\n\n> Hi Ishii-san,\n> \n> The patches are attached.Please apply it.\n> \n> Thanks,\n> Bill\n> \n> Tatsuo Ishii wrote:\n> \n> >>Hello,\n> >>\n> >>As postgresql is widely used in the world,many Chinese users are looking\n> >>forward to use such a high performanced database management\n> >>system.However since the Chinese new codepage standard GB18030 is not\n> >>completely supported,postgresql is limitted to be used in China.\n> >>\n> >>Now I have managed to implement the GB18030 support upon the latest\n> >>version,so the following functions are added after the patches are added.\n> >>\n> >>-Chinese GB18030 encoding is available on front-end side,while on\n> >>backend side,EUC_CN or MIC is used.\n> >>-Encoding convertion between MIC and GB18030 is implement.\n> >>-GB18030 locale support is available on front-end side.\n> >>-GB18030 locale test is added.\n> >>\n> >>Any help for testing with these patches and sugguestions for GB18030\n> >>support are greatly appreciated.\n> >>\n> >\n> >We need to apply your pacthes to the current source tree(we are not\n> >allowed to add new feature stable source tree). Your pacthes for\n> >encnames.c pg_wchar.h and wchar.c are rejected due to the difference\n> >between 7.2 and current.\n> >\n> >Can you give me patches encnames.c pg_wchar.h and wchar.c against\n> >current?\n> >\n> >Unicode conversion map staffs ISO10646-GB18030.TXT utf8_to_gb18030.map\n> >UCS_to_GB18030.pl and gb18030_to_utf8.map are looks good for\n> >current. So I will apply them.\n> >--\n> >Tatsuo Ishii\n> >\n> -- \n> /---------------------------/ \n> (Bill Huang)\n> E-mail:bill_huanghb@ybb.ne.jp\n> Cell phone:090-9979-4631\n> /---------------------------/\n",
"msg_date": "Thu, 13 Jun 2002 17:36:19 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Chinese GB18030 support is implemented!"
},
{
"msg_contents": "Hi Ishii-san,\n\nGreat!Thanks for your help.\nWould you please update the Makefile under Unicode?it should be updated \nfor GB18030.\n\nBest regards,\nBill\n\n\nTatsuo Ishii wrote:\n\n>[pgsql-announce removed and pgsql-hackers added]\n>\n>I have applied your patches with small corrections. Please grab the\n>latest source and let me know if something is wrong.\n>\n>Thanks.\n>--\n>Tatsuo Ishii\n>\n>>Hi Ishii-san,\n>>\n>>The patches are attached.Please apply it.\n>>\n>>Thanks,\n>>Bill\n>>\n>>Tatsuo Ishii wrote:\n>>\n>>>>Hello,\n>>>>\n>>>>As postgresql is widely used in the world,many Chinese users are looking\n>>>>forward to use such a high performanced database management\n>>>>system.However since the Chinese new codepage standard GB18030 is not\n>>>>completely supported,postgresql is limitted to be used in China.\n>>>>\n>>>>Now I have managed to implement the GB18030 support upon the latest\n>>>>version,so the following functions are added after the patches are added.\n>>>>\n>>>>-Chinese GB18030 encoding is available on front-end side,while on\n>>>>backend side,EUC_CN or MIC is used.\n>>>>-Encoding convertion between MIC and GB18030 is implement.\n>>>>-GB18030 locale support is available on front-end side.\n>>>>-GB18030 locale test is added.\n>>>>\n>>>>Any help for testing with these patches and sugguestions for GB18030\n>>>>support are greatly appreciated.\n>>>>\n>>>We need to apply your pacthes to the current source tree(we are not\n>>>allowed to add new feature stable source tree). Your pacthes for\n>>>encnames.c pg_wchar.h and wchar.c are rejected due to the difference\n>>>between 7.2 and current.\n>>>\n>>>Can you give me patches encnames.c pg_wchar.h and wchar.c against\n>>>current?\n>>>\n>>>Unicode conversion map staffs ISO10646-GB18030.TXT utf8_to_gb18030.map\n>>>UCS_to_GB18030.pl and gb18030_to_utf8.map are looks good for\n>>>current. 
So I will apply them.\n>>>--\n>>>Tatsuo Ishii\n>>>\n>>-- \n>>/---------------------------/ \n>>(Bill Huang)\n>>E-mail:bill_huanghb@ybb.ne.jp\n>>Cell phone:090-9979-4631\n>>/---------------------------/\n>>\n-- \nBill Huang (81)-3-3257-0417\nRed Hat K.K. http://www.jp.redhat.com\n\n\n\n",
"msg_date": "Thu, 13 Jun 2002 18:12:55 +0900",
"msg_from": "Bill Huang <bhuang@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Chinese GB18030 support is implemented!"
},
{
"msg_contents": "> Great!Thanks for your help.\n\nYou are welcome.\n\n> Would you please update the Makefile under Unicode?it should be updated \n> for GB18030.\n\ndone.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 14 Jun 2002 12:31:35 +0900 (JST)",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Chinese GB18030 support is implemented!"
}
] |
[
{
"msg_contents": "Currently serial is dumped as a sequence and appropriate default\nstatement.\n\nWith my upcoming dependency patch serials depend on the appropriate\ncolumn. Drop the column (or table) and the sequence goes with it.\nThe dependency information does not survive the pg_dump / restore\nprocess however as it's recreated as the table and individual\nsequence.\n\nI see 2 options for carrying the information.\n\nStore sequence information in the SERIAL creation statement:\nCREATE TABLE tab (col1 SERIAL(<start num>, <sequence name>));\n\nOr store the dependency information in the sequence:\nCREATE SEQUENCE ... REQUIRES COLUMN <column>;\n\n\nThe former makes a lot more sense, and it's nice that the sequence\ninformation is in one place.\n--\nRod\n\n",
"msg_date": "Thu, 13 Jun 2002 08:10:01 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Making serial survive pg_dump"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Store sequence information in the SERIAL creation statement:\n> CREATE TABLE tab (col1 SERIAL(<start num>, <sequence name>));\n\nThis is wrong because it loses the separation between schema and data.\nI do agree that it would be nice if pg_dump recognized serial columns\nand dumped them as such --- but the separate setval call is still the\nappropriate technique for messing with the sequence contents. We do\nnot need a syntax extension in CREATE.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jun 2002 09:46:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making serial survive pg_dump "
},
{
"msg_contents": "\n--\nRod\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Thursday, June 13, 2002 9:46 AM\nSubject: Re: [HACKERS] Making serial survive pg_dump\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > Store sequence information in the SERIAL creation statement:\n> > CREATE TABLE tab (col1 SERIAL(<start num>, <sequence name>));\n>\n> This is wrong because it loses the separation between schema and\ndata.\n> I do agree that it would be nice if pg_dump recognized serial\ncolumns\n> and dumped them as such --- but the separate setval call is still\nthe\n> appropriate technique for messing with the sequence contents. We do\n> not need a syntax extension in CREATE.\n\nOk, keeping the setval is appropriate. Are there any problems with a\nSERIAL(<sequence name>) implementation?\n\n\n\n",
"msg_date": "Thu, 13 Jun 2002 17:30:30 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Making serial survive pg_dump "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Ok, keeping the setval is appropriate. Are there any problems with a\n> SERIAL(<sequence name>) implementation?\n\nWhat for? The sequence name is an implementation detail, not something\nwe want to expose (much less let users modify).\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jun 2002 17:41:07 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making serial survive pg_dump "
},
{
"msg_contents": "Normally I'd agree, but I've found a few people who use normal\nsequence operations with serial sequences. That is, they track down\nthe name and use it.\n\nI'd prefer to force these people to make it manually, but would be\nsurprised if that was a concensus.\n\n--\nRod\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Thursday, June 13, 2002 5:41 PM\nSubject: Re: [HACKERS] Making serial survive pg_dump\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > Ok, keeping the setval is appropriate. Are there any problems\nwith a\n> > SERIAL(<sequence name>) implementation?\n>\n> What for? The sequence name is an implementation detail, not\nsomething\n> we want to expose (much less let users modify).\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Thu, 13 Jun 2002 17:45:14 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Making serial survive pg_dump "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> Normally I'd agree, but I've found a few people who use normal\n> sequence operations with serial sequences. That is, they track down\n> the name and use it.\n\nSure. But what's this have to do with what pg_dump should emit?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jun 2002 17:52:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making serial survive pg_dump "
},
{
"msg_contents": "If we have sequences pick new names automatically, it may not pick the\nsame name after dump / restore as it had earlier -- especially across\nversions (see TODO entry).\n\nSo don't we need a way to suggest the *right* name to SERIAL?\n\n--\nRod\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Thursday, June 13, 2002 5:52 PM\nSubject: Re: [HACKERS] Making serial survive pg_dump\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > Normally I'd agree, but I've found a few people who use normal\n> > sequence operations with serial sequences. That is, they track\ndown\n> > the name and use it.\n>\n> Sure. But what's this have to do with what pg_dump should emit?\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Thu, 13 Jun 2002 17:56:42 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Making serial survive pg_dump "
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> If we have sequences pick new names automatically, it may not pick the\n> same name after dump / restore as it had earlier -- especially across\n> versions (see TODO entry).\n> So don't we need a way to suggest the *right* name to SERIAL?\n\nNo. IMHO, if we change the naming convention for serial sequences (which\nseems unlikely, except that it might be indirectly affected by changing\nNAMEDATALEN), then we'd *want* the new naming convention to take effect,\nnot to have pg_dump scripts force an old naming convention to be\npreserved.\n\nI realize there's a potential for failing to restore the setval()\ninformation if the name actually does change, but I'm willing to live\nwith that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jun 2002 18:05:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making serial survive pg_dump "
},
{
"msg_contents": "Thats fair, and makes the job a heck of a lot simpler.\n\nWe do need to change the sequence naming once. They have a tendency\nto conflict at the moment.\n\n--\nRod\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Thursday, June 13, 2002 6:05 PM\nSubject: Re: [HACKERS] Making serial survive pg_dump\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > If we have sequences pick new names automatically, it may not pick\nthe\n> > same name after dump / restore as it had earlier -- especially\nacross\n> > versions (see TODO entry).\n> > So don't we need a way to suggest the *right* name to SERIAL?\n>\n> No. IMHO, if we change the naming convention for serial sequences\n(which\n> seems unlikely, except that it might be indirectly affected by\nchanging\n> NAMEDATALEN), then we'd *want* the new naming convention to take\neffect,\n> not to have pg_dump scripts force an old naming convention to be\n> preserved.\n>\n> I realize there's a potential for failing to restore the setval()\n> information if the name actually does change, but I'm willing to\nlive\n> with that.\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Thu, 13 Jun 2002 18:11:05 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Making serial survive pg_dump "
},
{
"msg_contents": "\nFolks,\n\n> No. IMHO, if we change the naming convention for serial sequences (which\n> seems unlikely, except that it might be indirectly affected by changing\n> NAMEDATALEN), then we'd *want* the new naming convention to take effect,\n> not to have pg_dump scripts force an old naming convention to be\n> preserved.\n> \n> I realize there's a potential for failing to restore the setval()\n> information if the name actually does change, but I'm willing to live\n> with that.\n\nIMNHO, if this is such a concern for the developer, then what about using \nexplicitly named sequences? I almost never use the SERIAL data type, because \nI feel that I need naming control as well as explicit permissions. SERIAL is \na convenience for those who don't want to be bothered ... serious developers \nhould use DEFAULT NEXTVAL('sequence_name').\n\n-- \n-Josh Berkus\n",
"msg_date": "Thu, 13 Jun 2002 16:55:52 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: Making serial survive pg_dump"
},
{
"msg_contents": "> Currently serial is dumped as a sequence and appropriate default\n> statement.\n>\n> With my upcoming dependency patch serials depend on the appropriate\n> column. Drop the column (or table) and the sequence goes with it.\n> The depencency information does not survive the pg_dump / restore\n> process however as it's recreated as the table and individual\n> sequence.\n\nWhat happens is the sequence is shared between several tables (eg. invoice\nnumbers or something)\n\nChris\n\n",
"msg_date": "Fri, 14 Jun 2002 10:15:06 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Making serial survive pg_dump"
},
{
"msg_contents": "> What happens is the sequence is shared between several tables (eg.\ninvoice\n> numbers or something)\n\nYou cannot accomplish this situation by strictly using the SERIAL\ntype.\n\n",
"msg_date": "Thu, 13 Jun 2002 22:20:18 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Making serial survive pg_dump"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n>> What happens is the sequence is shared between several tables (eg.\n>> invoice numbers or something)\n\n> You cannot accomplish this situation by strictly using the SERIAL\n> type.\n\nBut Chris is correct that there are borderline cases where we might\ndo the wrong thing if we're not careful. The real question here,\nI suspect, is what rules pg_dump will use to decide that it ought\nto suppress a CREATE SEQUENCE command, DEFAULT clause, etc, in\nfavor of emitting a SERIAL column datatype. In particular, ought it\nto depend on looking at the form of the name of the sequence?\nI can see arguments both ways on that...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jun 2002 23:11:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Making serial survive pg_dump "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> \n>>>What happens is the sequence is shared between several tables (eg.\n>>>invoice numbers or something)\n>>\n> \n>>You cannot accomplish this situation by strictly using the SERIAL\n>>type.\n> \n> \n> But Chris is correct that there are borderline cases where we might\n> do the wrong thing if we're not careful. The real question here,\n> I suspect, is what rules pg_dump will use to decide that it ought\n> to suppress a CREATE SEQUENCE command, DEFAULT clause, etc, in\n> favor of emitting a SERIAL column datatype. In particular, ought it\n> to depend on looking at the form of the name of the sequence?\n> I can see arguments both ways on that...\n> \n\nI think that when SERIAL is used, the sequence should be tied \ninextricably to the table which created it, and it should be hidden from \nuse for other purposes (perhaps similar to the way a toast table is). If \nyou *want* to use a sequence across several tables, then you don't use \nSERIAL, you create a sequence.\n\nMany people who come from an MS SQL Server background are used to an \nIDENTITY column being tied transparently to the table in this fashion, \nand they initially find sequences confusing. Conversely, people coming \nfrom an Oracle background are quite comfortable with sequences, and \ndon't understand why it is necessary to have an IDENTITY type column at \nall -- they seem too restrictive. We have people from both backgrounds \nwhere I work, and both databases in use for various applications, and \nthis is at least what I have observed.\n\nThis is a chance for PostgreSQL to support people from both camps \nequally well.\n\nAnyway, just my 2c :-)\n\nJoe\n\n",
"msg_date": "Thu, 13 Jun 2002 21:11:55 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Making serial survive pg_dump"
},
{
"msg_contents": "> I think that when SERIAL is used, the sequence should be tied \n> inextricably to the table which created it, and it should be hidden from \n> use for other purposes (perhaps similar to the way a toast table is). If \n> you *want* to use a sequence across several tables, then you don't use \n> SERIAL, you create a sequence.\n\nAgreed. Maybe an extra column in pg_attribute or something?\n\nChris\n\n",
"msg_date": "Fri, 14 Jun 2002 12:43:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Making serial survive pg_dump"
},
{
"msg_contents": "> > I think that when SERIAL is used, the sequence should be tied\n> > inextricably to the table which created it, and it should be\nhidden from\n> > use for other purposes (perhaps similar to the way a toast table\nis). If\n> > you *want* to use a sequence across several tables, then you don't\nuse\n> > SERIAL, you create a sequence.\n>\n> Agreed. Maybe an extra column in pg_attribute or something?\n\nSince no other sequence will depend on a column, I could base it on\nthat.\n\n",
"msg_date": "Fri, 14 Jun 2002 19:07:56 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Making serial survive pg_dump"
}
] |
[
{
"msg_contents": "Thanks for reading. A few disclaimers:\n\n1. I am a newbie. I program for a living, but my work in pg has so far \nbeen at the \"devoted hobby level,\" using pg and PHP. For an example of \nwhat I have done with pg, you can visit www.the-athenaeum.org, a site I \none day hope to make into a business.\n\n2. I've searched the boards, but can't find a good solution to my \nproblem. I realize that there may be better ways to solve my issues \nthan expanding pg's feature set, or there may be features I'm not \nfamiliar with. This message is partly to find out how I should approach \nmy problem.\n\n3. I know you are all busy, and there are more pressing issues. I am \nextremely grateful for any advice you can give me, and will be ecstatic \nif I can get a solution out of this.\n\nSo, on to my issue.\n\nTHE BACKGROUND - I am creating a web site where people can study the \nhumanities. They can upload, discuss, and peer-review information. \n They can also create, edit, approve, and delete records in a postgresql \ndb, using web forms. Many of these forms need a way to enter historical \ndates - a person DOB, the date an empire was founded, the date a book \nwas published, etc. \n\nMY PROBLEM - Because this site deals with, among other things, ancient \nart, acheaology, and anthropology, I need a way to handle dates as \nspecific as a single day, and as far back as 100,000 BC. According to \nthe docs (I looked at \nhttp://www.postgresql.org/idocs/index.php?datatype-datetime.html), the \nfarthest back any date type reaches is 4713 BC. So far, I have tried to \ndeal with this problem by creating a numeric field for the year, and \nradio buttons for AD/BC. I then do a lot of form validation. Not only \nthat, if I want to be as specific as a month or a day, then those are \nseparate fields on my forms. Plus, I can't combine all of the fields \nand put them into a pg data type, because once again, they don't extend \nthat far back. So, I have to maintain and validate the year, month, and \ndays fields separately. Then imagine what I have to do if a user wants \nto _sort_ by date, or select events by date range! \n\nIdeally, I would like to figure this out on two fronts. I'd like to \nfind out what's the best way to store dates that far back (with pg), and \nthen on the PHP end I'll have to figure out how to parse entry so that \nit is as simple as possible for the end user. Knowing how to store \nthese ancient dates in pg would help me a great deal.\n\nThere are a lot of university and hobby sites out there working on \ndigitizing collections of ancient texts, artifacts, etc. I don't know \nhow the date range is chosen for a type like timestamp (4713BC - \n1,465,001 AD), but it seems to me that there would be way more people \nworking on recording the past (and thereby needed a date range that \nextends into ancient civilization) than working with dates in the far \nfuture (more than a million years ahead???).\n\nI hope that someone will be kind enough to reply with some ideas, or \neven to take up the cause and consider a date type that could be used \nfor historical purposes. I am an avid fan of open source and pg, \nespecially as compared to mySQL. I hope to continue using pg, and build \na first-class web site that may one day serve as a great working example \nof what pg can do. Any help would be greatly appreciated.\n\nThanks in advance,\nChris McCormick\n\n",
"msg_date": "Thu, 13 Jun 2002 11:39:55 -0400",
"msg_from": "\"Chris McCormick\" <cmccormick@thestate.com>",
"msg_from_op": true,
"msg_subject": "FEATURE REQUEST - More dynamic date type?"
},
{
"msg_contents": "On Thu, Jun 13, 2002 at 11:39:55 -0400,\n Chris McCormick <cmccormick@thestate.com> wrote:\n> Thanks for reading. A few disclaimers:\n> \n> MY PROBLEM - Because this site deals with, among other things, ancient \n> art, acheaology, and anthropology, I need a way to handle dates as \n> specific as a single day, and as far back as 100,000 BC. According to \n> the docs (I looked at \n> http://www.postgresql.org/idocs/index.php?datatype-datetime.html), the \n> farthest back any date type reaches is 4713 BC. So far, I have tried to \n> deal with this problem by creating a numeric field for the year, and \n> radio buttons for AD/BC. I then do a lot of form validation. Not only \n> that, if I want to be as specific as a month or a day, then those are \n> separate fields on my forms. Plus, I can't combine all of the fields \n> and put them into a pg data type, because once again, they don't extend \n> that far back. So, I have to maintain and validate the year, month, and \n> days fields separately. Then imagine what I have to do if a user wants \n> to _sort_ by date, or select events by date range! \n\nIs there really a standard for how long individual months were in 100000BC!\nCan't you use Julian dates for this? It is well defined (though conversion\nto normal dates may not be that far back) and should be easy to work with.\n(There may be problems if you go back so far that you need to worry about\ndays not really being 24 hours long.)\n",
"msg_date": "Fri, 14 Jun 2002 16:45:39 -0500",
"msg_from": "Bruno Wolff III <bruno@wolff.to>",
"msg_from_op": false,
"msg_subject": "Re: FEATURE REQUEST - More dynamic date type?"
},
{
"msg_contents": "On Thu, 2002-06-13 at 16:39, Chris McCormick wrote:\n...\n> THE BACKGROUND - I am creating a web site where people can study the \n> humanities. They can upload, discuss, and peer-review information. \n> They can also create, edit, approve, and delete records in a postgresql \n> db, using web forms. Many of these forms need a way to enter historical \n> dates - a person DOB, the date an empire was founded, the date a book \n> was published, etc. \n> \n> MY PROBLEM - Because this site deals with, among other things, ancient \n> art, acheaology, and anthropology, I need a way to handle dates as \n> specific as a single day, and as far back as 100,000 BC. According to \n> the docs (I looked at \n> http://www.postgresql.org/idocs/index.php?datatype-datetime.html), the \n> farthest back any date type reaches is 4713 BC. So far, I have tried to \n> deal with this problem by creating a numeric field for the year, and \n> radio buttons for AD/BC. I then do a lot of form validation. Not only \n> that, if I want to be as specific as a month or a day, then those are \n> separate fields on my forms. Plus, I can't combine all of the fields \n> and put them into a pg data type, because once again, they don't extend \n> that far back. So, I have to maintain and validate the year, month, and \n> days fields separately. Then imagine what I have to do if a user wants \n> to _sort_ by date, or select events by date range! \n> \n> Ideally, I would like to figure this out on two fronts. I'd like to \n> find out what's the best way to store dates that far back (with pg), and \n> then on the PHP end I'll have to figure out how to parse entry so that \n> it is as simple as possible for the end user. Knowing how to store \n> these ancient dates in pg would help me a great deal.\n> \n> There are a lot of university and hobby sites out there working on \n> digitizing collections of ancient texts, artifacts, etc. I don't know \n> how the date range is chosen for a type like timestamp (4713BC - \n> 1,465,001 AD), but it seems to me that there would be way more people \n> working on recording the past (and thereby needed a date range that \n> extends into ancient civilization) than working with dates in the far \n> future (more than a million years ahead???).\n> \n> I hope that someone will be kind enough to reply with some ideas, or \n> even to take up the cause and consider a date type that could be used \n> for historical purposes. I am an avid fan of open source and pg, \n> especially as compared to mySQL. I hope to continue using pg, and build \n> a first-class web site that may one day serve as a great working example \n> of what pg can do. Any help would be greatly appreciated.\n\nI have seen an implementation to deal with this problem; it was in a\nmuseum package developed in New Zealand which I saw about 7 years ago. \nI can't now remember what it was called, but it allowed objects to be\ncatalogued with fuzzy dates. (The package was written in Revelation,\nwhich was a PICK-like database.)\n\nI think that the solution will have to be to develop a special type.\nYour fields have to hold dates that vary from very specific (4th August\n1914) or quite close (1520 AD) to pretty vague (Louis Quatorze, 850-880,\nca.1230, 5th century BC) or even very vague (4th Dynasty, Paleolithic). \nThe good news is that PostgreSQL will let you do this, if you can devise\nthe algorithms; I'm not sure if there's another RDBMS that would.\n\nYour type would need a flag byte to determine the type of date or period\n(I think 256 different types of date/period might be enough, but you\ncould give it 2 bytes if you wanted to be sure(?) never to run out) and\na value field -- integer or long integer. You would have to define\ncomparison routines for sorting, equality and inclusion or intersection\n(\"1540\" is included in \"16th century\", \"Napoleonic\" intersects \"18th\ncentury\" and \"19th Century\").\n\nIf you like this idea, I might be interested in developing it, in my\ninfrequent moments of spare time...\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"Cease from anger, and forsake wrath; do not fret- \n it leads only to evil.\" Psalms 37:8",
"msg_date": "15 Jun 2002 04:52:35 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: FEATURE REQUEST - More dynamic date type?"
}
] |
[
{
"msg_contents": "I have added the recent threads discussing a Win32 port to CVS\nTODO.detail, and have added an item on the TODO list:\n\n\t* Create native Win32 port [win32]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 13 Jun 2002 14:02:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Win32 port"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Michael Meskes [mailto:meskes@postgresql.org]\n> Sent: Thursday, June 13, 2002 3:06 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] PostGres Doubt\n> \n> \n> On Wed, Jun 12, 2002 at 11:46:47AM -0700, Dann Corbit wrote:\n> > I should apologize for being rather harsh about embedded SQL for\n> > PostgreSQL.\n> \n> Also about being harsh about the people? Okay, apologies accepted.\n> \n> > I actually spent a great deal of effort trying to write \n> some tools using\n> > the PostgreSQL version of ECPG, and found fatal flaws that \n> threw away a\n> \n> Which ones? If it's just SQLDA, this is pretty well \n> documented. Yes, the\n> feature is missing, but we all have only limited time for postgresql\n> work.\n\nAllow me to apologize again. I have clearly gotten off on the wrong\nfoot here. In 6 months of 8 hour days, I would not be able to create a\ntool with the functionality that you have provided. It is an amazing\npiece of work. The point I was (badly) trying to make is that because\nof some of the limitations of PostgreSQL's ECPG it is impossible for me\nto use it. Now, *all* of the applications I work with are\nmultithreading so my situation may be very different from that of some\nothers.\n \n> > A reentrant version of ECPG that uses SQLCA and SQLDA like \n> Oracle or Rdb\n> > or DB/2 or any of the professional database systems.\n> \n> The last time I used Oracle it used SQLCA in a very similar \n> way as ECPG\n> does. \n\nYou are right about Oracle. They use global variables in embedded SQL.\n(I did not write our company's Oracle driver.) It remains true for all\nthe others that they are multithread capable. It is far better to not\nmake the SQLCA and SQLDA structures global. Since Oracle's model and\nthat of PostgreSQL are very similar (for example in concurrency), it is\nunsurprising that it might be chosen as a model for implementation of\nembedded SQL.\n\nLet me:\n1. Wipe the egg off my face\n2. Personally apologize to the entire list and especially to the\noriginators of PostgreSQL's ecpg\n3. Restate my opinion in a better way:\n\n\"PostgreSQL's implementation of embedded SQL is very good. The grammar\nis complete, it is open source, and highly functional. The licensing is\na dream -- useful for any sort of endeavor. There are a couple minor\nissues that would enhance the functionality of ecpg even more. If the\nSQLCA were made a local variable to the query, it would be possible to\nhave multiple threads of execution. If PostgreSQL's ecpg were enhanced\nto have SQLDA structures as specified by \"X/Open DR\" it would enhance\nthe functionality even further. If such features were added, it would\nbe possible to use ecpg in multithreaded applications, in web servers,\nin ODBC drivers. In fact, it would become the method of choice for\nalmost any sort of application.\"\n\nI am reminded of Benjamin Franklin, who once said:\n\"You can catch more flies with a teaspoon of sugar than with a gallon of\nvinegar.\"\n",
"msg_date": "Thu, 13 Jun 2002 11:47:02 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: PostGres Doubt"
}
] |
[
{
"msg_contents": "I know you guys love subject lines like this, but I have a humble\nrequest. Would it be possible to have either a GUC setting or a grammar\nchange to allow TEMPORARY tables to be dropped at transaction commit? I\nknow the standard defines the lifetimes of temporary tables to be that\nof the session. However, I have CORBA middleware which generates a\ntransient session object per client. The object connects to the database\nat instantiation time and services requests by CORBA's remote method\ninvocation. Often, the methods invoked on the object cause the object to\ncreate temporary tables. Each method invocation is a single transaction.\nBut the lifetime of a user's session can be quite long. Worse, CORBA\ndoesn't permit the application to detect when the client \"disconnects\" -\nthe object (and therefore the database connection) remains unless told\nexplicitly to die. I currently have an evictor pattern remove objects\nupon which no method invocation has taken place over a given time. But\nin the meantime, dozens of temporary tables have built up. The idea kind\nof falls along the same lines as the SET discussion previously. As a\ntest, it took me about 8 lines of code to implement the change. Of\ncourse, it was a hack, but it worked nicely. \n\nWould a patch to the grammar be accepted? Along the lines of:\n\nCREATE TEMPORARY TABLE \n...\nON COMMIT DROP;\n\npseudo-compatible with the SQL-standard of:\n\nON COMMIT { DELETE | PRESERVE } ROWS;\n\nso one day PostgreSQL's grammar would look like:\n\n... \nON COMMIT { DROP | { DELETE | PRESERVE } ROWS };\n\nI suppose I could just change the code to query the catalogue for those\ntemporary tables created during the transaction and issue DROP TABLEs by\nhand. But I thought it might be an idea of value to others.\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Thu, 13 Jun 2002 16:23:28 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": true,
"msg_subject": "Non-standard feature request"
},
{
"msg_contents": "Mike Mascari <mascarm@mascari.com> writes:\n> ... Would it be possible to have either a GUC setting or a grammar\n> change to allow TEMPORARY tables to be dropped at transaction commit?\n\nThis seems like a not unreasonable idea; but the lack of other responses\nsuggests that the market for such a feature isn't there. Perhaps you\nshould try to drum up some interest on pgsql-general and/or pgsql-sql.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jun 2002 01:16:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Non-standard feature request "
},
{
"msg_contents": "On Thu, 13 Jun 2002, Mike Mascari wrote:\n\n> \n> CREATE TEMPORARY TABLE \n> ...\n> ON COMMIT DROP;\n> \n> pseudo-compatible with the SQL-standard of:\n> \n> ON COMMIT { DELETE | PRESERVE } ROWS;\n> \n> so one day PostgreSQL's grammar would look like:\n> \n> ... \n> ON COMMIT { DROP | { DELETE | PRESERVE } ROWS };\n\nI think this is a pretty useful feature. Shouldn't require too much\nwork. A new relkind or a bool in TempTable and a little code in\nAtEOXact_temp_relations() to heap_drop_with_catalog() the registered temp\ntable.\n\nAnyone else keen for this feature? \n\nGavin\n\n",
"msg_date": "Fri, 14 Jun 2002 19:05:08 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Non-standard feature request"
},
{
"msg_contents": "Tom Lane wrote:\n> Mike Mascari <mascarm@mascari.com> writes:\n> > ... Would it be possible to have either a GUC setting or a grammar\n> > change to allow TEMPORARY tables to be dropped at transaction commit?\n> \n> This seems like a not unreasonable idea; but the lack of other responses\n> suggests that the market for such a feature isn't there. Perhaps you\n> should try to drum up some interest on pgsql-general and/or pgsql-sql.\n\nI was wondering if it made sense to remove temp tables on transaction\nfinish if the temp table was created in the transaction? That wouldn't\nrequire any syntax change. Seems non-standard though, and I can imagine\na few cases where you wouldn't want it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 14 Jun 2002 12:21:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Non-standard feature request"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Tom Lane wrote:\n> > Mike Mascari <mascarm@mascari.com> writes:\n> > > ... Would it be possible to have either a GUC setting or a grammar\n> > > change to allow TEMPORARY tables to be dropped at transaction commit?\n> >\n> > This seems like a not unreasonable idea; but the lack of other responses\n> > suggests that the market for such a feature isn't there. Perhaps you\n> > should try to drum up some interest on pgsql-general and/or pgsql-sql.\n> \n> I was wondering if it made sense to remove temp tables on transaction\n> finish if the temp table was created in the transaction? That wouldn't\n> require any syntax change. Seems non-standard though, and I can imagine\n> a few cases where you wouldn't want it.\n\nThat is what I want to do, except by extending the grammar. I must admit\nto actually being surprised that a TEMP table created inside a\ntransaction lived after the transaction completed. That's when I looked\nat the standard and saw that PostgreSQL's implementation was correct. I\nwould think for most people session-long temp tables are more the\nexception than the rule. But I guess SQL92 doesn't think so. Regardless,\na couple of other people have shown some interest in the idea. I'll post\nit to general as well as Tom suggests...\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Fri, 14 Jun 2002 14:57:59 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-standard feature request"
},
{
"msg_contents": "On Fri, 14 Jun 2002, Mike Mascari wrote:\n\n> That is what I want to do, except by extending the grammar. I must admit\n> to actually being surprised that a TEMP table created inside a\n> transaction lived after the transaction completed. That's when I looked\n> at the standard and saw that PostgreSQL's implementation was correct. I\n> would think for most people session-long temp tables are more the\n> exception than the rule. But I guess SQL92 doesn't think so. Regardless,\n> a couple of other people have shown some interest in the idea. I'll post\n> it to general as well as Tom suggests...\n> \nActually, we needed to use temp tables that live beyond the transaction,\nbecause there are no session variables in postgres. So I did an\nimplementation that used temp tables instead.\n\nHaving the temp table not live for the life of the session would be a big\nproblem for me.\n\n\t-rocco\n\n",
"msg_date": "Fri, 14 Jun 2002 19:09:44 -0400 (EDT)",
"msg_from": "Rocco Altier <roccoa@routescape.com>",
"msg_from_op": false,
"msg_subject": "Re: Non-standard feature request"
},
{
"msg_contents": "Rocco Altier wrote:\n> \n> On Fri, 14 Jun 2002, Mike Mascari wrote:\n> \n> > That is what I want to do, except by extending the grammar. I must admit\n> > to actually being surprised that a TEMP table created inside a\n> > transaction lived after the transaction completed. That's when I looked\n> > at the standard and saw that PostgreSQL's implementation was correct. I\n> > would think for most people session-long temp tables are more the\n> > exception than the rule. But I guess SQL92 doesn't think so. Regardless,\n> > a couple of other people have shown some interest in the idea. I'll post\n> > it to general as well as Tom suggests...\n> >\n> Actually, we needed to use temp tables that live beyond the transaction,\n> because there are no session variables in postgres. So I did an\n> implementation that used temp tables instead.\n> \n> Having the temp table not live for the life of the session would be a big\n> problem for me.\n\nSure, which is why I'm proposing to extend the grammar. Only if you\ncreated the temporary table with\n\nCREATE TEMPORARY TABLE\n...\nON COMMIT DROP;\n\nwould it drop the temporary table at transaction commit. It should be\n100% compatible with existing code. \n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Sat, 15 Jun 2002 06:32:30 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-standard feature request"
},
{
"msg_contents": "On Fri, 14 Jun 2002, Gavin Sherry wrote:\n\n> On Thu, 13 Jun 2002, Mike Mascari wrote:\n> \n> > \n> > CREATE TEMPORARY TABLE \n> > ...\n> > ON COMMIT DROP;\n> > \n> > pseudo-compatible with the SQL-standard of:\n> > \n> > ON COMMIT { DELETE | PRESERVE } ROWS;\n> > \n> > so one day PostgreSQL's grammar would look like:\n> > \n> > ... \n> > ON COMMIT { DROP | { DELETE | PRESERVE } ROWS };\n> \n> I think this is a pretty useful feature. Shouldn't require too much\n> work. A new relkind or a bool in TempTable and a little code in\n> AtEOXact_temp_relations() to heap_drop_with_catalog() the registered temp\n> table.\n> \n> Anyone else keen for this feature? \n\nAttached is a patch implementing this. The patch is against 7.2.1\nsource. The grammar introduced is of the form:\n\n\tCREATE TEMP TABLE ... ON COMMIT DROP;\n\nIs this a desirable feature? Seems pretty useful to me.\n\nGavin",
"msg_date": "Fri, 28 Jun 2002 01:52:56 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Non-standard feature request"
},
{
"msg_contents": "Slight bug in the previous patch. Logically (and according to SQL99's\ntreatment of ON COMMIT), it can be specified only for CREATE TEMP\nTABLE. The patch throws an error if only CREATE TABLE has been specified.\n\nGavin\n\nOn Fri, 28 Jun 2002, Gavin Sherry wrote:\n\n> On Fri, 14 Jun 2002, Gavin Sherry wrote:\n> \n> > On Thu, 13 Jun 2002, Mike Mascari wrote:\n> > \n> > > \n> > > CREATE TEMPORARY TABLE \n> > > ...\n> > > ON COMMIT DROP;\n> > > \n> > > pseudo-compatible with the SQL-standard of:\n> > > \n> > > ON COMMIT { DELETE | PRESERVE } ROWS;\n> > > \n> > > so one day PostgreSQL's grammar would look like:\n> > > \n> > > ... \n> > > ON COMMIT { DROP | { DELETE | PRESERVE } ROWS };\n> > \n> > I think this is a pretty useful feature. Shouldn't require too much\n> > work. A new relkind or a bool in TempTable and a little code in\n> > AtEOXact_temp_relations() to heap_drop_with_catalog() the registered temp\n> > table.\n> > \n> > Anyone else keen for this feature? \n> \n> Attached is a patch implementing this. The patch is against 7.2.1\n> source. The grammar introduced is of the form:\n> \n> \tCREATE TEMP TABLE ... ON COMMIT DROP;\n> \n> Is this a desirable feature? Seems pretty useful to me.\n> \n> Gavin\n> \n>",
"msg_date": "Fri, 28 Jun 2002 02:17:47 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Non-standard feature request"
},
{
"msg_contents": "Gavin Sherry wrote:\n> \n> Slight bug in the previous patch. Logically (and according to SQL99's\n> treatment of ON COMMIT), it can be specified only for CREATE TEMP\n> TABLE. The patch throws an error if only CREATE TABLE has been specified.\n\n...\n\n> >\n> > Attached is a patch implementing this. The patch is against 7.2.1\n> > source. The grammar introduced is of the form:\n> >\n> > CREATE TEMP TABLE ... ON COMMIT DROP;\n> >\n> > Is this a desirable feature? Seems pretty useful to me.\n> >\n\nGreat! I'm give this a try. \n\nMike Mascari\nmascarm@mascari.com\n\n\n",
"msg_date": "Thu, 27 Jun 2002 12:26:36 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": true,
"msg_subject": "Re: Non-standard feature request"
},
{
"msg_contents": "> > Anyone else keen for this feature? \n> \n> Attached is a patch implementing this. The patch is against 7.2.1\n> source. The grammar introduced is of the form:\n> \n> CREATE TEMP TABLE ... ON COMMIT DROP;\n> \n> Is this a desirable feature? Seems pretty useful to me.\n\nIt's useful, there's a patch - what more do we want!!!\n\nChris\n\n\n\n\n",
"msg_date": "Fri, 28 Jun 2002 11:33:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Non-standard feature request"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nGavin Sherry wrote:\n> Slight bug in the previous patch. Logically (and according to SQL99's\n> treatment of ON COMMIT), it can be specified only for CREATE TEMP\n> TABLE. The patch throws an error if only CREATE TABLE has been specified.\n> \n> Gavin\n> \n> On Fri, 28 Jun 2002, Gavin Sherry wrote:\n> \n> > On Fri, 14 Jun 2002, Gavin Sherry wrote:\n> > \n> > > On Thu, 13 Jun 2002, Mike Mascari wrote:\n> > > \n> > > > \n> > > > CREATE TEMPORARY TABLE \n> > > > ...\n> > > > ON COMMIT DROP;\n> > > > \n> > > > pseudo-compatible with the SQL-standard of:\n> > > > \n> > > > ON COMMIT { DELETE | PRESERVE } ROWS;\n> > > > \n> > > > so one day PostgreSQL's grammar would look like:\n> > > > \n> > > > ... \n> > > > ON COMMIT { DROP | { DELETE | PRESERVE } ROWS };\n> > > \n> > > I think this is a pretty useful feature. Shouldn't require too much\n> > > work. A new relkind or a bool in TempTable and a little code in\n> > > AtEOXact_temp_relations() to heap_drop_with_catalog() the registered temp\n> > > table.\n> > > \n> > > Anyone else keen for this feature? \n> > \n> > Attached is a patch implementing this. The patch is against 7.2.1\n> > source. The grammar introduced is of the form:\n> > \n> > \tCREATE TEMP TABLE ... ON COMMIT DROP;\n> > \n> > Is this a desirable feature? Seems pretty useful to me.\n> > \n> > Gavin\n> > \n> > \n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Thu, 4 Jul 2002 01:25:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Non-standard feature request"
},
{
"msg_contents": "\nGavin, I will need a doc patch for this too. Thanks.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> Slight bug in the previous patch. Logically (and according to SQL99's\n> treatment of ON COMMIT), it can be specified only for CREATE TEMP\n> TABLE. The patch throws an error if only CREATE TABLE has been specified.\n> \n> Gavin\n> \n> On Fri, 28 Jun 2002, Gavin Sherry wrote:\n> \n> > On Fri, 14 Jun 2002, Gavin Sherry wrote:\n> > \n> > > On Thu, 13 Jun 2002, Mike Mascari wrote:\n> > > \n> > > > \n> > > > CREATE TEMPORARY TABLE \n> > > > ...\n> > > > ON COMMIT DROP;\n> > > > \n> > > > pseudo-compatible with the SQL-standard of:\n> > > > \n> > > > ON COMMIT { DELETE | PRESERVE } ROWS;\n> > > > \n> > > > so one day PostgreSQL's grammar would look like:\n> > > > \n> > > > ... \n> > > > ON COMMIT { DROP | { DELETE | PRESERVE } ROWS };\n> > > \n> > > I think this is a pretty useful feature. Shouldn't require too much\n> > > work. A new relkind or a bool in TempTable and a little code in\n> > > AtEOXact_temp_relations() to heap_drop_with_catalog() the registered temp\n> > > table.\n> > > \n> > > Anyone else keen for this feature? \n> > \n> > Attached is a patch implementing this. The patch is against 7.2.1\n> > source. The grammar introduced is of the form:\n> > \n> > \tCREATE TEMP TABLE ... ON COMMIT DROP;\n> > \n> > Is this a desirable feature? Seems pretty useful to me.\n> > \n> > Gavin\n> > \n> > \n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Thu, 4 Jul 2002 01:26:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Non-standard feature request"
},
{
"msg_contents": "\nGavin, this is not even close to the CVS code. Would you regenerate\nbased on CVS. I could do it, but you will probably make a more reliable\npatch.\n\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> Slight bug in the previous patch. Logically (and according to SQL99's\n> treatment of ON COMMIT), it can be specified only for CREATE TEMP\n> TABLE. The patch throws an error if only CREATE TABLE has been specified.\n> \n> Gavin\n> \n> On Fri, 28 Jun 2002, Gavin Sherry wrote:\n> \n> > On Fri, 14 Jun 2002, Gavin Sherry wrote:\n> > \n> > > On Thu, 13 Jun 2002, Mike Mascari wrote:\n> > > \n> > > > \n> > > > CREATE TEMPORARY TABLE \n> > > > ...\n> > > > ON COMMIT DROP;\n> > > > \n> > > > pseudo-compatible with the SQL-standard of:\n> > > > \n> > > > ON COMMIT { DELETE | PRESERVE } ROWS;\n> > > > \n> > > > so one day PostgreSQL's grammar would look like:\n> > > > \n> > > > ... \n> > > > ON COMMIT { DROP | { DELETE | PRESERVE } ROWS };\n> > > \n> > > I think this is a pretty useful feature. Shouldn't require too much\n> > > work. A new relkind or a bool in TempTable and a little code in\n> > > AtEOXact_temp_relations() to heap_drop_with_catalog() the registered temp\n> > > table.\n> > > \n> > > Anyone else keen for this feature? \n> > \n> > Attached is a patch implementing this. The patch is against 7.2.1\n> > source. The grammar introduced is of the form:\n> > \n> > \tCREATE TEMP TABLE ... ON COMMIT DROP;\n> > \n> > Is this a desirable feature? Seems pretty useful to me.\n> > \n> > Gavin\n> > \n> > \n\nContent-Description: \n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Sun, 7 Jul 2002 21:50:33 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Non-standard feature request"
},
{
"msg_contents": "Hi Bruce,\n\nI have been away on a long overdue holiday. Will get you this patch once I\ncatch up on email and pending 'day job' stuff.\n\nGavin\n\nOn Thu, 4 Jul 2002, Bruce Momjian wrote:\n\n> \n> Gavin, I will need a doc patch for this too. Thanks.\n> \n> ---------------------------------------------------------------------------\n> \n> Gavin Sherry wrote:\n> > Slight bug in the previous patch. Logically (and according to SQL99's\n> > treatment of ON COMMIT), it can be specified only for CREATE TEMP\n> > TABLE. The patch throws an error if only CREATE TABLE has been specified.\n> > \n> > Gavin\n> > \n> > On Fri, 28 Jun 2002, Gavin Sherry wrote:\n> > \n> > > On Fri, 14 Jun 2002, Gavin Sherry wrote:\n> > > \n> > > > On Thu, 13 Jun 2002, Mike Mascari wrote:\n> > > > \n> > > > > \n> > > > > CREATE TEMPORARY TABLE \n> > > > > ...\n> > > > > ON COMMIT DROP;\n> > > > > \n> > > > > pseudo-compatible with the SQL-standard of:\n> > > > > \n> > > > > ON COMMIT { DELETE | PRESERVE } ROWS;\n> > > > > \n> > > > > so one day PostgreSQL's grammar would look like:\n> > > > > \n> > > > > ... \n> > > > > ON COMMIT { DROP | { DELETE | PRESERVE } ROWS };\n> > > > \n> > > > I think this is a pretty useful feature. Shouldn't require too much\n> > > > work. A new relkind or a bool in TempTable and a little code in\n> > > > AtEOXact_temp_relations() to heap_drop_with_catalog() the registered temp\n> > > > table.\n> > > > \n> > > > Anyone else keen for this feature? \n> > > \n> > > Attached is a patch implementing this. The patch is against 7.2.1\n> > > source. The grammar introduced is of the form:\n> > > \n> > > \tCREATE TEMP TABLE ... ON COMMIT DROP;\n> > > \n> > > Is this a desirable feature? Seems pretty useful to me.\n> > > \n> > > Gavin\n> > > \n> > > \n> \n> Content-Description: \n> \n> [ Attachment, skipping... 
]\n> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n> \n\n",
"msg_date": "Mon, 15 Jul 2002 09:57:19 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Non-standard feature request"
}
] |
[
{
"msg_contents": "... while talking to sss.pgh.pa.us.:\n\n>>>>>> MAIL From:<david+cert@blue-labs.org>\n>>> \n>>>\n<<< 550 5.7.1 Probable spam from 68.9.71.221 refused - see http://www.five-ten-sg.com/blackhole.php?68.9.71.221\n554 5.0.0 Service unavailable\n\nTom, if you block everyone on cable, dialup, dsl, and adsl, then you're probably blocking a lot of legitimate mail.\n\nI don't feel like paying some Big Company just so I can relay mail through them when I can do my company's own mail on my own networks. Big Company will get blacklisted soon enough for [inadvertently] allowing a spammer to send mail through them.\n\nPlease don't punish the victim until they're proven guilty.\n\nDavid\np.s. There isn't any contact address on the above URL for the requested updates.\n\n\n\n",
"msg_date": "Thu, 13 Jun 2002 20:10:47 -0400",
"msg_from": "David Ford <david+cert@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "ATTN: Tom Lane"
},
{
"msg_contents": "David Ford <david+cert@blue-labs.org> writes:\n> Tom, if you block everyone on cable, dialup, dsl, and adsl, then you're probably blocking a lot of legitimate mail.\n\nDavid, let me explain this in words of one syllable: I am currently\nrejecting upwards of 2000 spam messages per day. If I did not have\nextremely stringent filters in place, email would be completely\nuseless to me.\n\nAdvice suggesting that I weaken my filters will be ignored with as much\ngrace as I can muster, which on most days is not a lot.\n\nThis is what comes of having several well-publicized email addresses :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 13 Jun 2002 22:44:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ATTN: Tom Lane "
},
{
"msg_contents": "\n\n\n\n\n\nTom Lane wrote:\n\nDavid Ford <david+cert@blue-labs.org> writes:\n \n\nTom, if you block everyone on cable, dialup, dsl, and adsl, then you're probably blocking a lot of legitimate mail.\n \n\n\nDavid, let me explain this in words of one syllable: I am currently\nrejecting upwards of 2000 spam messages per day. If I did not have\nextremely stringent filters in place, email would be completely\nuseless to me.\n\nAdvice suggesting that I weaken my filters will be ignored with as much\ngrace as I can muster, which on most days is not a lot.\n\nThis is what comes of having several well-publicized email addresses :-(\n\nI sympathize with your pain. However, I've found that the five-ten-sg.com\nlist is ofter overly aggressive. There are many other RBL's that are not\nas aggressive and used in combination provide very good results. Also,\nyou could even try SpamCop's RBL, if your so inclined.\n\nI could not post from my work address to any of the lists strictly because\nof the five-ten-sg.com RBL. They blocked everything from BellSouth's IP\nallocation blocks. They only way around it is to beg them to allow you\na static IP and the ask to have that IP unbanned from the RBL. It's a lot\nof work.\n\nRBL's are good, but I think the one that blocked David Ford and myself is\nperhaps a little too strong.\n\nJust my two cents.\n\nThomas\n\n\n\n",
"msg_date": "Thu, 13 Jun 2002 22:27:41 -0500",
"msg_from": "Thomas Swan <tswan@idigx.com>",
"msg_from_op": false,
"msg_subject": "Re: ATTN: Tom Lane"
},
{
"msg_contents": "On Fri, 2002-06-14 at 02:10, David Ford wrote:\n> ... while talking to sss.pgh.pa.us.:\n> \n> >>>>>> MAIL From:<david+cert@blue-labs.org>\n> >>> \n> >>>\n> <<< 550 5.7.1 Probable spam from 68.9.71.221 refused - see http://www.five-ten-sg.com/blackhole.php?68.9.71.221\n> 554 5.0.0 Service unavailable\n> \n> Tom, if you block everyone on cable, dialup, dsl, and adsl, then you're probably blocking a lot of legitimate mail.\n> \n> I don't feel like paying some Big Company just so I can relay mail through them when I can do my company's own mail on my own networks. Big Company will get blacklisted soon enough for [inadvertently] allowing a spammer to send mail through them.\n> \n> Please don't punish the victim until they're proven guilty.\n> \n> David\n> p.s. There isn't any contact address on the above URL for the requested updates.\n\nYou can manually decode an e-mail address from the first line on that\nsite :\n\nYou can always send email to blackhole3 at five-ten-sg.com even if your mail server is listed here. \n\nI got my home ADSL un-blacklisted after a few emails to them when they\nagreed that my IP is actually static.\n\n---------------\nHannu\n\n",
"msg_date": "14 Jun 2002 11:24:39 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: ATTN: Tom Lane"
}
] |
[
{
"msg_contents": "This patch, which is built upon the \"HeapTupleHeader accessor macros\"\npatch from 2002-06-10, is supposed to reduce the heap tuple header size\nby four bytes on most architectures. Of course it changes the on-disk\ntuple format and therefore requires initdb. As I have (once more)\nopened my mouth too wide, I'll have to provide a heap file conversion\nutility, if this patch gets accepted... More on this later.\n\n======================\n All 81 tests passed.\n======================\n\nIt's late now, I'll do more tests tomorrow.\n\nGood night\n Manfred\n\ndiff -ru ../orig/src/backend/access/heap/heapam.c src/backend/access/heap/heapam.c\n--- ../orig/src/backend/access/heap/heapam.c\t2002-06-13 19:34:48.000000000 +0200\n+++ src/backend/access/heap/heapam.c\t2002-06-13 22:31:42.000000000 +0200\n@@ -2204,7 +2204,7 @@\n \t\thtup->t_infomask = HEAP_XMAX_INVALID | xlhdr.mask;\n \t\tHeapTupleHeaderSetXmin(htup, record->xl_xid);\n \t\tHeapTupleHeaderSetCmin(htup, FirstCommandId);\n-\t\tHeapTupleHeaderSetXmax(htup, InvalidTransactionId);\n+\t\tHeapTupleHeaderSetXmaxInvalid(htup);\n \t\tHeapTupleHeaderSetCmax(htup, FirstCommandId);\n \n \t\toffnum = PageAddItem(page, (Item) htup, newlen, offnum,\ndiff -ru ../orig/src/include/access/htup.h src/include/access/htup.h\n--- ../orig/src/include/access/htup.h\t2002-06-13 19:34:49.000000000 +0200\n+++ src/include/access/htup.h\t2002-06-14 01:12:47.000000000 +0200\n@@ -57,15 +57,24 @@\n * Also note that we omit the nulls bitmap if t_infomask shows that there\n * are no nulls in the tuple.\n */\n+/*\n+** We store five \"virtual\" fields Xmin, Cmin, Xmax, Cmax, and Xvac\n+** in three physical fields t_xmin, t_cid, t_xmax:\n+** CommandId Cmin;\t\tinsert CID stamp\n+** CommandId Cmax;\t\tdelete CommandId stamp\n+** TransactionId Xmin;\t\tinsert XID stamp\n+** TransactionId Xmax;\t\tdelete XID stamp\n+** TransactionId Xvac;\t\tused by VACCUUM\n+**\n+** This assumes, that a CommandId can be stored in a 
TransactionId.\n+*/\n typedef struct HeapTupleHeaderData\n {\n \tOid\t\t\tt_oid;\t\t\t/* OID of this tuple -- 4 bytes */\n \n-\tCommandId\tt_cmin;\t\t\t/* insert CID stamp -- 4 bytes each */\n-\tCommandId\tt_cmax;\t\t\t/* delete CommandId stamp */\n-\n-\tTransactionId t_xmin;\t\t/* insert XID stamp -- 4 bytes each */\n-\tTransactionId t_xmax;\t\t/* delete XID stamp */\n+\tTransactionId t_xmin;\t\t/* Xmin -- 4 bytes each */\n+\tTransactionId t_cid;\t\t/* Cmin, Cmax, Xvac */\n+\tTransactionId t_xmax;\t\t/* Xmax, Cmax */\n \n \tItemPointerData t_ctid;\t\t/* current TID of this or newer tuple */\n \n@@ -75,7 +84,7 @@\n \n \tuint8\t\tt_hoff;\t\t\t/* sizeof header incl. bitmap, padding */\n \n-\t/* ^ - 31 bytes - ^ */\n+\t/* ^ - 27 bytes - ^ */\n \n \tbits8\t\tt_bits[1];\t\t/* bitmap of NULLs -- VARIABLE LENGTH */\n \n@@ -96,6 +105,8 @@\n \t\t\t\t\t\t\t\t\t\t * attribute(s) */\n #define HEAP_HASEXTENDED\t\t0x000C\t/* the two above combined */\n \n+#define HEAP_XMIN_IS_XMAX\t\t0x0040\t/* created and deleted in the */\n+\t\t\t\t\t\t\t\t\t\t/* same transaction\n*/\n #define HEAP_XMAX_UNLOGGED\t\t0x0080\t/* to lock tuple for update */\n \t\t\t\t\t\t\t\t\t\t/* without logging\n*/\n #define HEAP_XMIN_COMMITTED\t\t0x0100\t/* t_xmin committed */\n@@ -108,6 +119,7 @@\n \t\t\t\t\t\t\t\t\t\t * vacuum */\n #define HEAP_MOVED_IN\t\t\t0x8000\t/* moved from another place by\n \t\t\t\t\t\t\t\t\t\t * vacuum */\n+#define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)\n \n #define HEAP_XACT_MASK\t\t\t0xFFF0\t/* visibility-related bits */\n \n@@ -116,53 +128,100 @@\n /* HeapTupleHeader accessor macros */\n \n #define HeapTupleHeaderGetXmin(tup) \\\n-\t((tup)->t_xmin)\n+( \\\n+\t(tup)->t_xmin \\\n+)\n \n #define HeapTupleHeaderGetXmax(tup) \\\n-\t((tup)->t_xmax)\n+( \\\n+\t((tup)->t_infomask & HEAP_XMIN_IS_XMAX) ? 
\\\n+\t\t(tup)->t_xmin \\\n+\t: \\\n+\t\t(tup)->t_xmax \\\n+)\n \n-/* no AssertMacro, because this is read as a system-defined attribute also */\n+/* no AssertMacro, because this is read as a system-defined attribute */\n #define HeapTupleHeaderGetCmin(tup) \\\n ( \\\n-\t(tup)->t_cmin \\\n+\t((tup)->t_infomask & HEAP_MOVED) ? \\\n+\t\tFirstCommandId \\\n+\t: \\\n+\t( \\\n+\t\t((tup)->t_infomask & (HEAP_XMIN_IS_XMAX | HEAP_XMAX_INVALID)) ? \\\n+\t\t\t(CommandId) (tup)->t_cid \\\n+\t\t: \\\n+\t\t\tFirstCommandId \\\n+\t) \\\n )\n \n #define HeapTupleHeaderGetCmax(tup) \\\n-\t((tup)->t_cmax)\n+( \\\n+\t((tup)->t_infomask & HEAP_MOVED) ? \\\n+\t\tFirstCommandId \\\n+\t: \\\n+\t( \\\n+\t\t((tup)->t_infomask & (HEAP_XMIN_IS_XMAX | HEAP_XMAX_INVALID)) ? \\\n+\t\t\t(CommandId) (tup)->t_xmax \\\n+\t\t: \\\n+\t\t\t(CommandId) (tup)->t_cid \\\n+\t) \\\n+)\n \n #define HeapTupleHeaderGetXvac(tup) \\\n ( \\\n-\tAssertMacro((tup)->t_infomask & (HEAP_MOVED_IN | HEAP_MOVED_OFF)), \\\n-\t(TransactionId) (tup)->t_cmin \\\n+\tAssertMacro((tup)->t_infomask & HEAP_MOVED), \\\n+\t(tup)->t_cid \\\n )\n \n \n #define HeapTupleHeaderSetXmin(tup, xid) \\\n-\t(TransactionIdStore((xid), &(tup)->t_xmin))\n+( \\\n+\tTransactionIdStore((xid), &(tup)->t_xmin) \\\n+)\n \n #define HeapTupleHeaderSetXminInvalid(tup) \\\n-\t(StoreInvalidTransactionId(&(tup)->t_xmin))\n+do { \\\n+\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; \\\n+\tStoreInvalidTransactionId(&(tup)->t_xmin); \\\n+} while (0)\n \n #define HeapTupleHeaderSetXmax(tup, xid) \\\n-\t(TransactionIdStore((xid), &(tup)->t_xmax))\n+do { \\\n+\tif (TransactionIdEquals((tup)->t_xmin, (xid))) \\\n+\t\t(tup)->t_infomask |= HEAP_XMIN_IS_XMAX; \\\n+\telse \\\n+\t{ \\\n+\t\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; \\\n+\t\tTransactionIdStore((xid), &(tup)->t_xmax); \\\n+\t} \\\n+} while (0)\n \n #define HeapTupleHeaderSetXmaxInvalid(tup) \\\n-\t(StoreInvalidTransactionId(&(tup)->t_xmax))\n+do { \\\n+\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; 
\\\n+\tStoreInvalidTransactionId(&(tup)->t_xmax); \\\n+} while (0)\n \n #define HeapTupleHeaderSetCmin(tup, cid) \\\n-( \\\n-\tAssertMacro(!((tup)->t_infomask & (HEAP_MOVED_IN | HEAP_MOVED_OFF))), \\\n-\t(tup)->t_cmin = (cid) \\\n-)\n+do { \\\n+\tAssert(!((tup)->t_infomask & HEAP_MOVED)); \\\n+\tTransactionIdStore((TransactionId) (cid), &(tup)->t_cid); \\\n+} while (0)\n \n #define HeapTupleHeaderSetCmax(tup, cid) \\\n-\t((tup)->t_cmax = (cid))\n+do { \\\n+\tAssert(!((tup)->t_infomask & HEAP_MOVED)); \\\n+\tif ((tup)->t_infomask & HEAP_XMIN_IS_XMAX) \\\n+\t\tTransactionIdStore((TransactionId) (cid), &(tup)->t_xmax); \\\n+\telse \\\n+\t\tTransactionIdStore((TransactionId) (cid), &(tup)->t_cid); \\\n+} while (0)\n \n #define HeapTupleHeaderSetXvac(tup, xid) \\\n-( \\\n-\tAssertMacro((tup)->t_infomask & (HEAP_MOVED_IN | HEAP_MOVED_OFF)), \\\n-\tTransactionIdStore((xid), (TransactionId *) &((tup)->t_cmin)) \\\n-)\n+do { \\\n+\tAssert((tup)->t_infomask & HEAP_MOVED); \\\n+\tTransactionIdStore((xid), &(tup)->t_cid); \\\n+} while (0)\n \n \n /*\n\n",
"msg_date": "Fri, 14 Jun 2002 03:10:07 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": true,
"msg_subject": "Reduce heap tuple header size"
},
{
"msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> This patch, which is built upon the \"HeapTupleHeader accessor macros\"\n> patch from 2002-06-10, is supposed to reduce the heap tuple header size\n> by four bytes on most architectures. Of course it changes the on-disk\n> tuple format and therefore requires initdb.\n\nAs I commented before, I am not in favor of this. I don't think that a\nfour-byte savings justifies a forced initdb with no chance of\npg_upgrade, plus loss of redundancy (= reduced chance of detecting or\nrecovering from corruption), plus significantly slower access to\nseveral critical header fields. The tqual.c routines are already\nhotspots in many scenarios. I believe this will make them noticeably\nslower.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jun 2002 10:16:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "On Fri, 14 Jun 2002 10:16:22 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n>As I commented before, I am not in favor of this. I don't think that a\n>four-byte savings justifies a forced initdb with no chance of\n>pg_upgrade,\n\nAs I don't know other users' preferences, I cannot comment on this.\nI just think that four bytes per tuple can amount to a sum that justifies\nthis effort. Disk space is not my argument here, but reduced disk IO is.\n\n>plus significantly slower access to\n>several critical header fields. The tqual.c routines are already\n>hotspots in many scenarios. I believe this will make them noticeably\n>slower.\n\nSignificantly slower? I tried to analyze HeapTupleSatisfiesUpdate(),\nas I think it is the most complicated of these Satisfies functions\n(look for the !! comments):\n\n/*\n!! HeapTupleHeaderGetXmin no slowdown\n!! HeapTupleHeaderGetXmax one infomask compare\n!! HeapTupleHeaderGetCmin two infomask compares\n!! HeapTupleHeaderGetCMax two infomask compares\n!! HeapTupleHeaderGetXvac no slowdown\n*/\nint\nHeapTupleSatisfiesUpdate(HeapTuple htuple, CommandId curcid)\n{\n HeapTupleHeader tuple = htuple->t_data;\n\n if (!(tuple->t_infomask & HEAP_XMIN_COMMITTED))\n {\n if (tuple->t_infomask & HEAP_XMIN_INVALID)\n /*\n !! no slowdown\n */\n return HeapTupleInvisible;\n\n if (tuple->t_infomask & HEAP_MOVED_OFF)\n {\n if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXvac(tuple)))\n /*\n !! no slowdown\n */\n return HeapTupleInvisible;\n if (!TransactionIdIsInProgress(HeapTupleHeaderGetXvac(tuple)))\n {\n if (TransactionIdDidCommit(HeapTupleHeaderGetXvac(tuple)))\n {\n tuple->t_infomask |= HEAP_XMIN_INVALID;\n /*\n !! no slowdown\n */\n return HeapTupleInvisible;\n }\n tuple->t_infomask |= HEAP_XMIN_COMMITTED;\n }\n }\n else if (tuple->t_infomask & HEAP_MOVED_IN)\n {\n if (!TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXvac(tuple)))\n {\n if (TransactionIdIsInProgress(HeapTupleHeaderGetXvac(tuple)))\n /*\n !! 
no slowdown\n */\n return HeapTupleInvisible;\n if (TransactionIdDidCommit(HeapTupleHeaderGetXvac(tuple)))\n tuple->t_infomask |= HEAP_XMIN_COMMITTED;\n else\n {\n tuple->t_infomask |= HEAP_XMIN_INVALID;\n /*\n !! no slowdown\n */\n return HeapTupleInvisible;\n }\n }\n }\n /*\n !! no slowdown up to here\n */\n else if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmin(tuple)))\n {\n /*\n !! GetCmin does 2 infomask compares\n */\n if (HeapTupleHeaderGetCmin(tuple) >= curcid)\n /*\n !! returning with 2 additional infomask compares\n */\n return HeapTupleInvisible;\t\t/* inserted after scan\n * started */\n\n if (tuple->t_infomask & HEAP_XMAX_INVALID)\t\t/* xid invalid */\n /*\n !! returning with 2 additional infomask compares\n */\n return HeapTupleMayBeUpdated;\n\n /*\n !! assertions turned off in production: no slowdown\n */\n Assert(TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmax(tuple)));\n\n if (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n /*\n !! returning with 2 additional infomask compares\n */\n return HeapTupleMayBeUpdated;\n\n /*\n !! GetCmax does 2 infomask compares\n */\n if (HeapTupleHeaderGetCmax(tuple) >= curcid)\n /*\n !! returning with 4 additional infomask compares\n */\n return HeapTupleSelfUpdated;\t/* updated after scan\n * started */\n else\n /*\n !! returning with 4 additional infomask compares\n */\n return HeapTupleInvisible;\t\t/* updated before scan\n * started */\n }\n /*\n !! no slowdown up to here\n */\n else if (!TransactionIdDidCommit(HeapTupleHeaderGetXmin(tuple)))\n {\n if (TransactionIdDidAbort(HeapTupleHeaderGetXmin(tuple)))\n tuple->t_infomask |= HEAP_XMIN_INVALID;\t/* aborted */\n /*\n !! no slowdown\n */\n return HeapTupleInvisible;\n }\n else\n tuple->t_infomask |= HEAP_XMIN_COMMITTED;\n }\n\n /*\n !! no slowdown\n */\n /* by here, the inserting transaction has committed */\n\n if (tuple->t_infomask & HEAP_XMAX_INVALID)\t\t/* xid invalid or aborted */\n /*\n !! 
no slowdown\n */\n return HeapTupleMayBeUpdated;\n\n if (tuple->t_infomask & HEAP_XMAX_COMMITTED)\n {\n if (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n /*\n !! no slowdown\n ?? BTW, shouldn't we set HEAP_MAX_INVALID here?\n */\n return HeapTupleMayBeUpdated;\n /*\n !! no slowdown\n */\n return HeapTupleUpdated;\t/* updated by other */\n }\n\n /*\n !! no slowdown up to here,\n !! one infomask compare to follow in GetXmax(),\n !! additional waste of precious CPU cycles could be avoided by:\n !! TransactionId xmax = HeapTupleHeaderGetXmax(tuple);\n */\n if (TransactionIdIsCurrentTransactionId(HeapTupleHeaderGetXmax(tuple)))\n {\n if (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n /*\n !! returning with 1 additional infomask compare\n */\n return HeapTupleMayBeUpdated;\n /*\n !! GetCmax does 2 infomask compares\n */\n if (HeapTupleHeaderGetCmax(tuple) >= curcid)\n /*\n !! returning with 3 additional infomask compares\n */\n return HeapTupleSelfUpdated;\t\t/* updated after scan\n * started */\n else\n /*\n !! returning with 3 additional infomask compares\n */\n return HeapTupleInvisible;\t/* updated before scan started */\n }\n\n /*\n !! 1 infomask compare up to here,\n !! another infomask compare ...\n !! or use xmax\n */\n if (!TransactionIdDidCommit(HeapTupleHeaderGetXmax(tuple)))\n {\n /*\n !! and a third infomask compare\n */\n if (TransactionIdDidAbort(HeapTupleHeaderGetXmax(tuple)))\n {\n tuple->t_infomask |= HEAP_XMAX_INVALID;\t\t/* aborted */\n /*\n !! returning with 1 or 3 additional infomask compares\n */\n return HeapTupleMayBeUpdated;\n }\n /* running xact */\n /*\n !! returning with 1 or 3 additional infomask compares\n */\n return HeapTupleBeingUpdated;\t/* in updation by other */\n }\n\n /*\n !! 2 (or 1 with a local xmax) infomask compares up to here\n */\n /* xmax transaction committed */\n tuple->t_infomask |= HEAP_XMAX_COMMITTED;\n\n if (tuple->t_infomask & HEAP_MARKED_FOR_UPDATE)\n /*\n ?? 
BTW, shouldn't we set HEAP_XMAX_INVALID here?\n */\n return HeapTupleMayBeUpdated;\n\n return HeapTupleUpdated;\t/* updated by other */\n}\n\nSo in the worst case we return after having done four more\ncompares than without the patch. Note that in the most common\ncases there is no additional cost at all. If you still think\nwe have a performance problem here, we could replace GetCmin\nand GetCmax by cheaper macros:\n\n#define HeapTupleHeaderGetCminKnowingThatNotMoved(tup) \\\n( \\\n AssertMacro(!((tup)->t_infomask & HEAP_MOVED)), \\\n ( \\\n ((tup)->t_infomask & (HEAP_XMIN_IS_XMAX | HEAP_XMAX_INVALID)) ? \\\n (CommandId) (tup)->t_cid \\\n : \\\n FirstCommandId \\\n ) \\\n)\n\nthus reducing the additional cost to one t_infomask compare,\nbecause the Satisfies functions only access Cmin and Cmax\nwhen HEAP_MOVED is known to be not set.\n\nOTOH experimenting with a moderately sized \"out of production\"\ndatabase I got the following results:\n | pages | pages |\nrelkind | count | tuples | before| after | savings\n--------+-------+--------+-------+-------+--------\ni | 31 | 808146 | 8330 | 8330 | 0.00%\nr | 32 | 612968 | 13572 | 13184 | 2.86%\nall | 63 | | 21902 | 21514 | 1.77%\n\n2.86% fewer heap pages mean 2.86% less disk IO caused by heap pages.\nConsidering that index pages tend to benefit more from caching,\nwe conclude that heap pages contribute more to the overall\nIO load, so the total savings in the number of disk IOs should\nbe better than the 1.77% shown in the table above. I think\nthis outweighs a few CPU cycles now and then.\n\nServus\n Manfred\n",
"msg_date": "Sat, 15 Jun 2002 00:18:24 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": true,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nManfred Koizar wrote:\n> This patch, which is built upon the \"HeapTupleHeader accessor macros\"\n> patch from 2002-06-10, is supposed to reduce the heap tuple header size\n> by four bytes on most architectures. Of course it changes the on-disk\n> tuple format and therefore requires initdb. As I have (once more)\n> opened my mouth too wide, I'll have to provide a heap file conversion\n> utility, if this patch gets accepted... More on this later.\n> \n> ======================\n> All 81 tests passed.\n> ======================\n> \n> It's late now, I'll do more tests tomorrow.\n> \n> Good night\n> Manfred\n> \n> diff -ru ../orig/src/backend/access/heap/heapam.c src/backend/access/heap/heapam.c\n> --- ../orig/src/backend/access/heap/heapam.c\t2002-06-13 19:34:48.000000000 +0200\n> +++ src/backend/access/heap/heapam.c\t2002-06-13 22:31:42.000000000 +0200\n> @@ -2204,7 +2204,7 @@\n> \t\thtup->t_infomask = HEAP_XMAX_INVALID | xlhdr.mask;\n> \t\tHeapTupleHeaderSetXmin(htup, record->xl_xid);\n> \t\tHeapTupleHeaderSetCmin(htup, FirstCommandId);\n> -\t\tHeapTupleHeaderSetXmax(htup, InvalidTransactionId);\n> +\t\tHeapTupleHeaderSetXmaxInvalid(htup);\n> \t\tHeapTupleHeaderSetCmax(htup, FirstCommandId);\n> \n> \t\toffnum = PageAddItem(page, (Item) htup, newlen, offnum,\n> diff -ru ../orig/src/include/access/htup.h src/include/access/htup.h\n> --- ../orig/src/include/access/htup.h\t2002-06-13 19:34:49.000000000 +0200\n> +++ src/include/access/htup.h\t2002-06-14 01:12:47.000000000 +0200\n> @@ -57,15 +57,24 @@\n> * Also note that we omit the nulls bitmap if t_infomask shows that there\n> * are no nulls in the tuple.\n> */\n> +/*\n> +** We store five \"virtual\" fields Xmin, Cmin, Xmax, Cmax, and Xvac\n> +** in 
three physical fields t_xmin, t_cid, t_xmax:\n> +** CommandId Cmin;\t\tinsert CID stamp\n> +** CommandId Cmax;\t\tdelete CommandId stamp\n> +** TransactionId Xmin;\t\tinsert XID stamp\n> +** TransactionId Xmax;\t\tdelete XID stamp\n> +** TransactionId Xvac;\t\tused by VACCUUM\n> +**\n> +** This assumes, that a CommandId can be stored in a TransactionId.\n> +*/\n> typedef struct HeapTupleHeaderData\n> {\n> \tOid\t\t\tt_oid;\t\t\t/* OID of this tuple -- 4 bytes */\n> \n> -\tCommandId\tt_cmin;\t\t\t/* insert CID stamp -- 4 bytes each */\n> -\tCommandId\tt_cmax;\t\t\t/* delete CommandId stamp */\n> -\n> -\tTransactionId t_xmin;\t\t/* insert XID stamp -- 4 bytes each */\n> -\tTransactionId t_xmax;\t\t/* delete XID stamp */\n> +\tTransactionId t_xmin;\t\t/* Xmin -- 4 bytes each */\n> +\tTransactionId t_cid;\t\t/* Cmin, Cmax, Xvac */\n> +\tTransactionId t_xmax;\t\t/* Xmax, Cmax */\n> \n> \tItemPointerData t_ctid;\t\t/* current TID of this or newer tuple */\n> \n> @@ -75,7 +84,7 @@\n> \n> \tuint8\t\tt_hoff;\t\t\t/* sizeof header incl. 
bitmap, padding */\n> \n> -\t/* ^ - 31 bytes - ^ */\n> +\t/* ^ - 27 bytes - ^ */\n> \n> \tbits8\t\tt_bits[1];\t\t/* bitmap of NULLs -- VARIABLE LENGTH */\n> \n> @@ -96,6 +105,8 @@\n> \t\t\t\t\t\t\t\t\t\t * attribute(s) */\n> #define HEAP_HASEXTENDED\t\t0x000C\t/* the two above combined */\n> \n> +#define HEAP_XMIN_IS_XMAX\t\t0x0040\t/* created and deleted in the */\n> +\t\t\t\t\t\t\t\t\t\t/* same transaction\n> */\n> #define HEAP_XMAX_UNLOGGED\t\t0x0080\t/* to lock tuple for update */\n> \t\t\t\t\t\t\t\t\t\t/* without logging\n> */\n> #define HEAP_XMIN_COMMITTED\t\t0x0100\t/* t_xmin committed */\n> @@ -108,6 +119,7 @@\n> \t\t\t\t\t\t\t\t\t\t * vacuum */\n> #define HEAP_MOVED_IN\t\t\t0x8000\t/* moved from another place by\n> \t\t\t\t\t\t\t\t\t\t * vacuum */\n> +#define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)\n> \n> #define HEAP_XACT_MASK\t\t\t0xFFF0\t/* visibility-related bits */\n> \n> @@ -116,53 +128,100 @@\n> /* HeapTupleHeader accessor macros */\n> \n> #define HeapTupleHeaderGetXmin(tup) \\\n> -\t((tup)->t_xmin)\n> +( \\\n> +\t(tup)->t_xmin \\\n> +)\n> \n> #define HeapTupleHeaderGetXmax(tup) \\\n> -\t((tup)->t_xmax)\n> +( \\\n> +\t((tup)->t_infomask & HEAP_XMIN_IS_XMAX) ? \\\n> +\t\t(tup)->t_xmin \\\n> +\t: \\\n> +\t\t(tup)->t_xmax \\\n> +)\n> \n> -/* no AssertMacro, because this is read as a system-defined attribute also */\n> +/* no AssertMacro, because this is read as a system-defined attribute */\n> #define HeapTupleHeaderGetCmin(tup) \\\n> ( \\\n> -\t(tup)->t_cmin \\\n> +\t((tup)->t_infomask & HEAP_MOVED) ? \\\n> +\t\tFirstCommandId \\\n> +\t: \\\n> +\t( \\\n> +\t\t((tup)->t_infomask & (HEAP_XMIN_IS_XMAX | HEAP_XMAX_INVALID)) ? \\\n> +\t\t\t(CommandId) (tup)->t_cid \\\n> +\t\t: \\\n> +\t\t\tFirstCommandId \\\n> +\t) \\\n> )\n> \n> #define HeapTupleHeaderGetCmax(tup) \\\n> -\t((tup)->t_cmax)\n> +( \\\n> +\t((tup)->t_infomask & HEAP_MOVED) ? 
\\\n> +\t\tFirstCommandId \\\n> +\t: \\\n> +\t( \\\n> +\t\t((tup)->t_infomask & (HEAP_XMIN_IS_XMAX | HEAP_XMAX_INVALID)) ? \\\n> +\t\t\t(CommandId) (tup)->t_xmax \\\n> +\t\t: \\\n> +\t\t\t(CommandId) (tup)->t_cid \\\n> +\t) \\\n> +)\n> \n> #define HeapTupleHeaderGetXvac(tup) \\\n> ( \\\n> -\tAssertMacro((tup)->t_infomask & (HEAP_MOVED_IN | HEAP_MOVED_OFF)), \\\n> -\t(TransactionId) (tup)->t_cmin \\\n> +\tAssertMacro((tup)->t_infomask & HEAP_MOVED), \\\n> +\t(tup)->t_cid \\\n> )\n> \n> \n> #define HeapTupleHeaderSetXmin(tup, xid) \\\n> -\t(TransactionIdStore((xid), &(tup)->t_xmin))\n> +( \\\n> +\tTransactionIdStore((xid), &(tup)->t_xmin) \\\n> +)\n> \n> #define HeapTupleHeaderSetXminInvalid(tup) \\\n> -\t(StoreInvalidTransactionId(&(tup)->t_xmin))\n> +do { \\\n> +\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; \\\n> +\tStoreInvalidTransactionId(&(tup)->t_xmin); \\\n> +} while (0)\n> \n> #define HeapTupleHeaderSetXmax(tup, xid) \\\n> -\t(TransactionIdStore((xid), &(tup)->t_xmax))\n> +do { \\\n> +\tif (TransactionIdEquals((tup)->t_xmin, (xid))) \\\n> +\t\t(tup)->t_infomask |= HEAP_XMIN_IS_XMAX; \\\n> +\telse \\\n> +\t{ \\\n> +\t\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; \\\n> +\t\tTransactionIdStore((xid), &(tup)->t_xmax); \\\n> +\t} \\\n> +} while (0)\n> \n> #define HeapTupleHeaderSetXmaxInvalid(tup) \\\n> -\t(StoreInvalidTransactionId(&(tup)->t_xmax))\n> +do { \\\n> +\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; \\\n> +\tStoreInvalidTransactionId(&(tup)->t_xmax); \\\n> +} while (0)\n> \n> #define HeapTupleHeaderSetCmin(tup, cid) \\\n> -( \\\n> -\tAssertMacro(!((tup)->t_infomask & (HEAP_MOVED_IN | HEAP_MOVED_OFF))), \\\n> -\t(tup)->t_cmin = (cid) \\\n> -)\n> +do { \\\n> +\tAssert(!((tup)->t_infomask & HEAP_MOVED)); \\\n> +\tTransactionIdStore((TransactionId) (cid), &(tup)->t_cid); \\\n> +} while (0)\n> \n> #define HeapTupleHeaderSetCmax(tup, cid) \\\n> -\t((tup)->t_cmax = (cid))\n> +do { \\\n> +\tAssert(!((tup)->t_infomask & HEAP_MOVED)); \\\n> +\tif ((tup)->t_infomask & 
HEAP_XMIN_IS_XMAX) \\\n> +\t\tTransactionIdStore((TransactionId) (cid), &(tup)->t_xmax); \\\n> +\telse \\\n> +\t\tTransactionIdStore((TransactionId) (cid), &(tup)->t_cid); \\\n> +} while (0)\n> \n> #define HeapTupleHeaderSetXvac(tup, xid) \\\n> -( \\\n> -\tAssertMacro((tup)->t_infomask & (HEAP_MOVED_IN | HEAP_MOVED_OFF)), \\\n> -\tTransactionIdStore((xid), (TransactionId *) &((tup)->t_cmin)) \\\n> -)\n> +do { \\\n> +\tAssert((tup)->t_infomask & HEAP_MOVED); \\\n> +\tTransactionIdStore((xid), &(tup)->t_cid); \\\n> +} while (0)\n> \n> \n> /*\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jun 2002 13:33:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Your patch has been added to the PostgreSQL unapplied patches list at:\n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> I will try to apply it within the next 48 hours.\n\nAre you planning to ignore my objections to it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jun 2002 17:07:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Your patch has been added to the PostgreSQL unapplied patches list at:\n> > \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> > I will try to apply it within the next 48 hours.\n> \n> Are you planning to ignore my objections to it?\n\nThe author replied addressing your objections and I saw no reply from you\non that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jun 2002 17:08:47 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Are you planning to ignore my objections to it?\n\n> The author replied addressing your objections and I saw no reply from on\n> on that.\n\nHe replied stating his opinion; my opinion didn't change. I was waiting\nfor some other people to weigh in with their opinions. As far as I've\nseen, no one else has commented at all.\n\nIf I get voted down on the point after suitable discussion, so be it.\nBut I will strongly object to you applying the patch just because it's\nnext in your inbox.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jun 2002 17:20:52 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Are you planning to ignore my objections to it?\n> \n> > The author replied addressing your objections and I saw no reply from on\n> > on that.\n> \n> He replied stating his opinion; my opinion didn't change. I was waiting\n> for some other people to weigh in with their opinions. As far as I've\n> seen, no one else has commented at all.\n> \n> If I get voted down on the point after suitable discussion, so be it.\n> But I will strongly object to you applying the patch just because it's\n> next in your inbox.\n\nWell, we have three votes, yours, the author, and mine. That's a vote.\n\nIf you want to make a pitch for more votes, go ahead. I can wait.\n\nThe thread went on for quite a while and no one else gave an opinion, as\nI remember, though there may have been a few positive ones for the patch\nthat I forgot about.\n\nI don't care if you object, strongly object, or jump up and down; you\nhave one vote. Please stop the posturing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jun 2002 17:36:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Are you planning to ignore my objections to it?\n> \n> > The author replied addressing your objections and I saw no reply from on\n> > on that.\n> \n> He replied stating his opinion; my opinion didn't change. I was waiting\n> for some other people to weigh in with their opinions. As far as I've\n> seen, no one else has commented at all.\n> \n> If I get voted down on the point after suitable discussion, so be it.\n> But I will strongly object to you applying the patch just because it's\n> next in your inbox.\n\nTom, I have reviewed your objections:\n\n> As I commented before, I am not in favor of this. I don't think that a\n> four-byte savings justifies a forced initdb with no chance of\n> pg_upgrade, plus loss of redundancy (= reduced chance of detecting or\n> recovering from corruption), plus significantly slower access to\n> several critical header fields. The tqual.c routines are already\n> hotspots in many scenarios. I believe this will make them noticeably\n> slower.\n\nI don't think enough people use pg_upgrade to make it a reason to keep\nan extra four bytes of tuple overhead. I realize 8-byte aligned systems\ndon't benefit, but most of our platforms are 4-byte aligned. I don't\nconsider redundancy a valid reason either. We just don't have many\ntable corruption complaints, and the odds that having an extra 4 bytes\nis going to make detection or correction better is unlikely.\n\nThe author addressed the slowness complaint and seemed to refute the\nidea it would be slower.\n\nIs there something I am missing?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jun 2002 19:09:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Tom Lane wrote:\n> > >> Are you planning to ignore my objections to it?\n> >\n> > > The author replied addressing your objections and I saw no reply from on\n> > > on that.\n> >\n> > He replied stating his opinion; my opinion didn't change. I was waiting\n> > for some other people to weigh in with their opinions. As far as I've\n> > seen, no one else has commented at all.\n> >\n> > If I get voted down on the point after suitable discussion, so be it.\n> > But I will strongly object to you applying the patch just because it's\n> > next in your inbox.\n> \n> Tom, I have reviewed your objections:\n> \n> > As I commented before, I am not in favor of this. I don't think that a\n> > four-byte savings justifies a forced initdb with no chance of\n> > pg_upgrade, plus loss of redundancy (= reduced chance of detecting or\n> > recovering from corruption), plus significantly slower access to\n> > several critical header fields. The tqual.c routines are already\n> > hotspots in many scenarios. I believe this will make them noticeably\n> > slower.\n> \n> I don't think enough people use pg_upgrade to make it a reason to keep\n> an extra four bytes of tuple overhead. I realize 8-byte aligned systems\n> don't benefit, but most of our platforms are 4-byte aligned. I don't\n> consider redundency a valid reason either. We just don't have many\n> table corruption complaints, and the odds that having an extra 4 bytes\n> is going to make detection or correction better is unlikely.\n\nThe non-overwriting storage management (which is one reason why we need\nall these header fields) causes over 30 bytes of row overhead anyway. I\nam with Tom here, 4 bytes per row isn't worth making the tuple header\nvariable length size.\n\n> The author addressed the slowness complaint and seemed to refute the\n> idea it would be slower.\n\nDo we have any hard numbers on that? 
Is it just access to the header\nfields, or do we lose the offset cacheability of all fixed size fields\nat the beginning of a row? In the latter case count me into the\nslowness-believer camp.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Fri, 21 Jun 2002 08:55:46 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Jan Wieck wrote:\n> > I don't think enough people use pg_upgrade to make it a reason to keep\n> > an extra four bytes of tuple overhead. I realize 8-byte aligned systems\n> > don't benefit, but most of our platforms are 4-byte aligned. I don't\n> > consider redundency a valid reason either. We just don't have many\n> > table corruption complaints, and the odds that having an extra 4 bytes\n> > is going to make detection or correction better is unlikely.\n> \n> The non-overwriting storage management (which is one reason why whe need\n> all these header fields) causes over 30 bytes of row overhead anyway. I\n> am with Tom here, 4 bytes per row isn't worth making the tuple header\n> variable length size.\n\nWoh, I didn't see anything about making the header variable size. The\nissue was that on 8-byte machines, structure alignment will not allow\nany savings. However, on 4-byte machines, it will be a savings of ~11%\nin the tuple header.\n\n> > The author addressed the slowness complaint and seemed to refute the\n> > idea it would be slower.\n> \n> Do we have any hard numbers on that? Is it just access to the header\n> fields, or do we loose the offset cacheability of all fixed size fields\n> at the beginning of a row? In the latter case count me into the\n> slowness-believer camp.\n\nNo other slowdown, except that access to the tuple header requires a little\nmore smarts. As the author mentions, the increased number of tuples per\npage more than offsets that. In fact, the patch is fairly small, so you\ncan review it yourself:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jun 2002 09:19:37 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Jan Wieck wrote:\n> > I don't think enough people use pg_upgrade to make it a reason to keep\n> > an extra four bytes of tuple overhead. I realize 8-byte aligned systems\n> > don't benefit, but most of our platforms are 4-byte aligned. I don't\n> > consider redundency a valid reason either. We just don't have many\n> > table corruption complaints, and the odds that having an extra 4 bytes\n> > is going to make detection or correction better is unlikely.\n> \n> The non-overwriting storage management (which is one reason why whe need\n> all these header fields) causes over 30 bytes of row overhead anyway. I\n> am with Tom here, 4 bytes per row isn't worth making the tuple header\n> variable length size.\n> \n> > The author addressed the slowness complaint and seemed to refute the\n> > idea it would be slower.\n> \n> Do we have any hard numbers on that? Is it just access to the header\n> fields, or do we loose the offset cacheability of all fixed size fields\n> at the beginning of a row? In the latter case count me into the\n> slowness-believer camp.\n\nHere is a summary of the concepts used in the patch:\n\n\thttp://groups.google.com/groups?hl=en&lr=&ie=UTF-8&threadm=ejf4du853mblm44f0u78f02g166r69lng7%404ax.com&rnum=28&prev=/groups%3Fq%3Dmanfred%2Bkoizar%2Bgroup:comp.databases.postgresql.*%26start%3D20%26hl%3Den%26lr%3D%26ie%3DUTF-8%26scoring%3Dd%26selm%3Dejf4du853mblm44f0u78f02g166r69lng7%25404ax.com%26rnum%3D28\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jun 2002 09:21:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, I have reviewed your objections:\n\nThanks.\n\n> Is there something I am missing?\n\nMy principal objection to this is that I do not like piecemeal breakage\nof pg_upgrade, disk page examination tools, etc. There are other things\nthat have been proposed that would require changes of page header or\ntuple header format, and I would prefer to see them bundled up and done\nas one big change (in some future revision; it's probably too late for\n7.3). \"Disk format of the week\" is not an idea that appeals to me.\n\nI know as well as you do that pg_upgrade compatibility is only\ninteresting if there *is* a pg_upgrade, which very possibly won't\nhappen for 7.3 anyway --- but I don't like foreclosing the possibility\nfor a marginal win. And this is definitely a marginal win. Let's\ntry to batch up enough changes to make it worth the pain.\n\nIn case you are wondering what I am talking about, here is an\noff-the-cuff list of things that I'd prefer not to do one at a time:\n\n* Version identifier words in page headers\n* CRCs in page headers\n* Replication and/or point-in-time recovery might need additional\n header fields similar to those used by WAL\n* Restructuring of line pointers\n* Making OIDs optional (no wasted space if no OID)\n* Adding a tuple version identifier to tuples (for DROP COLUMN etc)\n* Restructuring index tuple headers (remove length field)\n* Fixing array storage to support NULLs inside arrays\n* Restoring time travel on some optional basis\n\nSome of these may never happen (I'm not even in favor of all of them)\nbut it's certain that we will want to do some of them. I don't want to\ntake the same hit over and over when some intelligent project management\nwould let us take it just once or twice.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jun 2002 09:22:14 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> Do we have any hard numbers on that? Is it just access to the header\n> fields, or do we loose the offset cacheability of all fixed size fields\n> at the beginning of a row? In the latter case count me into the\n> slowness-believer camp.\n\nI don't believe the patch would have made the header variable size,\nonly changed what the fixed size is. The slowdown I was worried about\nwas just a matter of a couple extra tests and branches in the tqual.c\nroutines; which would be negligible if they weren't such hotspots.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jun 2002 09:44:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Jan Wieck wrote:\n> > > I don't think enough people use pg_upgrade to make it a reason to keep\n> > > an extra four bytes of tuple overhead. I realize 8-byte aligned systems\n> > > don't benefit, but most of our platforms are 4-byte aligned. I don't\n> > > consider redundency a valid reason either. We just don't have many\n> > > table corruption complaints, and the odds that having an extra 4 bytes\n> > > is going to make detection or correction better is unlikely.\n> >\n> > The non-overwriting storage management (which is one reason why whe need\n> > all these header fields) causes over 30 bytes of row overhead anyway. I\n> > am with Tom here, 4 bytes per row isn't worth making the tuple header\n> > variable length size.\n> \n> Woh, I didn't see anything about making the header variable size. The\n> issue was that on 8-byte machines, structure alignment will not allow\n> any savings. However, on 4-byte machines, it will be a savings of ~11%\n> in the tuple header.\n\nYou're right. Dunno where I got that idea from.\n\nLooking at the patch I find it quite confusing using Xmin as Xmax,\nsometimes. If we use 3 physical variables for 5 virtual ones in that\nway, I would rather use generic names.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Fri, 21 Jun 2002 09:46:38 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > Do we have any hard numbers on that? Is it just access to the header\n> > fields, or do we loose the offset cacheability of all fixed size fields\n> > at the beginning of a row? In the latter case count me into the\n> > slowness-believer camp.\n> \n> I don't believe the patch would have made the header variable size,\n> only changed what the fixed size is. The slowdown I was worried about\n> was just a matter of a couple extra tests and branches in the tqual.c\n> routines; which would be negligible if they weren't such hotspots.\n\nDid someone run at least pgbench with/without that patch applied?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Fri, 21 Jun 2002 10:03:54 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom, I have reviewed your objections:\n> \n> Thanks.\n> \n> > Is there something I am missing?\n> \n> My principal objection to this is that I do not like piecemeal breakage\n> of pg_upgrade, disk page examination tools, etc. There are other things\n\nWhat do we have, pg_upgrade and Red Hat's disk dump tool, right? (I\nforgot the name.)\n\n> that have been proposed that would require changes of page header or\n> tuple header format, and I would prefer to see them bundled up and done\n> as one big change (in some future revision; it's probably too late for\n> 7.3). \"Disk format of the week\" is not an idea that appeals to me.\n\nThis is part of the \"do it perfect or don't do it\" logic that I usually\ndisagree with. We all agree we have a tuple header that is too large. \nHere we have a chance to reduce it by 11% on most platforms, and as a\ndownside, we will not have a working pg_upgrade. What do you think the\naverage user will choose?\n\n> I know as well as you do that pg_upgrade compatibility is only\n> interesting if there *is* a pg_upgrade, which very possibly won't\n> happen for 7.3 anyway --- but I don't like foreclosing the possibility\n\nI am not sure what needs to be done for pg_upgrade for 7.3 (does anyone\nelse?), but if you suspect it will not work anyway, why are you trying\nto save it for 7.3? (I think the problem with pg_upgrade in 7.2 was the\ninability to create pg_clog files to match the new transaction counter.)\n\n> for a marginal win. And this is definitely a marginal win. Let's\n> try to batch up enough changes to make it worth the pain.\n\nMarginal? Seems like a pretty concrete win to me in disk space.\n\nAlso, he has offered to write a conversion tool. 
If he does that, maybe\nhe can improve pg_upgrade if needed.\n\n> In case you are wondering what I am talking about, here is an\n> off-the-cuff list of things that I'd prefer not to do one at a time:\n> \n> * Version identifier words in page headers\n\nYes, we need that, and in fact should do that in 7.3 if we accept this\npatch. This may make future upgrades easier. In fact, I wonder if we\ncould place the page version number in such a way that old pre-7.3 pages\ncould be easily identified, i.e. place version number 0xAE in an offset\nthat used to hold values that were always less than 0x80.\n\n> * CRCs in page headers\n> * Replication and/or point-in-time recovery might need additional\n> header fields similar to those used by WAL\n> * Restructuring of line pointers\n> * Making OIDs optional (no wasted space if no OID)\n> * Adding a tuple version identifier to tuples (for DROP COLUMN etc)\n> * Restructuring index tuple headers (remove length field)\n> * Fixing array storage to support NULLs inside arrays\n> * Restoring time travel on some optional basis\n> \n> Some of these may never happen (I'm not even in favor of all of them)\n> but it's certain that we will want to do some of them. I don't want to\n> take the same hit over and over when some intelligent project management\n> would let us take it just once or twice.\n\nYes, there are some good ones there, but the idea that somehow they are\nall going to hit in the same release seems unlikely. I say let's do\nsome now, some later, and move ahead.\n\nUltimately, I think we need to add smarts to the backend to read old\nformats. I know it is a pain, but I see no other options. pg_upgrade\nalways will have trouble with certain changes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jun 2002 10:11:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Jan Wieck wrote:\n> Tom Lane wrote:\n> > \n> > Jan Wieck <JanWieck@Yahoo.com> writes:\n> > > Do we have any hard numbers on that? Is it just access to the header\n> > > fields, or do we loose the offset cacheability of all fixed size fields\n> > > at the beginning of a row? In the latter case count me into the\n> > > slowness-believer camp.\n> > \n> > I don't believe the patch would have made the header variable size,\n> > only changed what the fixed size is. The slowdown I was worried about\n> > was just a matter of a couple extra tests and branches in the tqual.c\n> > routines; which would be negligible if they weren't such hotspots.\n> \n> Did someone run at least pgbench with/without that patch applied?\n\nNo, but he did perform this analysis:\n\n> thus reducing the additional cost to one t_infomask compare,\n> because the Satisfies functions only access Cmin and Cmax,\n> when HEAP_MOVED is known to be not set.\n> \n> OTOH experimenting with a moderatly sized \"out of production\"\n> database I got the following results:\n> | pages | pages |\n> relkind | count | tuples | before| after | savings\n> --------+-------+--------+-------+-------+--------\n> i | 31 | 808146 | 8330 | 8330 | 0.00%\n> r | 32 | 612968 | 13572 | 13184 | 2.86%\n> all | 63 | | 21902 | 21514 | 1.77%\n> \n> 2.86% fewer heap pages mean 2.86% less disk IO caused by heap pages.\n> Considering that index pages tend to benefit more from caching\n> we conclude that heap pages contribute more to the overall\n> IO load, so the total savings in the number of disk IOs should\n> be better than the 1.77% shown in the table above. I think\n> this outweighs a few CPU cycles now and then.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jun 2002 10:25:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Some of these may never happen (I'm not even in favor of all of them)\n>> but it's certain that we will want to do some of them. I don't want to\n>> take the same hit over and over when some intelligent project management\n>> would let us take it just once or twice.\n\n> Yes, there are some good ones there, but the idea that somehow they are\n> all going to hit in the same release seems unlikely. I say let's do\n> some now, some later, and move ahead.\n\nI think it's very likely that P-I-T recovery and replication will hit\nin the 7.4 release cycle. I would prefer to see us plan ahead to do\nthese other changes for 7.4 as well, and get as many of them done in\nthat cycle as we can.\n\nBasically my point is that there are costs and benefits to this sort\nof change, and many of the costs are quantized --- it doesn't cost\nmore to make three incompatible disk-format changes than one, *as long\nas they're in the same release*. So we should try to do some actual\nmanagement of such changes, not accept them willy-nilly whenever someone\nfeels like doing one.\n\nThis patch is unlikely to suffer significant bit-rot if we hold it for\n7.4, and that's what I think we should do with it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jun 2002 11:39:45 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Some of these may never happen (I'm not even in favor of all of them)\n> >> but it's certain that we will want to do some of them. I don't want to\n> >> take the same hit over and over when some intelligent project management\n> >> would let us take it just once or twice.\n> \n> > Yes, there are some good ones there, but the idea that somehow they are\n> > all going to hit in the same release seems unlikely. I say let's do\n> > some now, some later, and move ahead.\n> \n> I think it's very likely that P-I-T recovery and replication will hit\n> in the 7.4 release cycle. I would prefer to see us plan ahead to do\n> these other changes for 7.4 as well, and get as many of them done in\n> that cycle as we can.\n> \n> Basically my point is that there are costs and benefits to this sort\n> of change, and many of the costs are quantized --- it doesn't cost\n> more to make three incompatible disk-format changes than one, *as long\n> as they're in the same release*. So we should try to do some actual\n> management of such changes, not accept them willy-nilly whenever someone\n> feels like doing one.\n> \n> This patch is unlikely to suffer significant bit-rot if we hold it for\n> 7.4, and that's what I think we should do with it.\n\nWell, that is a different argument than initially. So, it is a valid\npatch, but we have to decide when to apply it.\n\nWe can easily hold it until we near release of 7.3. If pg_upgrade is in\ngood shape _and_ no other format changes are required, we can hold it\nfor 7.4. What happens if 7.4 doesn't have any format changes?\n\nHowever, if we have other changes or pg_upgrade isn't going to work, we\ncan apply it in August.\n\n(Manfred, you can be sure I will not lose this patch.)\n\nHowever, we have been very bad at predicting what features/changes are\ncoming in future releases. 
Also, I don't think replication will require\ntuple format changes. I will check with the group but I haven't heard\nanything about that.\n\nSo, we have to decide if we apply it now or delay it for later in 7.3,\nor for >=7.4.\n\nMy personal vote is that we apply it now, and perhaps try some of the\nother format changes we were going to make. Tom's vote is to hold it, I\nassume. Let's see what others say.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jun 2002 12:45:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Jan Wieck wrote:\n> >\n> > Did someone run at least pgbench with/without that patch applied?\n> \n> No, but he did perform this analysis:\n> \n> > thus reducing the additional cost to one t_infomask compare,\n> > because the Satisfies functions only access Cmin and Cmax,\n> > when HEAP_MOVED is known to be not set.\n> >\n> > OTOH experimenting with a moderatly sized \"out of production\"\n> > database I got the following results:\n> > | pages | pages |\n> > relkind | count | tuples | before| after | savings\n> > --------+-------+--------+-------+-------+--------\n> > i | 31 | 808146 | 8330 | 8330 | 0.00%\n> > r | 32 | 612968 | 13572 | 13184 | 2.86%\n> > all | 63 | | 21902 | 21514 | 1.77%\n> >\n> > 2.86% fewer heap pages mean 2.86% less disk IO caused by heap pages.\n> > Considering that index pages tend to benefit more from caching\n> > we conclude that heap pages contribute more to the overall\n> > IO load, so the total savings in the number of disk IOs should\n> > be better than the 1.77% shown in the table above. I think\n> > this outweighs a few CPU cycles now and then.\n\nThis anawhat? This is proof that this patch is able to save not even\n3% of disk space in a production environment plus an assumption that the\nsaved IO outweighs the extra effort in the tuple visibility checks.\n\nHere are some numbers:\n\nP3 850MHz 256MB RAM IDE\npostmaster -N256 -B8192\npgbench -i -s 10 db\npgbench -c 20 -t 500 db\n\n\nCurrent CVS tip: tps 34.1, 38.7, 36.6\n avg(tps) 36.4\n\nWith patch: tps 37.0, 41.1, 41.1\n avg(tps) 39.7\n\nSo it saves less than 3% disk space at the cost of about 9% performance\nloss. If we can do the same the other way around I'd go for wasting some\nmore disk space.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Fri, 21 Jun 2002 17:13:10 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> Here are some numbers:\n\n> Current CVS tip: tps 34.1, 38.7, 36.6\n> avg(tps) 36.4\n\n> With patch: tps 37.0, 41.1, 41.1\n> avg(tps) 39.7\n\n> So it saves less than 3% disk space at the cost of about 9% performance\n> loss.\n\nUh ... isn't more TPS better?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 21 Jun 2002 17:16:50 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "Tom Lane wrote:\n\n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > Here are some numbers:\n> \n> > Current CVS tip: tps 34.1, 38.7, 36.6\n> > avg(tps) 36.4\n> \n> > With patch: tps 37.0, 41.1, 41.1\n> > avg(tps) 39.7\n> \n> > So it saves less than 3% disk space at the cost of about 9% performance\n> > loss.\n> \n> Uh ... isn't more TPS better?\n\nAlso, is that 3% in disk space savings the actual number, or just copied\nfrom the \"anawhat\"?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nAnd God said: \"Let there be Satan, so that people don't blame me for everything.\"\n\"And let there be lawyers, so that people don't blame Satan for everything\"\n\n",
"msg_date": "Fri, 21 Jun 2002 21:38:12 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "Alvaro Herrera wrote:\n> Tom Lane dijo: \n> \n> > Jan Wieck <JanWieck@Yahoo.com> writes:\n> > > Here are some numbers:\n> > \n> > > Current CVS tip: tps 34.1, 38.7, 36.6\n> > > avg(tps) 36.4\n> > \n> > > With patch: tps 37.0, 41.1, 41.1\n> > > avg(tps) 39.7\n> > \n> > > So it saves less than 3% disk space at the cost of about 9% performance\n> > > loss.\n> > \n> > Uh ... isn't more TPS better?\n\n9%, that is a dramatic difference. Is it caused by the reduced disk\nspace (Jan's numbers are correct) or by the extra overhead in the merged\nfields (Jan's numbers are backwards)? Jan will tell us soon.\n\n> Also, is that 3% in disk space savings the actual number, or just copied\n> from the \"anawhat\"?\n\nThe 3% is savings from a sample database. Header size is 11% reduced.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 21 Jun 2002 23:57:15 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> So it saves less than 3% disk space at the cost of about 9% performance\n> loss.\n> \n> Uh ... isn't more TPS better?\n\n> 9%, that is a dramatic difference.\n\nYup. I'm suspicious of it --- if the database is 3% smaller then I'd\nbelieve a 3% performance gain from reduction of I/O, but I don't see\nwhere 9% comes from.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 22 Jun 2002 16:56:29 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "\nJan, any update on this? Are the numbers correct?\n\n---------------------------------------------------------------------------\n\nJan Wieck wrote:\n> Bruce Momjian wrote:\n> > \n> > Jan Wieck wrote:\n> > >\n> > > Did someone run at least pgbench with/without that patch applied?\n> > \n> > No, but he did perform this analysis:\n> > \n> > > thus reducing the additional cost to one t_infomask compare,\n> > > because the Satisfies functions only access Cmin and Cmax,\n> > > when HEAP_MOVED is known to be not set.\n> > >\n> > > OTOH experimenting with a moderatly sized \"out of production\"\n> > > database I got the following results:\n> > > | pages | pages |\n> > > relkind | count | tuples | before| after | savings\n> > > --------+-------+--------+-------+-------+--------\n> > > i | 31 | 808146 | 8330 | 8330 | 0.00%\n> > > r | 32 | 612968 | 13572 | 13184 | 2.86%\n> > > all | 63 | | 21902 | 21514 | 1.77%\n> > >\n> > > 2.86% fewer heap pages mean 2.86% less disk IO caused by heap pages.\n> > > Considering that index pages tend to benefit more from caching\n> > > we conclude that heap pages contribute more to the overall\n> > > IO load, so the total savings in the number of disk IOs should\n> > > be better than the 1.77% shown in the table above. I think\n> > > this outweighs a few CPU cycles now and then.\n> \n> This anawhat? This is a proof that this patch is able to save not even\n> 3% of disk space in a production environment plus an assumption that the\n> saved IO outweights the extra effort in the tuple visibility checks.\n> \n> Here are some numbers:\n> \n> P3 850MHz 256MB RAM IDE\n> postmaster -N256 -B8192\n> pgbench -i -s 10 db\n> pgbench -c 20 -t 500 db\n> \n> \n> Current CVS tip: tps 34.1, 38.7, 36.6\n> avg(tps) 36.4\n> \n> With patch: tps 37.0, 41.1, 41.1\n> avg(tps) 39.7\n> \n> So it saves less than 3% disk space at the cost of about 9% performance\n> loss. 
If we can do the same the other way around I'd go for wasting some\n> more disk space.\n> \n> \n> Jan\n> \n> -- \n> \n> #======================================================================#\n> # It's easier to get forgiveness for being wrong than for being right. #\n> # Let's break this rule - forgive me. #\n> #================================================== JanWieck@Yahoo.com #\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Tue, 25 Jun 2002 09:38:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Jan, any update on this? Are the numbers correct?\n\nSorry, it took some time.\n\nWell, it turned out that pgbench does a terrible job with runtimes below\n30 minutes. Seems that one checkpoint more or less can have a\nsignificant impact on the numbers reported by such a run.\n\nAlso starting off with a populated cache (ramp-up) seems to be a very\ngood idea. So my advice for running pgbench is to do an initdb before\nrunning (to wipe out logfile creation/reuse issues). Then populate a\nfresh database with a reasonable scaling factor. Run pgbench with a\nhigh enough -c and -t so it runs for at least 5 minutes. Then do the actual\nmeasurement with a pgbench run with settings keeping the system busy for\n30 minutes or more. Needless to say, keep your fingers (and everyone\nelse's too) off the system during that time. Shut down unneeded\nservices, and especially cron!\n\nUsing the above, the discussed change to the tuple header shows less\nthan 1% difference.\n\n\nSorry for all the confusion.\nJan\n\n> \n> ---------------------------------------------------------------------------\n> \n> Jan Wieck wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > Jan Wieck wrote:\n> > > >\n> > > > Did someone run at least pgbench with/without that patch applied?\n> > >\n> > > No, but he did perform this analysis:\n> > >\n> > > > thus reducing the additional cost to one t_infomask compare,\n> > > > because the Satisfies functions only access Cmin and Cmax,\n> > > > when HEAP_MOVED is known to be not set.\n> > > >\n> > > > OTOH experimenting with a moderatly sized \"out of production\"\n> > > > database I got the following results:\n> > > > | pages | pages |\n> > > > relkind | count | tuples | before| after | savings\n> > > > --------+-------+--------+-------+-------+--------\n> > > > i | 31 | 808146 | 8330 | 8330 | 0.00%\n> > > > r | 32 | 612968 | 13572 | 13184 | 2.86%\n> > > > all | 63 | | 21902 | 21514 | 1.77%\n> > > >\n> > > > 2.86% fewer heap pages 
mean 2.86% less disk IO caused by heap pages.\n> > > > Considering that index pages tend to benefit more from caching\n> > > > we conclude that heap pages contribute more to the overall\n> > > > IO load, so the total savings in the number of disk IOs should\n> > > > be better than the 1.77% shown in the table above. I think\n> > > > this outweighs a few CPU cycles now and then.\n> >\n> > This anawhat? This is a proof that this patch is able to save not even\n> > 3% of disk space in a production environment plus an assumption that the\n> > saved IO outweights the extra effort in the tuple visibility checks.\n> >\n> > Here are some numbers:\n> >\n> > P3 850MHz 256MB RAM IDE\n> > postmaster -N256 -B8192\n> > pgbench -i -s 10 db\n> > pgbench -c 20 -t 500 db\n> >\n> >\n> > Current CVS tip: tps 34.1, 38.7, 36.6\n> > avg(tps) 36.4\n> >\n> > With patch: tps 37.0, 41.1, 41.1\n> > avg(tps) 39.7\n> >\n> > So it saves less than 3% disk space at the cost of about 9% performance\n> > loss. If we can do the same the other way around I'd go for wasting some\n> > more disk space.\n> >\n> >\n> > Jan\n> >\n> > --\n> >\n> > #======================================================================#\n> > # It's easier to get forgiveness for being wrong than for being right. #\n> > # Let's break this rule - forgive me. #\n> > #================================================== JanWieck@Yahoo.com #\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Tue, 25 Jun 2002 16:15:45 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> Well, it turned out that pgbench does a terrible job with runtimes below\n> 30 minutes. Seems that one checkpoint more or less can have a\n> significant impact on the numbers reported by such run.\n\nYeah, it is *very* painful to get reproducible numbers out of pgbench.\n\n> Using the above, the discussed change to the tuple header shows less\n> than 1% difference.\n\nSo the bottom line is that there is probably no measurable performance\ndifference, but a 3% space savings, at least for average row lengths\ncomparable to those used in pgbench. (Obviously the space savings is\ngoing to depend on average row length...)\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Tue, 25 Jun 2002 16:20:36 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jan Wieck <JanWieck@Yahoo.com> writes:\n> > Well, it turned out that pgbench does a terrible job with runtimes below\n> > 30 minutes. Seems that one checkpoint more or less can have a\n> > significant impact on the numbers reported by such run.\n> \n> Yeah, it is *very* painful to get reproducible numbers out of pgbench.\n\nAnd since it seems to be extremely checkpoint dependent, imagine what\nsomeone would have to do to measure the impact of changing config\noptions about checkpoint segments and intervals. But I guess that this\nwould be an issue for every benchmark.\n\nThe TPC must know about it, as the specs for the TPC-C require at least\none checkpoint occurring during the measurement time and 90% of all\ntransactions meeting the timing constraints. Currently PostgreSQL\ncreates an IO storm on a checkpoint that simply brings the system down\nto response times 100x the average, slowly recovering from there since\nall simulated users are active at once then (they all woke up from their\nthinking or keying times and pressed SEND).\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Tue, 25 Jun 2002 16:42:08 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "On Fri, 21 Jun 2002 12:45:48 -0400 (EDT), Bruce Momjian <pgman@candle.pha.pa.us> wrote:\n>> > Yes, there are some good ones there, but the idea that somehow they are\n>> > all going to hit in the same release seems unlikely. I say let's do\n>> > some now, some later, and move ahead.\n\nI (strongly :-)) agree.\n\n>Well, that is a different argument than initially. So, it is a valid\n>patch, but we have to decide when to apply it.\n>\n>We can easily hold it until we near release of 7.3. If pg_upgrade is in\n>good shape _and_ no other format changes are required, we can hold it\n>for 7.4. What happens if 7.4 doesn't have any format changes?\n>\n>However, if we have other changes or pg_upgrade isn't going to work, we\n>can apply it in August.\n\nBut what, if the patch causes or reveals a well hidden bug, possibly\nsomewhere else in the code? I'd vote for applying it now, so that in\nAugust it already has undergone some testing. You can always patch -R\nbefore going beta...\n\n>(Manfred, you can be sure I will not lose this patch.)\n\nThanks. Anyway, lose it! Here is a new version.\n\n>So, we have to decide if we apply it now or delay it for later in 7.3,\n>or for >=7.4.\n\nWhy not let the users decide? With this new version of the patch they\ncan\n\tconfigure --enable-pg72format\nif this patch is the only thing that stops pg_upgrade from working and\nif they want to use pg_upgrade.\n\n>My personal vote is that we apply it now, and perhaps try some of the\n>other format changes we were going to make.\n\nAnd with #ifdef PG72FORMAT we can introduce them without very much\nrisk. One thing that's still missing is we'd need two entries in\ncatversion.h.\n\nServus\n Manfred\n\ndiff -u ../base/INSTALL ./INSTALL\n--- ../base/INSTALL\t2002-01-31 01:46:26.000000000 +0100\n+++ ./INSTALL\t2002-06-25 15:59:03.000000000 +0200\n@@ -283,6 +283,13 @@\n Enables single-byte character set recode support. 
See the\n Administrator's Guide about this feature.\n \n+ --enable-pg72format\n+\n+ Enables version 7.2 page format, giving you a better chance of\n+ converting existing databases by pg_upgrade. Do not use this\n+ feature, if you are just starting to use PostgreSQL or if you\n+ plan to upgrade via pg_dump/restore.\n+\n --enable-multibyte\n \n Allows the use of multibyte character encodings (including\nCommon subdirectories: ../base/config and ./config\ndiff -u ../base/configure ./configure\n--- ../base/configure\t2002-05-29 14:42:20.000000000 +0200\n+++ ./configure\t2002-06-25 15:51:18.000000000 +0200\n@@ -1652,6 +1652,44 @@\n \n \n #\n+# Version 7.2 page format (--enable-pg72format)\n+#\n+echo \"$as_me:$LINENO: checking whether to build with version 7.2 page format\" >&5\n+echo $ECHO_N \"checking whether to build with version 7.2 page format... $ECHO_C\" >&6\n+\n+\n+# Check whether --enable-pg72format or --disable-pg72format was given.\n+if test \"${enable_pg72format+set}\" = set; then\n+ enableval=\"$enable_pg72format\"\n+\n+ case $enableval in\n+ yes)\n+\n+cat >>confdefs.h <<\\_ACEOF\n+#define PG72FORMAT 1\n+_ACEOF\n+\n+ ;;\n+ no)\n+ :\n+ ;;\n+ *)\n+ { { echo \"$as_me:$LINENO: error: no argument expected for --enable-pg72format option\" >&5\n+echo \"$as_me: error: no argument expected for --enable-pg72format option\" >&2;}\n+ { (exit 1); exit 1; }; }\n+ ;;\n+ esac\n+\n+else\n+ enable_pg72format=no\n+\n+fi;\n+\n+echo \"$as_me:$LINENO: result: $enable_pg72format\" >&5\n+echo \"${ECHO_T}$enable_pg72format\" >&6\n+\n+\n+#\n # Multibyte support\n #\n MULTIBYTE=SQL_ASCII\ndiff -u ../base/configure.in ./configure.in\n--- ../base/configure.in\t2002-05-29 14:42:20.000000000 +0200\n+++ ./configure.in\t2002-06-25 15:51:15.000000000 +0200\n@@ -162,6 +162,16 @@\n \n \n #\n+# Version 7.2 page format (--enable-pg72format)\n+#\n+AC_MSG_CHECKING([whether to build with version 7.2 page format])\n+PGAC_ARG_BOOL(enable, pg72format, no, [ --enable-pg72format enable version 7.2 page 
format],\n+ [AC_DEFINE([PG72FORMAT], 1,\n+ [Set to 1 if you want version 7.2 page format (--enable-pg72format)])])\n+AC_MSG_RESULT([$enable_pg72format])\n+\n+\n+#\n # Multibyte support\n #\n MULTIBYTE=SQL_ASCII\ndiff -ru ../base/src/backend/access/heap/heapam.c src/backend/access/heap/heapam.c\n--- ../base/src/backend/access/heap/heapam.c\t2002-06-17 10:11:31.000000000 +0200\n+++ src/backend/access/heap/heapam.c\t2002-06-17 22:35:32.000000000 +0200\n@@ -2204,7 +2204,7 @@\n \t\thtup->t_infomask = HEAP_XMAX_INVALID | xlhdr.mask;\n \t\tHeapTupleHeaderSetXmin(htup, record->xl_xid);\n \t\tHeapTupleHeaderSetCmin(htup, FirstCommandId);\n-\t\tHeapTupleHeaderSetXmax(htup, InvalidTransactionId);\n+\t\tHeapTupleHeaderSetXmaxInvalid(htup);\n \t\tHeapTupleHeaderSetCmax(htup, FirstCommandId);\n \n \t\toffnum = PageAddItem(page, (Item) htup, newlen, offnum,\ndiff -ru ../base/src/include/access/htup.h src/include/access/htup.h\n--- ../base/src/include/access/htup.h\t2002-06-17 10:11:32.000000000 +0200\n+++ src/include/access/htup.h\t2002-06-25 16:14:37.000000000 +0200\n@@ -57,15 +57,33 @@\n * Also note that we omit the nulls bitmap if t_infomask shows that there\n * are no nulls in the tuple.\n */\n+/*\n+** We store five \"virtual\" fields Xmin, Cmin, Xmax, Cmax, and Xvac\n+** in three physical fields t_xmin, t_cid, t_xmax:\n+** CommandId Cmin;\t\tinsert CID stamp\n+** CommandId Cmax;\t\tdelete CommandId stamp\n+** TransactionId Xmin;\t\tinsert XID stamp\n+** TransactionId Xmax;\t\tdelete XID stamp\n+** TransactionId Xvac;\t\tused by VACCUUM\n+**\n+** This assumes, that a CommandId can be stored in a TransactionId.\n+*/\n typedef struct HeapTupleHeaderData\n {\n \tOid\t\t\tt_oid;\t\t\t/* OID of this tuple -- 4 bytes */\n \n+#ifdef PG72FORMAT\n+\t/* v7.2: Xvac is stored in t_cmin */\n \tCommandId\tt_cmin;\t\t\t/* insert CID stamp -- 4 bytes each */\n \tCommandId\tt_cmax;\t\t\t/* delete CommandId stamp */\n \n \tTransactionId t_xmin;\t\t/* insert XID stamp -- 4 bytes each */\n 
\tTransactionId t_xmax;\t\t/* delete XID stamp */\n+#else\n+\tTransactionId t_xmin;\t\t/* Xmin -- 4 bytes each */\n+\tTransactionId t_cid;\t\t/* Cmin, Cmax, Xvac */\n+\tTransactionId t_xmax;\t\t/* Xmax, Cmax */\n+#endif\n \n \tItemPointerData t_ctid;\t\t/* current TID of this or newer tuple */\n \n@@ -75,7 +93,7 @@\n \n \tuint8\t\tt_hoff;\t\t\t/* sizeof header incl. bitmap, padding */\n \n-\t/* ^ - 31 bytes - ^ */\n+\t/* ^ - 27 (v7.3) or 31 (v7.2) bytes - ^ */\n \n \tbits8\t\tt_bits[1];\t\t/* bitmap of NULLs -- VARIABLE LENGTH */\n \n@@ -96,6 +114,8 @@\n \t\t\t\t\t\t\t\t\t\t * attribute(s) */\n #define HEAP_HASEXTENDED\t\t0x000C\t/* the two above combined */\n \n+#define HEAP_XMIN_IS_XMAX\t\t0x0040\t/* created and deleted in the */\n+\t\t\t\t\t\t\t\t\t\t/* same transaction */\n #define HEAP_XMAX_UNLOGGED\t\t0x0080\t/* to lock tuple for update */\n \t\t\t\t\t\t\t\t\t\t/* without logging */\n #define HEAP_XMIN_COMMITTED\t\t0x0100\t/* t_xmin committed */\n@@ -108,6 +128,7 @@\n \t\t\t\t\t\t\t\t\t\t * vacuum */\n #define HEAP_MOVED_IN\t\t\t0x8000\t/* moved from another place by\n \t\t\t\t\t\t\t\t\t\t * vacuum */\n+#define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)\n \n #define HEAP_XACT_MASK\t\t\t0xFFF0\t/* visibility-related bits */\n \n@@ -116,8 +137,11 @@\n /* HeapTupleHeader accessor macros */\n \n #define HeapTupleHeaderGetXmin(tup) \\\n-\t((tup)->t_xmin)\n+( \\\n+\t(tup)->t_xmin \\\n+)\n \n+#ifdef PG72FORMAT\n #define HeapTupleHeaderGetXmax(tup) \\\n \t((tup)->t_xmax)\n \n@@ -163,6 +187,98 @@\n \tAssertMacro((tup)->t_infomask & (HEAP_MOVED_IN | HEAP_MOVED_OFF)), \\\n \tTransactionIdStore((xid), (TransactionId *) &((tup)->t_cmin)) \\\n )\n+#else\n+#define HeapTupleHeaderGetXmax(tup) \\\n+( \\\n+\t((tup)->t_infomask & HEAP_XMIN_IS_XMAX) ? \\\n+\t\t(tup)->t_xmin \\\n+\t: \\\n+\t\t(tup)->t_xmax \\\n+)\n+\n+/* no AssertMacro, because this is read as a system-defined attribute */\n+#define HeapTupleHeaderGetCmin(tup) \\\n+( \\\n+\t((tup)->t_infomask & HEAP_MOVED) ? 
\\\n+\t\tFirstCommandId \\\n+\t: \\\n+\t( \\\n+\t\t((tup)->t_infomask & (HEAP_XMIN_IS_XMAX | HEAP_XMAX_INVALID)) ? \\\n+\t\t\t(CommandId) (tup)->t_cid \\\n+\t\t: \\\n+\t\t\tFirstCommandId \\\n+\t) \\\n+)\n+\n+#define HeapTupleHeaderGetCmax(tup) \\\n+( \\\n+\t((tup)->t_infomask & HEAP_MOVED) ? \\\n+\t\tFirstCommandId \\\n+\t: \\\n+\t( \\\n+\t\t((tup)->t_infomask & (HEAP_XMIN_IS_XMAX | HEAP_XMAX_INVALID)) ? \\\n+\t\t\t(CommandId) (tup)->t_xmax \\\n+\t\t: \\\n+\t\t\t(CommandId) (tup)->t_cid \\\n+\t) \\\n+)\n+\n+#define HeapTupleHeaderGetXvac(tup) \\\n+( \\\n+\tAssertMacro((tup)->t_infomask & HEAP_MOVED), \\\n+\t(tup)->t_cid \\\n+)\n+\n+\n+#define HeapTupleHeaderSetXmin(tup, xid) \\\n+( \\\n+\tTransactionIdStore((xid), &(tup)->t_xmin) \\\n+)\n+\n+#define HeapTupleHeaderSetXminInvalid(tup) \\\n+do { \\\n+\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; \\\n+\tStoreInvalidTransactionId(&(tup)->t_xmin); \\\n+} while (0)\n+\n+#define HeapTupleHeaderSetXmax(tup, xid) \\\n+do { \\\n+\tif (TransactionIdEquals((tup)->t_xmin, (xid))) \\\n+\t\t(tup)->t_infomask |= HEAP_XMIN_IS_XMAX; \\\n+\telse \\\n+\t{ \\\n+\t\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; \\\n+\t\tTransactionIdStore((xid), &(tup)->t_xmax); \\\n+\t} \\\n+} while (0)\n+\n+#define HeapTupleHeaderSetXmaxInvalid(tup) \\\n+do { \\\n+\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; \\\n+\tStoreInvalidTransactionId(&(tup)->t_xmax); \\\n+} while (0)\n+\n+#define HeapTupleHeaderSetCmin(tup, cid) \\\n+do { \\\n+\tAssert(!((tup)->t_infomask & HEAP_MOVED)); \\\n+\tTransactionIdStore((TransactionId) (cid), &(tup)->t_cid); \\\n+} while (0)\n+\n+#define HeapTupleHeaderSetCmax(tup, cid) \\\n+do { \\\n+\tAssert(!((tup)->t_infomask & HEAP_MOVED)); \\\n+\tif ((tup)->t_infomask & HEAP_XMIN_IS_XMAX) \\\n+\t\tTransactionIdStore((TransactionId) (cid), &(tup)->t_xmax); \\\n+\telse \\\n+\t\tTransactionIdStore((TransactionId) (cid), &(tup)->t_cid); \\\n+} while (0)\n+\n+#define HeapTupleHeaderSetXvac(tup, xid) \\\n+do { \\\n+\tAssert((tup)->t_infomask & 
HEAP_MOVED); \\\n+\tTransactionIdStore((xid), &(tup)->t_cid); \\\n+} while (0)\n+#endif\n \n \n /*\ndiff -ru ../base/src/include/pg_config.h.in src/include/pg_config.h.in\n--- ../base/src/include/pg_config.h.in\t2002-06-17 21:04:03.000000000 +0200\n+++ src/include/pg_config.h.in\t2002-06-25 16:02:40.000000000 +0200\n@@ -39,6 +39,9 @@\n /* Set to 1 if you want cyrillic recode (--enable-recode) */\n #undef CYR_RECODE\n \n+/* Set to 1 if you want version 7.2 page format (--enable-pg72format) */\n+#undef PG72FORMAT\n+\n /* Set to 1 if you want to use multibyte characters (--enable-multibyte) */\n #undef MULTIBYTE\n \n\n\n\n",
"msg_date": "Wed, 26 Jun 2002 13:05:19 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": true,
"msg_subject": "Re: Reduce heap tuple header size II"
},
{
"msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> Why not let the users decide? With this new version of the patch they\n> can\n> \tconfigure --enable-pg72format\n\nI think that is a really, really bad idea. If we're gonna do it we\nshould just do it. We don't need multiple incompatible file formats\nrunning around all calling themselves PG 7.3.\n\nConfigure options are generally not a solution to anything anyway,\nsince more and more people use RPM distributions and don't have the\noption to make their own configure choices.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Wed, 26 Jun 2002 10:00:23 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size II "
},
{
"msg_contents": "\nOK, we need to vote on this patch. It reduces the tuple header by 4\nbytes (11% decrease).\n\nIf we apply it, we will not be able to easily use pg_upgrade for 7.3\nbecause the on-disk table format will change.\n\nVotes are:\n\n1) Apply it now\n2) Wait until August and see if any other table format changes are made.\n3) Delay patch until we have other table format changes.\n\n\n---------------------------------------------------------------------------\n\nManfred Koizar wrote:\n> On Fri, 14 Jun 2002 10:16:22 -0400, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> >As I commented before, I am not in favor of this. I don't think that a\n> >four-byte savings justifies a forced initdb with no chance of\n> >pg_upgrade,\n> \n> As I don't know other users' preferences, I cannot comment on this.\n> I just think that four bytes per tuple can amount to a sum that justifies\n> this effort. Disk space is not my argument here, but reduced disk IO is.\n> \n> >plus significantly slower access to\n> >several critical header fields. The tqual.c routines are already\n> >hotspots in many scenarios. I believe this will make them noticeably\n> >slower.\n> \n> Significantly slower? I tried to analyze HeapTupleSatisfiesUpdate(),\n> as I think it is the most complicated of these Satisfies functions\n> (look for the !! comments):\n> \n> \n> So in the worst case we return after having done four more\n> compares than without the patch. Note that in the most common\n> cases there is no additional cost at all. If you still think\n> we have a performance problem here, we could replace GetCmin\n> and GetCmax by cheaper macros:\n> \n> #define HeapTupleHeaderGetCminKnowingThatNotMoved(tup) \\\n> ( \\\n> AssertMacro(!((tup)->t_infomask & HEAP_MOVED)),\n> ( \\\n> ((tup)->t_infomask & (HEAP_XMIN_IS_XMAX | HEAP_XMAX_INVALID)) ? 
\\\n> (CommandId) (tup)->t_cid \\\n> : \\\n> FirstCommandId \\\n> ) \\\n> )\n> \n> thus reducing the additional cost to one t_infomask compare,\n> because the Satisfies functions only access Cmin and Cmax,\n> when HEAP_MOVED is known to be not set.\n> \n> OTOH experimenting with a moderatly sized \"out of production\"\n> database I got the following results:\n> | pages | pages |\n> relkind | count | tuples | before| after | savings\n> --------+-------+--------+-------+-------+--------\n> i | 31 | 808146 | 8330 | 8330 | 0.00%\n> r | 32 | 612968 | 13572 | 13184 | 2.86%\n> all | 63 | | 21902 | 21514 | 1.77%\n> \n> 2.86% fewer heap pages mean 2.86% less disk IO caused by heap pages.\n> Considering that index pages tend to benefit more from caching\n> we conclude that heap pages contribute more to the overall\n> IO load, so the total savings in the number of disk IOs should\n> be better than the 1.77% shown in the table above. I think\n> this outweighs a few CPU cycles now and then.\n> \n> Servus\n> Manfred\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Fri, 28 Jun 2002 19:32:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Reduce heap tuple header size"
},
{
"msg_contents": "On Fri, Jun 28, 2002 at 07:32:00PM -0400, Bruce Momjian wrote:\n> OK, we need to vote on this patch. It reduces the tuple header by 4\n> bytes (11% decrease).\n> \n> If we apply it, we will not be able to easily use pg_upgrade for 7.3\n> because the on-disk table format will change.\n\nI vote for apply it now -- if there are no other on-disk format changes\nmade by late August and pg_upgrade is actually a valid, production-quality\nupgrade mechanism (which I'm not prepared to assume), consider\nreverting it.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n\n",
"msg_date": "Fri, 28 Jun 2002 19:50:26 -0400",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Reduce heap tuple header size"
},
{
"msg_contents": "On Fri, 28 Jun 2002, Bruce Momjian wrote:\n\n> OK, we need to vote on this patch. It reduces the tuple header by 4\n> bytes (11% decrease).\n>\n> If we apply it, we will not be able to easily use pg_upgrade for 7.3\n> because the on-disk table format will change.\n>\n> Votes are:\n>\n> 1) Apply it now\n> 2) Wait until August and see if any other table format changes are made.\n> 3) Delay patch until we have other table format changes.\n\nI would tend to say \"apply it now\" so that we can get more testing\nof it.\n\nIt would also be good to see how else we could save space in the\nheader, e.g., by not having an empty OID field when a table is\ncreated without OIDs. (That would double the space savings.)\n\nI tend to use ID cross reference tables quite a lot, and these tend to\nhave a lot of rows in them. (E.g., group table has group ID; user table\nhas user-id; a group-id + user-id table determines which users are in\nwhich groups. In one project a couple of years ago, such a table was 85\nmillion rows.) These types of tables are typically 8 bytes of data and\n40 or so bytes of overhead. Ouch!\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n\n",
"msg_date": "Mon, 1 Jul 2002 11:50:14 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Reduce heap tuple header size"
},
{
"msg_contents": "Curt Sampson wrote:\n> On Fri, 28 Jun 2002, Bruce Momjian wrote:\n> \n> > OK, we need to vote on this patch. It reduces the tuple header by 4\n> > bytes (11% decrease).\n> >\n> > If we apply it, we will not be able to easily use pg_upgrade for 7.3\n> > because the on-disk table format will change.\n> >\n> > Votes are:\n> >\n> > 1) Apply it now\n> > 2) Wait until August and see if any other table format changes are made.\n> > 3) Delay patch until we have other table format changes.\n> \n> I would tend to say \"apply it now\" so that we can get more testing\n> of it.\n\nOK, I have heard enough votes to add this. If more votes come in while\nit is in the queue, we can reevaluate.\n\nAlso, Manfred is working on making the OID field optional, so it seems\nwe may have more format changes coming. Time to focus on any other data\nformat changes we want to be in 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Mon, 1 Jul 2002 10:15:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Reduce heap tuple header size"
},
{
"msg_contents": "On Mon, 1 Jul 2002 10:15:42 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>OK, I have heard enough votes to add this.\n\nIn a second version of this patch posted on 2002-06-26 you can control\nthe tuple format by #define/#undef PG72FORMAT. While there have been\nvoices saying that exposing this choice to the user via\n\tconfigure --enable-pg72format\nis not a good idea [well, it was one voice and the idea was called\n\"really, really bad\" ;-) but the argument is still valid], I wonder\nwhether we shouldn't apply this second version (without the configure\nparts) and put all forthcoming format changes under #ifndef\nPG72FORMAT.\n\nThis way you can decide to go back to v7.2 format immediately before\ngoing beta, if the changes are too hot to handle. And I think if I\nwouldn't volunteer to cleanup the #ifdef PG72FORMAT stuff after the\nnew format has been accepted, I would be nominated to do it ...\n\nServus\n Manfred\n\n\n",
"msg_date": "Mon, 01 Jul 2002 17:35:54 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Reduce heap tuple header size"
},
{
"msg_contents": "Manfred Koizar <mkoi-pg@aon.at> writes:\n> ... I wonder\n> whether we shouldn't apply this second version (without the configure\n> parts) and put all forthcoming format changes under #ifndef\n> PG72FORMAT.\n\nSeems reasonable. I generally dislike #ifdef clutter, but the #ifs\nwould only be around a couple of macro definitions AFAICT, so the\nreadability hit would be minimal. And someone who wanted\nback-compatibility would be able to have it, whichever way we jump\non the decision for 7.3.\n\nAt the rate Manfred is going, he'll have patches for all the tuple and\npage header related issues before August anyway ... so my original gripe\nabout wanting to group those changes into a single release will become\nmoot ;-). I certainly have no objection to doing them all in 7.3\ninstead of 7.4 if we can get it done.\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 01 Jul 2002 13:10:01 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Reduce heap tuple header size "
},
{
"msg_contents": "\nPatch applied. Thanks.\n\nCatalog version updated. initdb everyone.\n\n---------------------------------------------------------------------------\n\n\n\nManfred Koizar wrote:\n> This patch, which is built upon the \"HeapTupleHeader accessor macros\"\n> patch from 2002-06-10, is supposed to reduce the heap tuple header size\n> by four bytes on most architectures. Of course it changes the on-disk\n> tuple format and therefore requires initdb. As I have (once more)\n> opened my mouth too wide, I'll have to provide a heap file conversion\n> utility, if this patch gets accepted... More on this later.\n> \n> ======================\n> All 81 tests passed.\n> ======================\n> \n> It's late now, I'll do more tests tomorrow.\n> \n> Good night\n> Manfred\n> \n> diff -ru ../orig/src/backend/access/heap/heapam.c src/backend/access/heap/heapam.c\n> --- ../orig/src/backend/access/heap/heapam.c\t2002-06-13 19:34:48.000000000 +0200\n> +++ src/backend/access/heap/heapam.c\t2002-06-13 22:31:42.000000000 +0200\n> @@ -2204,7 +2204,7 @@\n> \t\thtup->t_infomask = HEAP_XMAX_INVALID | xlhdr.mask;\n> \t\tHeapTupleHeaderSetXmin(htup, record->xl_xid);\n> \t\tHeapTupleHeaderSetCmin(htup, FirstCommandId);\n> -\t\tHeapTupleHeaderSetXmax(htup, InvalidTransactionId);\n> +\t\tHeapTupleHeaderSetXmaxInvalid(htup);\n> \t\tHeapTupleHeaderSetCmax(htup, FirstCommandId);\n> \n> \t\toffnum = PageAddItem(page, (Item) htup, newlen, offnum,\n> diff -ru ../orig/src/include/access/htup.h src/include/access/htup.h\n> --- ../orig/src/include/access/htup.h\t2002-06-13 19:34:49.000000000 +0200\n> +++ src/include/access/htup.h\t2002-06-14 01:12:47.000000000 +0200\n> @@ -57,15 +57,24 @@\n> * Also note that we omit the nulls bitmap if t_infomask shows that there\n> * are no nulls in the tuple.\n> */\n> +/*\n> +** We store five \"virtual\" fields Xmin, Cmin, Xmax, Cmax, and Xvac\n> +** in three physical fields t_xmin, t_cid, t_xmax:\n> +** CommandId Cmin;\t\tinsert CID stamp\n> +** 
CommandId Cmax;\t\tdelete CommandId stamp\n> +** TransactionId Xmin;\t\tinsert XID stamp\n> +** TransactionId Xmax;\t\tdelete XID stamp\n> +** TransactionId Xvac;\t\tused by VACCUUM\n> +**\n> +** This assumes, that a CommandId can be stored in a TransactionId.\n> +*/\n> typedef struct HeapTupleHeaderData\n> {\n> \tOid\t\t\tt_oid;\t\t\t/* OID of this tuple -- 4 bytes */\n> \n> -\tCommandId\tt_cmin;\t\t\t/* insert CID stamp -- 4 bytes each */\n> -\tCommandId\tt_cmax;\t\t\t/* delete CommandId stamp */\n> -\n> -\tTransactionId t_xmin;\t\t/* insert XID stamp -- 4 bytes each */\n> -\tTransactionId t_xmax;\t\t/* delete XID stamp */\n> +\tTransactionId t_xmin;\t\t/* Xmin -- 4 bytes each */\n> +\tTransactionId t_cid;\t\t/* Cmin, Cmax, Xvac */\n> +\tTransactionId t_xmax;\t\t/* Xmax, Cmax */\n> \n> \tItemPointerData t_ctid;\t\t/* current TID of this or newer tuple */\n> \n> @@ -75,7 +84,7 @@\n> \n> \tuint8\t\tt_hoff;\t\t\t/* sizeof header incl. bitmap, padding */\n> \n> -\t/* ^ - 31 bytes - ^ */\n> +\t/* ^ - 27 bytes - ^ */\n> \n> \tbits8\t\tt_bits[1];\t\t/* bitmap of NULLs -- VARIABLE LENGTH */\n> \n> @@ -96,6 +105,8 @@\n> \t\t\t\t\t\t\t\t\t\t * attribute(s) */\n> #define HEAP_HASEXTENDED\t\t0x000C\t/* the two above combined */\n> \n> +#define HEAP_XMIN_IS_XMAX\t\t0x0040\t/* created and deleted in the */\n> +\t\t\t\t\t\t\t\t\t\t/* same transaction\n> */\n> #define HEAP_XMAX_UNLOGGED\t\t0x0080\t/* to lock tuple for update */\n> \t\t\t\t\t\t\t\t\t\t/* without logging\n> */\n> #define HEAP_XMIN_COMMITTED\t\t0x0100\t/* t_xmin committed */\n> @@ -108,6 +119,7 @@\n> \t\t\t\t\t\t\t\t\t\t * vacuum */\n> #define HEAP_MOVED_IN\t\t\t0x8000\t/* moved from another place by\n> \t\t\t\t\t\t\t\t\t\t * vacuum */\n> +#define HEAP_MOVED (HEAP_MOVED_OFF | HEAP_MOVED_IN)\n> \n> #define HEAP_XACT_MASK\t\t\t0xFFF0\t/* visibility-related bits */\n> \n> @@ -116,53 +128,100 @@\n> /* HeapTupleHeader accessor macros */\n> \n> #define HeapTupleHeaderGetXmin(tup) \\\n> -\t((tup)->t_xmin)\n> +( \\\n> 
+\t(tup)->t_xmin \\\n> +)\n> \n> #define HeapTupleHeaderGetXmax(tup) \\\n> -\t((tup)->t_xmax)\n> +( \\\n> +\t((tup)->t_infomask & HEAP_XMIN_IS_XMAX) ? \\\n> +\t\t(tup)->t_xmin \\\n> +\t: \\\n> +\t\t(tup)->t_xmax \\\n> +)\n> \n> -/* no AssertMacro, because this is read as a system-defined attribute also */\n> +/* no AssertMacro, because this is read as a system-defined attribute */\n> #define HeapTupleHeaderGetCmin(tup) \\\n> ( \\\n> -\t(tup)->t_cmin \\\n> +\t((tup)->t_infomask & HEAP_MOVED) ? \\\n> +\t\tFirstCommandId \\\n> +\t: \\\n> +\t( \\\n> +\t\t((tup)->t_infomask & (HEAP_XMIN_IS_XMAX | HEAP_XMAX_INVALID)) ? \\\n> +\t\t\t(CommandId) (tup)->t_cid \\\n> +\t\t: \\\n> +\t\t\tFirstCommandId \\\n> +\t) \\\n> )\n> \n> #define HeapTupleHeaderGetCmax(tup) \\\n> -\t((tup)->t_cmax)\n> +( \\\n> +\t((tup)->t_infomask & HEAP_MOVED) ? \\\n> +\t\tFirstCommandId \\\n> +\t: \\\n> +\t( \\\n> +\t\t((tup)->t_infomask & (HEAP_XMIN_IS_XMAX | HEAP_XMAX_INVALID)) ? \\\n> +\t\t\t(CommandId) (tup)->t_xmax \\\n> +\t\t: \\\n> +\t\t\t(CommandId) (tup)->t_cid \\\n> +\t) \\\n> +)\n> \n> #define HeapTupleHeaderGetXvac(tup) \\\n> ( \\\n> -\tAssertMacro((tup)->t_infomask & (HEAP_MOVED_IN | HEAP_MOVED_OFF)), \\\n> -\t(TransactionId) (tup)->t_cmin \\\n> +\tAssertMacro((tup)->t_infomask & HEAP_MOVED), \\\n> +\t(tup)->t_cid \\\n> )\n> \n> \n> #define HeapTupleHeaderSetXmin(tup, xid) \\\n> -\t(TransactionIdStore((xid), &(tup)->t_xmin))\n> +( \\\n> +\tTransactionIdStore((xid), &(tup)->t_xmin) \\\n> +)\n> \n> #define HeapTupleHeaderSetXminInvalid(tup) \\\n> -\t(StoreInvalidTransactionId(&(tup)->t_xmin))\n> +do { \\\n> +\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; \\\n> +\tStoreInvalidTransactionId(&(tup)->t_xmin); \\\n> +} while (0)\n> \n> #define HeapTupleHeaderSetXmax(tup, xid) \\\n> -\t(TransactionIdStore((xid), &(tup)->t_xmax))\n> +do { \\\n> +\tif (TransactionIdEquals((tup)->t_xmin, (xid))) \\\n> +\t\t(tup)->t_infomask |= HEAP_XMIN_IS_XMAX; \\\n> +\telse \\\n> +\t{ \\\n> +\t\t(tup)->t_infomask &= 
~HEAP_XMIN_IS_XMAX; \\\n> +\t\tTransactionIdStore((xid), &(tup)->t_xmax); \\\n> +\t} \\\n> +} while (0)\n> \n> #define HeapTupleHeaderSetXmaxInvalid(tup) \\\n> -\t(StoreInvalidTransactionId(&(tup)->t_xmax))\n> +do { \\\n> +\t(tup)->t_infomask &= ~HEAP_XMIN_IS_XMAX; \\\n> +\tStoreInvalidTransactionId(&(tup)->t_xmax); \\\n> +} while (0)\n> \n> #define HeapTupleHeaderSetCmin(tup, cid) \\\n> -( \\\n> -\tAssertMacro(!((tup)->t_infomask & (HEAP_MOVED_IN | HEAP_MOVED_OFF))), \\\n> -\t(tup)->t_cmin = (cid) \\\n> -)\n> +do { \\\n> +\tAssert(!((tup)->t_infomask & HEAP_MOVED)); \\\n> +\tTransactionIdStore((TransactionId) (cid), &(tup)->t_cid); \\\n> +} while (0)\n> \n> #define HeapTupleHeaderSetCmax(tup, cid) \\\n> -\t((tup)->t_cmax = (cid))\n> +do { \\\n> +\tAssert(!((tup)->t_infomask & HEAP_MOVED)); \\\n> +\tif ((tup)->t_infomask & HEAP_XMIN_IS_XMAX) \\\n> +\t\tTransactionIdStore((TransactionId) (cid), &(tup)->t_xmax); \\\n> +\telse \\\n> +\t\tTransactionIdStore((TransactionId) (cid), &(tup)->t_cid); \\\n> +} while (0)\n> \n> #define HeapTupleHeaderSetXvac(tup, xid) \\\n> -( \\\n> -\tAssertMacro((tup)->t_infomask & (HEAP_MOVED_IN | HEAP_MOVED_OFF)), \\\n> -\tTransactionIdStore((xid), (TransactionId *) &((tup)->t_cmin)) \\\n> -)\n> +do { \\\n> +\tAssert((tup)->t_infomask & HEAP_MOVED); \\\n> +\tTransactionIdStore((xid), &(tup)->t_cid); \\\n> +} while (0)\n> \n> \n> /*\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Tue, 2 Jul 2002 01:46:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Reduce heap tuple header size"
},
{
"msg_contents": "Tom Lane wrote:\n> Manfred Koizar <mkoi-pg@aon.at> writes:\n> > ... I wonder\n> > whether we shouldn't apply this second version (without the configure\n> > parts) and put all forthcoming format changes under #ifndef\n> > PG72FORMAT.\n> \n> Seems reasonable. I generally dislike #ifdef clutter, but the #ifs\n> would only be around a couple of macro definitions AFAICT, so the\n> readability hit would be minimal. And someone who wanted\n> back-compatibility would be able to have it, whichever way we jump\n> on the decision for 7.3.\n\nI committed the version with no #ifdef's. If we need them, we can add\nthem later, but it is likely we will never need them.\n\n> At the rate Manfred is going, he'll have patches for all the tuple and\n> page header related issues before August anyway ... so my original gripe\n> about wanting to group those changes into a single release will become\n> moot ;-). I certainly have no objection to doing them all in 7.3\n> instead of 7.4 if we can get it done.\n\nYes. Manfred, keep going. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Tue, 2 Jul 2002 02:16:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Reduce heap tuple header size"
},
{
"msg_contents": "On Tue, 2 Jul 2002 02:16:29 -0400 (EDT), Bruce Momjian\n<pgman@candle.pha.pa.us> wrote:\n>I committed the version with no #ifdef's. If we need them, we can add\n>them later, but it is likely we will never need them.\n\nMy point was, if there is a need to fallback to v7.2 format, it can be\ndone by changing a single line from #undef to #define. IMO the next\npatch I'm going to submit is a bit more risky. But if everyone else\nis confident we can make it stable for v7.3, it's fine by me too.\n\n>Yes. Manfred, keep going. ;-)\n\nCan't guarantee to keep the rate. You know, the kids need a bit more\nattention when they don't go to school :-)\n\nServus\n Manfred\n\n\n",
"msg_date": "Wed, 03 Jul 2002 12:26:33 +0200",
"msg_from": "Manfred Koizar <mkoi-pg@aon.at>",
"msg_from_op": true,
"msg_subject": "Re: [PATCHES] Reduce heap tuple header size"
},
{
"msg_contents": "Manfred Koizar wrote:\n> On Tue, 2 Jul 2002 02:16:29 -0400 (EDT), Bruce Momjian\n> <pgman@candle.pha.pa.us> wrote:\n> >I committed the version with no #ifdef's. If we need them, we can add\n> >them later, but it is likely we will never need them.\n> \n> My point was, if there is a need to fallback to v7.2 format, it can be\n> done by changing a single line from #undef to #define. IMO the next\n> patch I'm going to submit is a bit more risky. But if everyone else\n> is confident we can make it stable for v7.3, it's fine by me too.\n\nYes, with your recent patches, I think we are committed to changing the\nformat for 7.3.\n\n> >Yes. Manfred, keep going. ;-)\n> \n> Can't guarantee to keep the rate. You know, the kids need a bit more\n> attention when they don't go to school :-)\n\nLet me send over my kids. Where are you located? Austria? Hmmm...\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n",
"msg_date": "Wed, 3 Jul 2002 12:21:27 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] Reduce heap tuple header size"
}
] |
[
{
"msg_contents": "What is the preferred method (if there even is one) for modifying the\ncomment on a language? \n\nI vaguely remember it being documented that it was stored in\npg_language.lancompiler and specified using the LANCOMPILER option to\nCREATE LANGUAGE or by updating the record directly. pgAdmin has done it\nthis way for years but during a scouring of the docs today, I notice\nthat LANCOMPILER is no longer mentioned and there is no COMMENT ON\nLANGUAGE to replace it.\n\nRegards, Dave.\n",
"msg_date": "Fri, 14 Jun 2002 11:10:52 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Language Comments"
},
{
"msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> What is the preferred method (if there even is one) for modifying the\n> comment on a language? \n\nThere isn't one. Certainly LANCOMPILER was *never* meant as a place to\nstore comments.\n\nI suppose a COMMENT ON LANGUAGE facility could be added, but I can't get\nvery excited about it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jun 2002 09:48:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Language Comments "
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 14 June 2002 14:49\n> To: Dave Page\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Language Comments \n> \n> \n> \"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> > What is the preferred method (if there even is one) for \n> modifying the \n> > comment on a language?\n> \n> There isn't one. Certainly LANCOMPILER was *never* meant as \n> a place to store comments.\n\nIt was in the docs until v1.14 (doc/src/sgml/ref/create_language.sgml)\nwhen Peter removed it for 1.15 - I therefore made use of the feature in\npgAdmin...\n\nI only noticed it wasn't there 'cos I was trawling the docs looking for\nnew/missing features to add to pgAdmin.\n\nRegards, Dave.\n",
"msg_date": "Fri, 14 Jun 2002 15:48:42 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Language Comments "
}
] |
[
{
"msg_contents": "As you probably know, SQL99 has dropped the rather useless\ncategorizations of \"basic\", \"intermediate\", and \"advanced\" SQL\ncompliance and instead lists a large number of labeled features. I've\nput these into an appendix for the docs (not yet committed to cvs).\n\nThe list is organized as a (for now) three column table, with \"Feature\",\n\"Description\", and \"Comment\" as the three column headers. This is a\nrelatively long list, covering several printed pages.\n\nSo, a question: should I list all features in the same table, with the\ncomment field indicating if something is not (yet) supported, or should\nI split the features into two tables for supported and unsupported\nfeatures? The former keeps all of the information together if someone is\nlooking something up by feature, and the latter reduces the number of\nrequired comments and makes it easier to see the complete list of\nsupported features.\n\n - Thomas\n",
"msg_date": "Fri, 14 Jun 2002 07:59:01 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "SQL99 feature list"
},
{
"msg_contents": "Thomas,\n\n> So, a question: should I list all features in the same table, with\n> the\n> comment field indicating if something is not (yet) supported, or\n> should\n> I split the features into two tables for supported and unsupported\n> features? The former keeps all of the information together if someone\n> is\n> looking something up by feature, and the latter reduces the number of\n> required comments and makes it easier to see the complete list of\n> supported features.\n\nCan't we put the list in a database and generate both? <grin>\n\nSeriously, I vote for 2 lists. \n\n-Josh\n",
"msg_date": "Fri, 14 Jun 2002 09:10:45 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: SQL99 feature list"
}
] |
[
{
"msg_contents": "\nQuestion:\n\n How feasible would it be to create this functionality in PostgreSQL:\n\nOne creates a test version of a database that initially consists of \nread-links to the production version of the same database. Any code he/she \nthen writes that reads from a table reads from the production database but \nany code that modifies data copies that table to the test database.\n\nThe benefits of this are obviously huge for IT shops that need to constantly \nwork on data in test environments as similar as possible to the production \nenvironment. \n\nUsually, this is a very difficult aspect of one's work and represents a great \ndeal of risk. We always try so hard to ensure that what we migrate into \nproduction is going to work there the same as it did in test. And we should \nnot do testing in a production environment.\n\nSuch a feature would give PostgreSQL a major advantage over Oracle or DB2.\n\nAnd some day when PostgreSQL is also distributable, it'll be ideal for the \nenterprise. \n\nMatthew\n\n-- \nAnything that can be logically explained, can be programmed.\n",
"msg_date": "Fri, 14 Jun 2002 11:12:33 -0400",
"msg_from": "Matthew Tedder <matthew@tedder.com>",
"msg_from_op": true,
"msg_subject": "Big Test Environment Feature"
},
{
"msg_contents": "Matthew Tedder wrote:\n\n>Question:\n>\n> How feasible would it be to create this functionality in PostgreSQL:\n>\n>One creates a test version of a database that initially consists of \n>read-links to the production version of the same database. Any code he/she \n>then writes that reads from a table reads from the production database but \n>any code that modifies data copies that table to the test database.\n>\n>The benefits of this are obviously huge for IT shops that need to constantly \n>work on data in test environments as similar as possible to the production \n>environment. \n>\n>Usually, this is a very difficult aspect of one's work and represents a great \n>deal of risk. We always try to hard to ensure that what we migrate into \n>production is going to work there the same as it did in test. And we should \n>not do testing in a production environment.\n>\n>Such a feature would give PostgreSQL a major advantage over Oracle or DB2.\n>\n>And some day when PostgreSQL is also distributable, it'll be ideal for the \n>enterprise. \n>\n>Matthew\n>\n> \n>\n\nWhy wouldn't you use a pg_dump of the production database? Perhaps just \na sampling every so often?\n\nThis sounds like a lot of unnecessary work for the engine. How about a \nseperate program which has\nnotify links to the source database and places updated data in the test db?\n\n- Bill\n\n\n",
"msg_date": "Fri, 14 Jun 2002 13:41:07 -0700",
"msg_from": "Bill Cunningham <billc@ballydev.com>",
"msg_from_op": false,
"msg_subject": "Re: Big Test Environment Feature"
},
{
"msg_contents": "\nComments at appropriate places below..\n\nOn Friday 14 June 2002 04:41 pm, Bill Cunningham wrote:\n> Matthew Tedder wrote:\n> >Question:\n> >\n> > How feasible would it be to create this functionality in PostgreSQL:\n> >\n> >One creates a test version of a database that initially consists of\n> >read-links to the production version of the same database. Any code\n> > he/she then writes that reads from a table reads from the production\n> > database but any code that modifies data copies that table to the test\n> > database.\n> >\n> >The benefits of this are obviously huge for IT shops that need to\n> > constantly work on data in test environments as similar as possible to\n> > the production environment.\n> >\n> >Usually, this is a very difficult aspect of one's work and represents a\n> > great deal of risk. We always try to hard to ensure that what we\n> > migrate into production is going to work there the same as it did in\n> > test. And we should not do testing in a production environment.\n> >\n> >Such a feature would give PostgreSQL a major advantage over Oracle or DB2.\n> >\n> >And some day when PostgreSQL is also distributable, it'll be ideal for the\n> >enterprise.\n> >\n> >Matthew\n>\n> Why wouldn't you use a pg_dump of the production database? Perhaps just\n> a sampling every so often?\n\nThat won't work nearly as well. Obviously we can and often do dumps. But \nwhen testing something that has to work in a production environment, we need \nto see what happens over a course of several days' time. This is needed not \nonly for testing of the specific code changed or added to a process, but also \na test of how it integrates with a larger and more complex information flow \nsystem. \n\n>\n> This sounds like a lot of unnecessary work for the engine. 
How about a\n> seperate program which has\n> notify links to the source database and places updated data in the test db?\n\nBig unnecessary dumps and recreation of the data structures also \nunnecessarily use I/O resources. The idea is to minimize that and \neasily/seamlessly create testing environments. \n\nOften, many programmer/analysts are working on different parts of the \ninformation system simultaneously each and every day.\n\nMatthew\n\n>\n> - Bill\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nAnything that can be logically explained, can be programmed.\n",
"msg_date": "Sat, 15 Jun 2002 12:37:07 -0400",
"msg_from": "Matthew Tedder <matthew@tedder.com>",
"msg_from_op": true,
"msg_subject": "Re: Big Test Environment Feature"
}
] |
[
{
"msg_contents": "Tom, Bruce,\n\nBack in 7.1.0, we had a problem where no index could be used on ORDER\nBY ... DESC statements. Has this been fixed? I'm writing an article\non indexing.\n\n-Josh Berkus\n",
"msg_date": "Fri, 14 Jun 2002 09:32:03 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": true,
"msg_subject": "Indexing for DESC sorts"
}
] |
[
{
"msg_contents": "\n\n'Afternoon folks,\n\nI think I'm going blind I just can't spot what I've done wrong. Can someone\nhave a quick glance at this function, and relevent table definitions, and tell\nme what I've got wrong please?\n\nThe error message I'm getting when I try to use it with:\n\nSELECT new_transaction_fn(9, 444, 4, 'B', now(), 'C');\n\nis:\n\nNOTICE: Error occurred while executing PL/pgSQL function new_transaction_fn\nNOTICE: line 11 at assignment\nERROR: parser: parse error at or near \"SELECT\"\n\n(The select works and returns one row as I expect it to btw)\n\n\n--\n-- Tables\n--\n\nCREATE TABLE orders (\n\tid\t\tINTEGER\tNOT NULL DEFAULT nextval('order_seq') PRIMARY KEY,\n\ttype\t\tINTEGER REFERENCES order_type(id),\n\tinstrument\tINTEGER REFERENCES instrument(id),\n\ttime\t\tTIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now(),\n\tmarket_price\tFLOAT8,\n\tprice\t\tFLOAT8,\n\tquantity\tINTEGER,\n\tdirection\tCHAR(1) CHECK(direction = 'B' OR direction = 'S')\n) WITHOUT OIDS;\n\n\nCREATE TABLE transaction (\n\tid\t\tINTEGER NOT NULL DEFAULT nextval('transaction_seq') PRIMARY KEY,\n\torder_id\tINTEGER REFERENCES orders(id),\n\ttime\t\tTIMESTAMP WITH TIME ZONE NOT NULL DEFAULT now(),\n\tprice\t\tFLOAT8,\n\tquantity\tINTEGER,\n\tstatus\t\tCHAR(1) CHECK(status = 'c' OR status = 'C')\n) WITHOUT OIDS;\n\n\n--\n-- Function\n--\n\nCREATE OR REPLACE FUNCTION new_transaction_fn (\n\t\tinteger,float8,integer,char(1),timestamp,char(1)\n\t) RETURNS boolean AS '\n\tDECLARE\n\t\tordid ALIAS FOR $1;\n\t\tprice ALIAS FOR $2;\n\t\tquantity ALIAS FOR $3;\n\t\tdirn ALIAS FOR $4;\n\t\ttime ALIAS FOR $5;\n\t\tstatus ALIAS FOR $6;\n\tBEGIN\n\t\t-- check against order\n\t\tPERFORM\n\t\t\tSELECT 1\n\t\t\t\tFROM orders\n\t\t\t\tWHERE\n\t\t\t\t\tid = ordid\n\t\t\t\t\tAND\n\t\t\t\t\tdirection = dirn;\n\t\tIF NOT FOUND THEN\n\t\t\tRAISE EXCEPTION ''No order matching % / % found'', ordid, dirn;\n\t\tEND IF;\n\n\t\tINSERT INTO transaction VALUES (\n\t\t\tnextval(''transaction_seq''),\n\t\t\tordid,\n\t\t\tCOALESCE(time, now()),\n\t\t\tprice,\n\t\t\tquantity,\n\t\t\tCOALESCE(status, ''C'')\n\t\t);\n\n\t\tRETURN TRUE;\n\tEND;\n\t' LANGUAGE 'plpgsql';\n\n--\n--\n--\n\n\nThanks,\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Fri, 14 Jun 2002 17:49:08 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": true,
"msg_subject": "I must be blind..."
},
{
"msg_contents": "\nOn Fri, 14 Jun 2002, Nigel J. Andrews wrote:\n>\n> [snip]\n> The error message I'm getting when I try to use it with:\n> \n> SELECT new_transaction_fn(9, 444, 4, 'B', now(), 'C');\n> \n> is:\n> \n> NOTICE: Error occurred while executing PL/pgSQL function new_transaction_fn\n> NOTICE: line 11 at assignment\n> ERROR: parser: parse error at or near \"SELECT\"\n> \n> [snip]\n\n\nAh, it's the perform. I was wondering about the validity of a FOUND test after\ndiscarding the results of a query. If I just write the result into a dummy\nvariable without using PERFORM it progresses to a different error (but that's\ndown to a check constraint I haven't changed after changing my mind about the\nvalues in the field).\n\nSo I wasn't blind, just dim.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Fri, 14 Jun 2002 18:23:23 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: I must be blind..."
},
{
"msg_contents": "Nigel J. Andrews wrote:\n<snip>\n> \t\tdirn ALIAS FOR $4;\n> \t\ttime ALIAS FOR $5;\n> \t\tstatus ALIAS FOR $6;\n> \tBEGIN\n> \t\t-- check against order\n> \t\tPERFORM\n> \t\t\tSELECT 1\n> \t\t\t\tFROM orders\n> \t\t\t\tWHERE\n> \t\t\t\t\tid = ordid\n> \t\t\t\t\tAND\n> \t\t\t\t\tdirection = dirn;\n> \t\tIF NOT FOUND THEN\n</snip>\n\nI don't think you can use PERFORM like that. Try:\n\n<snip>\n\t\tdirn ALIAS FOR $4;\n\t\ttime ALIAS FOR $5;\n\t\tstatus ALIAS FOR $6;\n\t\tbuf INT;\n\tBEGIN\n\t\t-- check against order\n\t\t\tSELECT 1 INTO buf\n\t\t\t\tFROM orders\n\t\t\t\tWHERE\n\t\t\t\t\tid = ordid\n\t\t\t\t\tAND\n\t\t\t\t\tdirection = dirn;\n\t\tIF NOT FOUND THEN\n</snip>\n\nHTH,\n\nJoe\n\n",
"msg_date": "Fri, 14 Jun 2002 10:47:40 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: I must be blind..."
},
{
"msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> The error message I'm getting when I try to use it with:\n> SELECT new_transaction_fn(9, 444, 4, 'B', now(), 'C');\n> is:\n> NOTICE: Error occurred while executing PL/pgSQL function new_transaction_fn\n> NOTICE: line 11 at assignment\n> ERROR: parser: parse error at or near \"SELECT\"\n\nI'm not seeing the problem either. Try turning on debug_print_query\nand running the function; then look in the postmaster log to see exactly\nwhat got fed down to the main SQL engine by plpgsql. This might shed\nsome light.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jun 2002 14:12:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: I must be blind... "
},
{
"msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I don't think you can use PERFORM like that. Try:\n\nActually I believe he can; after looking at the manual I realized that\nthe problem is that PERFORM is syntactically a substitute for SELECT.\nIn other words he needed to write\n\n\tPERFORM 1 FROM orders ...\nnot\n\tPERFORM SELECT 1 FROM orders ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jun 2002 17:35:58 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: I must be blind... "
},
{
"msg_contents": "\nOn Fri, 14 Jun 2002, Tom Lane wrote:\n\n> Joe Conway <mail@joeconway.com> writes:\n> > I don't think you can use PERFORM like that. Try:\n> \n> Actually I believe he can; after looking at the manual I realized that\n> the problem is that PERFORM is syntactically a substitute for SELECT.\n> In other words he needed to write\n> \n> \tPERFORM 1 FROM orders ...\n> not\n> \tPERFORM SELECT 1 FROM orders ...\n\nThanks Tom,\n\nI didn't get that from the manual at all. I think I should go reread it.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Fri, 14 Jun 2002 23:09:49 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: I must be blind..."
},
{
"msg_contents": "\nOn Fri, 14 Jun 2002, Tom Lane wrote:\n\n> Joe Conway <mail@joeconway.com> writes:\n> > I don't think you can use PERFORM like that. Try:\n> \n> Actually I believe he can; after looking at the manual I realized that\n> the problem is that PERFORM is syntactically a substitute for SELECT.\n> In other words he needed to write\n> \n> \tPERFORM 1 FROM orders ...\n> not\n> \tPERFORM SELECT 1 FROM orders ...\n\nYes, indeed if one reads what is there rather than reading things that aren't\nit does say that PERFORM substitutes for SELECT syntactically.\n\nHowever, because PERFORM discards the results of a query it is only useful for\nside effects of the query. My usage of it was wrong since I wasn't using it for\nside effects merely for determining the existance of a result without having to\nstore that result since it wasn't required. Therefore, with the correct syntax\nof PERFORM <query> my function doesn't generate an 'unprogrammed' error but the\ntest of FOUND always fails, i.e. result is NOT FOUND. Therefore SELECT INTO\ndummy ... is still the correct thing for me to be doing.\n\nI just thought I'd clear that up in case anyone was wondering, and yes, I have\ntested it.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n",
"msg_date": "Fri, 14 Jun 2002 23:22:35 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: I must be blind..."
},
{
"msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> However, because PERFORM discards the results of a query it is only\n> useful for side effects of the query. My usage of it was wrong since I\n> wasn't using it for side effects merely for determining the existance\n> of a result without having to store that result since it wasn't\n> required. Therefore, with the correct syntax of PERFORM <query> my\n> function doesn't generate an 'unprogrammed' error but the test of\n> FOUND always fails, i.e. result is NOT FOUND. Therefore SELECT INTO\n> dummy ... is still the correct thing for me to be doing.\n\nOkay. I guess the next question is whether PERFORM *should* be setting\nFOUND. Seems like it might be a reasonable thing to do.\n\nDoes PERFORM exist in Oracle's plsql? If so, what does it do?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jun 2002 18:25:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: I must be blind... "
},
{
"msg_contents": "Tom Lane dijo: \n\n> \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > However, because PERFORM discards the results of a query it is only\n> > useful for side effects of the query.\n\n> Okay. I guess the next question is whether PERFORM *should* be setting\n> FOUND. Seems like it might be a reasonable thing to do.\n\nWell, actually FOUND _is_ a side effect of PERFORM, IMHO. I also tried\nto do the very same thing, and also had to use the dummy variable, which\nseems like a waste to me.\n\nI do not know anything about Oracle's PERFORM, though a quick search on\nGoogle shows nothing relevant.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La conclusion que podemos sacar de esos estudios es que\nno podemos sacar ninguna conclusion de ellos\" (Tanenbaum)\n\n",
"msg_date": "Fri, 14 Jun 2002 20:12:10 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: I must be blind... "
},
{
"msg_contents": "\nOn Fri, 14 Jun 2002, Alvaro Herrera wrote:\n\n> Tom Lane dijo: \n> \n> > \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > > However, because PERFORM discards the results of a query it is only\n> > > useful for side effects of the query.\n> \n> > Okay. I guess the next question is whether PERFORM *should* be setting\n> > FOUND. Seems like it might be a reasonable thing to do.\n> \n> Well, actually FOUND _is_ a side effect of PERFORM, IMHO. I also tried\n> to do the very same thing, and also had to use the dummy variable, which\n> seems like a waste to me.\n> \n> I do not know anything about Oracle's PERFORM, though a quick search on\n> Google shows nothing relevant.\n\nI know nothing of Oracle's use of PERFORM either. Indeed I have looked in 4\nOracle books 'Oracle 8i The Complete Reference', 'Oracle8i DBA Bible', 'Oracle\nPL/SQL Language Pocket Reference' and one on PL/SQL Builtins (on the off\nchance), and couldn't find any reference to PERFORM. I even scanned, by eye,\nevery page of the PL/SQL reference and saw nothing.\n\nOn that basis I've included a patch that sets FOUND to true if a PERFORM\n<query> 'processes' a row. From looking at other routines in pl_exec.c I\nbelieve that I have used the correct test. As FOUND isn't testing as true after\na PERFORM at the moment I also presume there is no need for an explicit set to\nfalse.\n\nI have tested this in 7.3dev with the regression tests and the case that caused\nme to come across this situation and no errors occured.\n\nThe patch is at the bottom of this message. I don't know if this should be\napplied though. There seems to be one vote for it, at least, but there is a\nquestion over what other systems do in this situation.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n\nIndex: src/pl/plpgsql/src/pl_exec.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/pl/plpgsql/src/pl_exec.c,v\nretrieving revision 1.55\ndiff -c -r1.55 pl_exec.c\n*** src/pl/plpgsql/src/pl_exec.c\t2002/03/25 07:41:10\t1.55\n--- src/pl/plpgsql/src/pl_exec.c\t2002/06/15 15:10:38\n***************\n*** 981,989 ****\n \t\tif (expr->plan == NULL)\n \t\t\texec_prepare_plan(estate, expr);\n \n! \t\trc = exec_run_select(estate, expr, 0, NULL);\n \t\tif (rc != SPI_OK_SELECT)\n \t\t\telog(ERROR, \"query \\\"%s\\\" didn't return data\", expr->query);\n \n \t\texec_eval_cleanup(estate);\n \t}\n--- 981,992 ----\n \t\tif (expr->plan == NULL)\n \t\t\texec_prepare_plan(estate, expr);\n \n! \t\trc = exec_run_select(estate, expr, 1, NULL);\n \t\tif (rc != SPI_OK_SELECT)\n \t\t\telog(ERROR, \"query \\\"%s\\\" didn't return data\", expr->query);\n+ \n+ \t\tif (estate->eval_processed != 0)\n+ \t\t\texec_set_found(estate, true);\n \n \t\texec_eval_cleanup(estate);\n \t}\n\n",
"msg_date": "Sat, 15 Jun 2002 16:28:19 +0100 (BST)",
"msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>",
"msg_from_op": true,
"msg_subject": "PERFORM effects FOUND patch (Was: I must be blind...)"
},
{
"msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> + \t\tif (estate->eval_processed != 0)\n> + \t\t\texec_set_found(estate, true);\n\nTo be actually useful the command would have to set FOUND to either\ntrue or false depending on whether it computed a row or not. So the\ncorrect patch would be more like\n\n+\t\texec_set_found(estate, (estate->eval_processed != 0));\n\nAlso, changing the parameter to exec_run_select as you did is wrong.\nA multi-row query should be allowed to run to completion, I'd think.\n\nAs for whether to apply it or not --- the change seems reasonable if we\nwere working in a vacuum. But I don't believe we invented PERFORM out\nof whole cloth; surely there are other systems that we need to consider\ncompatibility with.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 15 Jun 2002 12:28:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PERFORM effects FOUND patch (Was: I must be blind...) "
},
{
"msg_contents": "\"Nigel J. Andrews\" wrote:\n> \n> On Fri, 14 Jun 2002, Alvaro Herrera wrote:\n> \n> > Tom Lane dijo:\n> >\n> > > \"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> > > > However, because PERFORM discards the results of a query it is only\n> > > > useful for side effects of the query.\n> >\n> > > Okay. I guess the next question is whether PERFORM *should* be setting\n> > > FOUND. Seems like it might be a reasonable thing to do.\n> >\n> > Well, actually FOUND _is_ a side effect of PERFORM, IMHO. I also tried\n> > to do the very same thing, and also had to use the dummy variable, which\n> > seems like a waste to me.\n> >\n> > I do not know anything about Oracle's PERFORM, though a quick search on\n> > Google shows nothing relevant.\n> \n> I know nothing of Oracle's use of PERFORM either. Indeed I have looked in 4\n> Oracle books 'Oracle 8i The Complete Reference', 'Oracle8i DBA Bible', 'Oracle\n> PL/SQL Language Pocket Reference' and one on PL/SQL Builtins (on the off\n> chance), and couldn't find any reference to PERFORM. I even scanned, by eye,\n> every page of the PL/SQL reference and saw nothing.\n\nPerform has nothing to do with ORACLE. It was added because people tried\nto call other \"procedures\" and didn't want any result back. Using\n\n SELECT function();\n\ndidn't look right, so we made it\n\n PERFORM function();\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Mon, 17 Jun 2002 09:38:36 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PERFORM effects FOUND patch (Was: I must be "
},
{
"msg_contents": "Jan Wieck <JanWieck@Yahoo.com> writes:\n> Perform has nothing to do with ORACLE. It was added because people tried\n> to call other \"procedures\" and didn't want any result back.\n\nWell, in that case we can do what we want with it.\n\nDoes anyone object to making it set FOUND?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 10:34:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PERFORM effects FOUND patch (Was: I must be "
},
{
"msg_contents": "> Jan Wieck <JanWieck@Yahoo.com> writes:\n>> Perform has nothing to do with ORACLE. It was added because people tried\n>> to call other \"procedures\" and didn't want any result back.\n\n> Well, in that case we can do what we want with it.\n\n> Does anyone object to making it set FOUND?\n\nGiven the lack of objection, I have committed the attached patch for 7.3,\nalong with a suitable documentation update.\n\n\t\t\tregards, tom lane\n\n*** src/pl/plpgsql/src/pl_exec.c.orig\tMon Mar 25 02:41:10 2002\n--- src/pl/plpgsql/src/pl_exec.c\tMon Jun 24 18:23:11 2002\n***************\n*** 969,977 ****\n \telse\n \t{\n \t\t/*\n! \t\t * PERFORM: evaluate query and discard result.\tThis cannot share\n! \t\t * code with the assignment case since we do not wish to\n! \t\t * constraint the discarded result to be only one row/column.\n \t\t */\n \t\tint\t\t\trc;\n \n--- 969,979 ----\n \telse\n \t{\n \t\t/*\n! \t\t * PERFORM: evaluate query and discard result (but set FOUND\n! \t\t * depending on whether at least one row was returned).\n! \t\t *\n! \t\t * This cannot share code with the assignment case since we do not\n! \t\t * wish to constrain the discarded result to be only one row/column.\n \t\t */\n \t\tint\t\t\trc;\n \n***************\n*** 984,989 ****\n--- 986,993 ----\n \t\trc = exec_run_select(estate, expr, 0, NULL);\n \t\tif (rc != SPI_OK_SELECT)\n \t\t\telog(ERROR, \"query \\\"%s\\\" didn't return data\", expr->query);\n+ \n+ \t\texec_set_found(estate, (estate->eval_processed != 0));\n \n \t\texec_eval_cleanup(estate);\n \t}\n\n\n",
"msg_date": "Mon, 24 Jun 2002 19:14:25 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] PERFORM effects FOUND patch (Was: I must be "
}
] |
[
{
"msg_contents": "There are various paths of control in md5_crypt_verify that do\n\n\t\tif (passwd)\n\t\t\tpfree(passwd);\n\t\tif (valuntil)\n\t\t\tpfree(valuntil);\n\nIsn't this now pfree'ing part of the saved pre-parsed pg_pwd data?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 14 Jun 2002 13:08:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Breakage in crypt.c"
},
{
"msg_contents": "Tom Lane wrote:\n> There are various paths of control in md5_crypt_verify that do\n> \n> \t\tif (passwd)\n> \t\t\tpfree(passwd);\n> \t\tif (valuntil)\n> \t\t\tpfree(valuntil);\n> \n> Isn't this now pfree'ing part of the saved pre-parsed pg_pwd data?\n\nOops, yep. Fixed. The pfree's were fine in 7.2, but now, we cache\npg_pwd.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 14 Jun 2002 20:52:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Breakage in crypt.c"
}
] |
[
{
"msg_contents": "Thanks for reading. A few disclaimers:\n\n1. I am a newbie. I program for a living, but my work in pg has so far \nbeen at the \"devoted hobby level,\" using pg and PHP. For an example of \nwhat I have done with pg, you can visit www.the-athenaeum.org , a site I \none day hope to make into a business.\n\n2. I've searched the archives, but can't find a good solution to my \nproblem. I realize that there may be better ways to solve my issues \nthan expanding pg's feature set, or there may be features I'm not \nfamiliar with. This message is partly to find out how I should approach \nmy problem.\n\n3. I know you are all busy, and there are more pressing issues. I am \nextremely grateful for any advice you can give me, and will be ecstatic \nif I can get a solution out of this.\n\nSo, on to my issue.\n\nTHE BACKGROUND - I am creating a web site where people can study the \nhumanities. They can upload, discuss, and peer-review information. They \ncan also create, edit, approve, and delete records in a postgresql db, \nusing web forms. Many of these forms need a way to enter historical \ndates - a person DOB, the date an empire was founded, the date a book \nwas published, etc. \n\nMY PROBLEM - Because this site deals with, among other things, ancient \nart, acheaology, and anthropology, I need a way to handle dates as \nspecific as a single day, and as far back as 100,000 BC. According to \nthe docs (I looked at \nhttp://www.postgresql.org/idocs/index.php?datatype-datetime.html), the \nfarthest back any date type reaches is 4713 BC. So far, I have tried to \ndeal with this problem by creating a numeric field for the year, and \nradio buttons for AD/BC. I then do a lot of form validation. Not only \nthat, if I want to be as specific as a month or a day, then those are \nseparate fields on my forms. Plus, I can't combine all of the fields \nand put them into a pg data type, because once again, the pg dates don't \nextend that far back. So, I have to maintain and validate the year, \nmonth, and day fields separately. Then imagine what I have to do if a \nuser wants to _sort_ by date, or select events by date range! \n\nIdeally, I would like to figure this out on two fronts. I'd like to \nfind out what's the best way to store dates that far back (with pg), and \nthen on the PHP end I'll have to figure out how to parse entry so that \nit is as simple as possible for the end user. Knowing how to store \nthese ancient dates in pg would help me a great deal.\n\nThere are a lot of university and hobby sites out there working on \ndigitizing collections of ancient texts, artifacts, etc. I don't know \nhow the date range is chosen for a type like timestamp (4713BC - \n1,465,001 AD), but it seems to me that there would be way more people \nworking on recording the past (and thereby needed a date range that \nextends into ancient civilization) than working with dates in the far \nfuture (more than a million years ahead???).\n\nI hope that someone will be kind enough to reply with some ideas, or \neven to take up the cause and consider a date type that could be used \nfor historical purposes. I am an avid fan of open source and pg, \nespecially as compared to mySQL. I hope to continue using pg, and build \na first-class web site that may one day serve as a great working example \nof what pg can do. Any help would be greatly appreciated.\n\nThanks in advance,\nChris McCormick\n\n",
"msg_date": "Fri, 14 Jun 2002 15:59:40 -0400",
"msg_from": "\"Chris McCormick\" <cmccormick@thestate.com>",
"msg_from_op": true,
"msg_subject": "FEATURE REQUEST - More dynamic date type"
}
] |
[
{
"msg_contents": "I've just committed changes to include an SQL99 feature list as an\nappendix in the User's Guide. While preparing that I noticed a feature\nor two which would be trivial to implement, so we now have LOCALTIME and\nLOCALTIMESTAMP function calls per spec (afaict; the spec is very vague\non the behaviors).\n\nI've also removed the ODBC-compatible parentheses on CURRENT_TIMESTAMP\netc and made sure that the ODBC driver handles the case correctly.\n\nMore details from the CVS logs are below...\n\n - Thomas\n\nAdd LOCALTIME and LOCALTIMESTAMP functions per SQL99 standard.\nRemove ODBC-compatible empty parentheses from calls to SQL99 functions\n for which these parentheses do not match the standard.\nUpdate the ODBC driver to ensure compatibility with the ODBC standard\n for these functions (e.g. CURRENT_TIMESTAMP, CURRENT_USER, etc).\nInclude a new appendix in the User's Guide which lists the labeled\nfeatures\n for SQL99 (the labeled features replaced the \"basic\", \"intermediate\",\n and \"advanced\" categories from SQL92). features.sgml does not yet split\n this list into \"supported\" and \"unsupported\" lists.\nSearch the existing regular expression cache as a ring buffer.\nWill optimize the case for repeated calls for the same expression,\n which seems to be the most common case. Formerly, always searched\n from the first entry.\nMay want to look at the least-recently-used algorithm to make sure it\n is identifying the right slots to reclaim. Seems silly to do math when\n it seems that we could simply use an incrementing counter...\n",
"msg_date": "Fri, 14 Jun 2002 22:16:04 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Patches for LOCALTIME and regexp, feature list"
},
{
"msg_contents": "You wrote \"was either to voluminous\" instead of \"was either too voluminous\"\nin the first paragraph of the appendix...\n\nChris\n\n----- Original Message -----\nFrom: \"Thomas Lockhart\" <lockhart@fourpalms.org>\nTo: \"PostgreSQL Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Saturday, June 15, 2002 1:16 PM\nSubject: [HACKERS] Patches for LOCALTIME and regexp, feature list\n\n\n> I've just committed changes to include an SQL99 feature list as an\n> appendix in the User's Guide. While preparing that I noticed a feature\n> or two which would be trivial to implement, so we now have LOCALTIME and\n> LOCALTIMESTAMP function calls per spec (afaict; the spec is very vague\n> on the behaviors).\n>\n> I've also removed the ODBC-compatible parentheses on CURRENT_TIMESTAMP\n> etc and made sure that the ODBC driver handles the case correctly.\n>\n> More details from the CVS logs are below...\n>\n> - Thomas\n>\n> Add LOCALTIME and LOCALTIMESTAMP functions per SQL99 standard.\n> Remove ODBC-compatible empty parentheses from calls to SQL99 functions\n> for which these parentheses do not match the standard.\n> Update the ODBC driver to ensure compatibility with the ODBC standard\n> for these functions (e.g. CURRENT_TIMESTAMP, CURRENT_USER, etc).\n> Include a new appendix in the User's Guide which lists the labeled\n> features\n> for SQL99 (the labeled features replaced the \"basic\", \"intermediate\",\n> and \"advanced\" categories from SQL92). features.sgml does not yet split\n> this list into \"supported\" and \"unsupported\" lists.\n> Search the existing regular expression cache as a ring buffer.\n> Will optimize the case for repeated calls for the same expression,\n> which seems to be the most common case. Formerly, always searched\n> from the first entry.\n> May want to look at the least-recently-used algorithm to make sure it\n> is identifying the right slots to reclaim. Seems silly to do math when\n> it seems that we could simply use an incrementing counter...\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n",
"msg_date": "Sat, 15 Jun 2002 17:38:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Patches for LOCALTIME and regexp, feature list"
},
{
"msg_contents": "> You wrote \"was either to voluminous\" instead of \"was either too voluminous\"\n> in the first paragraph of the appendix...\n\nThanks!\n\n - Thomas\n",
"msg_date": "Sat, 15 Jun 2002 07:01:17 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: Patches for LOCALTIME and regexp, feature list"
}
] |
[
{
"msg_contents": "Hi,\ni'm trying to use the \"copy\" instruction of psql in windows\nbut it fails. Is it possible to do that? how?\n\nthanks.\n",
"msg_date": "Sat, 15 Jun 2002 16:20:21 UT",
"msg_from": "mmendez@inntecra.com",
"msg_from_op": true,
"msg_subject": "please help me with psql!"
}
] |
[
{
"msg_contents": "What does this mean, and what could be causing it?\n\nFATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or\ndirectory\n\nThat's the second time in as many months that I have received this\nerror when trying to start postmaster after a crash -- both times a\nserver reboot remedied the issue.\n\nThanks.\n",
"msg_date": "15 Jun 2002 10:08:15 -0700",
"msg_from": "james@unifiedmind.com (James Thornton)",
"msg_from_op": true,
"msg_subject": "FATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or\n\tdirectory"
},
{
"msg_contents": "ow wrote:\n> \n> Just curious ... how often does the server crash? Thanks\n\nPostgres has crashed twice in two months. I am running several OpenACS\nwebsites with it, and I have been for ~1.5 yrs -- Postgres has been\nsolid, these two crashes are not the norm.\n",
"msg_date": "Sun, 16 Jun 2002 15:54:53 -0500",
"msg_from": "James Thornton <thornton@cs.baylor.edu>",
"msg_from_op": false,
"msg_subject": "Re: FATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or "
},
{
"msg_contents": "Just curious ... how often does the server crash? Thanks\n\n\"James Thornton\" <james@unifiedmind.com> wrote in message\nnews:cabf0e7b.0206150908.1edab2f8@posting.google.com...\n> What does this mean, and what could be causing it?\n>\n> FATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or\n> directory\n>\n> That's the second time in as many months that I have received this\n> error when trying to start postmaster after a crash -- both times a\n> server reboot remedied the issue.\n>\n> Thanks.\n\n\n",
"msg_date": "Sun, 16 Jun 2002 17:06:21 -0400",
"msg_from": "\"ow\" <oneway_111@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: FATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or\n\tdirectory"
},
{
"msg_contents": "Tom Lane wrote:\n>\n> That really should be impossible --- it says that a rename() failed for\n> a file we just created.\n> \n> I judge from the spelling of the error message that you are running 7.1.\n\n7.1.3\n\n> However, given that you state a system reboot is necessary and\n> sufficient to make the problem go away, I am going to stick my neck\n> *way* out and suggest that:\n> \n> 1. You have the $PGDATA directory (or at least its pg_xlog subdirectory)\n> mounted via NFS.\n> \n> 2. This is an NFS problem.\n\nI am not running NFS on this system.\n",
"msg_date": "Mon, 17 Jun 2002 04:13:48 -0500",
"msg_from": "James Thornton <thornton@cs.baylor.edu>",
"msg_from_op": false,
"msg_subject": "Re: FATAL 2: InitRelink(logfile 0 seg 173) failed: No such "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> James Thornton <thornton@cs.ecs.baylor.edu> writes:\n> > I am not running NFS on this system.\n> \n> Oh well, scratch that theory. Perhaps you should tell us what you *are*\n> running --- what OS, what hardware? I still believe that this must be\n> a system-level bug and not directly Postgres' fault.\n\n[nsadmin@roam proc]$ cat version cpuinfo meminfo pci \n\nLinux version 2.4.7-10smp (bhcompile@stripples.devel.redhat.com) (gcc\nversion 2.96 20000731 (Red Hat Linux 7.1 2.96-98)) #1 SMP Thu Sep 6\n17:09:31 EDT 2001\n\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 7\nmodel name : Pentium III (Katmai)\nstepping : 3\ncpu MHz : 548.324\ncache size : 512 KB\nfdiv_bug : no\nhlt_bug : no\nf00f_bug : no\ncoma_bug : no\nfpu : yes\nfpu_exception : yes\ncpuid level : 2\nwp : yes\nflags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge\nmca cmov pat pse36 mmx fxsr sse\nbogomips : 1094.45\n\n total: used: free: shared: buffers: cached:\nMem: 327278592 321400832 5877760 720896 10825728 52867072\nSwap: 271392768 13783040 257609728\nMemTotal: 319608 kB\nMemFree: 5740 kB\nMemShared: 704 kB\nBuffers: 10572 kB\nCached: 39552 kB\nSwapCached: 12076 kB\nActive: 21956 kB\nInact_dirty: 40668 kB\nInact_clean: 280 kB\nInact_target: 480 kB\nHighTotal: 0 kB\nHighFree: 0 kB\nLowTotal: 319608 kB\nLowFree: 5740 kB\nSwapTotal: 265032 kB\nSwapFree: 251572 kB\nNrSwapPages: 62893 pages\n\nPCI devices found:\n Bus 0, device 0, function 0:\n Host bridge: Intel Corporation 440BX/ZX - 82443BX/ZX Host bridge\n(rev 3).\n Master Capable. Latency=64. \n Prefetchable 32 bit memory at 0xf0000000 [0xf3ffffff].\n Bus 0, device 1, function 0:\n PCI bridge: Intel Corporation 440BX/ZX - 82443BX/ZX AGP bridge (rev\n3).\n Master Capable. Latency=64. Min Gnt=136.\n Bus 0, device 7, function 0:\n ISA bridge: Intel Corporation 82371AB PIIX4 ISA (rev 2).\n Bus 0, device 7, function 1:\n IDE interface: Intel Corporation 82371AB PIIX4 IDE (rev 1).\n Master Capable. Latency=32. \n I/O at 0x1000 [0x100f].\n Bus 0, device 7, function 2:\n USB Controller: Intel Corporation 82371AB PIIX4 USB (rev 1).\n IRQ 14.\n Master Capable. Latency=64. \n I/O at 0xdce0 [0xdcff].\n Bus 0, device 7, function 3:\n Bridge: Intel Corporation 82371AB PIIX4 ACPI (rev 2).\n IRQ 9.\n Bus 0, device 14, function 0:\n Ethernet controller: Intel Corporation 82557 [Ethernet Pro 100] (rev\n4).\n IRQ 11.\n Master Capable. Latency=64. Min Gnt=8.Max Lat=56.\n Prefetchable 32 bit memory at 0xf7000000 [0xf7000fff].\n I/O at 0xdcc0 [0xdcdf].\n Non-prefetchable 32 bit memory at 0xff000000 [0xff0fffff].\n Bus 0, device 15, function 0:\n PCI bridge: Digital Equipment Corporation DECchip 21152 (rev 3).\n Master Capable. Latency=64. Min Gnt=2.\n Bus 0, device 17, function 0:\n Ethernet controller: 3Com Corporation 3c905B 100BaseTX [Cyclone]\n(rev 36).\n IRQ 14.\n Master Capable. Latency=64. Min Gnt=10.Max Lat=10.\n I/O at 0xdc00 [0xdc7f].\n Non-prefetchable 32 bit memory at 0xff100000 [0xff10007f].\n Bus 1, device 0, function 0:\n VGA compatible controller: ATI Technologies Inc 3D Rage Pro AGP\n1X/2X (rev 92).\n IRQ 9.\n Master Capable. Latency=64. Min Gnt=8.\n Non-prefetchable 32 bit memory at 0xfd000000 [0xfdffffff].\n I/O at 0xfc00 [0xfcff].\n Non-prefetchable 32 bit memory at 0xfcfff000 [0xfcffffff].\n Bus 2, device 9, function 0:\n Unknown mass storage controller: Promise Technology, Inc. 20262 (rev\n1).\n IRQ 9.\n Master Capable. Latency=64. \n I/O at 0xecf8 [0xecff].\n I/O at 0xecf0 [0xecf3].\n I/O at 0xece0 [0xece7].\n I/O at 0xecd8 [0xecdb].\n I/O at 0xec80 [0xecbf].\n Non-prefetchable 32 bit memory at 0xfafe0000 [0xfaffffff].\n",
"msg_date": "Mon, 17 Jun 2002 07:28:42 -0500",
"msg_from": "James Thornton <thornton@cs.baylor.edu>",
"msg_from_op": false,
"msg_subject": "Re: FATAL 2: InitRelink(logfile 0 seg 173) failed: No such "
},
{
"msg_contents": "james@unifiedmind.com (James Thornton) writes:\n> What does this mean, and what could be causing it?\n> FATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or\n> directory\n> That's the second time in as many months that I have received this\n> error when trying to start postmaster after a crash -- both times a\n> server reboot remedied the issue.\n\nThat really should be impossible --- it says that a rename() failed for\na file we just created.\n\nI judge from the spelling of the error message that you are running 7.1.\nI would recommend an update to 7.2, wherein the error message looks\nmore like this:\n\n if (rename(tmppath, path) < 0)\n elog(STOP, \"rename from %s to %s (initialization of log file %u, segment %u) failed: %m\",\n tmppath, path, log, seg);\n\n(Alternatively, you could just edit the message in your existing sources\nto include the actual source and destination pathnames given to rename()\n--- it's in src/backend/access/transam/xlog.c, line 1396 in 7.1.3.)\n\nThat will allow us to eliminate the faint possibility that the code is\nsomehow miscomputing the pathnames occasionally.\n\nHowever, given that you state a system reboot is necessary and\nsufficient to make the problem go away, I am going to stick my neck\n*way* out and suggest that:\n\n1. You have the $PGDATA directory (or at least its pg_xlog subdirectory)\n mounted via NFS.\n\n2. This is an NFS problem.\n\nIn my book, no adequately-paranoid DBA will trust his database to NFS.\nThere are some cautionary tales in our mailing list archives...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jun 2002 10:16:48 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or\n\tdirectory"
},
{
"msg_contents": "James Thornton <thornton@cs.ecs.baylor.edu> writes:\n> I am not running NFS on this system.\n\nOh well, scratch that theory. Perhaps you should tell us what you *are*\nrunning --- what OS, what hardware? I still believe that this must be\na system-level bug and not directly Postgres' fault.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jun 2002 10:55:20 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or\n\tdirectory"
},
{
"msg_contents": "6/17/02 10:16:48 AM, Tom Lane <tgl@sss.pgh.pa.us> wrote:\n\n>james@unifiedmind.com (James Thornton) writes:\n>> What does this mean, and what could be causing it?\n>> FATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or\n>> directory\n>> That's the second time in as many months that I have received this\n>> error when trying to start postmaster after a crash -- both times a\n>> server reboot remedied the issue.\n>\n>That really should be impossible --- it says that a rename() failed for\n>a file we just created.\n>\n>I judge from the spelling of the error message that you are running 7.1.\n>I would recommend an update to 7.2, wherein the error message looks\n>more like this:\n>\n> if (rename(tmppath, path) < 0)\n> elog(STOP, \"rename from %s to %s (initialization of log file %u, \nsegment %u) failed: %m\",\n> tmppath, path, log, seg);\n>\n[snip]\n\n From the xlog.c file in 7.3devel in InstallXLogFileSegment(), look at the\ncode near:\n\n> while ((fd = BasicOpenFile(path, O_RDWR | PG_BINARY,\n> S_IRUSR | S_IWUSR)) >= 0)\n\nIt would seem like we assume that ANY failure of BasicOpenFile() implies\nthat 'path' does not exist. So then we don't handle any other cases, and\nrename might fail because 'path' actually exists. \n\nWhat if BasicOpenFile() got some other error?\n\nThis would seem to be wrong, but it still doesn't explain why \nBasicOpenFile() would be failing when 'path' exists in this \nparticular case.\n\nI don't have the 7.1 or 7.2 code around, and I've never looked at it.\n\nJ.R. Nield\nnield@usol.com\n\n\n\n\n",
"msg_date": "Mon, 17 Jun 2002 14:36:36 -0400",
"msg_from": "nield@usol.com",
"msg_from_op": false,
"msg_subject": "Re: FATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or\n\tdirectory"
},
{
"msg_contents": "nield@usol.com writes:\n> What if BasicOpenFile() got some other error?\n\nDoesn't really matter; anything else would be a problem we can't recover\nfrom anyhow. Besides, given that rename is failing with ENOENT, a\nconflict on the destination name does not appear to be the issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 10:45:12 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FATAL 2: InitRelink(logfile 0 seg 173) failed: No such file or\n\tdirectory"
}
]
[
{
"msg_contents": "I have finally caught up on my email. The first week after returning\nfrom vacation, I read the 3400 emails from May (~100/day), and the\nsecond week dealt with emails requiring special attention. All\noutstanding patches should now be applied.\n\nI am now back to trying to polish up open issues for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 15 Jun 2002 20:15:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Caught up on email"
}
]
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \n> Sent: 16 June 2002 01:09\n> To: Tom Lane\n> Cc: PostgreSQL-development; PostgreSQL odbc list\n> Subject: Re: [ODBC] [HACKERS] KSQO parameter\n>\n> > either ... does anyone out there want to work on it?\n> \n> The following patch removes KSQO from GUC and the call to the \n> function. It also moves the main KSQO file into _deadcode. Applied.\n> \n> ODBC folks, should I remove KSQO from the ODBC driver?\n\nI'm not sure that Hiroshi has updated CVS with all our recent changes\njust yet - there is certainly a patch outstanding that redesigns the\nsetup dialogues that I posted that would be affected by this...\n\nRegards, Dave.\n",
"msg_date": "Sun, 16 Jun 2002 10:52:45 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] KSQO parameter"
},
{
"msg_contents": "Dave Page wrote:\n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > Sent: 16 June 2002 01:09\n> > To: Tom Lane\n> > Cc: PostgreSQL-development; PostgreSQL odbc list\n> > Subject: Re: [ODBC] [HACKERS] KSQO parameter\n> >\n> > > either ... does anyone out there want to work on it?\n> >\n> > The following patch removes KSQO from GUC and the call to the\n> > function. It also moves the main KSQO file into _deadcode. Applied.\n> >\n> > ODBC folks, should I remove KSQO from the ODBC driver?\n> \n> I'm not sure that Hiroshi has updated CVS with all our recent changes\n> just yet \n\nSorry I've had no time to see it.\n\n> - there is certainly a patch outstanding that redesigns the\n> setup dialogues that I posted that would be affected by this...\n\nI don't object to remove the KSQO option at server side.\nBut why must the option be removed from the odbc driver \ntogether ? As I mentioned many many times, the driver\nisn't only for the recent (especially not yet released)\nversion of servers. I think we had better have a separate\ndialog for obsolete options.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Mon, 17 Jun 2002 10:57:56 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] KSQO parameter"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > - there is certainly a patch outstanding that redesigns the\n> > setup dialogues that I posted that would be affected by this...\n> \n> I don't object to remove the KSQO option at server side.\n> But why must the option be removed from the odbc driver \n> together ? As I mentioned many many times, the driver\n> isn't only for the recent (especially not yet released)\n> version of servers. I think we had better have a separate\n> dialog for obsolete options.\n\nYes, this is why I am asking. If you want to keep it for <7.1 releases,\nthat is fine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 16 Jun 2002 22:36:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] KSQO parameter"
},
{
"msg_contents": "Hello everbody.\n\nI have started work on native OLE DB Provider for PostgreSQL. i think it is\nreally important to have native OLE DB proviader even if there is OLE DB to\nODBC Provider. Microsoft started to withdraw support for ODBC and seems to\nforce to switch to OLE DB(eg VS .Net have limited support for ODBC which\nmust be spearetly downloaded), so we could expect more such moves. So I\nstarted work on native PostgreSQL OLE DB provider. It slowly begin\nto work. (it registers to system, shows property pages, connects to\ndatabase, and do first quries). Now I'm working on creating resultset.\n\nI would like somebody to help answer some questions:\n\n1.is it safe to use binary cursors (basic types format e.g date would not\nchange in future)\n2.how could I control result type (binary/ASCII) for ordinary SELECTS ?\n3.could ODBC driver mix auto-commit and FETCH/DECLARE mode ? how it can\nhandle such situation:\n start select (fetch some data) /as I understand it should start\ntransaction silently/\n update (it should commit as we are in auto-commit)\n fetch more data.\n next update (commit again)\n4. maybe it would have sense to make some common library with ODBC to handle\nprepared statements and some kind of conversions (althougth maybe not\nbecouse we convert to different types (OLE DB uses OLE Automation types))\n5. Is there any way to truncate PQresult in libpq ? I implement fast forward\ncursors so i have no need to store whole result (I can read piece of data\nand forget it when client read it from me). I think there would be no\nproblem if I add some PQtruncate(PQresult *) to pqlib, but maybe there is\nany different way ?\n\nI use ATL templates for OLE DB which is MS library and it can't be compiled\nwithot Visual Studio. (you have right to distribute ATL if you bougth VS).\nSo It would be not \"pure\" solution, but starting from scratch is to\ndifficult, as I don't know COM prefectly. Maybe somebody will rewrite it\nlater.\n\n\n",
"msg_date": "Mon, 17 Jun 2002 10:33:51 +0200",
"msg_from": "\"Marek Mosiewicz\" <marekmosiewicz@poczta.onet.pl>",
"msg_from_op": false,
"msg_subject": "Native OLE DB. What do you think about it"
},
{
"msg_contents": "Are you aware that another team on the list is working on a .Net provider?\nMaybe you could work with them?\n\n.Net Provider people: speak up!\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marek Mosiewicz\n> Sent: Monday, 17 June 2002 4:34 PM\n> To: PostgreSQL-development; PostgreSQL odbc list\n> Subject: [HACKERS] Native OLE DB. What do you think about it\n>\n>\n> Hello everbody.\n>\n> I have started work on native OLE DB Provider for PostgreSQL. i\n> think it is\n> really important to have native OLE DB proviader even if there is\n> OLE DB to\n> ODBC Provider. Microsoft started to withdraw support for ODBC and seems to\n> force to switch to OLE DB(eg VS .Net have limited support for ODBC which\n> must be spearetly downloaded), so we could expect more such moves. So I\n> started work on native PostgreSQL OLE DB provider. It slowly begin\n> to work. (it registers to system, shows property pages, connects to\n> database, and do first quries). Now I'm working on creating resultset.\n>\n> I would like somebody to help answer some questions:\n>\n> 1.is it safe to use binary cursors (basic types format e.g date would not\n> change in future)\n> 2.how could I control result type (binary/ASCII) for ordinary SELECTS ?\n> 3.could ODBC driver mix auto-commit and FETCH/DECLARE mode ? how it can\n> handle such situation:\n> start select (fetch some data) /as I understand it should start\n> transaction silently/\n> update (it should commit as we are in auto-commit)\n> fetch more data.\n> next update (commit again)\n> 4. maybe it would have sense to make some common library with\n> ODBC to handle\n> prepared statements and some kind of conversions (althougth maybe not\n> becouse we convert to different types (OLE DB uses OLE Automation types))\n> 5. Is there any way to truncate PQresult in libpq ? I implement\n> fast forward\ncursors so i have no need to store whole result (I can read piece of data\nand forget it when client read it from me). I think there would be no\nproblem if I add some PQtruncate(PQresult *) to pqlib, but maybe there is\nany different way ?\n>\n> I use ATL templates for OLE DB which is MS library and it can't\n> be compiled\n> withot Visual Studio. (you have right to distribute ATL if you bougth VS).\n> So It would be not \"pure\" solution, but starting from scratch is to\n> difficult, as I don't know COM prefectly. Maybe somebody will rewrite it\n> later.\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Mon, 17 Jun 2002 16:34:21 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Native OLE DB. What do you think about it"
},
{
"msg_contents": "\"Marek Mosiewicz\" <marekmosiewicz@poczta.onet.pl> writes:\n> 1.is it safe to use binary cursors (basic types format e.g date would not\n> change in future)\n\nDon't do it. The internal representations are NOT guaranteed stable,\nand moreover any such thing will guarantee that your code can not talk\nto servers running on non-Intel architectures. (I'm sure MS/Intel\nwould love you to do that, but don't.)\n\n> 2.how could I control result type (binary/ASCII) for ordinary SELECTS ?\n\nYou can't, but it doesn't matter, see above.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jun 2002 09:51:05 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Native OLE DB. What do you think about it "
},
{
"msg_contents": "> \"Marek Mosiewicz\" <marekmosiewicz@poczta.onet.pl> writes:\n> > 1.is it safe to use binary cursors (basic types format e.g date would\nnot\n> > change in future)\n>\n> Don't do it. The internal representations are NOT guaranteed stable,\n> and moreover any such thing will guarantee that your code can not talk\n> to servers running on non-Intel architectures. (I'm sure MS/Intel\n> would love you to do that, but don't.)\nSo I will not.\n\nBut any way is it difficult to froze basic types. I believe that producing\nand parsing data takes some time both on server and client. Acctually it is\nquestion about prepared statements, but I think it is not pay off. It can\ntake some time to parse parameters if you have many same queries updates\n(very common situation e.g. sending back inserts/ updates from client).\nyou could then :\n 'Qselect * from a where a=? and b=?P<binary parameters>\n<return binary result>\nupdate a set x=? where y=?P<array of parameters - make the same query nth\ntimes with different values>\n\nSuch prepared statments would not only benefit from cached plan but also\navoidance of parameter parsing and sending it - multiple times across\nnetwork -.\n\nMost DB intefaces has some support for such batch execution so it could be\nused (OLE DB JDBC) in it and gain speed)\nIt could be easy do decide to use exclusive little or big indian - such\nconversion is many times faster than atoi(). I don't know if it is problem\nto froze binary representaion of data but if it is not frozen then you could\nnever use bianry cursor from client).\n\nI have no idea how diffcult it would be to implement so I don't know if it\nit has sense, but it is only my propostion.\n>\n> > 2.how could I control result type (binary/ASCII) for ordinary SELECTS ?\n>\n> You can't, but it doesn't matter, see above.\n>\n> regards, tom lane\n\n\n\n",
"msg_date": "Mon, 17 Jun 2002 17:29:44 +0200",
"msg_from": "\"Marek Mosiewicz\" <marekmosiewicz@poczta.onet.pl>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Native OLE DB. What do you think about it "
}
]
[
{
"msg_contents": "\nComments below to keep context intact...\n\nOn Saturday 15 June 2002 04:13 pm, Alvaro Herrera wrote:\n> Matthew Tedder dijo:\n> > On Friday 14 June 2002 04:41 pm, Bill Cunningham wrote:\n> > > Matthew Tedder wrote:\n> > > > How feasible would it be to create this functionality in\n> > > > PostgreSQL:\n> > > >\n> > > >One creates a test version of a database that initially consists of\n> > > >read-links to the production version of the same database. Any code\n> > > > he/she then writes that reads from a table reads from the production\n> > > > database but any code that modifies data copies that table to the\n> > > > test database.\n> > >\n> > > [pg_dump into the development machines]\n> >\n> > That won't work nearly as well. Obviously we can and often do dumps. \n> > But when testing something that has to work in a production environment,\n> > we need to see what happens over a course of several day's time. This is\n> > needed not only for testing of the specific code changed or added to a\n> > process, but also a test of how it integrations with a larger and more\n> > complex information flow system.\n>\n> Seems like single master multi slave replication would do the trick,\n> wouldn't it? You can replicate the master's data to the slaves and do\n> the tests there. Depending on how frequent the updates are (assuming\n> they are asynchronous), the DB load will be different, but I wonder\n> whether this may be an issue.\n\nFirst, there are two issues to be cogniscent of: (1) that the test table(s) \nremain identical in every way to the production ones, including all the \nhappens to them, except for whatever part of the processing is being tested; \n(2) that we conserve disk space and I/O resources.\n\nHere's an example problem:\n\nCONTEXT:\nA group of eight hospitals merged together and integrated a variety of \nsystems, including disparate Order Entry subsystems. Nightly, the data from \neach subsystem is FTP'd to a central data processing server for the \nenterprise. And as part of the nightly batch flows, a separate process for \neach, translates it to a common format and inserts it into the Orders table. \n Following this, processing begins for other subsystems that use this data \nsuch as the Billing Subsystem(s), Inventory subsystems, Decision Support \nSystems, Archiving subsystems, etc. \n\nUn-Important Note: I personally believe strongly in using flags and status \nindicator codes on top of normalized data, but many conservative shops move \ndata from bucket to bucket along its nightly course, as each process touches \nit. (Although this causes data inconsistency problems, it does also have \nthe advantage of providing a detailed audit trail)\n\nPROBLEM:\nWhen a change is made to the output of one of the Orders subsystems and the \nprogrammer/analyst has to redesign the translation code, should he dump the \nentire database into a test environment? Everything that his data effects \ndownstream may be only 15% of the remaining nightly processes. \n\nSOLUTION:\nTherefore, if the database kept only some kind of a read-link to production \ntables and only dumps when something is modified in the respective table, \nwouldn't it significantly reduce the pull on resources--both in terms of disk \nspace and I/O utilization? \n\nOTHER CONCERNS:\nOften an IT shop has one big production, one big test, and one big \ndevelopment environment. In that case, a big database dump for each makes a \ngreat deal of sense. However, the date for applying a change from \ndevelopment to test and production will be sooner for some projects than for \nothers. My idea basically enables those with different due dates to have \nseparate test or development environments so that the unwanted effects of \nprojects that take a longer time do not negatively impact those that need to \nbe perfected and put into production sooner. The ones that go in sooner, \nwould, however impact the ones going later once the sooner ones are put into \nproduction. But this is not such a bad thing as the alternative.\n\nMaybe I am reading into this a little too deeply. I don't know.. You be the \njudge.......it seemed like something like this could be very helpful at my \nformer workplace. People were constantly bumping into eachother in our test \nenvironment.\n\nMatthew\n-- \nAnything that can be logically explained, can be programmed.\n",
"msg_date": "Sun, 16 Jun 2002 15:13:59 -0400",
"msg_from": "Matthew Tedder <matthew@tedder.com>",
"msg_from_op": true,
"msg_subject": "Re: Big Test Environment Feature"
}
]
[
{
"msg_contents": "I've been busy working on my presentation on concurrency for the\nupcoming O'Reilly conference. While doing so, I've been thinking\nmore about the question of when to do SetQuerySnapshot calls inside\nfunctions. We've gone around on that before, without much of a\nconsensus on what to do; see for example the thread starting at\nhttp://fts.postgresql.org/db/mw/msg.html?mid=1029236\n\nI have now become convinced that it is correct, in fact necessary,\nto do SetQuerySnapshot for each new user-supplied query, whether\nit's inside a function or not. A CommandCounterIncrement without\nan associated SetQuerySnapshot is okay internally within system\nutility operations (eg, to make visible a catalog entry we just\ncreated), but it is highly suspect otherwise.\n\nIn serializable mode, SetQuerySnapshots after the first one of a\ntransaction are no-ops, so there's really no difference in that case.\nAll we need to think about is read-committed mode. And in\nread-committed mode, we can have situations like this:\n\n\tUPDATE webpages SET hits = hits + 1 WHERE url = '...';\n\tSELECT hits FROM webpages WHERE url = '...';\n\nIf there are no concurrent updates going on, this will work as expected:\nthe SELECT will see the updated row. But if there are concurrent\nupdates and we do not do SetQuerySnapshots in plpgsql, then the SELECT\nmay see two versions of the target row as valid: both the one that was\nvalid as of the last SetQuerySnapshot before we entered the function,\nand the one created by the UPDATE. This happens if and only if some\nother client updated the same row and committed after the last\nSetQuerySnapshot. The UPDATE will see that other client's row as\ncurrent and will update it, as expected. But then the SELECT will\nconsider the previous version of the row to be still good, because it\nwas after all deleted by a transaction that committed later than the\nquery snapshot! And the version produced by the UPDATE is good too,\nsince it was produced within the current transaction (and we've done\nCommandCounterIncrement to make it visible).\n\nAn example of exactly this misbehavior can be seen in\nhttp://archives.postgresql.org/pgsql-bugs/2002-02/msg00142.php\nParticularly in 7.2, it's a tossup which version of the row will\nbe found first by the SELECT, so the bug might appear and disappear\ndepending on the phase of the moon, making it even worse.\n\nWe get sensible behavior in the normal interactive case *only* because\nthere will be a SetQuerySnapshot between UPDATE and SELECT, and so the\nSELECT will certainly consider any versions seen as obsolete by UPDATE\nto be obsolete also.\n\nSo I've come around to agree with the position that Tatsuo and Hiroshi\nput forward in the thread mentioned above: plpgsql (and the other PL\nlanguages) need to do SetQuerySnapshot not only CommandCounterIncrement\nbetween user-supplied queries.\n\nIs anyone still unconvinced? If not, I'll try to fix it sometime soon.\n\nAs that thread pointed out, there also seem to be some problems with\nplpgsql not doing enough CommandCounterIncrements when it's executing\nalready-planned queries; I'll take a look at that issue at the same\ntime.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 16 Jun 2002 19:53:15 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "SetQuerySnapshot, once again"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I've been busy working on my presentation on concurrency for the\n> upcoming O'Reilly conference. While doing so, I've been thinking\n> more about the question of when to do SetQuerySnapshot calls inside\n> functions. We've gone around on that before, without much of a\n> consensus on what to do; see for example the thread starting at\n> http://fts.postgresql.org/db/mw/msg.html?mid=1029236\n> \n> I have now become convinced that it is correct, in fact necessary,\n> to do SetQuerySnapshot for each new user-supplied query, whether\n> it's inside a function or not.\n\nI have a question. Could the functions which contain no \nqueries other than SELECT be stable(returns the definite\nresult for a query) with it ? \n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Mon, 17 Jun 2002 18:42:21 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: SetQuerySnapshot, once again"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I have a question. Could the functions which contain no \n> queries other than SELECT be stable(returns the definite\n> result for a query) with it ? \n\nSorry, I don't understand ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jun 2002 09:40:24 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SetQuerySnapshot, once again "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I have a question. Could the functions which contain no\n> > queries other than SELECT be stable(returns the definite\n> > result for a query) with it ?\n> \n> Sorry, I don't understand ...\n\nLet t be a table which is defined as\n create table t (id serial primary key, dt text);\nThen is the following function *stable* ?\n create function f1(int4) returns text as\n '\n declare\n txt text;\n begin\n select dt into txt from t where id = $1;\n return txt;\n end\n ' language plpgsql;\n\nIf SetQuerySnapshot is called for the above *select*,\nthe result isn't determined by the snapshot of the\nfunction. \n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Tue, 18 Jun 2002 08:59:28 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: SetQuerySnapshot, once again"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> Sorry, I don't understand ...\n\n> Let t be a table which is defined as\n> create table t (id serial primary key, dt text);\n> Then is the following function *stable* ?\n> create function f1(int4) returns text as\n> '\n> declare\n> txt text;\n> begin\n> select dt into txt from t where id = $1;\n> return txt;\n> end\n> ' language plpgsql;\n\nI'm not sure exactly what you mean by \"stable\" here.\n\nAnd I'm even less sure whether you are arguing for or\nagainst adding SetQuerySnapshot calls into plpgsql...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 10:55:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SetQuerySnapshot, once again "
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Tom Lane wrote:\n> >> Sorry, I don't understand ...\n> \n> > Let t be a table which is defined as\n> > create table t (id serial primary key, dt text);\n> > Then is the following function *stable* ?\n> > create function f1(int4) returns text as\n> > '\n> > declare\n> > txt text;\n> > begin\n> > select dt into txt from t where id = $1;\n> > return txt;\n> > end\n> > ' language plpgsql;\n> \n> I'm not sure exactly what you mean by \"stable\" here.\n\nWasn't it you who defined *stable* as \n Cachable within a single command: given fixed input values, the\n result will not change if the function were to be repeatedly evaluated\n within a single SQL command; but the result could change over time.\n?\n\n> And I'm even less sure whether you are arguing for or\n> against adding SetQuerySnapshot calls into plpgsql...\n\nI already mentioned an opinion in 2001/09/08.\n Both the command counters and the snapshots in a\n function should advance except the leading SELECT\n statements.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 19 Jun 2002 01:45:41 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: SetQuerySnapshot, once again "
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n>> I'm not sure exactly what you mean by \"stable\" here.\n\n> Wasn't it you who defined *stable* as \n> Cachable within a single command: given fixed input values, the\n> result will not change if the function were to be repeatedly evaluated\n> within a single SQL command; but the result could change over time.\n\nOh, *that* \"stable\" ;-)\n\nOkay, I get your point now. You are right --- a function that\nreferences a table that others might be concurrently changing\nwould not be stable under read-committed rules. (But you could\nprobably get away with marking it stable anyway.)\n\n> I already mentioned an opinion in 2001/09/08.\n> Both the command counters and the snapshots in a\n> function should advance except the leading SELECT\n> statements.\n\nI do not like the idea of treating the first select in a function\ndifferently from the rest. And such a rule wouldn't let you build\nguaranteed-stable functions anyway; what if the outer query was\ncalling both your function, and another one that did cause the\nsnapshot to advance? The behavior of your function would then\nvary depending on whether the other function was invoked or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 14:18:09 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SetQuerySnapshot, once again "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> \n> > I already mentioned an opinion in 2001/09/08.\n> > Both the command counters and the snapshots in a\n> > function should advance except the leading SELECT\n> > statements.\n> \n> I do not like the idea of treating the first select in a function\n> differently from the rest. And such a rule wouldn't let you build\n> guaranteed-stable functions anyway;\n\nAFAIK there has been no analysis where we can get *stable*\nfunctions. As far as I see, we can expect SELECT-only functions\nto be *stable* if and only if they are surrounded by SELECT-only\n*stable* functions.\n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Wed, 19 Jun 2002 09:11:44 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: SetQuerySnapshot, once again"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> I do not like the idea of treating the first select in a function\n>> differently from the rest. And such a rule wouldn't let you build\n>> guaranteed-stable functions anyway;\n\n> AFAIK there has been no analysis where we can get *stable*\n> functions. As far as I see, we can expect SELECT-only functions\n> to be *stable* if and only if they are surrounded by SELECT-only\n> *stable* functions.\n\nThis idea might be a bit off-the-wall, but how about:\n\n1. If a plpgsql function is declared immutable or stable, then all its\nqueries run with the same snapshot *and* CommandCounterId as prevail\nin the calling query. Probably we should disallow it from making any\nupdating queries, too; allow only SELECTs.\n\n2. If it's declared volatile (the default), then snapshot and\nCommandCounterId are both updated for each query in the function,\nincluding the first one.\n\nSo the default behavior would be equivalent to issuing the same queries\ninteractively, which I think is a good default. The non-default\nbehavior would allow truly stable functions to be built.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jun 2002 10:25:00 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: SetQuerySnapshot, once again "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> >> I do not like the idea of treating the first select in a function\n> >> differently from the rest. And such a rule wouldn't let you build\n> >> guaranteed-stable functions anyway;\n> \n> > AFAIK there has been no analysis where we can get *stable*\n> > functions. As far as I see, we can expect SELECT-only functions\n> > to be *stable* if and only if they are surrounded by SELECT-only\n> > *stable* functions.\n\nOops I was wrong. The last *stable* isn't needed.\n \n> This idea might be a bit off-the-wall,\n\nProbably I mentioned once long before.\nWe can't expect reasonable result for\n select fn1(..), fn2(..), ... from ... ;\nif there are some fnx()-s with strong side effect.\n\n> but how about:\n> \n> 1. If a plpgsql function is declared immutable or stable, then all its\n> queries run with the same snapshot *and* CommandCounterId as prevail\n> in the calling query.\n\nIMHO it's impossible to handle anything with one concept.\nFunctions could be *immutable*(? deterministic in SQL99)\nor *stable* even though they have strong side effect.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Thu, 20 Jun 2002 10:38:27 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: SetQuerySnapshot, once again"
}
] |
[
{
"msg_contents": "I noticed that gram.y doesn't handle with optional WITH in CREATE USER,\nALTER USER, CREATE GROUP very well. It duplicates the actions, rather\nthan creating an optional WITH clause.\n\nI have fixed this, and made WITH in CREATE DATABASE optional for\nconstency.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 01:41:34 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "WITH handling in CREATE USER, etc"
},
{
"msg_contents": "On Mon, 17 Jun 2002, Bruce Momjian wrote:\n\n> I noticed that gram.y doesn't handle with optional WITH in CREATE USER,\n> ALTER USER, CREATE GROUP very well. It duplicates the actions, rather\n> than creating an optional WITH clause.\n\nCare to elaborate?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 17 Jun 2002 06:23:35 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: WITH handling in CREATE USER, etc"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> On Mon, 17 Jun 2002, Bruce Momjian wrote:\n> \n> > I noticed that gram.y doesn't handle with optional WITH in CREATE USER,\n> > ALTER USER, CREATE GROUP very well. It duplicates the actions, rather\n> > than creating an optional WITH clause.\n> \n> Care to elaborate?\n\nSure, here is a sample where there were two rules that were merged into\none with opt_with:\n\nIndex: gram.y\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.323\nretrieving revision 2.324\ndiff -c -r2.323 -r2.324\n*** gram.y\t15 Jun 2002 03:00:03 -0000\t2.323\n--- gram.y\t17 Jun 2002 05:40:32 -0000\t2.324\n***************\n*** 518,537 ****\n *\n *****************************************************************************/\n \n! CreateUserStmt: CREATE USER UserId OptUserList \n! \t\t\t\t{\n! \t\t\t\t\tCreateUserStmt *n = makeNode(CreateUserStmt);\n! \t\t\t\t\tn->user = $3;\n! \t\t\t\t\tn->options = $4;\n! \t\t\t\t\t$$ = (Node *)n;\n! \t\t\t\t}\n! \t\t\t| CREATE USER UserId WITH OptUserList\n \t\t\t\t{\n \t\t\t\t\tCreateUserStmt *n = makeNode(CreateUserStmt);\n \t\t\t\t\tn->user = $3;\n \t\t\t\t\tn->options = $5;\n \t\t\t\t\t$$ = (Node *)n;\n! \t\t\t\t} \n \t\t;\n \n /*****************************************************************************\n--- 518,535 ----\n *\n *****************************************************************************/\n \n! CreateUserStmt: CREATE USER UserId opt_with OptUserList\n \t\t\t\t{\n \t\t\t\t\tCreateUserStmt *n = makeNode(CreateUserStmt);\n \t\t\t\t\tn->user = $3;\n \t\t\t\t\tn->options = $5;\n \t\t\t\t\t$$ = (Node *)n;\n! \t\t\t\t}\n! \t\t;\n! \n! \n! opt_with:\tWITH\t\t\t\t\t\t\t\t{ $$ = TRUE; }\n! \t\t| /*EMPTY*/\t\t\t\t\t\t\t\t{ $$ = TRUE; }\n \t\t;\n \n /*****************************************************************************\n\n",
"msg_date": "Mon, 17 Jun 2002 12:46:53 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: WITH handling in CREATE USER, etc"
},
{
"msg_contents": "On Mon, 17 Jun 2002, Bruce Momjian wrote:\n\n> Vince Vielhaber wrote:\n> > On Mon, 17 Jun 2002, Bruce Momjian wrote:\n> >\n> > > I noticed that gram.y doesn't handle with optional WITH in CREATE USER,\n> > > ALTER USER, CREATE GROUP very well. It duplicates the actions, rather\n> > > than creating an optional WITH clause.\n> >\n> > Care to elaborate?\n>\n> Sure, here is a sample where there were two rules that were merged into\n> one with opt_with:\n\nThat makes sense.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 17 Jun 2002 12:55:17 -0400 (EDT)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: WITH handling in CREATE USER, etc"
}
] |
[
{
"msg_contents": "I am seeing massive regression failures on a freshly compiled, initdb'ed\nversion of CVS:\n\n 16 of 81 tests failed, 1 of these failures ignored. \n\nThe first failure I see is:\n\t\n\t COPY COPY_TBL FROM '/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/test/regress/data/constrf.data';\n\t- ERROR: copy: line 2, CopyFrom: rejected due to CHECK constraint copy_con\n\t SELECT * FROM COPY_TBL;\n\t x | y | z \n\t! ---+---------------+---\n\t! 4 | !check failed | 5\n\t! 6 | OK | 4\n\t! (2 rows)\n\nAre others seeing this? Cause?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 03:04:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Massive regression failures"
},
{
"msg_contents": "I tried testing it on FreeBSD/Alpha and I only got this far:\n\n./configure --prefix=/home/chriskl/local --enable-integer-datetimes --enable\n-debug --enable-depend --enable-cassert --with-pam --with-openssl --with-CXX\n\ngmake check\n\ngmake[3]: Entering directory `/home/chriskl/pgsql-head/src/backend/libpq'\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../..\n/src/include -c -o be-secure.o be-secure.c -MMD\nbe-secure.c: In function `load_dh_file':\nbe-secure.c:399: `DEBUG' undeclared (first use in this function)\nbe-secure.c:399: (Each undeclared identifier is reported only once\nbe-secure.c:399: for each function it appears in.)\nbe-secure.c: In function `load_dh_buffer':\nbe-secure.c:447: `DEBUG' undeclared (first use in this function)\nbe-secure.c: In function `tmp_dh_cb':\nbe-secure.c:519: `DEBUG' undeclared (first use in this function)\nbe-secure.c: In function `info_cb':\nbe-secure.c:550: `DebugLvl' undeclared (first use in this function)\nbe-secure.c:556: `DEBUG' undeclared (first use in this function)\nbe-secure.c: In function `initialize_SSL':\nbe-secure.c:615: warning: implicit declaration of function `lstat'\nbe-secure.c:615: `buf' undeclared (first use in this function)\nbe-secure.c:621: warning: implicit declaration of function `S_ISREG'\nbe-secure.c: In function `open_server_SSL':\nbe-secure.c:704: `DEBUG' undeclared (first use in this function)\ngmake[3]: *** [be-secure.o] Error 1\ngmake[3]: Leaving directory `/home/chriskl/pgsql-head/src/backend/libpq'\ngmake[2]: *** [libpq-recursive] Error 2\ngmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\ngmake: *** [all] Error 2\n\nI will now recompile without debug symols or openssl perhaps...?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: 
Monday, 17 June 2002 3:04 PM\n> To: PostgreSQL-development\n> Subject: [HACKERS] Massive regression failures\n>\n>\n> I am seeing massive regression failures on a freshly compiled, initdb'ed\n> version of CVS:\n>\n> 16 of 81 tests failed, 1 of these failures ignored.\n>\n> The first failure I see is:\n>\n> \t COPY COPY_TBL FROM\n> '/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/test/regress/data/\n> constrf.data';\n> \t- ERROR: copy: line 2, CopyFrom: rejected due to CHECK\n> constraint copy_con\n> \t SELECT * FROM COPY_TBL;\n> \t x | y | z\n> \t! ---+---------------+---\n> \t! 4 | !check failed | 5\n> \t! 6 | OK | 4\n> \t! (2 rows)\n>\n> Are others seeing this? Cause?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Mon, 17 Jun 2002 15:29:04 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Massive regression failures"
},
{
"msg_contents": "\nOK, fix committed. He was using 7.2 elog symbols. I changed DEBUG to\nDEBUG1. Should compile now. It was already compiling here. Not sure\nwhy. :-)\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> I tried testing it on FreeBSD/Alpha and I only got this far:\n> \n> ./configure --prefix=/home/chriskl/local --enable-integer-datetimes --enable\n> -debug --enable-depend --enable-cassert --with-pam --with-openssl --with-CXX\n> \n> gmake check\n> \n> gmake[3]: Entering directory `/home/chriskl/pgsql-head/src/backend/libpq'\n> gcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../..\n> /src/include -c -o be-secure.o be-secure.c -MMD\n> be-secure.c: In function `load_dh_file':\n> be-secure.c:399: `DEBUG' undeclared (first use in this function)\n> be-secure.c:399: (Each undeclared identifier is reported only once\n> be-secure.c:399: for each function it appears in.)\n> be-secure.c: In function `load_dh_buffer':\n> be-secure.c:447: `DEBUG' undeclared (first use in this function)\n> be-secure.c: In function `tmp_dh_cb':\n> be-secure.c:519: `DEBUG' undeclared (first use in this function)\n> be-secure.c: In function `info_cb':\n> be-secure.c:550: `DebugLvl' undeclared (first use in this function)\n> be-secure.c:556: `DEBUG' undeclared (first use in this function)\n> be-secure.c: In function `initialize_SSL':\n> be-secure.c:615: warning: implicit declaration of function `lstat'\n> be-secure.c:615: `buf' undeclared (first use in this function)\n> be-secure.c:621: warning: implicit declaration of function `S_ISREG'\n> be-secure.c: In function `open_server_SSL':\n> be-secure.c:704: `DEBUG' undeclared (first use in this function)\n> gmake[3]: *** [be-secure.o] Error 1\n> gmake[3]: Leaving directory `/home/chriskl/pgsql-head/src/backend/libpq'\n> gmake[2]: *** [libpq-recursive] Error 2\n> gmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/backend'\n> gmake[1]: *** [all] 
Error 2\n> gmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\n> gmake: *** [all] Error 2\n> \n> I will now recompile without debug symols or openssl perhaps...?\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > Sent: Monday, 17 June 2002 3:04 PM\n> > To: PostgreSQL-development\n> > Subject: [HACKERS] Massive regression failures\n> >\n> >\n> > I am seeing massive regression failures on a freshly compiled, initdb'ed\n> > version of CVS:\n> >\n> > 16 of 81 tests failed, 1 of these failures ignored.\n> >\n> > The first failure I see is:\n> >\n> > \t COPY COPY_TBL FROM\n> > '/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/test/regress/data/\n> > constrf.data';\n> > \t- ERROR: copy: line 2, CopyFrom: rejected due to CHECK\n> > constraint copy_con\n> > \t SELECT * FROM COPY_TBL;\n> > \t x | y | z\n> > \t! ---+---------------+---\n> > \t! 4 | !check failed | 5\n> > \t! 6 | OK | 4\n> > \t! (2 rows)\n> >\n> > Are others seeing this? Cause?\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 03:32:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Massive regression failures"
},
{
"msg_contents": "Hmmm...an update for be-secure.c came through, but now I get this:\n\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../..\n/src/include -c -o be-secure.o be-secure.c -MMD\nbe-secure.c: In function `info_cb':\nbe-secure.c:550: `DebugLvl' undeclared (first use in this function)\nbe-secure.c:550: (Each undeclared identifier is reported only once\nbe-secure.c:550: for each function it appears in.)\nbe-secure.c: In function `initialize_SSL':\nbe-secure.c:615: warning: implicit declaration of function `lstat'\nbe-secure.c:615: `buf' undeclared (first use in this function)\nbe-secure.c:621: warning: implicit declaration of function `S_ISREG'\n\nPerhaps the reason you're not seeing it is because you're not linking\nagainst OpenSSL??\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Monday, 17 June 2002 3:33 PM\n> To: Christopher Kings-Lynne\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Massive regression failures\n>\n>\n>\n> OK, fix committed. He was using 7.2 elog symbols. I changed DEBUG to\n> DEBUG1. Should compile now. It was already compiling here. Not sure\n> why. 
:-)\n>\n> ------------------------------------------------------------------\n> ---------\n>\n> Christopher Kings-Lynne wrote:\n> > I tried testing it on FreeBSD/Alpha and I only got this far:\n> >\n> > ./configure --prefix=/home/chriskl/local\n> --enable-integer-datetimes --enable\n> > -debug --enable-depend --enable-cassert --with-pam\n> --with-openssl --with-CXX\n> >\n> > gmake check\n> >\n> > gmake[3]: Entering directory\n> `/home/chriskl/pgsql-head/src/backend/libpq'\n> > gcc -pipe -O -g -Wall -Wmissing-prototypes\n> -Wmissing-declarations -I../../..\n> > /src/include -c -o be-secure.o be-secure.c -MMD\n> > be-secure.c: In function `load_dh_file':\n> > be-secure.c:399: `DEBUG' undeclared (first use in this function)\n> > be-secure.c:399: (Each undeclared identifier is reported only once\n> > be-secure.c:399: for each function it appears in.)\n> > be-secure.c: In function `load_dh_buffer':\n> > be-secure.c:447: `DEBUG' undeclared (first use in this function)\n> > be-secure.c: In function `tmp_dh_cb':\n> > be-secure.c:519: `DEBUG' undeclared (first use in this function)\n> > be-secure.c: In function `info_cb':\n> > be-secure.c:550: `DebugLvl' undeclared (first use in this function)\n> > be-secure.c:556: `DEBUG' undeclared (first use in this function)\n> > be-secure.c: In function `initialize_SSL':\n> > be-secure.c:615: warning: implicit declaration of function `lstat'\n> > be-secure.c:615: `buf' undeclared (first use in this function)\n> > be-secure.c:621: warning: implicit declaration of function `S_ISREG'\n> > be-secure.c: In function `open_server_SSL':\n> > be-secure.c:704: `DEBUG' undeclared (first use in this function)\n> > gmake[3]: *** [be-secure.o] Error 1\n> > gmake[3]: Leaving directory `/home/chriskl/pgsql-head/src/backend/libpq'\n> > gmake[2]: *** [libpq-recursive] Error 2\n> > gmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/backend'\n> > gmake[1]: *** [all] Error 2\n> > gmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\n> > gmake: 
*** [all] Error 2\n> >\n> > I will now recompile without debug symols or openssl perhaps...?\n> >\n> > Chris\n> >\n> > > -----Original Message-----\n> > > From: pgsql-hackers-owner@postgresql.org\n> > > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > > Sent: Monday, 17 June 2002 3:04 PM\n> > > To: PostgreSQL-development\n> > > Subject: [HACKERS] Massive regression failures\n> > >\n> > >\n> > > I am seeing massive regression failures on a freshly\n> compiled, initdb'ed\n> > > version of CVS:\n> > >\n> > > 16 of 81 tests failed, 1 of these failures ignored.\n> > >\n> > > The first failure I see is:\n> > >\n> > > \t COPY COPY_TBL FROM\n> > > '/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/test/regress/data/\n> > > constrf.data';\n> > > \t- ERROR: copy: line 2, CopyFrom: rejected due to CHECK\n> > > constraint copy_con\n> > > \t SELECT * FROM COPY_TBL;\n> > > \t x | y | z\n> > > \t! ---+---------------+---\n> > > \t! 4 | !check failed | 5\n> > > \t! 6 | OK | 4\n> > > \t! (2 rows)\n> > >\n> > > Are others seeing this? Cause?\n> > >\n> > > --\n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > pgman@candle.pha.pa.us | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill,\n> Pennsylvania 19026\n> > >\n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > message can get through to the mailing list cleanly\n> > >\n> >\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Mon, 17 Jun 2002 16:20:13 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Massive regression failures"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am seeing massive regression failures on a freshly compiled, initdb'ed\n> version of CVS:\n\nThat's because you broke the regression test data files.\n\nKindly unbreak them ASAP. I had wanted to do a cvs update this morning...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jun 2002 09:47:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Massive regression failures "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am seeing massive regression failures on a freshly compiled, initdb'ed\n> > version of CVS:\n> \n> That's because you broke the regression test data files.\n\nWow, I had a corrupted CVS regression directory. I was seeing it during\nCVS update failure for the /regression directory, but I ignored it. \nThen I thought I saw a commit touching regression, which didn't make\nsense, but I closed the window too soon without investigating.\n\nLater, I deleted the directory and recreated it, but seems I had already\naccidentally modified those files. I don't see any other /regression\nchanges except /data, so I backed out those changes. We should be OK\nnow.\n\n> Kindly unbreak them ASAP. I had wanted to do a cvs update this morning...\n\nFixed now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 11:11:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Massive regression failures"
},
{
"msg_contents": "\nOK, fixed. I didn't have SSL enabled in my test compile. Doing that\nnow. I have fixed the elog flags so this should be OK now. If I see\nany other SSL compile issues I will fix them.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Hmmm...an update for be-secure.c came through, but now I get this:\n> \n> gcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../..\n> /src/include -c -o be-secure.o be-secure.c -MMD\n> be-secure.c: In function `info_cb':\n> be-secure.c:550: `DebugLvl' undeclared (first use in this function)\n> be-secure.c:550: (Each undeclared identifier is reported only once\n> be-secure.c:550: for each function it appears in.)\n> be-secure.c: In function `initialize_SSL':\n> be-secure.c:615: warning: implicit declaration of function `lstat'\n> be-secure.c:615: `buf' undeclared (first use in this function)\n> be-secure.c:621: warning: implicit declaration of function `S_ISREG'\n> \n> Perhaps the reason you're not seeing it is because you're not linking\n> against OpenSSL??\n> \n> Chris\n> \n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > Sent: Monday, 17 June 2002 3:33 PM\n> > To: Christopher Kings-Lynne\n> > Cc: PostgreSQL-development\n> > Subject: Re: [HACKERS] Massive regression failures\n> >\n> >\n> >\n> > OK, fix committed. He was using 7.2 elog symbols. I changed DEBUG to\n> > DEBUG1. Should compile now. It was already compiling here. Not sure\n> > why. 
:-)\n> >\n> > ------------------------------------------------------------------\n> > ---------\n> >\n> > Christopher Kings-Lynne wrote:\n> > > I tried testing it on FreeBSD/Alpha and I only got this far:\n> > >\n> > > ./configure --prefix=/home/chriskl/local\n> > --enable-integer-datetimes --enable\n> > > -debug --enable-depend --enable-cassert --with-pam\n> > --with-openssl --with-CXX\n> > >\n> > > gmake check\n> > >\n> > > gmake[3]: Entering directory\n> > `/home/chriskl/pgsql-head/src/backend/libpq'\n> > > gcc -pipe -O -g -Wall -Wmissing-prototypes\n> > -Wmissing-declarations -I../../..\n> > > /src/include -c -o be-secure.o be-secure.c -MMD\n> > > be-secure.c: In function `load_dh_file':\n> > > be-secure.c:399: `DEBUG' undeclared (first use in this function)\n> > > be-secure.c:399: (Each undeclared identifier is reported only once\n> > > be-secure.c:399: for each function it appears in.)\n> > > be-secure.c: In function `load_dh_buffer':\n> > > be-secure.c:447: `DEBUG' undeclared (first use in this function)\n> > > be-secure.c: In function `tmp_dh_cb':\n> > > be-secure.c:519: `DEBUG' undeclared (first use in this function)\n> > > be-secure.c: In function `info_cb':\n> > > be-secure.c:550: `DebugLvl' undeclared (first use in this function)\n> > > be-secure.c:556: `DEBUG' undeclared (first use in this function)\n> > > be-secure.c: In function `initialize_SSL':\n> > > be-secure.c:615: warning: implicit declaration of function `lstat'\n> > > be-secure.c:615: `buf' undeclared (first use in this function)\n> > > be-secure.c:621: warning: implicit declaration of function `S_ISREG'\n> > > be-secure.c: In function `open_server_SSL':\n> > > be-secure.c:704: `DEBUG' undeclared (first use in this function)\n> > > gmake[3]: *** [be-secure.o] Error 1\n> > > gmake[3]: Leaving directory `/home/chriskl/pgsql-head/src/backend/libpq'\n> > > gmake[2]: *** [libpq-recursive] Error 2\n> > > gmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/backend'\n> > > gmake[1]: *** [all] 
Error 2\n> > > gmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\n> > > gmake: *** [all] Error 2\n> > >\n> > > I will now recompile without debug symols or openssl perhaps...?\n> > >\n> > > Chris\n> > >\n> > > > -----Original Message-----\n> > > > From: pgsql-hackers-owner@postgresql.org\n> > > > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > > > Sent: Monday, 17 June 2002 3:04 PM\n> > > > To: PostgreSQL-development\n> > > > Subject: [HACKERS] Massive regression failures\n> > > >\n> > > >\n> > > > I am seeing massive regression failures on a freshly\n> > compiled, initdb'ed\n> > > > version of CVS:\n> > > >\n> > > > 16 of 81 tests failed, 1 of these failures ignored.\n> > > >\n> > > > The first failure I see is:\n> > > >\n> > > > \t COPY COPY_TBL FROM\n> > > > '/usr/var/local/src/gen/pgsql/CURRENT/pgsql/src/test/regress/data/\n> > > > constrf.data';\n> > > > \t- ERROR: copy: line 2, CopyFrom: rejected due to CHECK\n> > > > constraint copy_con\n> > > > \t SELECT * FROM COPY_TBL;\n> > > > \t x | y | z\n> > > > \t! ---+---------------+---\n> > > > \t! 4 | !check failed | 5\n> > > > \t! 6 | OK | 4\n> > > > \t! (2 rows)\n> > > >\n> > > > Are others seeing this? Cause?\n> > > >\n> > > > --\n> > > > Bruce Momjian | http://candle.pha.pa.us\n> > > > pgman@candle.pha.pa.us | (610) 853-3000\n> > > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > > + Christ can be your backup. 
| Drexel Hill,\n> > Pennsylvania 19026\n> > > >\n> > > > ---------------------------(end of\n> > broadcast)---------------------------\n> > > > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > > > subscribe-nomail command to majordomo@postgresql.org so that your\n> > > > message can get through to the mailing list cleanly\n> > > >\n> > >\n> > >\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 11:20:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Massive regression failures"
},
{
"msg_contents": "Still badness on FreeBSD/Alpha:\n\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPI\nC -I. -I../../../src/include -DFRONTEND -DSYSCONFDIR='\"/home/chriskl/local/\netc/postgresql\"' -c -o fe-secure.o fe-secure.c -MMD\nfe-secure.c: In function `verify_peer':\nfe-secure.c:417: structure has no member named `s6_addr8'\ngmake[3]: *** [fe-secure.o] Error 1\ngmake[3]: Leaving directory `/home/chriskl/pgsql-head/src/interfaces/libpq'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/interfaces'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\ngmake: *** [all] Error 2\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Monday, 17 June 2002 11:11 PM\n> To: Tom Lane\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Massive regression failures\n>\n>\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > I am seeing massive regression failures on a freshly\n> compiled, initdb'ed\n> > > version of CVS:\n> >\n> > That's because you broke the regression test data files.\n>\n> Wow, I had a corrupted CVS regression directory. I was seeing it during\n> CVS update failure for the /regression directory, but I ignored it.\n> Then I thought I saw a commit touching regression, which didn't make\n> sense, but I closed the window too soon without investigating.\n>\n> Later, I deleted the directory and recreated it, but seems I had already\n> accidentally modified those files. I don't see any other /regression\n> changes except /data, so I backed out those changes. We should be OK\n> now.\n>\n> > Kindly unbreak them ASAP. 
I had wanted to do a cvs update this\n> morning...\n>\n> Fixed now.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Tue, 18 Jun 2002 10:29:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Massive regression failures"
}
] |
[
{
"msg_contents": "Ahh, that would be us then :-) Yes, there is a (very) active .NET\nprovider project underway - it can be found at\nhttp://gborg.postgresql.org/project/npgsql. Note though, that this is a\nnative .NET provider being written in C#, it is not OLE-DB (which would\ncertainly be a useful addition to the list of interfaces imho).\n\nRegards, Dave.\n\n\n\n> -----Original Message-----\n> From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au] \n> Sent: 17 June 2002 09:34\n> To: Marek Mosiewicz; PostgreSQL-development; PostgreSQL odbc list\n> Subject: Re: [HACKERS] Native OLE DB. What do you think about it\n> \n> \n> Are you aware that another team on the list is working on a \n> .Net provider? Maybe you could work with them?\n> \n> .Net Provider people: speak up!\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Marek \n> > Mosiewicz\n> > Sent: Monday, 17 June 2002 4:34 PM\n> > To: PostgreSQL-development; PostgreSQL odbc list\n> > Subject: [HACKERS] Native OLE DB. What do you think about it\n> >\n> >\n> > Hello everbody.\n> >\n> > I have started work on native OLE DB Provider for \n> PostgreSQL. i think \n> > it is really important to have native OLE DB proviader even \n> if there \n> > is OLE DB to\n> > ODBC Provider. Microsoft started to withdraw support for \n> ODBC and seems to\n> > force to switch to OLE DB(eg VS .Net have limited support \n> for ODBC which\n> > must be spearetly downloaded), so we could expect more such \n> moves. So I\n> > started work on native PostgreSQL OLE DB provider. It slowly begin\n> > to work. (it registers to system, shows property pages, connects to\n> > database, and do first quries). Now I'm working on creating \n> resultset.\n> >\n> > I would like somebody to help answer some questions:\n> >\n> > 1.is it safe to use binary cursors (basic types format e.g \n> date would \n> > not change in future) 2.how could I control result type \n> (binary/ASCII) \n> > for ordinary SELECTS ? 3.could ODBC driver mix auto-commit and \n> > FETCH/DECLARE mode ? how it can handle such situation:\n> > start select (fetch some data) /as I understand it should start\n> > transaction silently/\n> > update (it should commit as we are in auto-commit)\n> > fetch more data.\n> > next update (commit again)\n> > 4. maybe it would have sense to make some common library with\n> > ODBC to handle\n> > prepared statements and some kind of conversions (althougth \n> maybe not\n> > becouse we convert to different types (OLE DB uses OLE \n> Automation types))\n> > 5. Is there any way to truncate PQresult in libpq ? I implement\n> > fast forward\n> > cursors so i have no need to store whole result (I can read \n> piece of data\n> > and forget it when client read it from me). I think there \n> would be no\n> > problem if I add some PQtruncate(PQresult *) to pqlib, but \n> maybe there is\n> > any different way ?\n> >\n> > I use ATL templates for OLE DB which is MS library and it can't be \n> > compiled withot Visual Studio. (you have right to distribute ATL if \n> > you bougth VS). So It would be not \"pure\" solution, but \n> starting from \n> > scratch is to difficult, as I don't know COM prefectly. \n> Maybe somebody \n> > will rewrite it later.\n> >\n> >\n> >\n> > ---------------------------(end of \n> > broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an \n> appropriate subscribe-nomail command to \n> majordomo@postgresql.org so that your message can get through \n> to the mailing list cleanly\n> \n",
"msg_date": "Mon, 17 Jun 2002 12:48:56 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Native OLE DB. What do you think about it"
},
{
"msg_contents": "> Ahh, that would be us then :-) Yes, there is a (very) active .NET\n> provider project underway - it can be found at\n> http://gborg.postgresql.org/project/npgsql. Note though, that this is a\n> native .NET provider being written in C#, it is not OLE-DB (which would\n> certainly be a useful addition to the list of interfaces imho).\n\n\nIf that's the case, can I please make a suggestion: please liaise with\nthe Mono project's ADO.NET people (http://www.go-mono.com) as they are\ndoing exactly the same thing, and there's no need for duplication of\neffort.\n\nMartin\n\n",
"msg_date": "17 Jun 2002 13:05:00 +0100",
"msg_from": "Martin Coxall <coxall@cream.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Native OLE DB. What do you think about it"
}
] |
[
{
"msg_contents": "Hi\n\nI need to do some testing of pgAdmin on a database with very large oids\n(> 4,000,000,000). Is there anyway I can wind the oid counter forward\nwithout having to do a few billion inserts?\n\nI'm on a test system so I can initdb if required.\n\nThanks, Dave.\n",
"msg_date": "Mon, 17 Jun 2002 12:55:05 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Can I adjust the oid counter for testing?"
},
{
"msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> I need to do some testing of pgAdmin on a database with very large oids\n> (> 4,000,000,000). Is there anyway I can wind the oid counter forward\n> without having to do a few billion inserts?\n> I'm on a test system so I can initdb if required.\n\nA clean solution would be to extend pg_resetxlog to have a switch to set\nnextOid, parallel to its switch to tweak nextXid. (I had thought we had\nthis already, actually, but I'm not seeing it in current sources.)\n\nThe difficulty with that, if you are using current CVS, is that I\nbelieve pg_resetxlog is broken at the moment --- Thomas changed the\nformat of pg_control recently and didn't update pg_resetxlog.\n\nIf you want to fix both of those things and submit a patch, it'd save me\nsome work that needs to get done before 7.3 can go out.\n\nIf that all seems like too much work, you could just reach in with a\ndebugger and set ShmemVariableCache->nextOid in a running system\n(be careful that nothing is going on while you do so). Better set\nShmemVariableCache->oidCount = 0 while you're at it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jun 2002 10:00:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can I adjust the oid counter for testing? "
},
{
"msg_contents": "Tom Lane wrote:\n> \"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> > I need to do some testing of pgAdmin on a database with very large oids\n> > (> 4,000,000,000). Is there anyway I can wind the oid counter forward\n> > without having to do a few billion inserts?\n> > I'm on a test system so I can initdb if required.\n> \n> A clean solution would be to extend pg_resetxlog to have a switch to set\n> nextOid, parallel to its switch to tweak nextXid. (I had thought we had\n> this already, actually, but I'm not seeing it in current sources.)\n\nYes, I thought we had that too, but I don't see it.\n\nActually, you can just use COPY WITH OIDS and insert a large oid. That\nwill set the counter. That's how pg_dump does it:\n\t\n\tCREATE TEMPORARY TABLE pgdump_oid (dummy integer);\n\tCOPY pgdump_oid WITH OIDS FROM stdin;\n\t143655 0\n\t\\.\n\nThis method will only _increase_ the oid.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 12:54:49 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Can I adjust the oid counter for testing?"
}
] |
[
{
"msg_contents": "I posted this last week but got no responses :-( - It's not causing me\nproblems but if it is a bug... \n\nRegards, Dave.\n\n\n> -----Original Message-----\n> From: Dave Page \n> Sent: 10 June 2002 15:11\n> To: PostgreSQL-development\n> Subject: ALTER TABLE... OWNER bugette\n> \n> \n> In a 7.3 dev test database, I have a table called msysconf in \n> a schema called biblio. If I execute:\n> \n> ALTER TABLE biblio.msysconf OWNER TO dpage\n> \n> I get:\n> \n> ERROR: msysconf_idx is an index relation\n> \n> There is an index with this name on the table.\n> \n> Any ideas?\n> \n> Regards, Dave.\n> \n",
"msg_date": "Mon, 17 Jun 2002 13:01:35 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "FW: ALTER TABLE... OWNER bugette (repost)"
},
{
"msg_contents": "\"Dave Page\" <dpage@vale-housing.co.uk> writes:\n>> ALTER TABLE biblio.msysconf OWNER TO dpage\n>> ERROR: msysconf_idx is an index relation\n\nFixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 17 Jun 2002 10:32:03 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: FW: ALTER TABLE... OWNER bugette (repost) "
}
] |
[
{
"msg_contents": "As Bruce already pointed out this seems to be the most wanted feature of\nall. I just finished working through my notes from Linuxtag checking\nwhat people asked about PostgreSQL and yes, the feature most people\nasked for was PIT.\n\nOf course this is closely related to replication, but I wonder if anyone\nis working on/has ideas for PIT other than to replay the replication\nlog.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Mon, 17 Jun 2002 14:53:53 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Point-in-Time Recovery"
}
] |
[
{
"msg_contents": "Here is the complete NIST regression test:\nftp://cap.connx.com/pub/chess-engines/new-approach/nist.ZIP\n\nYou have to use passive ftp to get files from my site because of the\nfirewall.\n\nThe zip is about 6 megabytes compressed.\n",
"msg_date": "Mon, 17 Jun 2002 11:27:45 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "On Mon, Jun 17, 2002 at 11:27:45AM -0700, Dann Corbit wrote:\n> Here is the complete NIST regression test:\n> ftp://cap.connx.com/pub/chess-engines/new-approach/nist.ZIP\n> \n> You have to use passive ftp to get files from my site because of the\n> firewall.\n\nI'm pretty sure my proxy does use passive ftp, but I cannot get through\nto you. Are you sure you use passive ftp for incoming connections?\n\nAnyway, it seems you have to mail it. :-)\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Tue, 18 Jun 2002 15:24:57 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
},
{
"msg_contents": "On Tue, Jun 18, 2002 at 03:24:57PM +0200, Michael Meskes wrote:\n> On Mon, Jun 17, 2002 at 11:27:45AM -0700, Dann Corbit wrote:\n> > Here is the complete NIST regression test:\n> > ftp://cap.connx.com/pub/chess-engines/new-approach/nist.ZIP\n> > \n> > You have to use passive ftp to get files from my site because of the\n> > firewall.\n> \n> I'm pretty sure my proxy does use passive ftp, but I cannot get through\n> to you. Are you sure you use passive ftp for incoming connections?\n> \n> Anyway, it seems you have to mail it. :-)\n\nFor future reference (and the archives of the list) the official download site\nfor this code is:\n\nhttp://www.itl.nist.gov/div897/ctg/sql_form.htm\n\nAnd here's the usage statement, regarding incorporation of this work into\nother works (in short, it's public domain)\n\nhttp://www.itl.nist.gov/div897/ctg/softagre.htm\n\nRoss\n",
"msg_date": "Tue, 18 Jun 2002 11:06:11 -0500",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: PostGres Doubt"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Martin Coxall [mailto:coxall@cream.org] \n> Sent: 17 June 2002 13:05\n> To: Dave Page\n> Cc: Christopher Kings-Lynne; Marek Mosiewicz; \n> PostgreSQL-development; PostgreSQL odbc list\n> Subject: Re: [HACKERS] Native OLE DB. What do you think about it\n> \n> \n> > Ahh, that would be us then :-) Yes, there is a (very) active .NET \n> > provider project underway - it can be found at \n> > http://gborg.postgresql.org/project/npgsql. Note though, \n> that this is \n> > a native .NET provider being written in C#, it is not OLE-DB (which \n> > would certainly be a useful addition to the list of \n> interfaces imho).\n> \n> \n> If that's the case, can I please make a suggestion: please \n> liaise with the Mono project's ADO.NET people \n> (http://www.go-mono.com) as they are doing > exactly the same \n> thing, and there's no need for duplication of effort.\n> \n\nWe have liased with them. Their provider is just a wrapper to libpq,\nwhereas ours is written from scratch.\n\nRegards, Dave.\n",
"msg_date": "Mon, 17 Jun 2002 19:50:22 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Native OLE DB. What do you think about it"
}
] |
[
{
"msg_contents": "Thanks Tom,\n\nI suspect hacking pg_resetxlog is beyond my capabilities right now, but\nif I do get anything I'll post a patch. I'll probably end up using\ngdb...\n\nRegards, Dave.\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 17 June 2002 15:00\n> To: Dave Page\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] Can I adjust the oid counter for testing? \n> \n> \n> \"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> > I need to do some testing of pgAdmin on a database with very large \n> > oids (> 4,000,000,000). Is there anyway I can wind the oid counter \n> > forward without having to do a few billion inserts? I'm on a test \n> > system so I can initdb if required.\n> \n> A clean solution would be to extend pg_resetxlog to have a \n> switch to set nextOid, parallel to its switch to tweak \n> nextXid. (I had thought we had this already, actually, but \n> I'm not seeing it in current sources.)\n> \n> The difficulty with that, if you are using current CVS, is \n> that I believe pg_resetxlog is broken at the moment --- \n> Thomas changed the format of pg_control recently and didn't \n> update pg_resetxlog.\n> \n> If you want to fix both of those things and submit a patch, \n> it'd save me some work that needs to get done before 7.3 can go out.\n> \n> If that all seems like too much work, you could just reach in \n> with a debugger and set ShmemVariableCache->nextOid in a \n> running system (be careful that nothing is going on while you \n> do so). Better set\n> ShmemVariableCache->oidCount = 0 while you're at it.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of \n> broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n> \n",
"msg_date": "Mon, 17 Jun 2002 19:56:50 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Can I adjust the oid counter for testing? "
}
] |
[
{
"msg_contents": "Thanks.\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 17 June 2002 15:32\n> To: Dave Page\n> Cc: PostgreSQL-development\n> Subject: Re: [HACKERS] FW: ALTER TABLE... OWNER bugette (repost) \n> \n> \n> \"Dave Page\" <dpage@vale-housing.co.uk> writes:\n> >> ALTER TABLE biblio.msysconf OWNER TO dpage\n> >> ERROR: msysconf_idx is an index relation\n> \n> Fixed.\n> \n> \t\t\tregards, tom lane\n> \n> \n",
"msg_date": "Mon, 17 Jun 2002 19:57:46 +0100",
"msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: FW: ALTER TABLE... OWNER bugette (repost) "
}
] |
[
{
"msg_contents": "Folks,\n\nGiven the amount of qoute nesting we do in Postgres, I thought that we need a \nfunction that handles automatic doubling of quotes within strings. I've \nwritten one in PL/pgSQL (below). I'd really love to see this turned into a \nbuiltin C function.\n\n-Josh\n\nCREATE FUNCTION double_quote(text) returns text as '\nDECLARE bad_string ALIAS for $1;\n good_string text;\n current_pos INT;\n old_pos INT;\nBEGIN\n IF bad_string IS NULL or bad_string = '''' THEN\n RETURN bad_string;\n END IF;\n good_string := bad_string;\n current_pos := STRPOS(good_string, chr(39));\n WHILE current_pos > 0 LOOP\n old_pos := current_pos;\n good_string := SUBSTR(good_string, 1, (current_pos - 1)) ||\n repeat(chr(39), 2) || SUBSTR(good_string, (current_pos \n+ 1));\n current_pos := STRPOS(SUBSTR(good_string, (old_pos + 2)), \nchr(39));\n IF current_pos > 0 THEN\n current_pos := current_pos + old_pos + 1;\n END IF;\n END LOOP;\nRETURN good_string;\nEND;'\nLANGUAGE 'plpgsql'\nWITH (ISCACHABLE, ISSTRICT);\n\n\n",
"msg_date": "Mon, 17 Jun 2002 13:34:01 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": true,
"msg_subject": "Request for builtin function: Double_quote"
},
{
"msg_contents": "Josh, \nI'm not sure what you mean by 'builtin C function'. \nThere is one already \n size_t PQescapeString (char *to, const char *from, size_t length); \nOr do you mean a String Function like \n substring(string [from integer] [for integer]) \nI would rather call it 'builtin sql function'. \n\nRegards, Christoph \n\n> \n> Folks,\n> \n> Given the amount of qoute nesting we do in Postgres, I thought that we need a\n> function that handles automatic doubling of quotes within strings. I've \n> written one in PL/pgSQL (below). I'd really love to see this turned into a \n> builtin C function.\n> \n> -Josh\n> \n",
"msg_date": "Tue, 18 Jun 2002 9:35:34 METDST",
"msg_from": "Christoph Haller <ch@rodos.fzk.de>",
"msg_from_op": false,
"msg_subject": "Re: Request for builtin function: Double_quote"
},
{
"msg_contents": "Josh Berkus <josh@agliodbs.com> writes:\n> Given the amount of qoute nesting we do in Postgres, I thought that we need a \n> function that handles automatic doubling of quotes within strings. I've \n> written one in PL/pgSQL (below). I'd really love to see this turned into a \n> builtin C function.\n\nWhat does this do that isn't already done by quote_literal?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 10:48:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for builtin function: Double_quote "
},
{
"msg_contents": "Chris, Tom:\n\nYes, thank you Chris, I meant a builtin SQL function.\n\n> > Given the amount of qoute nesting we do in Postgres, I thought that\n> we need a \n> > function that handles automatic doubling of quotes within strings.\n> I've \n> > written one in PL/pgSQL (below). I'd really love to see this\n> turned into a \n> > builtin C function.\n> \n> What does this do that isn't already done by quote_literal?\n\nWell, first off, quote_literal isn't in the documentation under\n\"Functions and Operators\". So this is the first I've heard about it\n-- or probably anyone else outside the core team. How long has it\nbeen around?\n\nSecond, double_quote does not return the outside quotes, just the\ninside ones ... it's for passing string values to EXECUTE statements.\n However, now that I know that quote_literal exists, I can simplify\nthe double_quote statement considerably. \n\nTherefore, I withdraw my initial request, and request instead that\nquote_literal be added to the function documentation in String\nFunctions and Operators.\n\nI will event supply text for the functions table:\n\nfunction\t\t\treturns\t\t\nquote_literal(string text)\ttext\t\t\n\nexplain\nReturns the entire string passed to it, including quote marks. Useful\nfor nesting quotes, such as in the EXECUTEing dynamic queries.\n\nexample\t\t\tresult\nquote_literal('O''Reilly')\t'O''Reilly'\n\n\n-Josh Berkus\n\n-Josh Berkus\n",
"msg_date": "Tue, 18 Jun 2002 08:49:33 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: Request for builtin function: Double_quote "
},
{
"msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n> Well, first off, quote_literal isn't in the documentation under\n> \"Functions and Operators\". So this is the first I've heard about it\n> -- or probably anyone else outside the core team. How long has it\n> been around?\n\nAwhile; however, the only documentation was in the discussion of EXECUTE\nin the pl/pgsql chapter of the Programmer's Guide, which is probably not\nthe best place.\n\n> Therefore, I withdraw my initial request, and request instead that\n> quote_literal be added to the function documentation in String\n> Functions and Operators.\n\nDone; I also added its sister function quote_ident. See the devel\ndocs at\nhttp://candle.pha.pa.us/main/writings/pgsql/sgml/functions-string.html\n\n\t\t\tregards, tom lane\n\n\n",
"msg_date": "Mon, 24 Jun 2002 18:40:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Request for builtin function: Double_quote "
},
{
"msg_contents": "\nTom,\n\n> Done; I also added its sister function quote_ident. See the devel\n> docs at\n> http://candle.pha.pa.us/main/writings/pgsql/sgml/functions-string.html\n\nTante Grazie.\n\n-- \n-Josh Berkus\n\n\n\n",
"msg_date": "Mon, 24 Jun 2002 20:32:44 -0700",
"msg_from": "Josh Berkus <josh@agliodbs.com>",
"msg_from_op": true,
"msg_subject": "Re: Request for builtin function: Double_quote"
}
] |
[
{
"msg_contents": " \nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Well, we're not doing that; and I see no good reason to make the thing\n> > be a builtin function at all. Since it's just an example, it can very\n> > well be a contrib item with a creation script. Probably *should* be,\n> > in fact, because dynamically created functions are what other people are\n> > going to be building; an example of how to do it as a builtin function\n> > isn't as helpful.\n> \n> Here is a patch for contrib/showguc. It can serve as a reference\n> implementation for a C function which returns setof composite. It\n> required some small changes in guc.c and guc.h so that the number of GUC\n> variables, and their values, could be accessed. Example usage as shown\n> below:\n> \n> test=# select * from show_all_vars() where varname = 'wal_sync_method';\n> varname | varval\n> -----------------+-----------\n> wal_sync_method | fdatasync\n> (1 row)\n> \n> test=# select show_var('wal_sync_method');\n> show_var\n> -----------\n> fdatasync\n> (1 row)\n> \n> \n> show_var() is neither composite nor set returning, but it seemed like a\n> worthwhile addition. 
Please apply if there are no objections.\n> \n> Thanks,\n> \n> Joe\n> \n\n> Index: contrib/showguc/Makefile\n> ===================================================================\n> RCS file: contrib/showguc/Makefile\n> diff -N contrib/showguc/Makefile\n> *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> --- contrib/showguc/Makefile\t27 May 2002 00:24:44 -0000\n> ***************\n> *** 0 ****\n> --- 1,9 ----\n> + subdir = contrib/showguc\n> + top_builddir = ../..\n> + include $(top_builddir)/src/Makefile.global\n> + \n> + MODULES = showguc\n> + DATA_built = showguc.sql\n> + DOCS = README.showguc\n> + \n> + include $(top_srcdir)/contrib/contrib-global.mk\n> Index: contrib/showguc/README.showguc\n> ===================================================================\n> RCS file: contrib/showguc/README.showguc\n> diff -N contrib/showguc/README.showguc\n> *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> --- contrib/showguc/README.showguc\t10 Jun 2002 00:16:48 -0000\n> ***************\n> *** 0 ****\n> --- 1,105 ----\n> + /*\n> + * showguc\n> + *\n> + * Sample to demonstrate a C function which returns setof composite.\n> + * Joe Conway <mail@joeconway.com>\n> + *\n> + * Copyright 2002 by PostgreSQL Global Development Group\n> + *\n> + * Permission to use, copy, modify, and distribute this software and its\n> + * documentation for any purpose, without fee, and without a written agreement\n> + * is hereby granted, provided that the above copyright notice and this\n> + * paragraph and the following two paragraphs appear in all copies.\n> + * \n> + * IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR\n> + * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> + * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> + * DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE\n> + * POSSIBILITY OF SUCH DAMAGE.\n> + * \n> + * THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,\n> + * INCLUDING, BUT 
NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> + * AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS\n> + * ON AN \"AS IS\" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO\n> + * PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> + *\n> + */\n> + Version 0.1 (9 June, 2002):\n> + First release\n> + \n> + Release Notes:\n> + \n> + Version 0.1\n> + - initial release \n> + \n> + Installation:\n> + Place these files in a directory called 'showguc' under 'contrib' in the PostgreSQL source tree. Then run:\n> + \n> + make\n> + make install\n> + \n> + You can use showguc.sql to create the functions in your database of choice, e.g.\n> + \n> + psql -U postgres template1 < showguc.sql\n> + \n> + installs following functions into database template1:\n> + \n> + show_all_vars() - returns all GUC variables\n> + show_var(text) - returns value of the requested GUC variable\n> + \n> + Documentation\n> + ==================================================================\n> + Name\n> + \n> + show_all_vars() - returns all GUC variables\n> + \n> + Synopsis\n> + \n> + show_all_vars()\n> + \n> + Inputs\n> + \n> + none\n> + \n> + Outputs\n> + \n> + Returns setof __gucvar, where __gucvar is (varname TEXT, varval TEXT). 
All\n> + GUC variables displayed by SHOW ALL are returned as a set.\n> + \n> + Example usage\n> + \n> + test=# select * from show_all_vars() where varname = 'wal_sync_method';\n> + varname | varval\n> + -----------------+-----------\n> + wal_sync_method | fdatasync\n> + (1 row)\n> + \n> + ==================================================================\n> + Name\n> + \n> + show_var(text varname) - returns value of GUC variable varname\n> + \n> + Synopsis\n> + \n> + show_var(varname)\n> + \n> + Inputs\n> + \n> + varname\n> + The name of a GUC variable\n> + \n> + Outputs\n> + \n> + Returns the current value of varname.\n> + \n> + Example usage\n> + \n> + test=# select show_var('wal_sync_method');\n> + show_var\n> + -----------\n> + fdatasync\n> + (1 row)\n> + \n> + ==================================================================\n> + -- Joe Conway\n> + \n> Index: contrib/showguc/showguc.c\n> ===================================================================\n> RCS file: contrib/showguc/showguc.c\n> diff -N contrib/showguc/showguc.c\n> *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> --- contrib/showguc/showguc.c\t10 Jun 2002 00:02:14 -0000\n> ***************\n> *** 0 ****\n> --- 1,152 ----\n> + /*\n> + * showguc\n> + *\n> + * Sample to demonstrate a C function which returns setof composite.\n> + * Joe Conway <mail@joeconway.com>\n> + *\n> + * Copyright 2002 by PostgreSQL Global Development Group\n> + *\n> + * Permission to use, copy, modify, and distribute this software and its\n> + * documentation for any purpose, without fee, and without a written agreement\n> + * is hereby granted, provided that the above copyright notice and this\n> + * paragraph and the following two paragraphs appear in all copies.\n> + * \n> + * IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR\n> + * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> + * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> + * DOCUMENTATION, EVEN 
IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE\n> + * POSSIBILITY OF SUCH DAMAGE.\n> + * \n> + * THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,\n> + * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> + * AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS\n> + * ON AN \"AS IS\" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO\n> + * PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> + *\n> + */\n> + #include \"postgres.h\"\n> + \n> + #include \"fmgr.h\"\n> + #include \"funcapi.h\"\n> + #include \"utils/builtins.h\"\n> + #include \"utils/guc.h\"\n> + \n> + #include \"showguc.h\"\n> + \n> + /*\n> + * showguc_all - equiv to SHOW ALL command but implemented as\n> + * an SRF.\n> + */\n> + PG_FUNCTION_INFO_V1(showguc_all);\n> + Datum\n> + showguc_all(PG_FUNCTION_ARGS)\n> + {\n> + \tFuncCallContext\t *funcctx;\n> + \tTupleDesc\t\t\ttupdesc;\n> + \tint\t\t\t\t\tcall_cntr;\n> + \tint\t\t\t\t\tmax_calls;\n> + \tTupleTableSlot\t *slot;\n> + \tAttInMetadata\t *attinmeta;\n> + \n> + \t/* stuff done only on the first call of the function */\n> + \tif(SRF_IS_FIRSTPASS())\n> + \t{\n> + \t\t/* create a function context for cross-call persistence */\n> + \t\tfuncctx = SRF_FIRSTCALL_INIT();\n> + \n> + \t\t/*\n> + \t\t * Build a tuple description for a pg__guc tuple\n> + \t\t */\n> + \t\ttupdesc = RelationNameGetTupleDesc(\"__gucvar\");\n> + \n> + \t\t/* allocate a slot for a tuple with this tupdesc */\n> + \t\tslot = TupleDescGetSlot(tupdesc);\n> + \n> + \t\t/* assign slot to function context */\n> + \t\tfuncctx->slot = slot;\n> + \n> + \t\t/*\n> + \t\t * Generate attribute metadata needed later to produce tuples from raw\n> + \t\t * C strings\n> + \t\t */\n> + \t\tattinmeta = TupleDescGetAttInMetadata(tupdesc);\n> + \t\tfuncctx->attinmeta = attinmeta;\n> + \n> + \t\t/* total number of tuples to be returned */\n> + \t\tfuncctx->max_calls = GetNumGUCConfigOptions();\n> + }\n> 
+ \n> + \t/* stuff done on every call of the function */\n> + \tfuncctx = SRF_PERCALL_SETUP(funcctx);\n> + \n> + \tcall_cntr = funcctx->call_cntr;\n> + \tmax_calls = funcctx->max_calls;\n> + \tslot = funcctx->slot;\n> + \tattinmeta = funcctx->attinmeta;\n> + \n> + \tif (call_cntr < max_calls)\t/* do when there is more left to send */\n> + \t{\n> + \t\tchar\t *varname;\n> + \t\tchar\t *varval;\n> + \t\tchar\t **values;\n> + \t\tHeapTuple\ttuple;\n> + \t\tDatum\t\tresult;\n> + \n> + \t\t/*\n> + \t\t * Get the next GUC variable name and value\n> + \t\t */\n> + \t\tvarval = GetGUCConfigOptionNum(call_cntr, &varname);\n> + \n> + \t\t/*\n> + \t\t * Prepare a values array for storage in our slot.\n> + \t\t * This should be an array of C strings which will\n> + \t\t * be processed later by the appropriate \"in\" functions.\n> + \t\t */\n> + \t\tvalues = (char **) palloc(2 * sizeof(char *));\n> + \t\tvalues[0] = varname;\n> + \t\tvalues[1] = varval;\n> + \n> + \t\t/* build a tuple */\n> + \t\ttuple = BuildTupleFromCStrings(attinmeta, values);\n> + \n> + \t\t/* make the tuple into a datum */\n> + \t\tresult = TupleGetDatum(slot, tuple);\n> + \n> + \t\t/* Clean up */\n> + \t\tpfree(varname);\n> + \t\tpfree(values);\n> + \n> + \t\tSRF_RETURN_NEXT(funcctx, result);\n> + \t}\n> + \telse\t/* do when there is no more left */\n> + \t{\n> + \t\tSRF_RETURN_DONE(funcctx);\n> + \t}\n> + }\n> + \n> + \n> + /*\n> + * showguc_name - equiv to SHOW X command but implemented as\n> + * a function.\n> + */\n> + PG_FUNCTION_INFO_V1(showguc_name);\n> + Datum\n> + showguc_name(PG_FUNCTION_ARGS)\n> + {\n> + \tchar *varname;\n> + \tchar *varval;\n> + \ttext *result_text;\n> + \n> + \t/* Get the GUC variable name */\n> + \tvarname = DatumGetCString(DirectFunctionCall1(textout, PointerGetDatum(PG_GETARG_TEXT_P(0))));\n> + \n> + \t/* Get the value */\n> + \tvarval = GetGUCConfigOptionName(varname);\n> + \n> + \t/* Convert to text */\n> + \tresult_text = DatumGetTextP(DirectFunctionCall1(textin, 
CStringGetDatum(varval)));\n> + \n> + \t/* return it */\n> + \tPG_RETURN_TEXT_P(result_text);\n> + }\n> + \n> Index: contrib/showguc/showguc.h\n> ===================================================================\n> RCS file: contrib/showguc/showguc.h\n> diff -N contrib/showguc/showguc.h\n> *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> --- contrib/showguc/showguc.h\t10 Jun 2002 00:01:02 -0000\n> ***************\n> *** 0 ****\n> --- 1,37 ----\n> + /*\n> + * showguc\n> + *\n> + * Sample to demonstrate a C function which returns setof composite.\n> + * Joe Conway <mail@joeconway.com>\n> + *\n> + * Copyright 2002 by PostgreSQL Global Development Group\n> + *\n> + * Permission to use, copy, modify, and distribute this software and its\n> + * documentation for any purpose, without fee, and without a written agreement\n> + * is hereby granted, provided that the above copyright notice and this\n> + * paragraph and the following two paragraphs appear in all copies.\n> + * \n> + * IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR\n> + * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> + * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> + * DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE\n> + * POSSIBILITY OF SUCH DAMAGE.\n> + * \n> + * THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,\n> + * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> + * AND FITNESS FOR A PARTICULAR PURPOSE. 
THE SOFTWARE PROVIDED HEREUNDER IS\n> + * ON AN \"AS IS\" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO\n> + * PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> + *\n> + */\n> + \n> + #ifndef SHOWGUC_H\n> + #define SHOWGUC_H\n> + \n> + /*\n> + * External declarations\n> + */\n> + extern Datum showguc_all(PG_FUNCTION_ARGS);\n> + extern Datum showguc_name(PG_FUNCTION_ARGS);\n> + \n> + #endif /* SHOWGUC_H */\n> Index: contrib/showguc/showguc.sql.in\n> ===================================================================\n> RCS file: contrib/showguc/showguc.sql.in\n> diff -N contrib/showguc/showguc.sql.in\n> *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> --- contrib/showguc/showguc.sql.in\t10 Jun 2002 00:03:13 -0000\n> ***************\n> *** 0 ****\n> --- 1,10 ----\n> + CREATE VIEW __gucvar AS\n> + SELECT\n> + ''::TEXT AS varname,\n> + ''::TEXT AS varval;\n> + \n> + CREATE OR REPLACE FUNCTION show_all_vars() RETURNS setof __gucvar\n> + AS 'MODULE_PATHNAME','showguc_all' LANGUAGE 'c' STABLE STRICT;\n> + \n> + CREATE OR REPLACE FUNCTION show_var(text) RETURNS text\n> + AS 'MODULE_PATHNAME','showguc_name' LANGUAGE 'c' STABLE STRICT;\n> Index: src/backend/utils/misc/guc.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/backend/utils/misc/guc.c,v\n> retrieving revision 1.69\n> diff -c -r1.69 guc.c\n> *** src/backend/utils/misc/guc.c\t17 May 2002 20:32:29 -0000\t1.69\n> --- src/backend/utils/misc/guc.c\t9 Jun 2002 22:51:45 -0000\n> ***************\n> *** 824,830 ****\n> \n> \n> static int guc_var_compare(const void *a, const void *b);\n> ! static void _ShowOption(struct config_generic *record);\n> \n> \n> /*\n> --- 824,830 ----\n> \n> \n> static int guc_var_compare(const void *a, const void *b);\n> ! 
static char *_ShowOption(struct config_generic *record);\n> \n> \n> /*\n> ***************\n> *** 2204,2215 ****\n> ShowGUCConfigOption(const char *name)\n> {\n> \tstruct config_generic *record;\n> \n> \trecord = find_option(name);\n> \tif (record == NULL)\n> \t\telog(ERROR, \"Option '%s' is not recognized\", name);\n> \n> ! \t_ShowOption(record);\n> }\n> \n> /*\n> --- 2204,2221 ----\n> ShowGUCConfigOption(const char *name)\n> {\n> \tstruct config_generic *record;\n> + \tchar *val;\n> \n> \trecord = find_option(name);\n> \tif (record == NULL)\n> \t\telog(ERROR, \"Option '%s' is not recognized\", name);\n> \n> ! \tval = _ShowOption(record);\n> ! \tif(val != NULL)\n> ! \t{\n> ! \t\telog(INFO, \"%s is %s\", record->name, val);\n> ! \t\tpfree(val);\n> ! \t}\n> }\n> \n> /*\n> ***************\n> *** 2219,2239 ****\n> ShowAllGUCConfig(void)\n> {\n> \tint\t\t\ti;\n> \n> \tfor (i = 0; i < num_guc_variables; i++)\n> \t{\n> \t\tstruct config_generic *conf = guc_variables[i];\n> \n> \t\tif ((conf->flags & GUC_NO_SHOW_ALL) == 0)\n> ! \t\t\t_ShowOption(conf);\n> \t}\n> }\n> \n> ! static void\n> _ShowOption(struct config_generic *record)\n> {\n> \tchar\t\tbuffer[256];\n> \tconst char *val;\n> \n> \tswitch (record->vartype)\n> \t{\n> --- 2225,2295 ----\n> ShowAllGUCConfig(void)\n> {\n> \tint\t\t\ti;\n> + \tchar\t *val;\n> \n> \tfor (i = 0; i < num_guc_variables; i++)\n> \t{\n> \t\tstruct config_generic *conf = guc_variables[i];\n> \n> \t\tif ((conf->flags & GUC_NO_SHOW_ALL) == 0)\n> ! \t\t{\n> ! \t\t\tval = _ShowOption(conf);\n> ! \t\t\tif(val != NULL)\n> ! \t\t\t{\n> ! \t\t\t\telog(INFO, \"%s is %s\", conf->name, val);\n> ! \t\t\t\tpfree(val);\n> ! \t\t\t}\n> ! \t\t}\n> \t}\n> }\n> \n> ! /*\n> ! * Return GUC variable value by name\n> ! */\n> ! char *\n> ! GetGUCConfigOptionName(const char *name)\n> ! {\n> ! \tstruct config_generic *record;\n> ! \n> ! \trecord = find_option(name);\n> ! \tif (record == NULL)\n> ! \t\telog(ERROR, \"Option '%s' is not recognized\", name);\n> ! \n> ! 
\treturn _ShowOption(record);\n> ! }\n> ! \n> ! /*\n> ! * Return GUC variable value and set varname for a specific\n> ! * variable by number.\n> ! */\n> ! char *\n> ! GetGUCConfigOptionNum(int varnum, char **varname)\n> ! {\n> ! \tstruct config_generic *conf = guc_variables[varnum];\n> ! \n> ! \t*varname = pstrdup(conf->name);\n> ! \n> ! \tif ((conf->flags & GUC_NO_SHOW_ALL) == 0)\n> ! \t\treturn _ShowOption(conf);\n> ! \telse\n> ! \t\treturn NULL;\n> ! }\n> ! \n> ! /*\n> ! * Return the total number of GUC variables\n> ! */\n> ! int\n> ! GetNumGUCConfigOptions(void)\n> ! {\n> ! \treturn num_guc_variables;\n> ! }\n> ! \n> ! static char *\n> _ShowOption(struct config_generic *record)\n> {\n> \tchar\t\tbuffer[256];\n> \tconst char *val;\n> + \tchar\t *retval;\n> \n> \tswitch (record->vartype)\n> \t{\n> ***************\n> *** 2297,2303 ****\n> \t\t\tbreak;\n> \t}\n> \n> ! \telog(INFO, \"%s is %s\", record->name, val);\n> }\n> \n> \n> --- 2353,2361 ----\n> \t\t\tbreak;\n> \t}\n> \n> ! \tretval = pstrdup(val);\n> ! \n> ! 
\treturn retval;\n> }\n> \n> \n> Index: src/include/utils/guc.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql/src/include/utils/guc.h,v\n> retrieving revision 1.17\n> diff -c -r1.17 guc.h\n> *** src/include/utils/guc.h\t17 May 2002 01:19:19 -0000\t1.17\n> --- src/include/utils/guc.h\t9 Jun 2002 22:45:20 -0000\n> ***************\n> *** 86,91 ****\n> --- 86,94 ----\n> \t\t\t\t\t\t\t bool isLocal, bool DoIt);\n> extern void ShowGUCConfigOption(const char *name);\n> extern void ShowAllGUCConfig(void);\n> + extern char *GetGUCConfigOptionName(const char *name);\n> + extern char *GetGUCConfigOptionNum(int varnum, char **varname);\n> + extern int GetNumGUCConfigOptions(void);\n> \n> extern void SetPGVariable(const char *name, List *args, bool is_local);\n> extern void GetPGVariable(const char *name);\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 17:44:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: [Fwd: contrib/showguc (was Re: [HACKERS] revised sample"
},
{
"msg_contents": "OK, I've been looking at this package for some time through various\niterations and I have my doubts about it.\n\nWhat's going to happen to this when SHOW ALL is changed to return a query\nresult? If you want to provide an example of a set-returning function,\nuse something of lasting value, maybe generate some mathematic sequence.\n\nAlso, the first place this sort of material should go is the\ndocumentation, not hidden somewhere in contrib.\n\nIn any case, please don't expose the name \"GUC\" to user space.\n\n\nBruce Momjian writes:\n\n>\n> Your patch has been added to the PostgreSQL unapplied patches list at:\n>\n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n>\n> I will try to apply it within the next 48 hours.\n>\n> ---------------------------------------------------------------------------\n>\n>\n> Joe Conway wrote:\n> > Tom Lane wrote:\n> > > Well, we're not doing that; and I see no good reason to make the thing\n> > > be a builtin function at all. Since it's just an example, it can very\n> > > well be a contrib item with a creation script. Probably *should* be,\n> > > in fact, because dynamically created functions are what other people are\n> > > going to be building; an example of how to do it as a builtin function\n> > > isn't as helpful.\n> >\n> > Here is a patch for contrib/showguc. It can serve as a reference\n> > implementation for a C function which returns setof composite. It\n> > required some small changes in guc.c and guc.h so that the number of GUC\n> > variables, and their values, could be accessed. 
Example usage as shown\n> > below:\n> >\n> > test=# select * from show_all_vars() where varname = 'wal_sync_method';\n> > varname | varval\n> > -----------------+-----------\n> > wal_sync_method | fdatasync\n> > (1 row)\n> >\n> > test=# select show_var('wal_sync_method');\n> > show_var\n> > -----------\n> > fdatasync\n> > (1 row)\n> >\n> >\n> > show_var() is neither composite nor set returning, but it seemed like a\n> > worthwhile addition. Please apply if there are no objections.\n> >\n> > Thanks,\n> >\n> > Joe\n> >\n>\n> > Index: contrib/showguc/Makefile\n> > ===================================================================\n> > RCS file: contrib/showguc/Makefile\n> > diff -N contrib/showguc/Makefile\n> > *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> > --- contrib/showguc/Makefile\t27 May 2002 00:24:44 -0000\n> > ***************\n> > *** 0 ****\n> > --- 1,9 ----\n> > + subdir = contrib/showguc\n> > + top_builddir = ../..\n> > + include $(top_builddir)/src/Makefile.global\n> > +\n> > + MODULES = showguc\n> > + DATA_built = showguc.sql\n> > + DOCS = README.showguc\n> > +\n> > + include $(top_srcdir)/contrib/contrib-global.mk\n> > Index: contrib/showguc/README.showguc\n> > ===================================================================\n> > RCS file: contrib/showguc/README.showguc\n> > diff -N contrib/showguc/README.showguc\n> > *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> > --- contrib/showguc/README.showguc\t10 Jun 2002 00:16:48 -0000\n> > ***************\n> > *** 0 ****\n> > --- 1,105 ----\n> > + /*\n> > + * showguc\n> > + *\n> > + * Sample to demonstrate a C function which returns setof composite.\n> > + * Joe Conway <mail@joeconway.com>\n> > + *\n> > + * Copyright 2002 by PostgreSQL Global Development Group\n> > + *\n> > + * Permission to use, copy, modify, and distribute this software and its\n> > + * documentation for any purpose, without fee, and without a written agreement\n> > + * is hereby granted, provided that the above copyright notice and this\n> > 
+ * paragraph and the following two paragraphs appear in all copies.\n> > + *\n> > + * IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR\n> > + * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> > + * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> > + * DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE\n> > + * POSSIBILITY OF SUCH DAMAGE.\n> > + *\n> > + * THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,\n> > + * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> > + * AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS\n> > + * ON AN \"AS IS\" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO\n> > + * PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> > + *\n> > + */\n> > + Version 0.1 (9 June, 2002):\n> > + First release\n> > +\n> > + Release Notes:\n> > +\n> > + Version 0.1\n> > + - initial release\n> > +\n> > + Installation:\n> > + Place these files in a directory called 'showguc' under 'contrib' in the PostgreSQL source tree. Then run:\n> > +\n> > + make\n> > + make install\n> > +\n> > + You can use showguc.sql to create the functions in your database of choice, e.g.\n> > +\n> > + psql -U postgres template1 < showguc.sql\n> > +\n> > + installs following functions into database template1:\n> > +\n> > + show_all_vars() - returns all GUC variables\n> > + show_var(text) - returns value of the requested GUC variable\n> > +\n> > + Documentation\n> > + ==================================================================\n> > + Name\n> > +\n> > + show_all_vars() - returns all GUC variables\n> > +\n> > + Synopsis\n> > +\n> > + show_all_vars()\n> > +\n> > + Inputs\n> > +\n> > + none\n> > +\n> > + Outputs\n> > +\n> > + Returns setof __gucvar, where __gucvar is (varname TEXT, varval TEXT). 
All\n> > + GUC variables displayed by SHOW ALL are returned as a set.\n> > +\n> > + Example usage\n> > +\n> > + test=# select * from show_all_vars() where varname = 'wal_sync_method';\n> > + varname | varval\n> > + -----------------+-----------\n> > + wal_sync_method | fdatasync\n> > + (1 row)\n> > +\n> > + ==================================================================\n> > + Name\n> > +\n> > + show_var(text varname) - returns value of GUC variable varname\n> > +\n> > + Synopsis\n> > +\n> > + show_var(varname)\n> > +\n> > + Inputs\n> > +\n> > + varname\n> > + The name of a GUC variable\n> > +\n> > + Outputs\n> > +\n> > + Returns the current value of varname.\n> > +\n> > + Example usage\n> > +\n> > + test=# select show_var('wal_sync_method');\n> > + show_var\n> > + -----------\n> > + fdatasync\n> > + (1 row)\n> > +\n> > + ==================================================================\n> > + -- Joe Conway\n> > +\n> > Index: contrib/showguc/showguc.c\n> > ===================================================================\n> > RCS file: contrib/showguc/showguc.c\n> > diff -N contrib/showguc/showguc.c\n> > *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> > --- contrib/showguc/showguc.c\t10 Jun 2002 00:02:14 -0000\n> > ***************\n> > *** 0 ****\n> > --- 1,152 ----\n> > + /*\n> > + * showguc\n> > + *\n> > + * Sample to demonstrate a C function which returns setof composite.\n> > + * Joe Conway <mail@joeconway.com>\n> > + *\n> > + * Copyright 2002 by PostgreSQL Global Development Group\n> > + *\n> > + * Permission to use, copy, modify, and distribute this software and its\n> > + * documentation for any purpose, without fee, and without a written agreement\n> > + * is hereby granted, provided that the above copyright notice and this\n> > + * paragraph and the following two paragraphs appear in all copies.\n> > + *\n> > + * IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR\n> > + * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL 
DAMAGES, INCLUDING\n> > + * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> > + * DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE\n> > + * POSSIBILITY OF SUCH DAMAGE.\n> > + *\n> > + * THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,\n> > + * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> > + * AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS\n> > + * ON AN \"AS IS\" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO\n> > + * PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> > + *\n> > + */\n> > + #include \"postgres.h\"\n> > +\n> > + #include \"fmgr.h\"\n> > + #include \"funcapi.h\"\n> > + #include \"utils/builtins.h\"\n> > + #include \"utils/guc.h\"\n> > +\n> > + #include \"showguc.h\"\n> > +\n> > + /*\n> > + * showguc_all - equiv to SHOW ALL command but implemented as\n> > + * an SRF.\n> > + */\n> > + PG_FUNCTION_INFO_V1(showguc_all);\n> > + Datum\n> > + showguc_all(PG_FUNCTION_ARGS)\n> > + {\n> > + \tFuncCallContext\t *funcctx;\n> > + \tTupleDesc\t\t\ttupdesc;\n> > + \tint\t\t\t\t\tcall_cntr;\n> > + \tint\t\t\t\t\tmax_calls;\n> > + \tTupleTableSlot\t *slot;\n> > + \tAttInMetadata\t *attinmeta;\n> > +\n> > + \t/* stuff done only on the first call of the function */\n> > + \tif(SRF_IS_FIRSTPASS())\n> > + \t{\n> > + \t\t/* create a function context for cross-call persistence */\n> > + \t\tfuncctx = SRF_FIRSTCALL_INIT();\n> > +\n> > + \t\t/*\n> > + \t\t * Build a tuple description for a pg__guc tuple\n> > + \t\t */\n> > + \t\ttupdesc = RelationNameGetTupleDesc(\"__gucvar\");\n> > +\n> > + \t\t/* allocate a slot for a tuple with this tupdesc */\n> > + \t\tslot = TupleDescGetSlot(tupdesc);\n> > +\n> > + \t\t/* assign slot to function context */\n> > + \t\tfuncctx->slot = slot;\n> > +\n> > + \t\t/*\n> > + \t\t * Generate attribute metadata needed later to produce tuples from raw\n> > + \t\t * C strings\n> > + \t\t */\n> > 
+ \t\tattinmeta = TupleDescGetAttInMetadata(tupdesc);\n> > + \t\tfuncctx->attinmeta = attinmeta;\n> > +\n> > + \t\t/* total number of tuples to be returned */\n> > + \t\tfuncctx->max_calls = GetNumGUCConfigOptions();\n> > + }\n> > +\n> > + \t/* stuff done on every call of the function */\n> > + \tfuncctx = SRF_PERCALL_SETUP(funcctx);\n> > +\n> > + \tcall_cntr = funcctx->call_cntr;\n> > + \tmax_calls = funcctx->max_calls;\n> > + \tslot = funcctx->slot;\n> > + \tattinmeta = funcctx->attinmeta;\n> > +\n> > + \tif (call_cntr < max_calls)\t/* do when there is more left to send */\n> > + \t{\n> > + \t\tchar\t *varname;\n> > + \t\tchar\t *varval;\n> > + \t\tchar\t **values;\n> > + \t\tHeapTuple\ttuple;\n> > + \t\tDatum\t\tresult;\n> > +\n> > + \t\t/*\n> > + \t\t * Get the next GUC variable name and value\n> > + \t\t */\n> > + \t\tvarval = GetGUCConfigOptionNum(call_cntr, &varname);\n> > +\n> > + \t\t/*\n> > + \t\t * Prepare a values array for storage in our slot.\n> > + \t\t * This should be an array of C strings which will\n> > + \t\t * be processed later by the appropriate \"in\" functions.\n> > + \t\t */\n> > + \t\tvalues = (char **) palloc(2 * sizeof(char *));\n> > + \t\tvalues[0] = varname;\n> > + \t\tvalues[1] = varval;\n> > +\n> > + \t\t/* build a tuple */\n> > + \t\ttuple = BuildTupleFromCStrings(attinmeta, values);\n> > +\n> > + \t\t/* make the tuple into a datum */\n> > + \t\tresult = TupleGetDatum(slot, tuple);\n> > +\n> > + \t\t/* Clean up */\n> > + \t\tpfree(varname);\n> > + \t\tpfree(values);\n> > +\n> > + \t\tSRF_RETURN_NEXT(funcctx, result);\n> > + \t}\n> > + \telse\t/* do when there is no more left */\n> > + \t{\n> > + \t\tSRF_RETURN_DONE(funcctx);\n> > + \t}\n> > + }\n> > +\n> > +\n> > + /*\n> > + * showguc_name - equiv to SHOW X command but implemented as\n> > + * a function.\n> > + */\n> > + PG_FUNCTION_INFO_V1(showguc_name);\n> > + Datum\n> > + showguc_name(PG_FUNCTION_ARGS)\n> > + {\n> > + \tchar *varname;\n> > + \tchar *varval;\n> > + \ttext 
*result_text;\n> > +\n> > + \t/* Get the GUC variable name */\n> > + \tvarname = DatumGetCString(DirectFunctionCall1(textout, PointerGetDatum(PG_GETARG_TEXT_P(0))));\n> > +\n> > + \t/* Get the value */\n> > + \tvarval = GetGUCConfigOptionName(varname);\n> > +\n> > + \t/* Convert to text */\n> > + \tresult_text = DatumGetTextP(DirectFunctionCall1(textin, CStringGetDatum(varval)));\n> > +\n> > + \t/* return it */\n> > + \tPG_RETURN_TEXT_P(result_text);\n> > + }\n> > +\n> > Index: contrib/showguc/showguc.h\n> > ===================================================================\n> > RCS file: contrib/showguc/showguc.h\n> > diff -N contrib/showguc/showguc.h\n> > *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> > --- contrib/showguc/showguc.h\t10 Jun 2002 00:01:02 -0000\n> > ***************\n> > *** 0 ****\n> > --- 1,37 ----\n> > + /*\n> > + * showguc\n> > + *\n> > + * Sample to demonstrate a C function which returns setof composite.\n> > + * Joe Conway <mail@joeconway.com>\n> > + *\n> > + * Copyright 2002 by PostgreSQL Global Development Group\n> > + *\n> > + * Permission to use, copy, modify, and distribute this software and its\n> > + * documentation for any purpose, without fee, and without a written agreement\n> > + * is hereby granted, provided that the above copyright notice and this\n> > + * paragraph and the following two paragraphs appear in all copies.\n> > + *\n> > + * IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR\n> > + * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> > + * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> > + * DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE\n> > + * POSSIBILITY OF SUCH DAMAGE.\n> > + *\n> > + * THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,\n> > + * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> > + * AND FITNESS FOR A PARTICULAR PURPOSE. 
THE SOFTWARE PROVIDED HEREUNDER IS\n> > + * ON AN \"AS IS\" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO\n> > + * PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> > + *\n> > + */\n> > +\n> > + #ifndef SHOWGUC_H\n> > + #define SHOWGUC_H\n> > +\n> > + /*\n> > + * External declarations\n> > + */\n> > + extern Datum showguc_all(PG_FUNCTION_ARGS);\n> > + extern Datum showguc_name(PG_FUNCTION_ARGS);\n> > +\n> > + #endif /* SHOWGUC_H */\n> > Index: contrib/showguc/showguc.sql.in\n> > ===================================================================\n> > RCS file: contrib/showguc/showguc.sql.in\n> > diff -N contrib/showguc/showguc.sql.in\n> > *** /dev/null\t1 Jan 1970 00:00:00 -0000\n> > --- contrib/showguc/showguc.sql.in\t10 Jun 2002 00:03:13 -0000\n> > ***************\n> > *** 0 ****\n> > --- 1,10 ----\n> > + CREATE VIEW __gucvar AS\n> > + SELECT\n> > + ''::TEXT AS varname,\n> > + ''::TEXT AS varval;\n> > +\n> > + CREATE OR REPLACE FUNCTION show_all_vars() RETURNS setof __gucvar\n> > + AS 'MODULE_PATHNAME','showguc_all' LANGUAGE 'c' STABLE STRICT;\n> > +\n> > + CREATE OR REPLACE FUNCTION show_var(text) RETURNS text\n> > + AS 'MODULE_PATHNAME','showguc_name' LANGUAGE 'c' STABLE STRICT;\n> > Index: src/backend/utils/misc/guc.c\n> > ===================================================================\n> > RCS file: /opt/src/cvs/pgsql/src/backend/utils/misc/guc.c,v\n> > retrieving revision 1.69\n> > diff -c -r1.69 guc.c\n> > *** src/backend/utils/misc/guc.c\t17 May 2002 20:32:29 -0000\t1.69\n> > --- src/backend/utils/misc/guc.c\t9 Jun 2002 22:51:45 -0000\n> > ***************\n> > *** 824,830 ****\n> >\n> >\n> > static int guc_var_compare(const void *a, const void *b);\n> > ! static void _ShowOption(struct config_generic *record);\n> >\n> >\n> > /*\n> > --- 824,830 ----\n> >\n> >\n> > static int guc_var_compare(const void *a, const void *b);\n> > ! 
static char *_ShowOption(struct config_generic *record);\n> >\n> >\n> > /*\n> > ***************\n> > *** 2204,2215 ****\n> > ShowGUCConfigOption(const char *name)\n> > {\n> > \tstruct config_generic *record;\n> >\n> > \trecord = find_option(name);\n> > \tif (record == NULL)\n> > \t\telog(ERROR, \"Option '%s' is not recognized\", name);\n> >\n> > ! \t_ShowOption(record);\n> > }\n> >\n> > /*\n> > --- 2204,2221 ----\n> > ShowGUCConfigOption(const char *name)\n> > {\n> > \tstruct config_generic *record;\n> > + \tchar *val;\n> >\n> > \trecord = find_option(name);\n> > \tif (record == NULL)\n> > \t\telog(ERROR, \"Option '%s' is not recognized\", name);\n> >\n> > ! \tval = _ShowOption(record);\n> > ! \tif(val != NULL)\n> > ! \t{\n> > ! \t\telog(INFO, \"%s is %s\", record->name, val);\n> > ! \t\tpfree(val);\n> > ! \t}\n> > }\n> >\n> > /*\n> > ***************\n> > *** 2219,2239 ****\n> > ShowAllGUCConfig(void)\n> > {\n> > \tint\t\t\ti;\n> >\n> > \tfor (i = 0; i < num_guc_variables; i++)\n> > \t{\n> > \t\tstruct config_generic *conf = guc_variables[i];\n> >\n> > \t\tif ((conf->flags & GUC_NO_SHOW_ALL) == 0)\n> > ! \t\t\t_ShowOption(conf);\n> > \t}\n> > }\n> >\n> > ! static void\n> > _ShowOption(struct config_generic *record)\n> > {\n> > \tchar\t\tbuffer[256];\n> > \tconst char *val;\n> >\n> > \tswitch (record->vartype)\n> > \t{\n> > --- 2225,2295 ----\n> > ShowAllGUCConfig(void)\n> > {\n> > \tint\t\t\ti;\n> > + \tchar\t *val;\n> >\n> > \tfor (i = 0; i < num_guc_variables; i++)\n> > \t{\n> > \t\tstruct config_generic *conf = guc_variables[i];\n> >\n> > \t\tif ((conf->flags & GUC_NO_SHOW_ALL) == 0)\n> > ! \t\t{\n> > ! \t\t\tval = _ShowOption(conf);\n> > ! \t\t\tif(val != NULL)\n> > ! \t\t\t{\n> > ! \t\t\t\telog(INFO, \"%s is %s\", conf->name, val);\n> > ! \t\t\t\tpfree(val);\n> > ! \t\t\t}\n> > ! \t\t}\n> > \t}\n> > }\n> >\n> > ! /*\n> > ! * Return GUC variable value by name\n> > ! */\n> > ! char *\n> > ! GetGUCConfigOptionName(const char *name)\n> > ! {\n> > ! 
\tstruct config_generic *record;\n> > !\n> > ! \trecord = find_option(name);\n> > ! \tif (record == NULL)\n> > ! \t\telog(ERROR, \"Option '%s' is not recognized\", name);\n> > !\n> > ! \treturn _ShowOption(record);\n> > ! }\n> > !\n> > ! /*\n> > ! * Return GUC variable value and set varname for a specific\n> > ! * variable by number.\n> > ! */\n> > ! char *\n> > ! GetGUCConfigOptionNum(int varnum, char **varname)\n> > ! {\n> > ! \tstruct config_generic *conf = guc_variables[varnum];\n> > !\n> > ! \t*varname = pstrdup(conf->name);\n> > !\n> > ! \tif ((conf->flags & GUC_NO_SHOW_ALL) == 0)\n> > ! \t\treturn _ShowOption(conf);\n> > ! \telse\n> > ! \t\treturn NULL;\n> > ! }\n> > !\n> > ! /*\n> > ! * Return the total number of GUC variables\n> > ! */\n> > ! int\n> > ! GetNumGUCConfigOptions(void)\n> > ! {\n> > ! \treturn num_guc_variables;\n> > ! }\n> > !\n> > ! static char *\n> > _ShowOption(struct config_generic *record)\n> > {\n> > \tchar\t\tbuffer[256];\n> > \tconst char *val;\n> > + \tchar\t *retval;\n> >\n> > \tswitch (record->vartype)\n> > \t{\n> > ***************\n> > *** 2297,2303 ****\n> > \t\t\tbreak;\n> > \t}\n> >\n> > ! \telog(INFO, \"%s is %s\", record->name, val);\n> > }\n> >\n> >\n> > --- 2353,2361 ----\n> > \t\t\tbreak;\n> > \t}\n> >\n> > ! \tretval = pstrdup(val);\n> > !\n> > ! 
\treturn retval;\n> > }\n> >\n> >\n> > Index: src/include/utils/guc.h\n> > ===================================================================\n> > RCS file: /opt/src/cvs/pgsql/src/include/utils/guc.h,v\n> > retrieving revision 1.17\n> > diff -c -r1.17 guc.h\n> > *** src/include/utils/guc.h\t17 May 2002 01:19:19 -0000\t1.17\n> > --- src/include/utils/guc.h\t9 Jun 2002 22:45:20 -0000\n> > ***************\n> > *** 86,91 ****\n> > --- 86,94 ----\n> > \t\t\t\t\t\t\t bool isLocal, bool DoIt);\n> > extern void ShowGUCConfigOption(const char *name);\n> > extern void ShowAllGUCConfig(void);\n> > + extern char *GetGUCConfigOptionName(const char *name);\n> > + extern char *GetGUCConfigOptionNum(int varnum, char **varname);\n> > + extern int GetNumGUCConfigOptions(void);\n> >\n> > extern void SetPGVariable(const char *name, List *args, bool is_local);\n> > extern void GetPGVariable(const char *name);\n> >\n>\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n\n\n",
"msg_date": "Tue, 18 Jun 2002 23:55:52 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: contrib/showguc (was Re: [HACKERS] revised sample"
},
{
"msg_contents": "<moving back to HACKERS for the discussion>\n\nPeter Eisentraut wrote:\n> OK, I've been looking at this package for some time through various\n> iterations and I have my doubts about it.\n> \n> What's going to happen to this when SHOW ALL is changed to return a query\n> result? If you want to provide an example of a set-returning function,\n> use something of lasting value, maybe generate some mathematic sequence.\n\nWell, I wanted to implement this as a functional equivalent of SHOW ALL, \nin the backend, but there was no way to bootstrap a builtin SRF that \nwasn't also too ugly to be acceptable (at least that I could come up \nwith -- any suggestions?).\n\nAnd while SHOW ALL *could* be implemented in a similar fashion to \nEXPLAIN, with that approach you could not use a WHERE clause, or join \nthe results with any other data. IMHO that significantly reduces the \nutility of returning SHOW ALL results as a query in the first place.\n\nI'd be happy to produce a different function as a reference \nimplementation, but it seemed that there was sufficient demand in the \npast that this was a useful example. If the consensus is that \ncontrib/showguc (renamed, see below) is a bad idea, then I'll come up \nwith something else.\n\n> \n> Also, the first place this sort of material should go is the\n> documentation, not hidden somewhere in contrib.\n\nNo doubt, and there *will* be documentation. I was waiting for the API \nand example to stabilize a bit first. There is no sense in documenting a \nmoving target.\n\n> \n> In any case, please don't expose the name \"GUC\" to user space.\n\nOK. If I replace user space references to GUC with something more \npalatable, are the guc.c and guc.h changes at least acceptable? With \nthem, user functions can at least read configuration variables. GUC \nvariables are inaccessible otherwise.\n\nPlease let me know the desired direction for this, and I'll adjust \naccordingly.\n\nThanks,\n\nJoe\n\n\n",
"msg_date": "Tue, 18 Jun 2002 18:24:29 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: [PATCHES] contrib/showguc (was Re: revised sample"
},
{
"msg_contents": "Joe Conway writes:\n\n> Well, I wanted to implement this as a functional equivalent of SHOW ALL,\n> in the backend, but there was no way to bootstrap a builtin SRF that\n> wasn't also too ugly to be acceptable (at least that I could come up\n> with -- any suggestions?).\n\nWell, if you want to provide a really simple example (which might not be a\nbad idea), just return N random numbers, where N is passed as an argument.\nIf you want to add some substance, generate those numbers yourself using\nsome algorithm to achieve a certain kind of distribution. (This might\nrequire you to keep some state between calls, which would be interesting\nto see.)\n\nAs far as returning composite types, that's really a separate thing, so\nthere ought to be a separate example.\n\n> And while SHOW ALL *could* be implemented in a similar fashion to\n> EXPLAIN, with that approach you could not use a WHERE clause, or join\n> the results with any other data. IMHO that significantly reduces the\n> utility of returning SHOW ALL results as a query in the first place.\n\nSHOW ALL is a red herring. What we need is simple SHOW to return a query\nresult. Or we need a simple function that takes one argument of type text\n(or name) and returns one datum of type text with the value of the\nparameter. If, as you say, you'd like to join the results with other\ncomputations, the latter would be for you.\n\n\nA random observation: Your SRF API seems to require that you determine\nthe maximum number of calls ahead of time. Is this necessary? It might\nbe interesting, for instance, to create mathematical sequences and have it\nterminate at some condition. Instead of the max_calls and call counter\nyou could provide some space for free use.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 19 Jun 2002 23:13:10 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: [PATCHES] contrib/showguc (was Re: revised sample"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Well, if you want to provide a really simple example (which might not be a\n> bad idea), just return N random numbers, where N is passed as an argument.\n> If you want to add some substance, generate those numbers yourself using\n> some algorithm to achieve a certain kind of distribution. (This might\n> require you to keep some state between calls, which would be interesting\n> to see.)\n> \n> As far as returning composite types, that's really a separate thing, so\n> there ought to be a separate example.\n\n\nOK. I'll create a simpler example of both returning composite and \nreturning a set.\n\n\n> SHOW ALL is a red herring. What we need is simple SHOW to return a query\n> result. Or we need a simple function that takes one argument of type text\n> (or name) and returns one datum of type text with the value of the\n> parameter. If, as you say, you'd like to join the results with other\n> computations, the latter would be for you.\n\nFair enough. There was already a function in the submitted contrib which \ndid just that. I'll rework it into a backend function and resubmit.\n\n\n> A random observation: Your SRF API seems to require that you determine\n> the maximum number of calls ahead of time. Is this necessary? It might\n> be interesting, for instance, to create mathematical sequences and have it\n> terminate at some condition.\n\nThe max calls is a purely optional part of the API. If a different way \nof determining when you're \"done\" exists for a particular application, \nthat should be used instead. I will make sure this fact is prominent in \nthe documentation.\n\n > Instead of the max_calls and call counter you could provide some space\n > for free use.\n\nWell, that was the idea with:\n\t/* pointer to misc context info */\n\tvoid\t\t *fctx;\nYou can use this to keep track of any context info you want. 
I suppose \ncall_cntr and max_calls could be part of the user provided context, but \nsince they will be used frequently (at least in my experience so far \nthey have been needed each time), I thought making them part of the \nstructure for the sake of convenience was worth it.\n\nThank you for the feedback!\n\nJoe\n\n\n",
"msg_date": "Wed, 19 Jun 2002 16:52:38 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [Fwd: [PATCHES] contrib/showguc (was Re: revised sample"
}
] |
[
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Monday, June 17, 2002 6:01 PM\n> To: Jan Wieck\n> Cc: Peter Eisentraut; PostgreSQL-development\n> Subject: Re: [HACKERS] Roadmap for a Win32 port\n> \n> \n> Jan Wieck wrote:\n> > > And pg_ctl will be run with a symlink to postmaster like postgres,\n> > > right? Makes sense.\n> > \n> > No symlink. Windows doesn't have symlinks, the \"link\" stuff you\n> > see is just some file with a special meaning for the Windows\n> > explorer. There is absolutely no support built into the OS. They\n> > really haven't learned a lot since the DOS times, when they added\n> > \".\" and \"..\" entries to directories to \"look\" similar to UNIX.\n> > Actually, they never really understood what a hardlink is in the\n> > first place, so why do you expect them to know how to implement\n> > symbolic ones?\n> > \n> > It will be at least another copy of the postmaster (dot exe).\n> \n> Yea, I just liked the idea of the postmaster binary somehow reporting\n> the postmaster status. Seems it is in a better position to \n> do that than\n> a shell script.\n\nArchitectural notion:\nThe Postmaster is about 100x bigger than it needs to be.\n\nThe Postmaster needs to set up shared memory and launch servers. It\ndoes not need to know anything about SQL grammar or any of that\nrigamarole.\n\nIt could be a 15K executable.\n\nWhy not have an itty-bitty Postmaster that does nothing but a spawn or a\ncreate process to create threaded Postgres instances?\n\nJust a notion.\n",
"msg_date": "Mon, 17 Jun 2002 18:17:51 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Roadmap for a Win32 port"
},
{
"msg_contents": "Dann Corbit wrote:\n> > > It will be at least another copy of the postmaster (dot exe).\n> > \n> > Yea, I just liked the idea of the postmaster binary somehow reporting\n> > the postmaster status. Seems it is in a better position to \n> > do that than\n> > a shell script.\n> \n> Architectural notion:\n> The Postmaster is about 100x bigger than it needs to be.\n> \n> The Postmaster needs to set up shared memory and launch servers. It\n> does not need to know anything about SQL grammar or any of that\n> rigamarole.\n> \n> It could be a 15K executable.\n> \n> Why not have an itty-bitty Postmaster that does nothing but a spawn or a\n> create process to create threaded Postgres instances?\n\nCan't. postmaster/postgres are symlinks to the same file, and we fork()\nfrom postmaster to create backends. All the code has to be in the\npostmaster so the fork works.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 21:19:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Roadmap for a Win32 port"
}
] |
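Bruce's point that postmaster and postgres are one binary reached through symlinks rests on the classic argv[0] dispatch trick: the program inspects the name it was invoked under and behaves accordingly. A minimal sketch in C (the helper names here are hypothetical, not taken from the PostgreSQL sources):

```c
#include <assert.h>
#include <string.h>

/* Return the basename portion of argv[0], i.e. the name the binary
 * was invoked under, regardless of any leading directory path. */
const char *progname_of(const char *argv0)
{
    const char *slash = strrchr(argv0, '/');
    return slash ? slash + 1 : argv0;
}

/* One executable, two personalities: when run via the "postmaster"
 * symlink it would start the supervisor; via "postgres" a backend. */
int is_postmaster(const char *argv0)
{
    return strcmp(progname_of(argv0), "postmaster") == 0;
}
```

This is also why, as the thread notes, a filesystem without real symlinks forces either a second copy of the executable or a different dispatch mechanism.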
[
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Monday, June 17, 2002 6:20 PM\n> To: Dann Corbit\n> Cc: Jan Wieck; Peter Eisentraut; PostgreSQL-development\n> Subject: Re: [HACKERS] Roadmap for a Win32 port\n> \n> \n> Dann Corbit wrote:\n> > > > It will be at least another copy of the postmaster (dot exe).\n> > > \n> > > Yea, I just liked the idea of the postmaster binary \n> somehow reporting\n> > > the postmaster status. Seems it is in a better position to \n> > > do that than\n> > > a shell script.\n> > \n> > Architectural notion:\n> > The Postmaster is about 100x bigger than it needs to be.\n> > \n> > The Postmaster needs to set up shared memory and launch servers. It\n> > does not need to know anything about SQL grammar or any of that\n> > rigamarole.\n> > \n> > It could be a 15K executable.\n> > \n> > Why not have an itty-bitty Postmaster that does nothing but \n> a spawn or a\n> > create process to create threaded Postgres instances?\n> \n> Can't. postmaster/postgres are symlinks to the same file, \n> and we fork()\n> from postmaster to create backends. All the code has to be in the\n> postmaster so the fork works.\n\nIs fork() faster than creation of a new process via exec()? After the\ncreation of the shared memory, the information needed to use it could be\npassed to the Postgres servers on the command line.\n\nThe startup stuff for PostgreSQL is just a few files. It does not seem\ninsurmountable to change it. But it is none of my business. If it is a\nmajor hassle (for reasons which I am not aware) then I see no driving\nreason to change it.\n",
"msg_date": "Mon, 17 Jun 2002 18:25:48 -0700",
"msg_from": "\"Dann Corbit\" <DCorbit@connx.com>",
"msg_from_op": true,
"msg_subject": "Re: Roadmap for a Win32 port"
},
{
"msg_contents": "Dann Corbit wrote:\n> > Can't. postmaster/postgres are symlinks to the same file, \n> > and we fork()\n> > from postmaster to create backends. All the code has to be in the\n> > postmaster so the fork works.\n> \n> Is fork() faster than creation of a new process via exec()? After the\n> creation of the shared memory, the information needed to use it could be\n> passed to the Postgres servers on the command line.\n> \n> The startup stuff for PostgreSQL is just a few files. It does not seem\n> insurmountable to change it. But it is none of my business. If it is a\n> major hassle (for reasons which I am not aware) then I see no driving\n> reason to change it.\n\nWe used to fork() and exec(), but that was slow. Now we preload stuff\nin the postmaster for each backend. It is faster.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 17 Jun 2002 21:27:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Roadmap for a Win32 port"
},
{
"msg_contents": "Dann Corbit wrote:\n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > Sent: Monday, June 17, 2002 6:20 PM\n> > To: Dann Corbit\n> > Cc: Jan Wieck; Peter Eisentraut; PostgreSQL-development\n> > Subject: Re: [HACKERS] Roadmap for a Win32 port\n> >\n> >\n> > Dann Corbit wrote:\n> > > > > It will be at least another copy of the postmaster (dot exe).\n> > > >\n> > > > Yea, I just liked the idea of the postmaster binary\n> > somehow reporting\n> > > > the postmaster status. Seems it is in a better position to\n> > > > do that than\n> > > > a shell script.\n> > >\n> > > Architectural notion:\n> > > The Postmaster is about 100x bigger than it needs to be.\n> > >\n> > > The Postmaster needs to set up shared memory and launch servers. It\n> > > does not need to know anything about SQL grammar or any of that\n> > > rigamarole.\n> > >\n> > > It could be a 15K executable.\n> > >\n> > > Why not have an itty-bitty Postmaster that does nothing but\n> > a spawn or a\n> > > create process to create threaded Postgres instances?\n> >\n> > Can't. postmaster/postgres are symlinks to the same file,\n> > and we fork()\n> > from postmaster to create backends. All the code has to be in the\n> > postmaster so the fork works.\n> \n> Is fork() faster than creation of a new process via exec()? After the\n> creation of the shared memory, the information needed to use it could be\n> passed to the Postgres servers on the command line.\n\nexec() does NOT create new processes. It loads another executable file\ninto the existing, calling process. \n\nfork() duplicates the calling process. In modern unix variants, this is\ndone in a very efficient way, so that the text segment (program code) is\nshared readonly and everything else (data and stack segments) are shared\ncopy on write. 
Thus, fork() itself doesn't even cause memory copying.\nThat happens later when one of the now two processes writes to a memory\npage the first time.\n\nWindows does not have these two separate steps. It wants the full blown\nexpensive \"create process and load executable\", or the \"let's all muck\naround with the same handles\" model, called threading. \n\n> \n> The startup stuff for PostgreSQL is just a few files. It does not seem\n> insurmountable to change it. But it is none of my business. If it is a\n> major hassle (for reasons which I am not aware) then I see no driving\n> reason to change it.\n\nIt has to be changed for Windows, it is a major hassle for reasons I\nwasn't aware of, and I am half way through ;-)\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Tue, 18 Jun 2002 10:07:26 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Roadmap for a Win32 port"
},
{
"msg_contents": "On Tue, 2002-06-18 at 09:07, Jan Wieck wrote:\n> Dann Corbit wrote:\n> > \n> > The startup stuff for PostgreSQL is just a few files. It does not seem\n> > insurmountable to change it. But it is none of my business. If it is a\n> > major hassle (for reasons which I am not aware) then I see no driving\n> > reason to change it.\n> \n> It has to be changed for Windows, it is a major hassle for reasons I\n> wasn't aware of, and I am half way through ;-)\n> \n\nWell, if you're going to go through the trouble of rewriting postmaster\nto be win32 specific, you might as well make it hook into the services\ncontrol panel.\n\nIf I recall, shared memory is \"owned\" by a process in Win32 (please\ncorrect as needed...as I've slept since I last looked). That means that\nthe postmaster process not only owns the shared memory but needs to make\nsure that it's persists as the rest of postgres is expecting.\n\nPlease provide more details as to the nature of your Win32 changes. I'm\ncertainly curious. If you've already covered them, just say so...I have\nno problem going back to the archives! :)\n\nGreg",
"msg_date": "18 Jun 2002 10:56:38 -0500",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Roadmap for a Win32 port"
}
] |
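Jan's description of fork() copy-on-write semantics can be demonstrated with a short POSIX C program (illustrative only, not PostgreSQL code): the child's first write to a shared page triggers the duplication, so the parent's copy of the variable is untouched.

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that rewrites its copy of `value`. The parent's copy
 * stays at 42 because the page is duplicated copy-on-write at the
 * child's first write, not at fork() time. Returns 1 when the child
 * exited cleanly and the parent's copy is unchanged, 0 otherwise. */
int demo_fork_cow(void)
{
    int   value = 42;
    int   status;
    pid_t pid = fork();

    if (pid < 0)
        return 0;                       /* fork failed */

    if (pid == 0)
    {
        /* Child: this write triggers the copy-on-write duplication. */
        value = 99;
        _exit(value == 99 ? 0 : 1);
    }

    /* Parent: reap the child, then verify our copy still reads 42. */
    if (waitpid(pid, &status, 0) != pid)
        return 0;
    if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
        return 0;

    return value == 42;
}
```

The child here inherits the parent's full address space for free; on Win32, as the thread observes, the equivalent requires CreateProcess plus an explicit hand-off of any shared state.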
[
{
"msg_contents": "In order to make domains spec compliant I've added the ability for the\nexecutor to handle Constraint * types, which it applies appropriately.\ncoerce_type wraps domains with the appropriate Constraint Nodes as\nrequired. I've run into a rather simple problem however.\n\nCREATE DOMAIN int4notnulldomain int4 NOT NULL;\nSELECT cast(cast(NULL as int4) as int4notnulldomain); -- constraint\napplied properly\n\nSELECT cast(NULL as int4notnulldomain); -- constraint missed.\n\n\nThis appears to be due to makeTypeCast() in gram.y which bypasses\ncreating a TypeCast node for simple A_Const.\n\nRemoving the top part of the if (always creating a TypeCast node)\ncauses some rather extensive failures in the regression tests,\nspecifically with 'int4'::regproc type constructs.\n\nAny advice?\n--\nRod\n\n",
"msg_date": "Mon, 17 Jun 2002 22:42:22 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Domains and Casting"
},
{
"msg_contents": "\"Rod Taylor\" <rbt@zort.ca> writes:\n> This appears to be due to makeTypeCast() in gram.y which bypasses\n> creating a TypeCast node for simple A_Const.\n\nMy immediate reaction is that you've probably put the testing of\ndomain constraints in the wrong place. You didn't say exactly\nwhat your implementation looked like though ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 10:58:16 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Domains and Casting "
},
{
"msg_contents": "Erm... I suppose I didn't really intend to bring up domains at all.\nI'm just playing trying to figure out how things work (easiest by\nbreaking them I think).\n\nI don't understand why the below patch has such an adverse affect on\nthe system.\nCauses:\n\n (p2.pronargs != 3 OR p2.proretset OR p2.proargtypes[2] !=\n'int4'::regtype);\n! ERROR: Invalid type name 'int4'\n\nor\n\n (p2.oprkind != 'b' OR p2.oprresult != 'bool'::regtype OR\n! ERROR: Invalid type name 'bool'\n\n\n\nIndex: src/backend/parser/gram.y\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/backend/parser/gram.y,v\nretrieving revision 2.314\ndiff -c -r2.314 gram.y\n*** src/backend/parser/gram.y 2002/05/12 20:10:04 2.314\n--- src/backend/parser/gram.y 2002/06/19 00:54:44\n***************\n*** 6424,6442 ****\n * (We don't want to collapse x::type1::type2 into just x::type2.)\n * Otherwise, generate a TypeCast node.\n */\n! if (IsA(arg, A_Const) &&\n! ((A_Const *) arg)->typename == NULL)\n! {\n! ((A_Const *) arg)->typename = typename;\n! return arg;\n! }\n! else\n! {\n TypeCast *n = makeNode(TypeCast);\n n->arg = arg;\n n->typename = typename;\n return (Node *) n;\n! }\n }\n\n static Node *\n--- 6424,6442 ----\n * (We don't want to collapse x::type1::type2 into just x::type2.)\n * Otherwise, generate a TypeCast node.\n */\n! // if (IsA(arg, A_Const) &&\n! // ((A_Const *) arg)->typename == NULL)\n! // {\n! // ((A_Const *) arg)->typename = typename;\n! // return arg;\n! // }\n! // else\n! // {\n TypeCast *n = makeNode(TypeCast);\n n->arg = arg;\n n->typename = typename;\n return (Node *) n;\n! 
// }\n }\n\n static Node *\n--\nRod\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Tuesday, June 18, 2002 10:58 AM\nSubject: Re: [HACKERS] Domains and Casting\n\n\n> \"Rod Taylor\" <rbt@zort.ca> writes:\n> > This appears to be due to makeTypeCast() in gram.y which bypasses\n> > creating a TypeCast node for simple A_Const.\n>\n> My immediate reaction is that you've probably put the testing of\n> domain constraints in the wrong place. You didn't say exactly\n> what your implementation looked like though ...\n>\n> regards, tom lane\n>\n\n",
"msg_date": "Tue, 18 Jun 2002 20:58:12 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Re: Domains and Casting "
}
] |
[
{
"msg_contents": "I finally hit bison's limit and cannot find any easy to remove rules in\nthe ecpg part of the parser anymore. There may be some in the backend\npart, but I'd like to keep those in sync.\n\nFor the time being I update my machine to a development snapshot bison\n1.49, but that doesn't look like a good solution. After all it hasn't\nbeen released yet. \n\nSince I suppose almost no one of you outthere uses a development version\nof bison I cannot commit my changes, or else, you all cannot compile\necpg anymore. \n\nSo what do we do?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Tue, 18 Jun 2002 15:14:01 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "ECPG won't compile anymore"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> I finally hit bison's limit and cannot find any easy to remove rules in\n> the ecpg part of the parser anymore. There may be some in the backend\n> part, but I'd like to keep those in sync.\n\n> So what do we do?\n\nI'd be inclined to say that you don't commit until bison 1.49 is\nofficially released. Got any idea when that will be?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 10:29:10 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ECPG won't compile anymore "
},
{
"msg_contents": "On Tue, Jun 18, 2002 at 10:29:10AM -0400, Tom Lane wrote:\n> I'd be inclined to say that you don't commit until bison 1.49 is\n> officially released. Got any idea when that will be?\n\nNo, that's the problem. ECPG and the backend parser are running out of\nsync. After all bison's release may be later than our next one. \n\nI cannot commit even simple bugfixes anymore as my source tree\nalready has the uncompilable bison file. So I would have to work on two\ndifferent source trees. I don't exactly like that.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Tue, 18 Jun 2002 17:09:43 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ECPG won't compile anymore"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> On Tue, Jun 18, 2002 at 10:29:10AM -0400, Tom Lane wrote:\n>> I'd be inclined to say that you don't commit until bison 1.49 is\n>> officially released. Got any idea when that will be?\n\n> No, that's the problem. ECPG and the backend parser are running out of\n> sync. After all bison's release may be later than our next one. \n\nThat would be trouble, but considering that we are not even thinking of\ngoing beta before late August, is it really a realistic risk? bison\nseems to be making releases quite frequently lately. They were at 1.30\nback in November, according to my archives, so that's 19 releases in the\nlast 8 months.\n\nIf we get to August and there's no official release of bison with the\nlarger table size, then it will be time to worry, IMHO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 13:57:37 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ECPG won't compile anymore "
},
{
"msg_contents": "Michael Meskes wrote:\n> On Tue, Jun 18, 2002 at 10:29:10AM -0400, Tom Lane wrote:\n> > I'd be inclined to say that you don't commit until bison 1.49 is\n> > officially released. Got any idea when that will be?\n> \n> No, that's the problem. ECPG and the backend parser are running out of\n> sync. After all bison's release may be later than our next one. \n> \n> I cannot commit even simple bugfixes anymore as my source tree\n> already has the uncompilable bison file. So I would have to work on two\n> different source trees. I don't exactly like that.\n\nAre we the only ones up against this problem? Hard to imagine we are\nthe only ones up against this limit in bison. Are there other options? \nI don't see how we can distribute ecpg in 7.3 without some kind of fix.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 18 Jun 2002 14:12:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ECPG won't compile anymore"
},
{
"msg_contents": "On Tue, Jun 18, 2002 at 01:57:37PM -0400, Tom Lane wrote:\n> That would be trouble, but considering that we are not even thinking of\n> going beta before late August, is it really a realistic risk? bison\n\nYes. After all it's much easier to sync the two if I get smaller\nchanges. \n\n> seems to be making releases quite frequently lately. They were at 1.30\n> back in November, according to my archives, so that's 19 releases in the\n> last 8 months.\n\nTrue. But most of them are not for releases. I have no idea which version\nnumber will be the one they release. Up to 1.35 they released\nfrequently, but then it stopped and 1.49 is quite far from 1.35.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 19 Jun 2002 09:34:20 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ECPG won't compile anymore"
},
{
"msg_contents": "On Tue, Jun 18, 2002 at 02:12:45PM -0400, Bruce Momjian wrote:\n> Are we the only ones up against this problem? Hard to imagine we are\n\nNo, there are more, that's why bison is worked on.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 19 Jun 2002 09:35:25 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ECPG won't compile anymore"
},
{
"msg_contents": "Michael Meskes wrote:\n> On Tue, Jun 18, 2002 at 02:12:45PM -0400, Bruce Momjian wrote:\n> > Are we the only ones up against this problem? Hard to imagine we are\n> \n> No, there are more, that's why bison is worked on.\n\nAll I can say is that I am making incremental commits to gram.y, so you\ncan ignore the reformatting commits and focus on the ones that affect\nyour grammar. Not sure what else can be done.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Jun 2002 11:50:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ECPG won't compile anymore"
}
] |
[
{
"msg_contents": "Hello,\n\nI am new to PostgreSQL, but I am interested in the Win32 port.\nI have studied the architecture of other databases like Oracle.\n\nThey have had to turn their multi-process model used on Unix into a fully\nmulti-threaded one on Win32. I have the feeling that they have had the same\ndebate that the one you have.\n\nThe CreateProcess() syscall is very costly on Windows. Some improvements\nhave been done in Windows XP but it is still far more costly than a Unix\nfork().\n\nI have been programming with threads on NT for a long time now.\nThey are quiet robust and efficient. I fear that it is the only successful\nway to port PostgreSQL.\n\nSorry for this interruption,\nSerge\n\n-----Original Message-----\nFrom: Jan Wieck [mailto:JanWieck@Yahoo.com] \nSent: Tuesday, June 18, 2002 16:07\nTo: Dann Corbit\nCc: Bruce Momjian; Peter Eisentraut; PostgreSQL-development\nSubject: Re: [HACKERS] Roadmap for a Win32 port\n\nDann Corbit wrote:\n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > Sent: Monday, June 17, 2002 6:20 PM\n> > To: Dann Corbit\n> > Cc: Jan Wieck; Peter Eisentraut; PostgreSQL-development\n> > Subject: Re: [HACKERS] Roadmap for a Win32 port\n> >\n> >\n> > Dann Corbit wrote:\n> > > > > It will be at least another copy of the postmaster (dot exe).\n> > > >\n> > > > Yea, I just liked the idea of the postmaster binary\n> > somehow reporting\n> > > > the postmaster status. Seems it is in a better position to\n> > > > do that than\n> > > > a shell script.\n> > >\n> > > Architectural notion:\n> > > The Postmaster is about 100x bigger than it needs to be.\n> > >\n> > > The Postmaster needs to set up shared memory and launch servers. 
It\n> > > does not need to know anything about SQL grammar or any of that\n> > > rigamarole.\n> > >\n> > > It could be a 15K executable.\n> > >\n> > > Why not have an itty-bitty Postmaster that does nothing but\n> > a spawn or a\n> > > create process to create threaded Postgres instances?\n> >\n> > Can't. postmaster/postgres are symlinks to the same file,\n> > and we fork()\n> > from postmaster to create backends. All the code has to be in the\n> > postmaster so the fork works.\n> \n> Is fork() faster than creation of a new process via exec()? After the\n> creation of the shared memory, the information needed to use it could be\n> passed to the Postgres servers on the command line.\n\nexec() does NOT create new processes. It loads another executable file\ninto the existing, calling process. \n\nfork() duplicates the calling process. In modern unix variants, this is\ndone in a very efficient way, so that the text segment (program code) is\nshared readonly and everything else (data and stack segments) are shared\ncopy on write. Thus, fork() itself doesn't even cause memory copying.\nThat happens later when one of the now two processes writes to a memory\npage the first time.\n\nWindows does not have these two separate steps. It wants the full blown\nexpensive \"create process and load executable\", or the \"let's all muck\naround with the same handles\" modell, called threading. \n\n> \n> The startup stuff for PostgreSQL is just a few files. It does not seem\n> insurmountable to change it. But it is none of my business. If it is a\n> major hassle (for reasons which I am not aware) then I see no driving\n> reason to change it.\n\nIt has to be changed for Windows, it is a major hassle for reasons I\nwasn't aware of, and I am half way through ;-)\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n",
"msg_date": "Tue, 18 Jun 2002 18:42:33 +0200",
"msg_from": "Serge Adda <sAdda@infovista.com>",
"msg_from_op": true,
"msg_subject": "Re: Roadmap for a Win32 port"
},
{
"msg_contents": "\nI know that Apache Group created special library to handle difference\nbetween different platforms (including win32). They had similar problems\nporting Apache to Windows. They build very portable threads api (win32,\nPOSIX, native Linux thread and more) There is also all IPC stuff (mutex,\nsignals mmap etc.) and many more. This functions work both on unix and\nwindows and use most effective implementation (e.g. POSIX functions on\nWinodws are slow compared to native).\n\nhttp://apr.apache.org/\n\n\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Serge Adda\nSent: Tuesday, June 18, 2002 6:43 PM\nTo: 'Jan Wieck'; 'Dann Corbit'\nCc: 'Bruce Momjian'; 'Peter Eisentraut'; 'PostgreSQL-development'\nSubject: Re: [HACKERS] Roadmap for a Win32 port\n\nHello,\n\nI am new to PostgreSQL, but I am interested in the Win32 port.\nI have studied the architecture of other databases like Oracle.\n\nThey have had to turn their multi-process model used on Unix into a\nfully\nmulti-threaded one on Win32. I have the feeling that they have had the\nsame\ndebate that the one you have.\n\nThe CreateProcess() syscall is very costly on Windows. Some improvements\nhave been done in Windows XP but it is still far more costly than a Unix\nfork().\n\nI have been programming with threads on NT for a long time now.\nThey are quiet robust and efficient. 
I fear that it is the only\nsuccessful\nway to port PostgreSQL.\n\nSorry for this interruption,\nSerge\n\n-----Original Message-----\nFrom: Jan Wieck [mailto:JanWieck@Yahoo.com] \nSent: Tuesday, June 18, 2002 16:07\nTo: Dann Corbit\nCc: Bruce Momjian; Peter Eisentraut; PostgreSQL-development\nSubject: Re: [HACKERS] Roadmap for a Win32 port\n\nDann Corbit wrote:\n> \n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > Sent: Monday, June 17, 2002 6:20 PM\n> > To: Dann Corbit\n> > Cc: Jan Wieck; Peter Eisentraut; PostgreSQL-development\n> > Subject: Re: [HACKERS] Roadmap for a Win32 port\n> >\n> >\n> > Dann Corbit wrote:\n> > > > > It will be at least another copy of the postmaster (dot exe).\n> > > >\n> > > > Yea, I just liked the idea of the postmaster binary\n> > somehow reporting\n> > > > the postmaster status. Seems it is in a better position to\n> > > > do that than\n> > > > a shell script.\n> > >\n> > > Architectural notion:\n> > > The Postmaster is about 100x bigger than it needs to be.\n> > >\n> > > The Postmaster needs to set up shared memory and launch servers.\nIt\n> > > does not need to know anything about SQL grammar or any of that\n> > > rigamarole.\n> > >\n> > > It could be a 15K executable.\n> > >\n> > > Why not have an itty-bitty Postmaster that does nothing but\n> > a spawn or a\n> > > create process to create threaded Postgres instances?\n> >\n> > Can't. postmaster/postgres are symlinks to the same file,\n> > and we fork()\n> > from postmaster to create backends. All the code has to be in the\n> > postmaster so the fork works.\n> \n> Is fork() faster than creation of a new process via exec()? After the\n> creation of the shared memory, the information needed to use it could\nbe\n> passed to the Postgres servers on the command line.\n\nexec() does NOT create new processes. It loads another executable file\ninto the existing, calling process. \n\nfork() duplicates the calling process. 
In modern unix variants, this is\ndone in a very efficient way, so that the text segment (program code) is\nshared readonly and everything else (data and stack segments) are shared\ncopy on write. Thus, fork() itself doesn't even cause memory copying.\nThat happens later when one of the now two processes writes to a memory\npage the first time.\n\nWindows does not have these two separate steps. It wants the full blown\nexpensive \"create process and load executable\", or the \"let's all muck\naround with the same handles\" modell, called threading. \n\n> \n> The startup stuff for PostgreSQL is just a few files. It does not\nseem\n> insurmountable to change it. But it is none of my business. If it is\na\n> major hassle (for reasons which I am not aware) then I see no driving\n> reason to change it.\n\nIt has to be changed for Windows, it is a major hassle for reasons I\nwasn't aware of, and I am half way through ;-)\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n",
"msg_date": "Tue, 18 Jun 2002 22:50:16 +0200",
"msg_from": "\"Marek Mosiewicz\" <marekmosiewicz@poczta.onet.pl>",
"msg_from_op": false,
"msg_subject": "Re: Roadmap for a Win32 port"
},
{
"msg_contents": "Le Mardi 18 Juin 2002 18:42, Serge Adda a écrit :\n> I am new to PostgreSQL, but I am interested in the Win32 port.\n> I have studied the architecture of other databases like Oracle.\n\nHello,\n\nIt seems clear that several teams are working without a central point of \nmanagement and contact:\n\n- W32 port: at least a Japanese team, a PostgreSQL hacker team and a company are \nworking separately. This makes three separate efforts for the most important \nproject this year.\n\n- Replication: development is slow although a lot of people would be \ninterested in helping. But there is no central organization apart from the \nhackers list.\n\n- Web site: a www list is working on several issues. Is there a central design \nfor the whole PostgreSQL site, like PHP has?\n\n- Marketing: MySQL sucks and has a marketing team sending junk technical \nemails and writing false benchmarks. Who is in charge of marketing at \nPostgreSQL? Where can I find a list of PostgreSQL features?\n\nPersonally, I agree with someone who wrote \"Remember Betamax. It was the best \ntechnical standard, but it died for commercial reasons\". I don't want \nPostgreSQL to be the next Betamax.\n\nSome projects, like Debian, have a democratic organisation. The team leader is \nelected for a year. Why not set up a similar organization? This would help \ntake decisions ... and not lose time on important issues.\n\nPostgreSQL is software, but it is also a community. If we believe in \ndemocracy, I suggest we should organize in a democratic way and elect a \nleader for a year.\n\nJust my 2 cents.\nCheers.\nJean-Michel POURE\n",
"msg_date": "Thu, 20 Jun 2002 12:01:53 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Democracy and organisation : let's make a revolution in the Debian\n way"
},
{
"msg_contents": "On Thu, Jun 20, 2002 at 12:01:53PM +0200, Jean-Michel POURE wrote:\n\n> Some projects, like Debian, have a democratic organisation. The team leader is \n> elected for a year. Why not settle a similar organization? This would help \n\n IMHO there is no problem with organization -- I don't know what you\n want to organize with the actual number of developers / contributors\n :-)\n \n PostgreSQL is not a project like Debian -- if you want to work on Debian \n you only need to know the package system and the Debian standards. But if \n you want to work on PostgreSQL you must be at least an average programmer \n (meaning several years of experience) and you must know a great deal of \n the current PostgreSQL code.\n \n> PostgreSQL is a software but it is also a community. If we believe in \n> democracy, I suggest we should organize in a democratic way and elect a \n> leader for a year.\n\n What is non-democratic now?\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 20 Jun 2002 13:39:08 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution in the\n\tDebian way"
},
{
"msg_contents": "Le Jeudi 20 Juin 2002 13:39, Karel Zak a écrit :\n> IMHO there is not problem with organization -- I don't know what do\n> you want to organize on actual number of developers / contributors\n\nDear Karel,\n\nMy previous e-mail points out several projects where, IMHO, a leadership would \nbenefit the community at large :\n- replication,\n- W32 port,\n- marketing (read the post \"Read this and puke\").\n\n> What is non-democratic now?\n\nThe current processes are based on discussion, and therefore are democratic. \nMy proposal does not intend to change discussion processes between \npgsql-hackers.\n\nBut, in order to face companies like MySQL AB, Oracle or Micro$oft, the \ncommunity needs to take important decisions that will help teamwork. A \nclarified organization would help.\n\nPlease note I am not a PostgreSQL hacker myself, as I do not contribute code \nto PostgreSQL main sources. But, as an outside spectator, I would only like \nto point out that some efforts need coordination.\n\nDebian is a very interesting example of Open-Source organization, especially \nfor all aspects linked to \"decision making\". Usually, at Debian, when a \ndiscussion is driven, a clear choice arises after a limited time. Projects \nare sometimes slow, but always reach their goals.\n\nAs for the current PostgreSQL organization, can someone explain to me which \nW32 port will make its way to the PostgreSQL main source code? Can someone \npublish a schedule for replication availability? Who is in charge of \nexplaining to newbies that MySQL InnoDB is just a marketing lie? What is the \ncurrent PostgreSQL market share?\n\nIn other words, we should ask ourselves the question of PostgreSQL's future \norganization. We have come to a point where PostgreSQL has equal chances to \nbecome the #1 database or die like Betamax.\n\nBest regards to you all,\nJean-Michel\n",
"msg_date": "Thu, 20 Jun 2002 14:33:04 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution in the\n\tDebian way"
},
{
"msg_contents": "On Thu, Jun 20, 2002 at 02:33:04PM +0200, Jean-Michel POURE wrote:\n> Le Jeudi 20 Juin 2002 13:39, Karel Zak a écrit :\n> > IMHO there is not problem with organization -- I don't know what do\n> > you want to organize on actual number of developers / contributors\n> \n> My previous e-mail points out several projects where, IMHO, a leadership would \n> benefit the community at large :\n> - replication,\n> - W32 port,\n> - marketing (read the post \"Read this and puke\").\n\n I understend you. I saw a lot of project and ideas, but it \n _always_ depend on people and their time. You can organize, you can\n prepare cool planns, but don't forget -- in finale must somebody\n implement it. Who? I don't know method how clone Tom Lane or get\n money for others developers who can't full time work on PostgreSQL\n now.\n \n> As for current PostgreSQL organization, can someone explain me which W32 port \n> will make its way to PostgreSQL main source code? Can someone publish a \n\n If nobody -- you can test it, make it better and write something\n about it. If nobody works on some theme it means this theme is not\n important for now. BUT everybody can change it and everybody can \n start work on arbitrary TODO item. It seems hard, but it's right :-)\n \n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Thu, 20 Jun 2002 15:05:59 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution in the\n\tDebian way"
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> [...]\n> As for current PostgreSQL organization, can someone explain me which W32 port\n> will make its way to PostgreSQL main source code? Can someone publish a\n> schedule for replication availability? Who is in charge of explaining newbees\n> that MySQL InnoDB is just a marketing lie? What is the current PostgreSQL\n> market share?\n\nI think the first native Win32 port that gets contributed will make it.\nSo far I have seen a couple of people claiming they \"have\" a native Win32\nport, not using CygWIN. But none of them was willing to contribute their\nwork.\n\nThere is no official \"schedule\" for any of the features. PostgreSQL is a\n100% volunteer project, so people work on it if and when they feel the\nneed to AND have the time. Even if some of us do work on PostgreSQL at\ntheir job, a few even fulltime, you will not see such commitments.\n\nAnd why do you think InnoDB is a marketing lie? It bought MySQL a good\nnumber of features, including row level locking, transactions and\nlimited foreign key support. Actually it stopped MySQL from yelling out\nthe FUD they told people far too long, like \"transactions are useless\noverhead\" and \"referential integrity is bad\".\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Thu, 20 Jun 2002 09:08:20 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution in "
},
{
"msg_contents": "On Thu, 20 Jun 2002, Jean-Michel POURE wrote:\n\n> Le Jeudi 20 Juin 2002 13:39, Karel Zak a écrit :\n> > IMHO there is not problem with organization -- I don't know what do\n> > you want to organize on actual number of developers / contributors\n> \n> Dear Karel,\n> \n> My previous e-mail points out several projects where, IMHO, a leadership would \n> benefit the community at large :\n> - replication,\n> - W32 port,\n> - marketing (read the post \"Read this and puke\").\n> \n> > What is non-democratic now?\n> \n> The current processes are based on discussion, and therefore are democratic. \n> My proposal does not intend to change discussion processes between \n> pgsql-hackers.\n> \n> But, in order to face companies like MySQL AB, Oracle or Micro$oft, the \n> community needs to take important decisions that will help team work. A \n> clarified organization would help.\n\nJean,\n\nWhy on earth does this matter? Postgres will continue to be a good\ndatabase as long as developers and users cut code. It is not a\nsufficiently complicated project to warrant too much concern about things\nlike this. Besides, a significant amount of the code committed to\nPostgres is inspired by personal interest not obligation.\n\nGavin\n\n\n",
"msg_date": "Thu, 20 Jun 2002 23:17:16 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution"
},
{
"msg_contents": "Jean-Michel POURE <jm.poure@freesurf.fr> writes:\n> As for current PostgreSQL organization, can someone explain me which\n> W32 port will make its way to PostgreSQL main source code? Can someone\n> publish a schedule for replication availability? Who is in charge of\n> explaining newbees that MySQL InnoDB is just a marketing lie? What is\n> the current PostgreSQL market share?\n\nAnd an \"elected leader\" will make all this stuff clear exactly how?\n\nIf someone would like to step up and actually *do* that marketing work,\nthat'd be great. Complaining that it's not being done is a waste of\nlist bandwidth. Electing someone isn't going to magically make it\nhappen, either.\n\nIt does concern me that there seem to be several W32 porting efforts\ngoing on without contact with each other or with the wider pghackers\ncommunity; but if they want to work that way, I don't think there's\nmuch we can do to force them to come out in the open.\n\nBTW, we do already have a recognized leadership group: the core\ncommittee. The committee members mostly prefer to lead by example\nand by consensus, rather than trying to impose their will on others,\nwhich is maybe why you hadn't noticed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 20 Jun 2002 09:22:53 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution in the\n\tDebian way"
},
{
"msg_contents": "Le Jeudi 20 Juin 2002 15:22, Tom Lane a écrit :\n> BTW, we do already have a recognized leadership group: the core\n> committee. The committee members mostly prefer to lead by example\n> and by consensus, rather than trying to impose their will on others,\n> which is maybe why you hadn't noticed.\n\nYou are right. Leading by example and consensus is the way things work. If the \ncommittee can publish clear goals, there is no need to elect a leader.\n\nNow I realize my concern is about project management. Do you think the \ncommittee could publish a project web page showing who works on what?\n\nSomeone willing to help on a project would view the page and contact \ndevelopers more easily. Presently, there is no clear knowledge who is \nworking on the W32 port or on replication. Using a project page, there would \nbe less chances to see new forks of particular projects (W32, replication).\n\nThis page would be accessible from the to-do-list. Conversely, the to-do-list \nwould lead to the projects page.\n\nJust my 2 cents.\nThanks to all of you who replied my mail.\n\nCheers,\nJean-Michel\n",
"msg_date": "Thu, 20 Jun 2002 16:00:04 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution in the\n\tDebian way"
},
{
"msg_contents": "On Thu, 2002-06-20 at 14:33, Jean-Michel POURE wrote:\n> Le Jeudi 20 Juin 2002 13:39, Karel Zak a écrit :\n> > IMHO there is not problem with organization -- I don't know what do\n> > you want to organize on actual number of developers / contributors\n> \n> Dear Karel,\n> \n> My previous e-mail points out several projects where, IMHO, a leadership would \n> benefit the community at large :\n> - replication,\n> - W32 port,\n> - marketing (read the post \"Read this and puke\").\n> \n> > What is non-democratic now?\n> \n> The current processes are based on discussion, and therefore are democratic. \n> My proposal does not intend to change discussion processes between \n> pgsql-hackers.\n> \n> But, in order to face companies like MySQL AB, Oracle or Micro$oft, the \n> community needs to take important decisions that will help team work. A \n> clarified organization would help.\n\nIn what way ?\n\nDo you really think that if we would elect Tom or Bruce or someone else\n\"The President of the PostgreSQL Community\" then their word would weigh\nmore in mainstream press ?\n\n> Please note I am not a PostgreSQL hacker myself, as I do not contribute code \n> to PostgreSQL main sources. But, as an outside spectator, I would only like \n> to point out that some efforts need coordination.\n> \n> Debian is a very interesting example of Open-Source organization, as for all \n> aspects linked to \"decision making\". Usually, at Debian, when a discussion is \n> driven, a clear choice arizes after a limited time. Projects are sometimes \n> slow, but always reach their goals.\n\nFrom an \"outside spectator\" perspective Debian seems to be really,\nreally slow.\n \n> As for current PostgreSQL organization, can someone explain me which W32 port \n> will make its way to PostgreSQL main source code?\n\nThere is already one W32 port in main source (the one that uses cygwin).\n\nIMHO the first native port that \n a) works\n b) does not make *nix version slower/harder to maintain\nand\n c) is submitted for inclusion will make it into main source.\n\n> Can someone publish a schedule for replication availability?\n\nI guess any of the teams working on different ways to replicate could do\nit. Another question is if they can stick to it.\n\n> Who is in charge of explaining newbees that MySQL InnoDB is just a\n> marketing lie?\n\nNobody is \"in charge\", but everybody is welcome to do it, even without\nbeing \"elected\" or \"nominated \";)\n\nStill, having a \"success stories\" or \"advocacy\" section on\nwww.postgresql.org seems like a good idea.\n\n> What is the current PostgreSQL market share?\n>\n> In other words, we should ask ourselves the question of PostgreSQL future \n> organization.\n\nThe current organization is \"a loosely knit team\" which seems to work\nquite well.\n\n> We come to point where PostgreSQL has equal chances to become \n> the #1 database or die like Betamax.\n\nOpen-source will probably work differently, i.e. postgres will probably\nnot die even if it will not be #1.\n\n----------------\nHannu\n",
"msg_date": "20 Jun 2002 16:06:24 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution"
},
{
"msg_contents": "Jean-Michel POURE wrote:\n> Le Mardi 18 Juin 2002 18:42, Serge Adda a écrit :\n> > I am new to PostgreSQL, but I am interested in the Win32 port.\n> > I have studied the architecture of other databases like Oracle.\n> \n> Hello,\n> \n> It seems clear that several teams are working without central point management \n> and contact:\n> \n> - W32 port: at least a Japanese team, a PostgreSQL hacker team and a company are \n> working separately. This makes three separate efforts for the most important \n> project this year.\n\nFunny you should mention that. I talked with three people on the phone\njust yesterday in an attempt to get them together on the Win32 port. I\nhave been hesitant to publicly try and join them together because they\nhaven't publicly stated if they are going to contribute their code\nback to the project, or haven't been clear on _when_ they would\ncontribute it back.\n\nMy goal is to take the roadmap I posted on June 5, make a web page, and\nget some agreement from the community on how to address each item:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgtodo?win32\n. \nThen, the groups can know how we prefer to have things done, and if they\ncontribute back quickly, the other projects can benefit from their work,\nand we _don't_ get conflicting Win32 implementations contributed back to\nthe project.\n\n> - Replication: development is slow although a lot of people would be \n> interested in helping. But there is no central organization apart from the \n> hackers-list.\n\nI am on the replication mailing list:\n\n\thttp://gborg.postgresql.org/project/pgreplication/projdisplay.php\n\nThat project is moving along, and hopefully they can merge their code\ninto the main tree so others can assist them. Not sure what else we can\ndo.\n\n> - Web site: a www list is working on several issues. Is there a central design \n> for all PostgreSQL site, like PHP does?\n> \n> - Marketing: MySQL sucks and has a team of marketing sending junk technical \n> emails and writing false benchmarks. Who is in charge of marketing at \n> PostgreSQL? Where can I find a list of PostgreSQL features?\n\nI skipped commenting on the marketing discussion because I was working\non a patch. Probably our biggest problem compared to MySQL is that\nMySQL is a single company behind a single database, while we have >4\ncompanies behind PostgreSQL, and their thrust is promoting their company\nrather than PostgreSQL itself.\n\nWhile we have an excellent development team, we don't have a funded team\nto go around to trade shows and write articles promoting PostgreSQL. We\nonly have a very few people employed full-time with PostgreSQL, and it\nusually takes a few full-time people to do just marketing. Great Bridge\nwas really the only one to push PostgreSQL, and they are gone.\n\nMySQL has such a team, and so does Oracle, and it helps. Linux was in a\nsimilar boat, with multiple companies behind Linux, and every one\npromoting its own company rather than Linux itself. We need large\nPostgreSQL companies that promote themselves, and PostgreSQL along with\nit. Linux is in the same boat, with distributors promoting themselves\nand Linux along with it.\n\n\n> Personnaly, I agree with someone who wrote \"Remember Betamax. It was the best \n> technical standard, but it died for commercial reasons\". I don't want \n> PostgreSQL to be the next Betamax.\n\nI thought the problem with Betamax was that Sony controlled the\nstandard, and other companies didn't like that.\n\n\nI am always looking for more ideas in these areas.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jun 2002 12:09:35 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution in"
},
{
"msg_contents": "...\n> MySQL has such a team, and so does Oracle, and it helps. Linux was in a\n> similar boat, with multiple companies behind Linux, and every one\n> promoting its own company rather than Linux itself. We need large\n> PostgreSQL companies that promote themselves, and PostgreSQL along with\n> it. Linux is in the same boat, with distributors promoting themselves\n> and Linux along with it.\n\nRight. That may have slowed down the trade press recognition of Linux,\nbut in the end Linux has risen into view by popular demand and acclaim.\n\nPostgreSQL has a similar opportunity and can succeed on a similar path.\nWe do have a superior technical solution, we have a superior open-source\nsupport system, and we have (imho :) a superior development model and\ndevelopment team open to new contributions and innovation.\n\nThe basis is there for success into the future, and at some point we\nwill get a lot for a little effort on marketing. Other products\nbootstrap themselves by marketing early and often. And to the detriment\nof the technical side.\n\n - Thomas\n",
"msg_date": "Thu, 20 Jun 2002 09:23:18 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution in"
},
{
"msg_contents": "Jan Wieck wrote:\n> Jean-Michel POURE wrote:\n> > [...]\n> > As for current PostgreSQL organization, can someone explain me which W32 port\n> > will make its way to PostgreSQL main source code? Can someone publish a\n> > schedule for replication availability? Who is in charge of explaining newbees\n> > that MySQL InnoDB is just a marketing lie? What is the current PostgreSQL\n> > market share?\n> \n> I think the first native Win32 port that gets contributet will make it.\n\nThis line bothers me. With multiple people working on Win32, I would\nlike us to decide how we would _like_ such a port to be implemented. I\nthink this will assist those working on the project to _know_ that their\nwork will be accepted if submitted.\n\nAlso, I encourage people working on Win32 ports to contribute their work\n_as_ they complete each module, rather than as one huge patch. That way,\nother Win32 porters can benefit from their work and maybe we can get\nthis done in 1/2 the time.\n\nWhat I don't want to happen is two Win32 projects contributing duplicate\ncode at the same time. It is a waste when they could have combined\ntheir efforts.\n\nSo, my message to Win32 porters is to communicate what you are doing,\nincluding implementation details, and contribute back as soon as you\ncan. If you don't, someone else may beat you to it, and your patch may\nbe rejected, leaving you to either scrap your work or continually port\nyour changes to every future PostgreSQL release.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jun 2002 12:31:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution in"
},
{
"msg_contents": "On Thu, 20 Jun 2002, Jean-Michel POURE wrote:\n\n> As for current PostgreSQL organization, can someone explain me which W32\n> port will make its way to PostgreSQL main source code?\n\nWhichever one actually submits patches for review first that is deemed\nacceptable for inclusion ... as its always been ...\n\n> Can someone publish a schedule for replication availability?\n\nIts available now, and has been for several months *shrug* There is a\nproject called PgReplication that is Open Source (hopefully someone from\nthat camp will pop up) ... and PgSQL, Inc has several deployments of their\ncommercial replication out there now, with several more pending ...\n\nIn fact, all of our work has been based on the rserv code that is in\ncontrib, and that we released over (almost?) a year ago ... but I've seen\nnobody actually try to build on it, altho its what we've extended and are\nusing successfully in production environments ...\n\n> Who is in charge of explaining newbees that MySQL InnoDB is just a\n> marketing lie?\n\nYou are ... and anyone else that asks about it / mentions it ...\n\n> What is the current PostgreSQL market share?\n\nIf you can think of a method of calculating this, as there is no\n'commercial licensing' involved, please let us know ... would love to find\nout ... there have been several surveys about it, but, quite frankly,\nwithout having some sort of 'licensing' required to use, there is zero way\nof getting any *real* numbers on this ...\n\n> In other words, we should ask ourselves the question of PostgreSQL\n> future organization. We come to point where PostgreSQL has equal chances\n> to become the #1 database or die like Betamax.\n\nAt this point in time, the current organization == future organization ...\nyou are putting a 'marketing effort' as being the responsibility of the\nopen source project, and using MySQL AB as a comparison ... MySQL AB is a\n*commercial company*, as is PostgreSQL, Inc -and- SRA and several other\nnewcomers, all of whom are doing marketing in their own way, based on\nbudget and requirements for growth ...\n\nThe \"developmental organization\", which we are, has been successful for\nthe past 7 years now ... flame wars are minimal, as are disagreements ...\nthere are some patches that get rejected that those generating them are\ndisappointed about, but most of *them* just bounce back with improvements\nbased on what has been told to them as being unacceptable ... about the\nonly thing that *has* changed over the past 7 years is that our standards\nare tightened up as we move from mainly fixing bugs/stability to improving\nthe server itself ...\n\n\n\n",
"msg_date": "Thu, 20 Jun 2002 14:15:31 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution"
},
{
"msg_contents": "On Thu, 20 Jun 2002, Bruce Momjian wrote:\n\n> Jan Wieck wrote:\n> > Jean-Michel POURE wrote:\n> > > [...]\n> > > As for current PostgreSQL organization, can someone explain me which W32 port\n> > > will make its way to PostgreSQL main source code? Can someone publish a\n> > > schedule for replication availability? Who is in charge of explaining newbees\n> > > that MySQL InnoDB is just a marketing lie? What is the current PostgreSQL\n> > > market share?\n> >\n> > I think the first native Win32 port that gets contributet will make it.\n>\n> This line bothers me. With multiple people working on Win32, I would\n> like us to decide how we would _like_ such a port to be implemented. I\n> think this will assist those working on the project to _know_ that their\n> work will be accepted if submitted.\n\n1. that is not accurate, as it won't necessarily be accepted if submitted\n... its not like years ago when our standards were lax ... Jan is\nperfectly accurate in his assessment of which will make it in, except for\nomitting one point ... it has to meet our standards ... so, its more \"the\nfirst to contribute *and* meet our standards\" ...\n\n> Also, I encourage people working on Win32 ports to contribute their work\n> _as_ the complete each module, rather than as one huge patch. That way,\n> other Win32 porters can benefit from their work and maybe we can get\n> this done in 1/2 the time.\n\nThis is what should be done anyway ... I wouldn't even be so much worried\nabout \"other Win32 porters\" as much as trying to integrate that 'huge\npatch' at a later date, considering all the changes that go into our code\n:)\n\n> What I don't want to happen is two Win32 projects contributing duplicate\n> code at the same time. It is a waste when they could have combined\n> their efforts.\n\nIMHO, that is actually their problem ... without meaning to sound crass\nabout it, but its not like we haven't discussed it extensively here, and\nopenly ... hell, we've even tried to break down the whole project into\nsmaller components to make the whole easier to merge in :)\n\n> So, my message to Win32 porters is to communicate what you are doing,\n> including implementation details, and contribute back as soon as you\n> can. If you don't, someone else may beat you to it, and your patch may\n> be rejected, leaving you to either scrap your work or continually port\n> your changes to every future PostgreSQL release.\n\nAgreed here ... and this doesn't just apply to Win32 porters ... there\nhave been *alot* of \"big projects\" worked on over the years that have been\nimplemented in pieces so that code-base doesn't diverge too much, making\npatching difficult ... the last thing anyone should want is to work for a\nfew months to create a nice big patch to find out that what they have\n\"fixed\" got yanked out of the code already :)\n\n\n",
"msg_date": "Thu, 20 Jun 2002 14:23:02 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "First Win32 Contribution (Was: Re: Democracy and organisation\n\t: let's make a revolution in)"
},
{
"msg_contents": "On Thu, 20 Jun 2002, Jean-Michel POURE wrote:\n\n> Le Jeudi 20 Juin 2002 15:22, Tom Lane a écrit :\n> > BTW, we do already have a recognized leadership group: the core\n> > committee. The committee members mostly prefer to lead by example\n> > and by consensus, rather than trying to impose their will on others,\n> > which is maybe why you hadn't noticed.\n>\n> You are right. Leading by example and consensus is the way things work. If the\n> committee can publish clear goals, there is no need to elect a leader.\n>\n> Now I realize my concern is about project management. Do you think the\n> committee could publish a project web page showing who works on what?\n\nNo ... cause nobody works on anything specific ... they work on what\nappeals to them, as it appeals to them ...\n\n> Someone willing to help on a projet would view the page and contact\n> developpers more easilly. Presently, there is no clear knowledge who is\n> working on the W32 port or on replication. Using a project page, there\n> would be less chances to see new forks of particular projects (W32,\n> replication).\n\nIf someone working on the w32 port isn't willing to announce such on\n-hackers, how do you think we'd get that information for a web page? The\neasiest way to 'contact developers' is to post to this list, and then you\ndon't restrict yourself to any one person, but everyone involved in the\nproject ...\n\nI think you are confusing a project on the scale of an operating system\nwith PgSQL ... an OS has to have someone at the top to make sure each\nsub-package integrates well into the overall OS ... we *are* the\nsub-package, and our \"leaders\" are essentially \"our developers\" ... we\nhave a core committee that consists of several of us that brought PgSQL\nout of Berkeley and into the real world, but if you saw the archives for\nthat list, you wouldn't see much, since we tend to try and keep all\ndiscussions *on* -hackers ...\n",
"msg_date": "Thu, 20 Jun 2002 14:30:50 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution"
},
{
"msg_contents": "On 20 Jun 2002, Hannu Krosing wrote:\n\n> Nobody is \"in charge\", but everybody is welcome to do it, even without\n> being \"elected\" or \"nominated \";)\n>\n> Still, having a \"success stories\" or \"advocacy\" section on\n> www.postgresq.org seems like a good idea.\n\nBeing worked on ... we are actually working on totally revamping and\ncleaning up the web site(s) ...\n\n\n",
"msg_date": "Thu, 20 Jun 2002 14:32:28 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution"
},
{
"msg_contents": "On Thu, 20 Jun 2002 12:09:35 -0400 (EDT)\n\"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> Jean-Michel POURE wrote:\n> > - Replication: development is slow although a lot of people would be \n> > interested in helping. But there is no central organization apart from the \n> > hackers-list.\n\nReplication development is co-ordinated on the pgreplication-general\nlist which Bruce mentions below, not on -hackers. Any interested\ndevelopers should subscribe to it, read the relevant research papers\non Postgres-R, and contribute code.\n\nAs for the speed of development, I've started to contribute code recently,\nand Darren Johnson (the main replication developer) says he should have some\nfree time soon -- there are also some other new developers interested in\nthe project. So while there was a period of inactivity, I think that progress\nis now being made.\n\n> I am on the replication mailing list:\n> \n> \thttp://gborg.postgresql.org/project/pgreplication/projdisplay.php\n> \n> That project is moving along, and hopefully they can merge their code\n> into the main tree so others can assist them. Not sure what else we can\n> do.\n\nI can't make any claims about a schedule -- since everyone working on the\nproject is a volunteer, the only release goals I'd like to set are \"when it's\nready\". The code right now is pretty unstable, so I don't think it's appropriate\nfor the main CVS tree until we work it into better shape.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Thu, 20 Jun 2002 14:22:39 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Democracy and organisation : let's make a revolution in"
},
{
"msg_contents": "\"Marc G. Fournier\" wrote:\n> \n> On Thu, 20 Jun 2002, Bruce Momjian wrote:\n> \n> > This line bothers me. With multiple people working on Win32, I would\n> > like us to decide how we would _like_ such a port to be implemented. I\n> > think this will assist those working on the project to _know_ that their\n> > work will be accepted if submitted.\n> \n> 1. that is not accurate, as it won't necessarily be accepted if submitted\n> ... its not like years ago when are standards were lax ... Jan is\n> perfectly accurate in his assessment of which will make it in, except for\n> omitting one point ... it has to meet our standards ... so, its more \"the\n> first to contribute *and* meet our standards\" ...\n\nOmitted point taken :-)\n\n> > What I don't want to happen is two Win32 projects contributing duplicate\n> > code at the same time. It is a waste when they could have combined\n> > their efforts.\n> \n> IMHO, that is actually their problem ... without meaning to sound crass\n> about it, but its not like we haven't discussed it extensively here, and\n> openly ... hell, we've even tried to break down the whole project into\n> smaller components to make the whole easier to merge in :)\n\nThe problem with this kind of project is that you have a big stumbling\nblock at the beginning, which has to be done before you can rollout and\nintegrate the help of developers scattered around the globe. This was\nthe case with the foreign key project, where the trigger queue and one\nset of triggers were working, and then Stephan did all the others and I\nforgot who else helped to do the utility commands and CREATE TABLE\nsyntax and tried to decrypt the SQL definitions? In the Windows port\ncase it is to get it as far that you at least can fire up a postmaster,\nget past the startup process, connect to the database and do a few\nqueries before the thing blows up. Before this everybody has exactly the\nsame problem, \"It doesn't startup\", so the likelihood of everyone\nstomping over each other's work every single night is about 99.9%!\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Thu, 20 Jun 2002 14:57:20 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: First Win32 Contribution (Was: Re: Democracy and "
},
{
"msg_contents": "Jan Wieck wrote:\n> > > What I don't want to happen is two Win32 projects contributing duplicate\n> > > code at the same time. It is a waste when they could have combined\n> > > their efforts.\n> > \n> > IMHO, that is actually their problem ... without meaning to sound crass\n> > about it, but its not like we haven't discussed it extensively here, and\n> > openly ... hell, we've even tried to break down the whole project into\n> > smaller components to make the whole easier to merge in :)\n> \n> The problem with this kind of project is that you have a big stumbling\n> block at the beginning, which has to be done before you can rollout and\n> integrate the help of developers scattered around the globe. This was\n> the case with the foreign key project, where the trigger queue and one\n> set of triggers where working, and then Stephan did all the others and I\n> forgot who else helped to do the utility commands and CREATE TABLE\n> syntax and tried to decrypt the SQL definitions? In the Windows port\n> case it is to get it as far that you at least can fire up a postmaster,\n> get past the startup process, connect to the database and do a few\n> queries before the thing blows up. Before this everybody has exactly the\n> same problem, \"It doesn't startup\", so the likelyhood of everyone\n> stomping over each others work every single night is about 99.9%!\n\nYes, but it doesn't prevent discussion. I think open implementation\ndiscussion will help. I am suggesting this to everyone, not just Jan. \nI have been in private discussion with others too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jun 2002 15:37:14 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First Win32 Contribution (Was: Re: Democracy and organisation:"
},
{
"msg_contents": "On Thursday 20 June 2002 02:57 pm, Jan Wieck wrote:\n> set of triggers were working, and then Stephan did all the others and I\n> forgot who else helped to do the utility commands and CREATE TABLE\n> syntax and tried to decrypt the SQL definitions?\n\nDon Baccus?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Thu, 20 Jun 2002 15:56:53 -0400",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: First Win32 Contribution (Was: Re: Democracy and"
},
{
"msg_contents": "Jan Wieck wrote:\n> \n> \"Marc G. Fournier\" wrote:\n> >\n\n...\n\n> > IMHO, that is actually their problem ... without meaning to sound crass\n> > about it, but its not like we haven't discussed it extensively here, and\n> > openly ... hell, we've even tried to break down the whole project into\n> > smaller components to make the whole easier to merge in :)\n> \n> The problem with this kind of project is that you have a big stumbling\n> block at the beginning, which has to be done before you can rollout and\n> integrate the help of developers scattered around the globe. This was\n> the case with the foreign key project, where the trigger queue and one\n> set of triggers were working, and then Stephan did all the others and I\n> forgot who else helped to do the utility commands and CREATE TABLE\n> syntax and tried to decrypt the SQL definitions? In the Windows port\n> case it is to get it as far that you at least can fire up a postmaster,\n> get past the startup process, connect to the database and do a few\n> queries before the thing blows up. Before this everybody has exactly the\n> same problem, \"It doesn't startup\", so the likelihood of everyone\n> stomping over each other's work every single night is about 99.9%!\n\nIt would be nice to also have it \"fire up\" under Windows CE as well ;-)\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Thu, 20 Jun 2002 15:58:17 -0400",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: First Win32 Contribution (Was: Re: Democracy and"
},
{
"msg_contents": "\nOn Thu, 20 Jun 2002, Jan Wieck wrote:\n\n> \"Marc G. Fournier\" wrote:\n> >\n> > On Thu, 20 Jun 2002, Bruce Momjian wrote:\n> >\n> > > This line bothers me. With multiple people working on Win32, I would\n> > > like us to decide how we would _like_ such a port to be implemented. I\n> > > think this will assist those working on the project to _know_ that their\n> > > work will be accepted if submitted.\n> >\n> > 1. that is not accurate, as it won't necessarily be accepted if submitted\n> > ... its not like years ago when our standards were lax ... Jan is\n> > perfectly accurate in his assessment of which will make it in, except for\n> > omitting one point ... it has to meet our standards ... so, its more \"the\n> > first to contribute *and* meet our standards\" ...\n>\n> Omitted point taken :-)\n>\n> > > What I don't want to happen is two Win32 projects contributing duplicate\n> > > code at the same time. It is a waste when they could have combined\n> > > their efforts.\n> >\n> > IMHO, that is actually their problem ... without meaning to sound crass\n> > about it, but its not like we haven't discussed it extensively here, and\n> > openly ... hell, we've even tried to break down the whole project into\n> > smaller components to make the whole easier to merge in :)\n>\n> The problem with this kind of project is that you have a big stumbling\n> block at the beginning, which has to be done before you can rollout and\n> integrate the help of developers scattered around the globe. This was\n> the case with the foreign key project, where the trigger queue and one\n> set of triggers were working, and then Stephan did all the others and I\n> forgot who else helped to do the utility commands and CREATE TABLE\n> syntax and tried to decrypt the SQL definitions? In the Windows port\n\nActually, IIRC Don did the triggers, and I did the utility commands/create\nstuff, but the point is still the same. 
(Made in the point of\nhistorical accuracy since I don't want someone else's work to end up\ngetting attributed to me since that's unfair to them. :) )\n\n",
"msg_date": "Thu, 20 Jun 2002 14:58:54 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: First Win32 Contribution (Was: Re: Democracy and "
},
{
"msg_contents": "Stephan Szabo wrote:\n\n> Actually, IIRC Don did the triggers, and I did the utility commands/create\n> stuff, but the point is still the same. (Made in the point of\n> historical accuracy since I don't want someone else's work to end up\n> getting attributed to me since that's unfair to them. :) )\n\nThis is why I love this PostgreSQL Project. The honesty and\nhumbleness everyone is practicing. Whoever did what, together we\ndid a good job! Which other open source database has the full set\nof referential actions and deferrability?\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Thu, 20 Jun 2002 20:17:41 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: First Win32 Contribution (Was: Re: Democracy and"
},
{
"msg_contents": "It could be helpful to create a mailing list just for this project,\nsince not all members of pg-hackers will/shall participate, and we\nwould probably flood this list quite a bit trying to figure out what\nis the best way to implement a win32 port. Just like the\npg-replication list, this new list would be project specific.\n\nHowever, as an aside, I think the 'first best fit shall be committed'\napproach is a _bad_ idea. Everyone (who's interested in the port)\nagrees with the basic goals, and we will get a working system much\nfaster if we all work on a single solution: And not try to race each\nother. \nIf the main pg developers do not want to bless a specific method/project\nfor the port, then the people interested should hash it out, before\nhundreds of man-hours are wasted developing something that ends up not\nbeing used. Debugging-into existence is a bad idea, as the single-night\nexample hints at (whether intentionally or not) - with a proper plan we\nshould be able to create unit tests that can prove whether the methods\nchosen are functioning well before we ever get a fully working\npostmaster.\n\n~Jon Franz\n\nBruce Momjian wrote:\n\n>Jan Wieck wrote:\n>\n>>>>What I don't want to happen is two Win32 projects contributing duplicate\n>>>>code at the same time. It is a waste when they could have combined\n>>>>their efforts.\n>>>>\n>>>IMHO, that is actually their problem ... without meaning to sound crass\n>>>about it, but its not like we haven't discussed it extensively here, and\n>>>openly ... hell, we've even tried to break down the whole project into\n>>>smaller components to make the whole easier to merge in :)\n>>>\n>>The problem with this kind of project is that you have a big stumbling\n>>block at the beginning, which has to be done before you can rollout and\n>>integrate the help of developers scattered around the globe. 
This was\n>>the case with the foreign key project, where the trigger queue and one\n>>set of triggers were working, and then Stephan did all the others and I\n>>forgot who else helped to do the utility commands and CREATE TABLE\n>>syntax and tried to decrypt the SQL definitions? In the Windows port\n>>case it is to get it as far that you at least can fire up a postmaster,\n>>get past the startup process, connect to the database and do a few\n>>queries before the thing blows up. Before this everybody has exactly the\n>>same problem, \"It doesn't startup\", so the likelihood of everyone\n>>stomping over each other's work every single night is about 99.9%!\n>>\n>\n>Yes, but it doesn't prevent discussion. I think open implementation\n>discussion will help. I am suggesting this to everyone, not just Jan. \n>I have been in private discussion with others too.\n>\n\n\n\n",
"msg_date": "Thu, 20 Jun 2002 20:22:27 -0400",
"msg_from": "Jon Franz <coventry@one.net>",
"msg_from_op": false,
"msg_subject": "Re: First Win32 Contribution (Was: Re: Democracy and organisation:"
},
{
"msg_contents": "Jon Franz wrote:\n> It could be helpful to create a mailing list just for this project,\n> since not all members of pg-hackers will/shall participate, and we\n> would probably flood this list quite a bit trying to figure out what\n> is the best way to implement a win32 port. Just like the\n> pg-replication list, this new list would be project specific.\n> \n> However, as an aside, I think the 'first best fit shall be committed'\n> approach is a _bad_ idea. Everyone (who's interested in the port)\n> agrees with the basic goals, and we will get a working system much\n> faster if we all work on a single solution: And not try to race each\n> other. \n\nI think we have to be involved to prevent chaos when those patches\narrive.\n\n> If the main pg developers do not want to bless a specific method/project\n> for the port, then the people interested should hash it out, before\n> hundreds of man-hours are wasted developing something that ends up not\n> being used. Debugging-into existence is a bad idea, as the single-night\n> example hints at (whether intentionally or not) - with a proper plan we\n> should be able to create unit tests that can prove whether the methods\n> chosen are functioning well before we ever get a fully working\n> postmaster.\n\nActually, don't we have a cygwin mailing list? Seems that would be a\ngreat location, except for the name. Maybe Marc can close the list and\nmigrate them all to a new 'win32' list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 20 Jun 2002 20:44:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: First Win32 Contribution (Was: Re: Democracy and organisation:"
},
{
"msg_contents": "On Thu, 20 Jun 2002, Bruce Momjian wrote:\n\n> Jon Franz wrote:\n> > It could be helpful to create a mailing list just for this project,\n> > since not all members of pg-hackers will/shall participate, and we\n> > would probably flood this list quite a bit trying to figure out what\n> > is the best way to implement a win32 port. Just like the\n> > pg-replication list, this new list would be project specific.\n> >\n> > However, as an aside, I think the 'first best fit shall be committed'\n> > approach is a _bad_ idea. Everyone (who's interested in the port)\n> > agrees with the basic goals, and we will get a working system much\n> > faster if we all work on a single solution: And not try to race each\n> > other.\n>\n> I think we have to be involved to prevent chaos when those patches\n> arrive.\n>\n> > If the main pg developers do not want to bless a specific method/project\n> > for the port, then the people interested should hash it out, before\n> > hundreds of man-hours are wasted developing something that ends up not\n> > being used. Debugging-into existence is a bad idea, as the single-night\n> > example hints at (whether intentionally or not) - with a proper plan we\n> > should be able to create unit tests that can prove whether the methods\n> > chosen are functioning well before we ever get a fully working\n> > postmaster.\n>\n> Actually, don't we have a cygwin mailing list? Seems that would be a\n> great location, except for the name. Maybe Marc can close the list and\n> migrate them all to a new 'win32' list.\n\nTwo different issues here ... a win32 port is different than the cygwin\nissues ... also, I had suggested ages back creating a separate list, but\nwas told that it would frustrate the effort ... so unless those that\ndisagreed with it in the first place have changed their mind, the\ndiscussion of any porting effort to win32 should remain here ...\n\n\n",
"msg_date": "Thu, 20 Jun 2002 22:47:32 -0300 (ADT)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: First Win32 Contribution (Was: Re: Democracy and "
}
] |
[
{
"msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql\nChanges by:\tmomjian@postgresql.org\t02/06/18 13:56:41\n\nModified files:\n\tsrc/backend/parser: gram.y \n\nLog message:\n\tWrap long gram.y lines.\n\n",
"msg_date": "Tue, 18 Jun 2002 13:56:41 -0400 (EDT)",
"msg_from": "momjian@postgresql.org (Bruce Momjian - CVS)",
"msg_from_op": true,
"msg_subject": "pgsql/src/backend/parser gram.y"
},
{
"msg_contents": "> Modified files:\n> src/backend/parser: gram.y\n> Log message:\n> Wrap long gram.y lines.\n\nArgh. I've been making some (minor) changes to gram.y and had noticed\nsome indenting troubles too. I'll bet it conflicts :(\n\nDoes the cvs head now compile and run? If so (or if when) I'll merge and\nremove conflicts...\n\n - Thomas\n",
"msg_date": "Tue, 18 Jun 2002 22:26:37 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/parser gram.y"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> > Modified files:\n> > src/backend/parser: gram.y\n> > Log message:\n> > Wrap long gram.y lines.\n> \n> Argh. I've been making some (minor) changes to gram.y and had noticed\n> some indenting troubles too. I'll bet it conflicts :(\n\nYes, I have to do gram.y manually. Few indenters can do yacc files.\n\n> Does the cvs head now compile and run? If so (or if when) I'll merge and\n> remove conflicts...\n\nSure, CVS runs fine. I am working on some COPY syntax changes, but that\nwill not affect your changes. Let me know if you want help merging. I\nknow what I changed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Jun 2002 01:55:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/parser gram.y"
},
{
"msg_contents": "CVS HEAD does not compile on FreeBSD/Alpha:\n\n./configure --prefix=/home/chriskl/local --enable-integer-datetimes --enable\n-debug --enable-depend --enable-cassert --with-pam --with-CXX --with-openssl\n\nGives:\n\ngmake[3]: Entering directory `/home/chriskl/pgsql-head/src/interfaces/libpq'\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPI\nC -I. -I../../../src/include -DFRONTEND -DSYSCONFDIR='\"/home/chriskl/local/\netc/postgresql\"' -c -o fe-secure.o fe-secure.c -MMD\nfe-secure.c: In function `verify_peer':\nfe-secure.c:417: structure has no member named `s6_addr8'\ngmake[3]: *** [fe-secure.o] Error 1\ngmake[3]: Leaving directory `/home/chriskl/pgsql-head/src/interfaces/libpq'\ngmake[2]: *** [all] Error 2\ngmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/interfaces'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\ngmake: *** [all] Error 2\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-committers-owner@postgresql.org\n> [mailto:pgsql-committers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Wednesday, 19 June 2002 1:55 PM\n> To: Thomas Lockhart\n> Cc: Bruce Momjian - CVS; pgsql-committers@postgresql.org\n> Subject: Re: [COMMITTERS] pgsql/src/backend/parser gram.y\n>\n>\n> Thomas Lockhart wrote:\n> > > Modified files:\n> > > src/backend/parser: gram.y\n> > > Log message:\n> > > Wrap long gram.y lines.\n> >\n> > Argh. I've been making some (minor) changes to gram.y and had noticed\n> > some indenting troubles too. I'll bet it conflicts :(\n>\n> Yes, I have to do gram.y manually. Few indenters can do yacc files.\n>\n> > Does the cvs head now compile and run? If so (or if when) I'll merge and\n> > remove conflicts...\n>\n> Sure, CVS runs fine. I am working on some COPY syntax changes, but that\n> will not effect your changes. Let me know if you want help merging. 
I\n> know what I changed.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Wed, 19 Jun 2002 14:14:34 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/parser gram.y"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> CVS HEAD does not compile on FreeBSD/Alpha:\n> \n> ./configure --prefix=/home/chriskl/local --enable-integer-datetimes --enable\n> -debug --enable-depend --enable-cassert --with-pam --with-CXX --with-openssl\n> \n> Gives:\n> \n> gmake[3]: Entering directory `/home/chriskl/pgsql-head/src/interfaces/libpq'\n> gcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -fpic -DPI\n> C -I. -I../../../src/include -DFRONTEND -DSYSCONFDIR='\"/home/chriskl/local/\n> etc/postgresql\"' -c -o fe-secure.o fe-secure.c -MMD\n> fe-secure.c: In function `verify_peer':\n> fe-secure.c:417: structure has no member named `s6_addr8'\n> gmake[3]: *** [fe-secure.o] Error 1\n> gmake[3]: Leaving directory `/home/chriskl/pgsql-head/src/interfaces/libpq'\n> gmake[2]: *** [all] Error 2\n> gmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/interfaces'\n> gmake[1]: *** [all] Error 2\n> gmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\n> gmake: *** [all] Error 2\n\nOf course, it compiles if you disable SSL. :-)\n\nCan you look at that line and see if there is something in your OS that\nmatches it? I have KAME here and I thought FreeBSD would have that too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Jun 2002 02:16:08 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/parser gram.y"
},
{
"msg_contents": "Can you give me a hint where to look? I've looked at the code and through\nthe man pages and can't find the actual structure documented. It's version\n4.4 of FreeBSD, and it has ipv6 compiled in (ifconfig -a proves that).\n\nI'm searching /usr/src right now...\n\nChris\n\n> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> Sent: Wednesday, 19 June 2002 2:16 PM\n> To: Christopher Kings-Lynne\n> Cc: Thomas Lockhart; Bruce Momjian - CVS;\n> pgsql-committers@postgresql.org\n> Subject: Re: [COMMITTERS] pgsql/src/backend/parser gram.y\n>\n>\n> Christopher Kings-Lynne wrote:\n> > CVS HEAD does not compile on FreeBSD/Alpha:\n> >\n> > ./configure --prefix=/home/chriskl/local\n> --enable-integer-datetimes --enable\n> > -debug --enable-depend --enable-cassert --with-pam --with-CXX\n> --with-openssl\n> >\n> > Gives:\n> >\n> > gmake[3]: Entering directory\n> `/home/chriskl/pgsql-head/src/interfaces/libpq'\n> > gcc -pipe -O -g -Wall -Wmissing-prototypes\n> -Wmissing-declarations -fpic -DPI\n> > C -I. -I../../../src/include -DFRONTEND\n> -DSYSCONFDIR='\"/home/chriskl/local/\n> > etc/postgresql\"' -c -o fe-secure.o fe-secure.c -MMD\n> > fe-secure.c: In function `verify_peer':\n> > fe-secure.c:417: structure has no member named `s6_addr8'\n> > gmake[3]: *** [fe-secure.o] Error 1\n> > gmake[3]: Leaving directory\n> `/home/chriskl/pgsql-head/src/interfaces/libpq'\n> > gmake[2]: *** [all] Error 2\n> > gmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/interfaces'\n> > gmake[1]: *** [all] Error 2\n> > gmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\n> > gmake: *** [all] Error 2\n>\n> Of course, it compiles if you disable SSL. :-)\n>\n> Can you look at that line and see if there is something in your OS that\n> matches it? 
I have KAME here and I thought FreeBSD would have that too.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n",
"msg_date": "Wed, 19 Jun 2002 14:29:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/parser gram.y"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> Can you give me a hint where to look? I've looked at the code and throught\n> he man pages and can't find the actual structure documented. It's version\n> 4.4 of FreeBSD, and it has ipv6 compiled in (ifconfig -a proves that).\n> \n> I'm searching /usr/src right now...\n\nI see:\n\n\tnetinet6/in6.h:132:#define s6_addr8 __u6_addr.__u6_addr8\n\nI have in that file:\n\t\n\tstruct in6_addr {\n\t union {\n\t u_int8_t __u6_addr8[16];\n\t u_int16_t __u6_addr16[8];\n\t u_int32_t __u6_addr32[4];\n\t } __u6_addr; /* 128-bit IP6 address */\n\t};\n\t\n\t#define s6_addr __u6_addr.__u6_addr8\n\t#define s6_addr8 __u6_addr.__u6_addr8\n\t#define s6_addr16 __u6_addr.__u6_addr16\n\t#define s6_addr32 __u6_addr.__u6_addr32\n\nand struct in6_addr is part of struct sockaddr_in6:\n\t\n\tstruct sockaddr_in6 {\n\t u_int8_t sin6_len; /* length of this struct(sa_family_t)*/\n\t u_int8_t sin6_family; /* AF_INET6 (sa_family_t) */\n\t u_int16_t sin6_port; /* Transport layer port # (in_port_t)*/\n\t u_int32_t sin6_flowinfo; /* IP6 flow information */\n\t struct in6_addr sin6_addr; /* IP6 address */\n ^^^^^^^^^^^^^^^^^^^^^^^^^\n\t u_int32_t sin6_scope_id; /* intface scope id */\n\t};\n\nNow, do we support IP6 anyway in the backend? If we don't, do we need\nthis code? I see a test for AF_INET6, but I don't see that anywhere\nelse in the backend code. Perhaps that 'case' needs to be removed and\nwe can do IP6 all at once in the future.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Jun 2002 02:37:26 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/parser gram.y"
},
{
"msg_contents": "OK, the offending code is this:\n\n case AF_INET6:\n sin6 = (struct sockaddr_in6 *) &addr;\n for (s = h->h_addr_list; *s != NULL; s++)\n {\n if (!memcmp(sin6->sin6_addr.s6_addr8, *s,\nh->h_length))\n return 0;\n }\n break;\n\n\nIt seems that this is how sin6_addr's type is defined:\n\nstruct in6_addr {\n u_int8_t s6_addr[16];\n};\n\nso it's s6_addr, NOT s6_addr8. Is it still the same type?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-committers-owner@postgresql.org\n> [mailto:pgsql-committers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Wednesday, 19 June 2002 2:30 PM\n> To: Bruce Momjian\n> Cc: Thomas Lockhart; Bruce Momjian - CVS;\n> pgsql-committers@postgresql.org\n> Subject: Re: [COMMITTERS] pgsql/src/backend/parser gram.y\n>\n>\n> Can you give me a hint where to look? I've looked at the code\n> and throught\n> he man pages and can't find the actual structure documented. It's version\n> 4.4 of FreeBSD, and it has ipv6 compiled in (ifconfig -a proves that).\n>\n> I'm searching /usr/src right now...\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > Sent: Wednesday, 19 June 2002 2:16 PM\n> > To: Christopher Kings-Lynne\n> > Cc: Thomas Lockhart; Bruce Momjian - CVS;\n> > pgsql-committers@postgresql.org\n> > Subject: Re: [COMMITTERS] pgsql/src/backend/parser gram.y\n> >\n> >\n> > Christopher Kings-Lynne wrote:\n> > > CVS HEAD does not compile on FreeBSD/Alpha:\n> > >\n> > > ./configure --prefix=/home/chriskl/local\n> > --enable-integer-datetimes --enable\n> > > -debug --enable-depend --enable-cassert --with-pam --with-CXX\n> > --with-openssl\n> > >\n> > > Gives:\n> > >\n> > > gmake[3]: Entering directory\n> > `/home/chriskl/pgsql-head/src/interfaces/libpq'\n> > > gcc -pipe -O -g -Wall -Wmissing-prototypes\n> > -Wmissing-declarations -fpic -DPI\n> > > C -I. 
-I../../../src/include -DFRONTEND\n> > -DSYSCONFDIR='\"/home/chriskl/local/\n> > > etc/postgresql\"' -c -o fe-secure.o fe-secure.c -MMD\n> > > fe-secure.c: In function `verify_peer':\n> > > fe-secure.c:417: structure has no member named `s6_addr8'\n> > > gmake[3]: *** [fe-secure.o] Error 1\n> > > gmake[3]: Leaving directory\n> > `/home/chriskl/pgsql-head/src/interfaces/libpq'\n> > > gmake[2]: *** [all] Error 2\n> > > gmake[2]: Leaving directory `/home/chriskl/pgsql-head/src/interfaces'\n> > > gmake[1]: *** [all] Error 2\n> > > gmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\n> > > gmake: *** [all] Error 2\n> >\n> > Of course, it compiles if you disable SSL. :-)\n> >\n> > Can you look at that line and see if there is something in your OS that\n> > matches it? I have KAME here and I thought FreeBSD would have that too.\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill,\n> Pennsylvania 19026\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Wed, 19 Jun 2002 14:40:54 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/parser gram.y"
},
{
"msg_contents": "Although that is in the contrib/bind directory. Searching again...\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Wednesday, 19 June 2002 2:41 PM\n> To: Bruce Momjian; Hackers\n> Subject: Re: [HACKERS] [COMMITTERS] pgsql/src/backend/parser gram.y\n>\n>\n> OK, the offending code is this:\n>\n> case AF_INET6:\n> sin6 = (struct sockaddr_in6 *) &addr;\n> for (s = h->h_addr_list; *s != NULL; s++)\n> {\n> if (!memcmp(sin6->sin6_addr.s6_addr8, *s,\n> h->h_length))\n> return 0;\n> }\n> break;\n>\n>\n> It seems that this is how sin6_addr's type is defined:\n>\n> struct in6_addr {\n> u_int8_t s6_addr[16];\n> };\n>\n> so it's s6_addr, NOT s6_addr8. Is it still the same type?\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: pgsql-committers-owner@postgresql.org\n> > [mailto:pgsql-committers-owner@postgresql.org]On Behalf Of Christopher\n> > Kings-Lynne\n> > Sent: Wednesday, 19 June 2002 2:30 PM\n> > To: Bruce Momjian\n> > Cc: Thomas Lockhart; Bruce Momjian - CVS;\n> > pgsql-committers@postgresql.org\n> > Subject: Re: [COMMITTERS] pgsql/src/backend/parser gram.y\n> >\n> >\n> > Can you give me a hint where to look? 
I've looked at the code\n> > and throught\n> > he man pages and can't find the actual structure documented.\n> It's version\n> > 4.4 of FreeBSD, and it has ipv6 compiled in (ifconfig -a proves that).\n> >\n> > I'm searching /usr/src right now...\n> >\n> > Chris\n> >\n> > > -----Original Message-----\n> > > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > > Sent: Wednesday, 19 June 2002 2:16 PM\n> > > To: Christopher Kings-Lynne\n> > > Cc: Thomas Lockhart; Bruce Momjian - CVS;\n> > > pgsql-committers@postgresql.org\n> > > Subject: Re: [COMMITTERS] pgsql/src/backend/parser gram.y\n> > >\n> > >\n> > > Christopher Kings-Lynne wrote:\n> > > > CVS HEAD does not compile on FreeBSD/Alpha:\n> > > >\n> > > > ./configure --prefix=/home/chriskl/local\n> > > --enable-integer-datetimes --enable\n> > > > -debug --enable-depend --enable-cassert --with-pam --with-CXX\n> > > --with-openssl\n> > > >\n> > > > Gives:\n> > > >\n> > > > gmake[3]: Entering directory\n> > > `/home/chriskl/pgsql-head/src/interfaces/libpq'\n> > > > gcc -pipe -O -g -Wall -Wmissing-prototypes\n> > > -Wmissing-declarations -fpic -DPI\n> > > > C -I. -I../../../src/include -DFRONTEND\n> > > -DSYSCONFDIR='\"/home/chriskl/local/\n> > > > etc/postgresql\"' -c -o fe-secure.o fe-secure.c -MMD\n> > > > fe-secure.c: In function `verify_peer':\n> > > > fe-secure.c:417: structure has no member named `s6_addr8'\n> > > > gmake[3]: *** [fe-secure.o] Error 1\n> > > > gmake[3]: Leaving directory\n> > > `/home/chriskl/pgsql-head/src/interfaces/libpq'\n> > > > gmake[2]: *** [all] Error 2\n> > > > gmake[2]: Leaving directory\n> `/home/chriskl/pgsql-head/src/interfaces'\n> > > > gmake[1]: *** [all] Error 2\n> > > > gmake[1]: Leaving directory `/home/chriskl/pgsql-head/src'\n> > > > gmake: *** [all] Error 2\n> > >\n> > > Of course, it compiles if you disable SSL. :-)\n> > >\n> > > Can you look at that line and see if there is something in\n> your OS that\n> > > matches it? 
I have KAME here and I thought FreeBSD would\n> have that too.\n> > >\n> > > --\n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > pgman@candle.pha.pa.us | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill,\n> > Pennsylvania 19026\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Wed, 19 Jun 2002 14:42:42 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/parser gram.y"
}
] |
[
{
"msg_contents": "How about we add the preproc.c file generated by bison 1.49 to cvs?\nCould that create problems elsewhere? \n\nThe version that is part of the source tree now is generated on the\nserver, isn't it?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Tue, 18 Jun 2002 19:58:10 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "ecpg and bison again"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> How about we add the preproc.c file generated by bison 1.49 to cvs?\n> Could that create problems elsewhere? \n\nYes. It's a bad idea to put derived files in CVS. For one thing,\nCVS will not guarantee that their timestamps are right compared to\nthe master file.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 18 Jun 2002 16:41:57 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again "
},
{
"msg_contents": "On Tue, Jun 18, 2002 at 04:41:57PM -0400, Tom Lane wrote:\n> Michael Meskes <meskes@postgresql.org> writes:\n> > How about we add the preproc.c file generated by bison 1.49 to cvs?\n> > Could that create problems elsewhere? \n> \n> Yes. It's a bad idea to put derived files in CVS. For one thing,\n> CVS will not guarantee that their timestamps are right compared to\n> the master file.\n\nActually I thought about changing the makefile as well, so preproc.c\ndoes not look like a derived file anymore.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 19 Jun 2002 09:36:49 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> On Tue, Jun 18, 2002 at 04:41:57PM -0400, Tom Lane wrote:\n>> Michael Meskes <meskes@postgresql.org> writes:\n> How about we add the preproc.c file generated by bison 1.49 to cvs?\n> Could that create problems elsewhere? \n>> \n>> Yes. It's a bad idea to put derived files in CVS. For one thing,\n>> CVS will not guarantee that their timestamps are right compared to\n>> the master file.\n\n> Actually I thought about changing the makefile as well, so preproc.c\n> does not look like a derived file anymore.\n\nThat cure would be FAR worse than the disease. Leave it be.\n\nThe time for panic will be in August, if we are ready to make a beta\nrelease and there's still no bison release. In the meantime I really\ndon't see why you can't keep updating your copy of preproc.y and\njust not commit it...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jun 2002 09:06:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again "
},
{
"msg_contents": "Perhaps there is some usefulness in adding 'preproc-inprogress.y' to\nthe repository and those interested in ecpg changes and who have the\nrelevant bison installed can manually copy it to 'preproc.y'?\n\nOtherwise the ecpg changes are not going to get any testing apart from\nMichael's...\n\nRegards, Lee Kindness.\n\nTom Lane writes:\n > Michael Meskes <meskes@postgresql.org> writes:\n > > Actually I thought about changing the makefile as well, so preproc.c\n > > does not look like a derived file anymore.\n > That cure would be FAR worse than the disease. Leave it be.\n",
"msg_date": "Wed, 19 Jun 2002 14:22:08 +0100",
"msg_from": "Lee Kindness <lkindness@csl.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again "
},
{
"msg_contents": "On Wed, Jun 19, 2002 at 09:06:31AM -0400, Tom Lane wrote:\n> release and there's still no bison release. In the meantime I really\n> don't see why you can't keep updating your copy of preproc.y and\n> just not commit it...\n\nI can for sure, but no one else can use it. And I have to be very\ncareful with patches someone else commits.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 19 Jun 2002 15:35:38 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "On Wed, Jun 19, 2002 at 02:22:08PM +0100, Lee Kindness wrote:\n> Perhaps there is some usefulness in adding 'preproc-inprogress.y' to\n> the repository and those interested in ecpg changes and who have the\n> relevant bison installed can manually copy it to 'preproc.y'?\n\nIs this something we can agree on? I'm willing to even add\npreproc-inprogress.c, but I'm not sure if this generates the same\nproblems as with preproc.c.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Wed, 19 Jun 2002 15:36:46 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "Michael Meskes <meskes@postgresql.org> writes:\n> On Wed, Jun 19, 2002 at 02:22:08PM +0100, Lee Kindness wrote:\n>> Perhaps there is some usefulness in adding 'preproc-inprogress.y' to\n>> the repository and those interested in ecpg changes and who have the\n>> relevant bison installed can manually copy it to 'preproc.y'?\n\n> Is this something we can agree on? I'm willing to even add\n> preproc-inprogress.c, but I'm not sure if this generates the same\n> problems as with preproc.c.\n\nSeems to me that it would.\n\nI agree it's not pleasant to be blocked like this. Is there any way we\ncan persuade the bison guys to be a little more urgent about releasing a\nfix? (If 1.49 is just an internal beta version, maybe a back-patch to\ntheir last released version?)\n\nAnother possibility is to temporarily disable ecpg from being built by\ndefault (eg, just remove it from src/interfaces/Makefile) and then go\nahead and commit your changes. Then, anyone wanting to test it would\nhave to (a) have a suitable bison installed and (b) manually go into\ninterfaces/ecpg and say \"make all install\". I can't say that I like\nthis idea, but it seems better than putting derived files into CVS.\n\n\t\t\tregards, tom lane\n\nPS: BTW, are any of the bison people at Red Hat? Maybe I could apply\na little internal pressure...\n",
"msg_date": "Wed, 19 Jun 2002 10:14:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again "
},
{
"msg_contents": "> > Perhaps there is some usefulness in adding 'preproc-inprogress.y' to\n> > the repository and those interested in ecpg changes and who have the\n> > relevant bison installed can manually copy it to 'preproc.y'?\n> Is this something we can agree on? I'm willing to even add\n> preproc-inprogress.c, but I'm not sure if this generates the same\n> problems as with preproc.c.\n\nActually, this situation is *exactly* what CVS is made to help with.\nMake a branch on the src/interfaces/ecpg directory (call it, say,\n\"ecpg_big_bison\", or whatever you want) and then you can commit on that\nbranch, others can see the branch if they want, and you don't have to\ncarry along code without committing it.\n\nI can help make the branch (or suggest the commands to do so) if that\nwould be helpful.\n\nLater, you can merge the branch down to the main trunk. And in the\nmeantime, you can merge *up* from the main trunk if there are fixes you\nwant to apply to both versions, again with a one line cvs command.\n\nSo, let's use cvs to do this rather than having to create a separate\ntemporary file. I've done this very successfully on other large projects\nso am confident that it will work well for us.\n\n - Thomas\n",
"msg_date": "Wed, 19 Jun 2002 07:19:15 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "Tom Lane wrote:\n> Michael Meskes <meskes@postgresql.org> writes:\n> > On Tue, Jun 18, 2002 at 04:41:57PM -0400, Tom Lane wrote:\n> >> Michael Meskes <meskes@postgresql.org> writes:\n> > How about we add the preproc.c file generated by bison 1.49 to cvs?\n> > Could that create problems elsewhere? \n> >> \n> >> Yes. It's a bad idea to put derived files in CVS. For one thing,\n> >> CVS will not guarantee that their timestamps are right compared to\n> >> the master file.\n> \n> > Actually I thought about changing the makefile as well, so preproc.c\n> > does not look like a derived file anymore.\n> \n> That cure would be FAR worse than the disease. Leave it be.\n> \n> The time for panic will be in August, if we are ready to make a beta\n> release and there's still no bison release. In the meantime I really\n> don't see why you can't keep updating your copy of preproc.y and\n> just not commit it...\n\nI think it is fine to add a bison C file to CVS until we get bison\nupdated, and Michael can control that. We can always remove it later. \nIs the problem that they C file will not have the proper timestamp?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 19 Jun 2002 12:00:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is the problem that they C file will not have the proper timestamp?\n\nExactly. Don't you remember all the troubles we had back when we\ndid have the derived files in CVS? I don't want to have to deal\nwith that again, even for a short while.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jun 2002 12:35:46 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again "
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> Actually, this situation is *exactly* what CVS is made to help with.\n> Make a branch on the src/interfaces/ecpg directory (call it, say,\n> \"ecpg_big_bison\", or whatever you want) and then you can commit on that\n> branch, others can see the branch if they want, and you don't have to\n> carry along code without committing it.\n\nSeems like a plan...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 19 Jun 2002 12:39:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again "
},
{
"msg_contents": "> > Actually, this situation is *exactly* what CVS is made to help with.\n> > Make a branch on the src/interfaces/ecpg directory (call it, say,\n> > \"ecpg_big_bison\", or whatever you want) and then you can commit on that\n> > branch, others can see the branch if they want, and you don't have to\n> > carry along code without committing it.\n> Seems like a plan...\n\nMichael, is this acceptable to you? If you use remote cvs, then you\nwould update *only* the src/interfaces/ecpg directory on the branch tag,\nand from then on your local copy (and your interactions with cvs) will\nbe on that branch. Other options to cvs commands can force your local\ncopy back to the main trunk, can pull main trunk updates up to the\nbranch, etc etc. And at the end when you don't need it anymore we can\neven get rid of the tagged branch altogether.\n\nI'm happy setting up the branch if that would be helpful. Let me know if\nthis is the way you want to proceed, and if so what you would like the\nbranch to be called.\n\n - Thomas\n",
"msg_date": "Wed, 19 Jun 2002 18:23:24 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "On Wed, Jun 19, 2002 at 10:14:32AM -0400, Tom Lane wrote:\n> I agree it's not pleasant to be blocked like this. Is there any way we\n> can persuade the bison guys to be a little more urgent about releasing a\n> fix? (If 1.49 is just an internal beta version, maybe a back-patch to\n> their last released version?)\n\nI had the feeling they rewrote some major parts. You cannot back-patch\nthat. But then I may err on this.\n\n> Another possibility is to temporarily disable ecpg from being built by\n> default (eg, just remove it from src/interfaces/Makefile) and then go\n> ahead and commit your changes. Then, anyone wanting to test it would\n> have to (a) have a suitable bison installed and (b) manually go into\n> interfaces/ecpg and say \"make all install\". I can't say that I like\n> this idea, but it seems better than putting derived files into CVS.\n\nThat would be possible too.\n\n> PS: BTW, are any of the bison people at Red Hat? Maybe I could apply\n> a little internal pressure...\n\nNo idea, sorry.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Fri, 21 Jun 2002 16:54:12 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "On Wed, Jun 19, 2002 at 06:23:24PM -0700, Thomas Lockhart wrote:\n> Michael, is this acceptable to you? If you use remote cvs, then you\n\nYes, it is.\n\n> I'm happy setting up the branch if that would be helpful. Let me know if\n> this is the way you want to proceed, and if so what you would like the\n\nThat would be nice. I do not really knwo cvs myself.\n\n> branch to be called.\n\nNo idea. \"new-bison\" maybe?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Sun, 23 Jun 2002 15:31:41 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "> > I'm happy setting up the branch if that would be helpful. Let me know if\n> > this is the way you want to proceed, and if so what you would like the\n> That would be nice. I do not really knwo cvs myself.\n\nDone. And here is how you would use it...\n\n> > branch to be called.\n> No idea. \"new-bison\" maybe?\n\nOK, the tag name is \"ecpg_new_bison\" (note the underscores; dashes are\nnot allowed afaicr).\n\nI created the branch *only* on src/interfaces/ecpg. To get ready to work\non the branch, you will want to update your entire tree:\n\ncvs update -PdA pgsql\n\nIf you have uncommitted changes to code cvs will respect that and\npreserve the changes, but you may want to make a tarball just in case ;)\n\nThen, update *only* the ecpg source directory to the branch:\n\ncd pgsql/src/interfaces\ncvs update -r ecpg_big_bison ecpg\n\nAt that point, *all* cvs files in your ecpg directory will be living on\nthe branch. If you make changes and commit them, the changes will only\nbe visible on the branch. The branch tag is \"sticky\", so *unless* you\nexplicitly change the tag or branch by, say, an unfortunate \"update -A\"\nor \"update -rHEAD\" then all files and any new files will stay on the\nbranch.\n\nSince the tag is only on the src/interfaces/ecpg directory, if you\naccidentally try updating other directories to that tag the files may\n\"vanish\", since they do not have that tag. Update those directories back\nto the head and the files will reappear.\n\nYou can easily work on both the HEAD and ecpg_big_bison by renaming your\nalready-branched ecpg directory to, say, ecpg.big, then doing\n\ncd pgsql/src/interfaces\ncvs update -PdA ecpg\n\nwhich will recover files from the tip of the cvs tree. You would then\nhave two directories, ecpg/ and ecpg.big/, and the sticky tags in\necpg.big/ will be respected by CVS and will still refer to the correct\nbranch of ecpg/ within the CVS repository.\n\nIf you commit changes to the tip which you want to pull up to the\nbranch, use\n\ncd pgsql/src/interfaces\n# name may be ecpg/ if you have a completely separate tree\ncvs update -j HEAD ecpg.big\n\nwhich will merge changes from HEAD (the cvs tip) into your source\ndirectory. You would still need to commit them to have them in the\nrepository:\n\ncd pgsql/src/interfaces\ncvs commit ecpg.big\n\nWhen it comes time, you will want to merge your branch back down to the\nmain tree. You can do this as was done above for the other direction:\n\ncd pgsql/src/interfaces\ncvs update -j ecpg_big_bison ecpg\n\nwhere the ecpg directory is, as described above, already on the cvs tip.\n\nLet me know if you have any questions!\n\n - Thomas\n",
"msg_date": "Sun, 23 Jun 2002 07:35:06 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "A couple of notes:\n\n...\n> Then, update *only* the ecpg source directory to the branch:\n> cd pgsql/src/interfaces\n> cvs update -r ecpg_big_bison ecpg\n\ncvs will respect any changes you have made to the sources in your\ndirectory and the changes will be preserved in the move to the branch.\n\nHere is what the update looks like on my machine:\n\nmyst$ cvs update -r ecpg_big_bison ecpg\ncvs update: Updating ecpg\ncvs update: Updating ecpg/include\ncvs update: Updating ecpg/lib\n? ecpg/lib/libecpg.so.3.2.0\n? ecpg/lib/libecpg.so.3.3.0\n? ecpg/lib/libecpg.so.3.4.0\ncvs update: Updating ecpg/preproc\n? ecpg/preproc/ecpg\ncvs update: Updating ecpg/test\n\n\nIf you want to check on the branch status of a particular file use the\n\"status\" command. For example, checking on ecpg/Makefile looks like:\n\nmyst$ cvs status ecpg/Makefile\n===================================================================\nFile: Makefile \tStatus: Up-to-date\n\n Working revision:\t1.14\tMon Feb 4 15:37:13 2002\n Repository revision:\t1.14\n/home/thomas/cvs/repository/pgsql/src/interfaces/ecpg/Makefile,v\n Sticky Tag:\t\tecpg_big_bison (branch: 1.14.4)\n Sticky Date:\t\t(none)\n Sticky Options:\t(none)\n\n\n(I'm using cvsup so the repository path is local to my machine in this\nexample.)\n\n - Thomas\n",
"msg_date": "Sun, 23 Jun 2002 07:43:15 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "> I get\n> cvs [server aborted]: cannot write /cvsroot/CVSROOT/val-tags: Permission denied\n> This seems to be a server message.\n\nI see the same thing when trying to update a tree to this branch using\nlocal cvs on mcvsup.postgresql.org. The file is owned by scrappy and has\nno group write permissions.\n\nI use CVSup, and looking at the permissions on my local CVS repository\n(which does allow me to work with the branch) that file is group (and\nworld!?) writable:\n\n29501576 -rw-rw-rw- 1 thomas thomas 33 Jun 23 07:37\nval-tags\n\nAnd looking at another CVS repository with \"known good\" behavior I see\nthat the file is group-writable.\n\nscrappy, can you adjust the permissions on /cvsroot/CVSROOT to allow\ngroup writes, and adjust the permissions on /cvsroot/CVSROOT/val-tags to\nallow group writes? Perhaps it is just the permissions on the directory\nwhich are the problem, but it seems that no branch operations are\ncurrently allowed :(\n\n - Thomas\n\n\n",
"msg_date": "Sun, 23 Jun 2002 18:47:46 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: ecpg and bison again"
},
{
"msg_contents": "On Sun, Jun 23, 2002 at 06:47:46PM -0700, Thomas Lockhart wrote:\n> > I get\n> > cvs [server aborted]: cannot write /cvsroot/CVSROOT/val-tags: Permission denied\n> > This seems to be a server message.\n> \n> I see the same thing when trying to update a tree to this branch using\n> local cvs on mcvsup.postgresql.org. The file is owned by scrappy and has\n> no group write permissions.\n\nIt seems the file is still not writable. Or is there a mistake on my\nside?\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n\n\n",
"msg_date": "Fri, 28 Jun 2002 13:03:05 +0200",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: ecpg and bison again"
}
] |