[
{
"msg_contents": "We removed 'configure --enable-unicode', right? I didn't see any commit\nmessage about it and want to add it to the HISTORY file. If I missed\nanything else in HISTORY, please let me know.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 26 Oct 2001 19:59:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "configure --enable-unicode"
},
{
"msg_contents": "> We removed 'configure --enable-unicode', right? I didn't see any commit\n> message about it and want to add it to the HISTORY file. If I missed\n> anything else in HISTORY, please let me know.\n\n From cvs log:\n\n>revision 1.141\n>date: 2001/09/14 10:36:52; author: ishii; state: Exp; lines: +0 -13\n>Remove --enable-unicode-conversion\n>unicode-conversion is always on if --enable-multibyte is specified\n\nPlease add:\n\nAdd LATIN5,6,7,8,9,10 support\nLATIN5 means ISO-8859-9, not ISO-8859-5 any more\n--\nTatsuo Ishii\n",
"msg_date": "Sun, 28 Oct 2001 21:54:20 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: configure --enable-unicode"
},
{
"msg_contents": "> > We removed 'configure --enable-unicode', right? I didn't see any commit\n> > message about it and want to add it to the HISTORY file. If I missed\n> > anything else in HISTORY, please let me know.\n> \n> >From cvs log:\n> \n> >revision 1.141\n> >date: 2001/09/14 10:36:52; author: ishii; state: Exp; lines: +0 -13\n> >Remove --enable-unicode-conversion\n> >unicode-conversion is always on if --enable-multibyte is specified\n\nThanks, not sure how I missed that:\n\n Remove configure --enable-unicode-conversion, now enabled by\n multibyte (Tatsuo) \n> \n> Please add:\n> \n> Add LATIN5,6,7,8,9,10 support\n> LATIN5 means ISO-8859-9, not ISO-8859-5 any more\n\nAdded:\n\n\tChange LATIN5 to mean ISO-8859-9, not ISO-8859-5 (Tatsuo) \n \nThanks\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Oct 2001 14:23:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: configure --enable-unicode"
}
]
[
{
"msg_contents": ">\n>\n>\n>No, it should *not* look like that. The fe-connect.c code is designed\n>to move on as soon as it's convinced that the kernel has accepted the\n>connection request. We use a non-blocking connect() call and later\n>wait for connection complete by probing the select() status. Looping\n>on the connect() itself would be a busy-wait, which would be antisocial.\n>\n\nThe fe-connect.c code moves on regardless of the completion of the \nconnect() if it has been interrupted.\n\nTo simplify, in a program without SIGALRM events, PQconnect* won't be \ninterrupted. The connect() call will complete properly.\n\nIn a program with SIGALRM events, the call is interrupted inside \nconnect(). If SA_RESTART was disabled for connect() in POSIX semantics, \nthe program would automatically jump right back into the connect() \ncall. However by default POSIX code enables SA_RESTART which for \nSIGALRM means -don't- automatically restart the system call. This means \nthe programmer needs to check for -1/errno=EINTR and jump back into \nconnect() himself. There isn't a concern for busy wait/anti social code \nbehavior, your program was in the middle of connect() when it was \ninterrupted, you're simply jumping back to where you left off.\n\nIt doesn't matter if it is a blocking connect or non-blocking connect, \nhandling EINTR must be done if SIGALRM events are employed. A fast \nenough event timer with a non-blocking connect will also be susceptible \nto EINTR.\n\nEINTR is distinctly different from EINPROGRESS. If they were the same \nthen there would be a problem. EINTR should be handled by jumping back \ninto the connect() call, it is re-entrant and designed for this.\n\nRegardless, you don't wait for the connection to complete, the code \nfollowing the connect() call returns failure for every -1 result from \nconnect() unless it is EINPROGRESS or EWOULDBLOCK. select() is -not- \nused in fe-connect.c. 
It is possible with the current code for the \nconnection to fail in non-blocking mode. Reason: you call connect() in \nnon-blocking mode, break out of the section on EINPROGRESS, and continue \nassuming that the connection will be successful.\n\n EINPROGRESS\n The socket is non-blocking and the connection can\n not be completed immediately. It is possible to\n select(2) or poll(2) for completion by selecting\n the socket for writing. After select indicates\n writability, use getsockopt(2) to read the SO_ERROR\n option at level SOL_SOCKET to determine whether\n connect completed successfully (SO_ERROR is zero)\n or unsuccessfully (SO_ERROR is one of the usual\n error codes listed here, explaining the reason for\n the failure).\n\nThe socket is not checked any further after the connect(). The code \nshould not continue on into the SSL handling until you're sure that the \nsocket is ready for operation.\n\nThe reason why I am getting EINTR from a non-blocking connect is because \nmy event timer happens to fire in the middle of the connect() call. \n Just because you set the socket to FIONBIO doesn't mean that connect() \ncan't be interrupted.\n\nDavid\n\n\n",
"msg_date": "Fri, 26 Oct 2001 20:15:30 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "David Ford <david@blue-labs.org> writes:\n> [ much ]\n\nI think you are missing the point. I am not saying that we shouldn't\ndeal with EINTR; rather I am raising what I think is a legitimate\nquestion: *what* is the most appropriate response? My reading of\nHP's gloss suggests that we could treat EINTR the same as EINPROGRESS,\nie, consider the connect() to have succeeded and move on to the\nwait-for-connection-complete-or-failure-using-select() phase.\nIf that works I would prefer it to repeating the connect() call,\nprimarily because it avoids any possibility of introducing an\ninfinite loop.\n\nFor PQrequestCancel we clearly do need to retry the connect(), since\nthat use of connect() isn't nonblocking. But I'm not convinced that\nwe should do so in the main-line connection code.\n\n> It is possible with the current code for the \n> connection to fail in non-blocking mode. Reason: you call connect() in \n> non-blocking mode, break out of the section on EINPROGRESS, and continue \n> assuming that the connection will be successful.\n\nNo, we don't. If you think that, then you haven't studied the code\nsufficiently to be submitting patches for it. What we actually do\nis exactly what is recommended by the manpage you're quoting at me.\nIt's just split across multiple routines.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Oct 2001 21:11:49 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully "
},
{
"msg_contents": "Actually, now that I look at this another time, there's an interesting\nquestion to ask: have you compiled with USE_SSL?\n\nThe USE_SSL case definitely is broken, since it invokes the connect()\nin blocking mode, but fails to retry on EINTR, which it clearly should\ndo in that mode. (What's even worse is that you cannot suppress\nthe problem by setting allow_ssl_try false. If you compiled with SSL\nsupport enabled then you lose anyway.)\n\nI think we can fix this by moving the SSL startup code to a saner place,\nnamely in PQconnectPoll just before it's about to send the startup\npacket. There's no reason why we shouldn't *always* do the connect()\nin nonblock mode. We could switch the socket back to blocking mode\nwhile invoking the SSL negotiation phase (which'd be skipped if not\nallow_ssl_try, so that a library compiled with USE_SSL isn't ipso\nfacto broken for people who want non-SSL nonblocking connect).\n\nIf you are testing with USE_SSL then that explains why you are seeing\nEINTR failures. If not, then we still have to ask whether EINTR really\nneeds to be handled differently from EINPROGRESS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 26 Oct 2001 21:43:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully "
}
]
[
{
"msg_contents": ">\n>\n>\n>I think you are missing the point. I am not saying that we shouldn't\n>deal with EINTR; rather I am raising what I think is a legitimate\n>question: *what* is the most appropriate response? My reading of\n>HP's gloss suggests that we could treat EINTR the same as EINPROGRESS,\n>ie, consider the connect() to have succeeded and move on to the\n>wait-for-connection-complete-or-failure-using-select() phase.\n>If that works I would prefer it to repeating the connect() call,\n>primarily because it avoids any possibility of introducing an\n>infinite loop.\n>\n\nYou wouldn't get an infinite loop, you'd get Exx indicating the \noperation was in progress. Yes, you could spin on a select() waiting \nfor the end result. What I normally do is this:\n\nconnect()\n\nwhile(select()) {\n switch () {\n case EINTR:\n break;\n case EINPROGRESS:\n nanosleep();\n break;\n case ETIMEDOUT:\n default:\n /* handle timeout and other error conditions nicely for the \nuser */\n break;\n }\n}\n\nWith EINTR, it's fine to immediately start working again because your \ncode was interrupted from outside this scope. We don't know where in \nconnect() we were interrupted, blocking or non-blocking. With \nEINPROGRESS I sleep for a while to be nice to the system. Here we know \nthat things are moving along like they should be and we are in a proper \nsleepable period.\n\nThat isn't to imply that things will break if we sleep from EINTR. Only \nthat connect() exited due to an interruption, not due to planning.\n\n>\n>No, we don't. If you think that, then you haven't studied the code\n>sufficiently to be submitting patches for it. What we actually do\n>is exactly what is recommended by the manpage you're quoting at me.\n>It's just split across multiple routines.\n>\n\n I traced several calls and they run through a few functions which end \nup in pqFlush. These code paths haven't checked the socket to see if it \nis ready for RW operation yet. 
pqFlush calls send() [ignoring SSL].\n\nOnly after a lot of code has been traversed is pqWait run in which the \nsocket is checked for RW and EINTR. My point that I was bringing up \nwith Peter was that it's much nicer to the system to wait for the \nsocket to become usable before going through all that code. In the \nprevious email I suggested that with a sufficiently fast timer event, \nyou'd never get back through the PQconnect* before being interrupted \nagain and that's why I advocate putting the EINTR as close to the \nconnect() as possible. Tying this together is why it is possible to \nfail, a good amount of code is traversed before you get back to dealing \nwith the socket. Anywhere in between, signal events can happen again.\n\nThat's what provoked this original patch. Unless I shut off my timer or \nchanged my timer to happen in the long distant future, I would never \nhave a successful connection established.\n\nDavid\n\n\n",
"msg_date": "Fri, 26 Oct 2001 22:08:05 -0400",
"msg_from": "David Ford <david@blue-labs.org>",
"msg_from_op": true,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully"
},
{
"msg_contents": "David Ford <david@blue-labs.org> writes:\n> I traced several calls and they run through a few functions which end \n> up in pqFlush. These code paths haven't checked the socket to see if it \n> is ready for RW operation yet. pqFlush calls send() [ignoring SSL].\n\nWhere? AFAICS (ignoring the USE_SSL breakage), connectDBStart will\nreturn immediately after calling connect(), and the next thing\nthat's done is pqWait from connectDBComplete. If there's a path that\ndoes what you claim, that's a bug ... but I don't see it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Oct 2001 12:46:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [patch] helps fe-connect.c handle -EINTR more gracefully "
}
]
[
{
"msg_contents": "Hi all!\n\nI wanted to propose a possible \"rationalization\" of the PostgreSQL naming\nscheme, as briefly outlined below. The following suggestion may seem like a\ntrivial improvement to some, but to me it is a matter of polish and\nconsistency.\n\nOne possible renaming / reorganization: (feedback encouraged!!!)\n\nchange default account name postgres to pgsql\nchange daemon name postmaster to pgsqld\nchange client name psql to pgsql\nchange data location /var/lib/pgsql/data to /var/pgsql\nmove .conf files from /var/lib/pgsql/data to /etc/pgsql\nchange all PG_xxx file names (PG_VERSION, /usr/bin/pg_xxx, etc) to pgsql_xxx\nchange all postgresql* file names to pgsql*\nwhatever else I have missed...:)\n\nI think this would be a very worthwhile improvement, but I don't know how\nothers feel about this. It should make it easier for newbies to learn their\nway around and generally reduce confusion.\n\nGoing a bit further in reorganization, if the config files always lived in\nan /etc/pgsql directory, then pgsqld (aka postmaster) could start with zero\nparameters and zero environment variables (true?), since it could get PGDATA\nand PGLIB type data, plus log file location, from the config files. This\nshould simplify the init scripts as well, and generally make setup easier.\nEventually, other environment variables (like the one passed to CREATE\nDATABASE name WITH\nLOCATION = 'location' where location is an environment variable) could be\neliminated, so that all configuration information lived in /etc/pgsql conf\nfiles.\n\nI am a Unix/Linux novice, but this seems to make sense to me. What does\neveryone think?\n\nRob\n\nPS Please forgive me in advance if this is not the correct mailing list to\npropose this on.\n\n",
"msg_date": "Fri, 26 Oct 2001 22:52:50 -0400",
"msg_from": "\"Robert Dyas\" <rdyas@adelphia.net>",
"msg_from_op": true,
"msg_subject": "consistent naming of components"
},
{
"msg_contents": "> change default account name postgres to pgsql\n> change daemon name postmaster to pgsqld\n> change client name psql to pgsql\n> change data location /var/lib/pgsql/data to /var/pgsql\n> move .conf files from /var/lib/pgsql/data to /etc/pgsql\n\n*coff*\n\nThe more correct (ie. anything but linux) place to put conf files is\n/usr/local/etc/pgsql. And anyway - you can change the position of\nthese files at compile time...\n\nChris\n\n\n",
"msg_date": "Sat, 27 Oct 2001 18:05:48 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: consistent naming of components"
},
{
"msg_contents": "\"Robert Dyas\" <rdyas@adelphia.net> writes:\n> [ rename and move just about everything in sight ]\n\nSorry, but I don't think this is going to happen. We'd be breaking\na heck of a lot of user applications, startup scripts, etc to achieve\n(IMHO) very little of value. Renaming psql->pgsql would alone break\nmore user scripts than I care to think about.\n\n> change data location /var/lib/pgsql/data to /var/pgsql\n> move .conf files from /var/lib/pgsql/data to /etc/pgsql\n\nThe present sources do not have any hardwired notion of where things\nshould go. If you care to install things in those directories, you\ncan --- but you won't get far insisting that everyone else should do\nlikewise. Preferred filesystem organization varies across platforms.\nEven if it didn't, there are situations such as running multiple\npostmasters (eg, setting up a test version) in which some instances\n*must* have a nonstandard location.\n\nYou might possibly be able to talk the RPM maintainer into changing\nhis ideas of where the RPMs should install stuff --- but I believe\nhe thinks he's following the Linux filesystem layout standard\n(FHS? forget what it's called exactly). In any case, breaking\nbackwards compatibility won't be an easy sell.\n\n> Going a bit further in reorganization, if the config files always lived in\n> an /etc/pgsql directory, then pgsqld (aka postmaster) could start with zero\n> parameters and zero environment variables (true?),\n\nAgain, see multiple-postmaster issue. AFAICT you are proposing to\nremove flexibility that is *necessary* for some people. (Like me\n... I currently have three postmasters of different vintages running\non this machine ...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Oct 2001 13:11:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: consistent naming of components "
},
{
"msg_contents": "Robert Dyas writes:\n\n> One possible renaming / reorganization: (feedback encouraged!!!)\n>\n> change default account name postgres to pgsql\n\nIt's a little known secret, but there is no default account name!\n\"postgres\" is simply what most people seem to choose (but by no means\nall). The documentation might refer to \"postgres\" as well, but that's\njust to align with common usage.\n\n> change daemon name postmaster to pgsqld\n\nMost people don't start the postmaster directly anyway, so I don't see\nthis as an improvement.\n\n> change client name psql to pgsql\n\nThere are more clients than just psql. What's PgAccess' name going to be.\npsql is just the name of the program and it's hardwired into people's\nfingers.\n\n> change data location /var/lib/pgsql/data to /var/pgsql\n\nThere is no default data location either. The /var/lib location is\nprobably what the FHS folks dreamt up, but other platforms don't use it.\n\n> move .conf files from /var/lib/pgsql/data to /etc/pgsql\n\nAs there is no default data location, these are just two arbitrary\nlocations.\n\n> change all PG_xxx file names (PG_VERSION, /usr/bin/pg_xxx, etc) to pgsql_xxx\n\nMore typing, longer file names, not pretty.\n\n> change all postgresql* file names to pgsql*\n\nThe package is called PostgreSQL, so if anything, the change should be\npgsql => postgresql.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 28 Oct 2001 13:31:06 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: consistent naming of components"
},
{
"msg_contents": "Personally I see very little gain for lots of work and problems.\n\nBut whatever is decided it'll be ok for me if I can still tell a postmaster\non the command line where its base directory or global configuration file\nis and it finds everything else from there. \n\nApache is almost there, need to edit the apachectl and init scripts and\nthat's about it.\n\nThe supplied Postgresql initstyle script kills all postmasters though :(.\n\nCheerio,\nLink.\n\nAt 10:52 PM 26-10-2001 -0400, Robert Dyas wrote:\n>Hi all!\n>\n>I wanted to propose a possible \"rationalization\" of the PostgreSQL naming\n>scheme, as briefly outlined below. The following suggestion may seem like a\n>trivial improvement to some, but to me it is a matter of polish and\n>consistency.\n>\n>One possible renaming / reorganization: (feedback encouraged!!!)\n>\n>change default account name postgres to pgsql\n>change daemon name postmaster to pgsqld\n>change client name psql to pgsql\n>change data location /var/lib/pgsql/data to /var/pgsql\n>move .conf files from /var/lib/pgsql/data to /etc/pgsql\n>change all PG_xxx file names (PG_VERSION, /usr/bin/pg_xxx, etc) to pgsql_xxx\n>change all postgresql* file names to pgsql*\n>whatever else I have missed...:)\n>\n>I think this would be a very worthwhile improvement, but I don't know how\n>others feel about this. It should make it easier for newbies to learn their\n>way around and generally reduce confusion.\n>\n>Going a bit further in reorganization, if the config files always lived in\n>an /etc/pgsql directory, then pgsqld (aka postmaster) could start with zero\n>parameters and zero environment variables (true?), since it could get PGDATA\n>and PGLIB type data, plus log file location, from the config files. 
This\n>should simplify the init scripts as well, and generally make setup easier.\n>Eventually, other environment variables (like the one passed to CREATE\n>DATABASE name WITH\n>LOCATION = 'location' where location is an environment variable) could be\n>eliminated, so that all configuration information lived in /etc/pgsql conf\n>files.\n>\n>I am a Unix/Linux novice, but this seems to make sense to me. What does\n>everyone think?\n>\n>Rob\n>\n>PS Please forgive me in advance if this is not the correct mailing list to\n>propose this on.\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n>\n\n",
"msg_date": "Mon, 29 Oct 2001 13:55:46 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: consistent naming of components"
},
{
"msg_contents": "I'll consider this dead.\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: Saturday, October 27, 2001 1:12 PM\nTo: Robert Dyas\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] consistent naming of components\n\n\n\"Robert Dyas\" <rdyas@adelphia.net> writes:\n> [ rename and move just about everything in sight ]\n\nSorry, but I don't think this is going to happen. We'd be breaking\na heck of a lot of user applications, startup scripts, etc to achieve\n(IMHO) very little of value. Renaming psql->pgsql would alone break\nmore user scripts than I care to think about.\n\n> change data location /var/lib/pgsql/data to /var/pgsql\n> move .conf files from /var/lib/pgsql/data to /etc/pgsql\n\nThe present sources do not have any hardwired notion of where things\nshould go. If you care to install things in those directories, you\ncan --- but you won't get far insisting that everyone else should do\nlikewise. Preferred filesystem organization varies across platforms.\nEven if it didn't, there are situations such as running multiple\npostmasters (eg, setting up a test version) in which some instances\n*must* have a nonstandard location.\n\nYou might possibly be able to talk the RPM maintainer into changing\nhis ideas of where the RPMs should install stuff --- but I believe\nhe thinks he's following the Linux filesystem layout standard\n(FHS? forget what it's called exactly). In any case, breaking\nbackwards compatibility won't be an easy sell.\n\n> Going a bit further in reorganization, if the config files always lived in\n> an /etc/pgsql directory, then pgsqld (aka postmaster) could start with\nzero\n> parameters and zero environment variables (true?),\n\nAgain, see multiple-postmaster issue. AFAICT you are proposing to\nremove flexibility that is *necessary* for some people. (Like me\n... I currently have three postmasters of different vintages running\non this machine ...)\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Mon, 29 Oct 2001 15:31:09 -0500",
"msg_from": "\"Robert Dyas\" <rdyas@adelphia.net>",
"msg_from_op": true,
"msg_subject": "Re: consistent naming of components "
}
]
[
{
"msg_contents": "I cannot use RULEs with WHERE clauses. What's wrong? Is this a bug? I also\nhad this problem with 7.1.1. The documentation says this should work.\n\nfoo=# SELECT version();\n version\n---------------------------------------------------------------------\n PostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\nfoo=# CREATE TABLE a(foo integer);\nCREATE\nfoo=# CREATE TABLE b(foo integer);\nCREATE\nfoo=# CREATE VIEW c AS SELECT foo FROM a;\nCREATE\nfoo=# CREATE RULE d AS ON INSERT TO c WHERE new.foo=5 DO INSTEAD SELECT foo FROM b;\nCREATE\nfoo=# INSERT INTO c VALUES (5);\nERROR: Cannot insert into a view without an appropriate rule\nfoo=# INSERT INTO c VALUES (6);\nERROR: Cannot insert into a view without an appropriate rule\n\nTIA, Zoltan\n\n-- \n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n",
"msg_date": "Sat, 27 Oct 2001 13:10:50 +0200 (CEST)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "bug (?) with RULEs with WHERE"
},
{
"msg_contents": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> foo=# CREATE TABLE a(foo integer);\n> CREATE\n> foo=# CREATE TABLE b(foo integer);\n> CREATE\n> foo=# CREATE VIEW c AS SELECT foo FROM a;\n> CREATE\n> foo=# CREATE RULE d AS ON INSERT TO c WHERE new.foo=5 DO INSTEAD SELECT foo FROM b;\n> CREATE\n> foo=# INSERT INTO c VALUES (5);\n> ERROR: Cannot insert into a view without an appropriate rule\n\nYou didn't provide a rule covering the new.foo<>5 case.\n\nIn practice, you *must* have an unconditional INSTEAD rule present for\nany view operation you want to allow. It can be DO INSTEAD NOTHING,\nand then you can do all your useful work in conditional rules, but the\nunconditional rule must be there. Else the system thinks that perhaps\nthe insert into the view would really happen.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 27 Oct 2001 14:04:02 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: bug (?) with RULEs with WHERE "
},
{
"msg_contents": "On Sat, 27 Oct 2001, Tom Lane wrote:\n\n> Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu> writes:\n> > [...]\n> > foo=# CREATE RULE d AS ON INSERT TO c WHERE new.foo=5 DO INSTEAD SELECT foo FROM b;\n> > CREATE\n> > foo=# INSERT INTO c VALUES (5);\n> > ERROR: Cannot insert into a view without an appropriate rule\n> \n> You didn't provide a rule covering the new.foo<>5 case.\n> \n> In practice, you *must* have an unconditional INSTEAD rule present for\n> any view operation you want to allow. It can be DO INSTEAD NOTHING,\n> and then you can do all your useful work in conditional rules, but the\n> unconditional rule must be there. Else the system thinks that perhaps\n> the insert into the view would really happen.\n\nThank you, I see. It works now. But in 7.1.1 on a rather complex view I\nexperienced that the RULE has been executed as many times as many rows the\nview contains, although I added a WHERE to filter the rows: in fact it \nshould have been executed only once. In 7.1.3 this problem doesn't\noccur. Has anything been changed since 7.1.1 in this code?\n\nSo I'm migrating to 7.1.3 now. But currently I'm still having problems\nwith user authentication (I get \"Password authentication failed for user\n'xxxx'.\" errors). I always used\n\nINSERT INTO pg_shadow...\n\nIs this changed? With\n\nALTER USER...\n\nit works, of course. Do you suggest stopping use \"INSERT INTO\npg_shadow...\"?\n\nTIA, Zoltan\n\n Kov\\'acs, Zolt\\'an\n kovacsz@pc10.radnoti-szeged.sulinet.hu\n http://www.math.u-szeged.hu/~kovzol\n ftp://pc10.radnoti-szeged.sulinet.hu/home/kovacsz\n\n",
"msg_date": "Tue, 30 Oct 2001 13:57:57 +0100 (CET)",
"msg_from": "Kovacs Zoltan <kovacsz@pc10.radnoti-szeged.sulinet.hu>",
"msg_from_op": true,
"msg_subject": "Re: bug (?) with RULEs with WHERE "
}
]
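Tom's rule of thumb — an unconditional INSTEAD rule must exist before conditional rules can do useful work — applied to Kovacs' example would look like the sketch below (rule names are illustrative, and an INSERT action is substituted for the original SELECT to make the routing visible):

```sql
CREATE TABLE a (foo integer);
CREATE TABLE b (foo integer);
CREATE VIEW c AS SELECT foo FROM a;

-- Unconditional rule: required, or any INSERT on the view is rejected
-- with "Cannot insert into a view without an appropriate rule".
CREATE RULE c_ins_default AS ON INSERT TO c DO INSTEAD NOTHING;

-- Conditional rule: does the useful work for the new.foo = 5 case.
CREATE RULE d AS ON INSERT TO c
    WHERE new.foo = 5
    DO INSTEAD INSERT INTO b VALUES (new.foo);

INSERT INTO c VALUES (5);  -- routed into b by rule d
INSERT INTO c VALUES (6);  -- matches only the DO INSTEAD NOTHING rule
```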
[
{
"msg_contents": "I recently ran pgindent, which had some fixes from the 7.1 version that\nwere suggested by Tom Lane. Unfortunately, some of my fixes had bad\nside effects, and I would like to run pgindent again to correct those\nproblems Tom has found.\n\nThe changes should be minimal, mostly related to indenting of\nstruct/enum and whitespace before single-line comments. I forgot to add\nthe ODBC symbols to pgindent so I need to rerun ODBC anyway. JDBC will\nnot be affected.\n\nIf I don't hear any objections, I will run it in 12 hours. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 27 Oct 2001 08:04:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "pgindent run"
},
{
"msg_contents": "\nAll done. Thanks guys.\n\n> I recently ran pgindent, which had some fixes from the 7.1 version that\n> were suggested by Tom Lane. Unfortunately, some of my fixes had bad\n> side effects, and I would like to run pgindent again to correct those\n> problems Tom has found.\n> \n> The changes should be minimal, mostly related to indenting of\n> struct/enum and whitespace before single-line comments. I forgot to add\n> the ODBC symbols to pgindent so I need to rerun ODBC anyway. JDBC will\n> not be affected.\n> \n> If I don't hear any objections, I will run it in 12 hours. Thanks.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 28 Oct 2001 01:24:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: pgindent run"
}
]
[
{
"msg_contents": "I tried posting this a couple times, and I'm not sure why I never saw it, but I\ndo think it is something worth thinking about.\n\nThere was some discussion about \"pre-forking\" PostgreSQL, and I gathered that\none of the problems would be how do you know what database to open? At our\nshop, we use a combination of Oracle and PostgreSQL. (BTW: Congrats guys, we\nhave more stability issues with Oracle than we do Postgres!)\n\nOne of the features of Oracle that is kind of cool, is that it separates the\ndatabase and the network protocol, i.e. the oracle and listener programs. The\nlistener deals with all the networking crap, and oracle just does the database\nstuff.\n\nWhile somewhat problematic to configure, it has its advantages. While thinking\nabout pre-forking postgres, it occured to me that Postgres may be made to work\nsimilarly.\n\npostmaster could start up as it normally does, however, there could be an\nadditional configuration for database listeners. Similar to postgresql.conf,\npglisteners.conf, could specify databases which could be pre-forked and\nlistening on other TCP/IP ports.\n\nI envision something like this:\n\n[sales_db]\nenable_seqscan = false\nport = 5433\nhostname_lookup = false\n\n[marketing_db]\nport = 5434\n\nThat way postmaster monitors the state of the \"listener\" postgres, and after it\naccepts on its port, postmaster will fork off another postgres to wait in a\nsocket accept().\n\nI think it would also be cool to be able to configure the behavior of the\nlisteners differently than the standard postmaster defaults.\n",
"msg_date": "Sat, 27 Oct 2001 10:20:58 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Some suggestions."
}
]
[
{
"msg_contents": "I find the HISTORY file to be distressingly poor to peruse. Reasons:\n\nA large proportion of the items don't convey any useful information.\nExamples:\n\n| PLpgSQL fix for SELECT... FOR UPDATE (Tom)\n\nWhat did this fix? Does SELECT FOR UDPATE now work whereas it didn't use\nto? => \"SELECT ... FOR UPDATE now works in PL/pgSQL\"\n\n| Fix for PL/pgSQL PERFORM returning multiple rows (Tom)\n\nWhat did this fix? Can you return multiple rows now or does it merely\ngive an error message that you cannot where it used to crash?\n\n| Fix for inherited CHECK constraints (Stephan Szabo)\n\nditto\n\n| PL/pgSQL Allow IS and FOR in cursors (Bruce)\n\nIf I didn't happen to know exactly what this meant, I wouldn't have a\nclue.\n\n| Allow NULL to appear at beginning/end based on ORDER BY (Tom)\n\nIt doesn't \"allow\", it just \"does\".\n\n| Pltcl add spi_lastoid capability (bob@redivi.com)\n\nCapability = command, function, type, ...?\n\n| Allow column renaming in views\n\nALTER VIEW foo RENAME COLUMN -- huh?\n\n| New option to output SET SESSION AUTHORIZATION commands (Peter E)\n\nOption to what to output where?\n\n| New postgresql.conf option to enable/disable \"col = NULL\" comparisons\n\nThis is not correct.\n\n| Cachability fixes (Thomas, Tom)\n\nI don't think cachability as such was \"fixed\", or even \"changed\". 
The\nitem probably related to some iscacheable pg_proc entries which were\ntemporarily broken.\n\n\nThe categories Bug Fixes, Enhancements, Types, Performance, Interfaces,\nSource Code could be split better, and they're not used very consistently.\nAn example from each category that doesn't fit:\n\nBug Fixes: Disallow access to pg_statistic for non-super user (Tom)\nThis was not a bug, but a consequence of a change.\n\nEnhancements: Fix TCL COPY TO/FROM (ljb)\nIf it is \"fixed\" then it was broken before.\n\nTypes: New function bit_length() (Peter E)\nNo comment.\n\nPerformance: Dynahash portability improvements (Tom)\n\nInterfaces: Obviously, anything done in the interfaces is also either a\nbug fix or an enhancement. And what exactly constitutes an interface is\nnot clear to me.\n\nSource code: Remove OID's from some system tables (Tom)\nMaybe this is an enhancement.\n\n\nSome changes are \"must know\", because they are incompatible, such as\n\n| Load pg_hba.conf only on startup and SIGHUP (Bruce)\n\nThis should be made clear somewhere.\n\n\nFinally,\n\n| Remove configure --enable-pltcl-utf option\n\nThere was never such an option in a previous release.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sat, 27 Oct 2001 19:44:05 +0200 (CEST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "HISTORY file"
},
{
"msg_contents": "\n> | Fix for inherited CHECK constraints (Stephan Szabo)\n> \n> ditto\n\nIf this is what I think it is, I think the actual fix was the \nfollowing (although I don't know what a particularly good wording\nis)\n\nALTER TABLE ADD CONSTRAINT now properly adds check constraints\nto children of the specified table, which is consistant to\nthe behavior of check constraints in inheritance trees created\nat create time.\n\n",
"msg_date": "Sat, 27 Oct 2001 11:56:31 -0700 (PDT)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: HISTORY file"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> I find the HISTORY file to be distressingly poor to peruse. Reasons:\n> \n\nI noticed ODBC related items.\nIt seems plain to change as follows at least for me.\n\nODBC\n Remove query limit (Hiroshi)\n Remove query size limit\n\n Remove text field size limit (Hiroshi)\n Fix for SQLPrimaryKeys() (Hiroshi)\n Fix for SQLPrimaryKeys in multibyte mode\n\n Procedure calls (Hiroshi)\n Allow ODBC procedure calls\n FETCH first fix (Aidan Mountford)\n??? maybe the following ?\n Improve boolean handing\n Updatable cursors (Hiroshi)\nThis isn't true. Please remove from the HISTORY list.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 29 Oct 2001 11:18:31 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: HISTORY file"
},
{
"msg_contents": "\n[ Sorry I am two days late in responding to this.]\n\n> I find the HISTORY file to be distressingly poor to peruse. Reasons:\n\nWhile I do my best to generate the HISTORY file, it is far from perfect.\nI need comments like this to help me improve it. Peter, glad you took\nthe time to review the list. Let me address each one and modify the\nHISTORY file accordingly:\n\n> \n> A large proportion of the items don't convey any useful information.\n> Examples:\n> \n> | PLpgSQL fix for SELECT... FOR UPDATE (Tom)\n> \n> What did this fix? Does SELECT FOR UDPATE now work whereas it didn't use\n> to? => \"SELECT ... FOR UPDATE now works in PL/pgSQL\"\n\nPart of the problem here is that I have to guess from the commit message\nas to what was actually changed. The entries have to be:\n\n\to concise\n\to understandable to novices\n\to combine entries fixing the same problem\n\nI could use some more information on this one. I should add that\ncertain committers, particularly to interfaces, have commit messages\nthat just say \"Committed patch from Fred\" and this does not help me\ngenerate a proper HISTORY file. Usually, copying something from the\noriginal message helps.\n\n> \n> | Fix for PL/pgSQL PERFORM returning multiple rows (Tom)\n> \n> What did this fix? Can you return multiple rows now or does it merely\n> give an error message that you cannot where it used to crash?\n\nAgain, I don't know.\n\n> | Fix for inherited CHECK constraints (Stephan Szabo)\n> \n> ditto\n\nI don't know the details. 
Can you give them to me?\n\n> \n> | PL/pgSQL Allow IS and FOR in cursors (Bruce)\n> \n> If I didn't happen to know exactly what this meant, I wouldn't have a\n> clue.\n\nI can fix this one:\n\n PL/pgSQL Allow IS and FOR keywords in cursors, for compatibility (Bruce)\n\n> | Allow NULL to appear at beginning/end based on ORDER BY (Tom)\n> \n> It doesn't \"allow\", it just \"does\".\n\nUh, yes, this is better:\n\n Make NULL appear at beginning/end based on ORDER BY (Tom) \n\n> | Pltcl add spi_lastoid capability (bob@redivi.com)\n> \n> Capability = command, function, type, ...?\n\nGot it, capability -> function:\n\n Pltcl add spi_lastoid function (bob@redivi.com)\n> \n> | Allow column renaming in views\n> \n> ALTER VIEW foo RENAME COLUMN -- huh?\n\nActually, yes, it modifies the AS label of the column. Was that what\nyou meant?\n\n\tcreate view x as select * from pg_class;\n\talter table x rename column relname to jj;\n\tselect jj from x;\n\nI suppose it didn't work before.\n\n> | New option to output SET SESSION AUTHORIZATION commands (Peter E)\n> \n> Option to what to output where?\n\nI now see the entire command was added in 7.2. I missed the earlier CVS\ncommit:\n\n\tNew SET SESSION AUTHORIZATION command (Peter E) \n\n\n> | New postgresql.conf option to enable/disable \"col = NULL\" comparisons\n> \n> This is not correct.\n\nUh, it isn't? Can you give me some new text?\n\n> \n> | Cachability fixes (Thomas, Tom)\n> \n> I don't think cachability as such was \"fixed\", or even \"changed\". The\n> item probably related to some iscacheable pg_proc entries which were\n> temporarily broken.\n\nDo you have other wording? 
Seems there was a cachability bug\nreport and we \"fixed\" it in the catalogs.\n\n\n> The categories Bug Fixes, Enhancements, Types, Performance, Interfaces,\n> Source Code could be split better, and they're not used very consistently.\n> An example from each category that doesn't fit:\n> \n> Bug Fixes: Disallow access to pg_statistic for non-super user (Tom)\n> This was not a bug, but a consequence of a change.\n\nI considered it a bug. If there was a salary column, any user in 7.1\ncould see the max value in the column. Seemed like a security bug to\nme.\n\n> \n> Enhancements: Fix TCL COPY TO/FROM (ljb)\n> If it is \"fixed\" then it was broken before.\n\nNow:\n\n\tAdd TCL COPY TO/FROM (ljb) \n\nFixed. :-)\n\n> Types: New function bit_length() (Peter E)\n> No comment.\n\nUh, I started to put some of the type-specific additions into Types. Is\nthat OK? Particularly the multi-byte ones so they are all in one place.\n\n> \n> Performance: Dynahash portability improvements (Tom)\n\nGood point. Moved.\n\n> \n> Interfaces: Obviously, anything done in the interfaces is also either a\n> bug fix or an enhancement. And what exactly constitutes an interface is\n> not clear to me.\n\nNot clear to me either. I wanted to get jdbc and odbc into separate\nlists because they are so large. Seemed like a good idea.\n\n> Source code: Remove OID's from some system tables (Tom)\n> Maybe this is an enhancement.\n\nUh, yes.\n\n> Some changes are \"must know\", because they are incompatible, such as\n> \n> | Load pg_hba.conf only on startup and SIGHUP (Bruce)\n> \n> This should be made clear somewhere.\n\nAdded to Migration section:\n\n\tAlso, pg_hba.conf only loads on SIGHUP now.\n\n> Finally,\n> \n> | Remove configure --enable-pltcl-utf option\n> \n> There was never such an option in a previous release.\n\nOh, did that come in and out in 7.2? Removed.\n\nLet me know what else you see. 
Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n/usr/local/bin/mime: cannot create /dev/ttyp5: permission denied\n",
"msg_date": "Mon, 29 Oct 2001 14:10:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: HISTORY file"
},
{
"msg_contents": "> \n> > | Fix for inherited CHECK constraints (Stephan Szabo)\n> > \n> > ditto\n> \n> If this is what I think it is, I think the actual fix was the \n> following (although I don't know what a particularly good wording\n> is)\n> \n> ALTER TABLE ADD CONSTRAINT now properly adds check constraints\n> to children of the specified table, which is consistant to\n> the behavior of check constraints in inheritance trees created\n> at create time.\n\nChanged to:\n\nFix for ALTER TABLE ADD CONSTRAINT ... CHECK for inherited children\n (Stephan Szabo)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Oct 2001 14:13:32 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: HISTORY file"
},
{
"msg_contents": "\nAll changed made. Thanks.\n\n---------------------------------------------------------------------------\n\n> Peter Eisentraut wrote:\n> > \n> > I find the HISTORY file to be distressingly poor to peruse. Reasons:\n> > \n> \n> I noticed ODBC related items.\n> It seems plain to change as follows at least for me.\n> \n> ODBC\n> Remove query limit (Hiroshi)\n> Remove query size limit\n> \n> Remove text field size limit (Hiroshi)\n> Fix for SQLPrimaryKeys() (Hiroshi)\n> Fix for SQLPrimaryKeys in multibyte mode\n> \n> Procedure calls (Hiroshi)\n> Allow ODBC procedure calls\n> FETCH first fix (Aidan Mountford)\n> ??? maybe the following ?\n> Improve boolean handing\n> Updatable cursors (Hiroshi)\n> This isn't true. Please remove from the HISTORY list.\n> \n> regards,\n> Hiroshi Inoue\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 29 Oct 2001 14:15:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: HISTORY file"
}
] |
[
{
"msg_contents": "I'm working on DROP OPERATOR CLASS, and have a question about how to\nactually do the deletes. I ask as my main method has been to copy other\nbits of the backend, assuming they do things right.\n\nBut I've found two different examples, and don't know which is right? Or\nare they both?\n\nSpecifically I'm at the step of cleaning out the entries in pg_amop and\npg_amproc.\n\nExample 1 from backend/commands/remove.c for aggregates:\n\n relation = heap_openr(AggregateRelationName, RowExclusiveLock);\n\n tup = SearchSysCache(AGGNAME,\n PointerGetDatum(aggName),\n ObjectIdGetDatum(basetypeID),\n 0, 0);\n\n if (!HeapTupleIsValid(tup))\n agg_error(\"RemoveAggregate\", aggName, basetypeID);\n\n /* Remove any comments related to this aggregate */\n DeleteComments(tup->t_data->t_oid, RelationGetRelid(relation));\n\n simple_heap_delete(relation, &tup->t_self);\n\n ReleaseSysCache(tup);\n\n heap_close(relation, RowExclusiveLock);\n\nAnother apparently contrary example from backend/catalog/heap.c:\n\n pg_class_desc = heap_openr(RelationRelationName, RowExclusiveLock);\n\n tup = SearchSysCacheCopy(RELOID,\n ObjectIdGetDatum(rel->rd_id),\n 0, 0, 0);\n if (!HeapTupleIsValid(tup))\n elog(ERROR, \"Relation \\\"%s\\\" does not exist\",\n RelationGetRelationName(rel));\n\n /*\n * delete the relation tuple from pg_class, and finish up.\n */\n simple_heap_delete(pg_class_desc, &tup->t_self);\n heap_freetuple(tup);\n\n heap_close(pg_class_desc, RowExclusiveLock);\n\nIn one we SearchSysCache(), simple_heap_delete(), and then\nReleaseSysCache(). In the other we SearchSysCacheCopy(),\nsimple_heap_delete, and heap_freetuple().\n\nAre they two parallel ways, or is one better?\n\nTake care,\n\nBill\n\n",
"msg_date": "Sat, 27 Oct 2001 20:31:09 -0700 (PDT)",
"msg_from": "Bill Studenmund <wrstuden@netbsd.org>",
"msg_from_op": true,
"msg_subject": "Correct way to do deletes?"
},
{
"msg_contents": "Bill Studenmund <wrstuden@netbsd.org> writes:\n> In one we SearchSysCache(), simple_heap_delete(), and then\n> ReleaseSysCache(). In the other we SearchSysCacheCopy(),\n> simple_heap_delete, and heap_freetuple().\n\n> Are they two parallel ways, or is one better?\n\nI don't think it matters much anymore. The copy approach takes a few\nmore cycles, though, to no good purpose since you have no need to modify\nthe tuple struct obtained from cache. (When updating a tuple, copying\nthe cache entry and scribbling directly on the copy is sometimes more\nconvenient than calling heap_modifytuple to build a new tuple.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Oct 2001 16:56:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Correct way to do deletes? "
}
] |
[
{
"msg_contents": "We used to have to force sequential scans to be disabled because of a very\nnon-uniform distribution of keys in an index, to actually use the index. We are\na music site and a very large number of keys simply point to a catch-all of\n\"Various Artists\" or \"Soundtrack.\" The 7.2 beta's statistics and optimizer\nseems very much better than previous versions of PostgreSQL. Great job guys!\n\nThe table:\ncdinfo=# select count(*) from zsong ;\n count\n---------\n 3840513\n(1 row)\n\ncdinfo=# select artistid, count(artistid) from zsong group by artistid order by\ncount(artistid) desc limit 2;\n artistid | count\n-----------+--------\n 100050450 | 461727\n 100036031 | 54699\n(2 rows)\n\nIn PostgreSQL 7.1.2:\ncdinfo=# select version() ;\n version\n---------------------------------------------------------------------\n PostgreSQL 7.1.2 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\ncdinfo=# explain select count(*) from zsong where artistid = 1 ;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=93874.21..93874.21 rows=1 width=0)\n -> Seq Scan on zsong (cost=0.00..93769.55 rows=41863 width=0)\n\nEXPLAIN\ncdinfo=# explain select count(*) from zsong where artistid = 100050450;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=94816.11..94816.11 rows=1 width=0)\n -> Seq Scan on zsong (cost=0.00..93769.55 rows=418625 width=0)\n\nEXPLAIN\n\nIn PostgreSQL 7.2b1\ncdinfo=# select version();\n version\n-------------------------------------------------------------\n PostgreSQL 7.2b1 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\ncdinfo=# explain select count(*) from zsong where artistid = 1 ;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=80.10..80.10 rows=1 width=0)\n -> Index Scan using zsong_artistid on zsong (cost=0.00..80.00 rows=39\nwidth=0)\n\nEXPLAIN\ncdinfo=# explain select count(*) from zsong where artistid = 100050450;\nNOTICE: QUERY PLAN:\n\nAggregate (cost=94899.78..94899.78 rows=1 width=0)\n -> Seq Scan on zsong (cost=0.00..93664.41 rows=494146 
width=0)\n\nEXPLAIN\n",
"msg_date": "Sat, 27 Oct 2001 23:38:53 -0400",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Optimizer, index use, good news for 7.2b1"
}
] |
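The planner behaviour shown above is a selectivity trade-off: a key matching 461727 of 3.84M rows touches most of the table anyway, so a sequential scan wins, while a 39-row key favours the index. A toy cost comparison sketches the idea (all constants are invented; this is not PostgreSQL's real cost model):

```python
# Toy model: a seq scan reads every page once; an index scan pays a
# random-access penalty per matching row. Constants are illustrative.
TOTAL_ROWS = 3_840_513      # from the zsong row count above
ROWS_PER_PAGE = 100         # assumed
RANDOM_PENALTY = 4          # assumed cost per index-fetched row

def seq_cost():
    return TOTAL_ROWS // ROWS_PER_PAGE

def index_cost(matching_rows):
    return matching_rows * RANDOM_PENALTY

for match in (39, 461_727):  # the two row estimates from EXPLAIN above
    plan = "index scan" if index_cost(match) < seq_cost() else "seq scan"
    print(match, plan)
```

With these invented constants the crossover lands where the EXPLAIN output does: the 39-row estimate picks the index scan, the 461727-row estimate picks the sequential scan. The per-key statistics added in 7.2 are what let the planner distinguish the two cases at all.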
[
{
"msg_contents": "Dear all,\n\nI am running PostgreSQL 7.1.2 with UNICODE support in production.\nMaybe I miss something about UNICODE:\n\nCREATE TABLE \"test\" (\n \"source_oid\" serial,\n \"source_timestamp\" timestamp,\n \"source_creation\" date DEFAULT 'now',\n \"source_modification\" date DEFAULT 'now',\n \"source_content\" text\n);\n\nINSERT INTO test (source_content) VALUES ('Photocopie du permis de \nconstruire accept�.');\n\nNow, when trying :\nSELECT * FROM test WHERE source_content ILIKE '%accept%'; ---> returns the \nrecord;\nSELECT * FROM test WHERE source_content ILIKE '%accept�%' ---> returns nothing\nSELECT * FROM test WHERE source_content ILIKE '%accepte%' ---> returns nothing\n\nThe same happens from ODBC, PHP and psql. Can you reproduce this?\n\nI have tried\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Sun, 28 Oct 2001 09:22:24 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "UNICODE"
},
{
"msg_contents": "On Sun, Oct 28, 2001 at 09:22:24AM +0100, Jean-Michel POURE wrote:\n> Dear all,\n> \n> I am running PostgreSQL 7.1.2 with UNICODE support in production.\n> Maybe I miss something about UNICODE:\n\n> SELECT * FROM test WHERE source_content ILIKE '%accept�%' ---> returns \n> nothing\n> SELECT * FROM test WHERE source_content ILIKE '%accepte%' ---> returns \n> nothing\n> \n> The same happens from ODBC, PHP and psql. Can you reproduce this?\n\n1) Did you compile PostgreSQL with --enable-locale\n2) Did you set correct locale for postmaster (LANG=xxx)\n\n-- \nmarko\n\n",
"msg_date": "Sun, 28 Oct 2001 11:50:42 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: UNICODE"
},
{
"msg_contents": "\n>1) Did you compile PostgreSQL with --enable-locale\nYes.\n\n>2) Did you set correct locale for postmaster (LANG=xxx)\nDatabase was create using CREATE db WITH ENCODING='UNICODE'.\npgsql: \\encoding returns UNICODE.\n\nThe db stores multiple languages (French, English, Japanese).\nWhy should I define a *single* locale for postmaster?\nDo I miss something?\n\nBest regards,\nJean-Michel POURE\n\n",
"msg_date": "Sun, 28 Oct 2001 11:06:00 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: UNICODE"
},
{
"msg_contents": "On Sun, Oct 28, 2001 at 09:22:24AM +0100, Jean-Michel POURE wrote:\n> \n> I am running PostgreSQL 7.1.2 with UNICODE support in production.\n> Maybe I miss something about UNICODE:\n> \n> CREATE TABLE \"test\" (\n> \"source_oid\" serial,\n> \"source_timestamp\" timestamp,\n> \"source_creation\" date DEFAULT 'now',\n> \"source_modification\" date DEFAULT 'now',\n> \"source_content\" text\n> );\n> \n> INSERT INTO test (source_content) VALUES ('Photocopie du permis de \n> construire accept�.');\n> \n> Now, when trying :\n> SELECT * FROM test WHERE source_content ILIKE '%accept%'; ---> returns the \n> record;\n> SELECT * FROM test WHERE source_content ILIKE '%accept�%' ---> returns \n> nothing\n> SELECT * FROM test WHERE source_content ILIKE '%accepte%' ---> returns \n> nothing\n> \n> The same happens from ODBC, PHP and psql. Can you reproduce this?\n\nSorry, I misinterpreted what your problem is. I somehow thought\nyou want the '�' and 'e' produce same result - for that you need\nto mess with locale, but LIKE does not use locale anyway...\n\nNow I reread you message and here's hint:\n\n* If client_encoding == server_encoding, the bytes are put into\n DB as-is - no conversion is done.\n\nSo are you abslutely sure you have on client side UTF8 strings?\nUnfortunately you cant use client_encoding=latin1 as PostgreSQL \nrefuses the do the conversion between them. (I am with 7.1.3)\n\nEg. I did the following:\n\n* created db with encoding = UNICODE\n* Put your example into test.sql\n* iconv -f latin1 -t utf8 test.sql > test2.sql\n* psql < test2.sql\n\nand it worked as it should...\n\n\n-- \nmarko\n\n",
"msg_date": "Sun, 28 Oct 2001 12:53:55 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: UNICODE"
},
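Marko's re-encoding step above (the iconv call) can be mimicked in a few lines; a Python sketch of the same idea, with illustrative file names, not part of the original thread:

```python
# Python stand-in for `iconv -f latin1 -t utf8 test.sql > test2.sql`:
# read a Latin-1 encoded script and rewrite it as UTF-8.
with open("test.sql", "wb") as f:
    f.write(b"accept\xe9\n")                 # 'é' as the single Latin-1 byte 0xE9

with open("test.sql", encoding="latin-1") as src, \
     open("test2.sql", "w", encoding="utf-8") as dst:
    dst.write(src.read())

with open("test2.sql", "rb") as f:
    print(f.read())                          # b'accept\xc3\xa9\n'
```

The converted file carries the UTF-8 pair 0xC3 0xA9 where the Latin-1 file had the single byte 0xE9, which is what a database created WITH ENCODING='UNICODE' actually expects to receive.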
{
"msg_contents": "I only want this query to work under Unicode:\nSELECT * FROM test WHERE source_content ILIKE '%accept�%'.\n\n>* If client_encoding == server_encoding, the bytes are put into\n> DB as-is - no conversion is done.\n>\n>So are you absolutely sure you have on client side UTF8 strings?\nPostgreSQL is compiled with UNICODE and LOCALE support.\nUnicode is used on both ends (PostgreSQL and psql).\n\n>Unfortunately you cant use client_encoding=latin1 as PostgreSQL\n>refuses the do the conversion between them. (I am with 7.1.3)\nAccording to the on-line manual, only MULE provides instant transcoding.\n\n>Eg. I did the following:\n>\n>* created db with encoding = UNICODE\n>* Put your example into test.sql\n>* iconv -f latin1 -t utf8 test.sql > test2.sql\n>* psql < test2.sql\n>\n>and it worked as it should...\n\nNice to hear it works when transcoding files to UTF-8. It shows it is not a \nback-end problem.\n\nAs for me, I typed INSERT INTO source_content VALUES ('Permis de conduire \naccept�') in Psql.\nPsql does not insert the data and I have to kill it manually. Can you \nreproduce this?\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Sun, 28 Oct 2001 12:44:26 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: UNICODE"
},
{
"msg_contents": "On Sun, Oct 28, 2001 at 12:44:26PM +0100, Jean-Michel POURE wrote:\n> I only want this query to work under Unicode:\n> SELECT * FROM test WHERE source_content ILIKE '%accept�%'.\n\nAs I showed it works, if data in db is in UTF-8 and the query\nstring 'accept�' is in UTF8\n\n> >* If client_encoding == server_encoding, the bytes are put into\n> > DB as-is - no conversion is done.\n> >\n> >So are you absolutely sure you have on client side UTF8 strings?\n> PostgreSQL is compiled with UNICODE and LOCALE support.\n> Unicode is used on both ends (PostgreSQL and psql).\n\npsql uses your input literally - so is your console/xterm in\nUNICODE/UTF8?\n\n> >Eg. I did the following:\n> >\n> >* created db with encoding = UNICODE\n> >* Put your example into test.sql\n> >* iconv -f latin1 -t utf8 test.sql > test2.sql\n> >* psql < test2.sql\n> >\n> >and it worked as it should...\n> \n> Nice to hear it works when transcoding files to UTF-8. It shows it is not a \n> back-end problem.\n> \n> As for me, I typed INSERT INTO source_content VALUES ('Permis de conduire \n> accept�') in Psql.\n\nAs I said - psql does not do any conversion.\n\n> Psql does not insert the data and I have to kill it manually. Can you \n> reproduce this?\n\nNo. If it hangs this is serious problem. Or did you simply\nforgot final ';' ? It btw does not seem valid sql to me,\nconsidering you previously provided table structure.\n\nIn the end: are the strings/queries you give to psql/pg_exec\nUTF-8 - this is now main thing, as you have _configured_\neverything correctly.\n\n-- \nmarko\n\n",
"msg_date": "Sun, 28 Oct 2001 14:05:47 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: UNICODE"
},
{
"msg_contents": "\n>psql uses your input literally - so is your console/xterm in\n>UNICODE/UTF8?\nClient: \\encoding returns 'UNICODE'.\nServer: \\list show databases. All databases are UNICODE (except TEMPLATE0 \nand TEMPLATE1 which are ASCII of course). I use a Mandrake 8.1 distribution \nand think my console is UNICODE.\n\n> > As for me, I typed INSERT INTO source_content VALUES ('Permis de conduire\n> > accept�') in Psql.\n>As I said - psql does not do any conversion.\nThe faulty query is: INSERT INTO test (source_content) VALUES ('Permis de \nconduire accept�');\n\nI just can't believe that Psql is not UTF-8 compatible. It seems unreal as \nPsql is PostgreSQL #1 helper application. Should I use PostgreSQL MULE \nencoding to have automatic trans coding. What are the guidelines, I am \ncompletely lost.\n\n> > Psql does not insert the data and I have to kill it manually. Can you\n> > reproduce this?\n>No. If it hangs this is serious problem. Or did you simply\n>forgot final ';' ? It btw does not seem valid sql to me,\n>considering you previously provided table structure.\nIs it possible that my database is corrupted? I have used pg_dump several \ntimes to dump data from production server to development servers and \nconversely. Does pg_dump produce UTF8 output? What are the guidelines when \nusing UTF-8: forget psql and pg_dump?\n\n>In the end: are the strings/queries you give to psql/pg_exec\n>UTF-8 - this is now main thing, as you have _configured_\n>everything correctly.\nEverything is configured correctly server-side (PostgreSQL, Psql).\n\nThank you very much for your support Marko,\nBest regards,\nJean-Michel\n\n\n",
"msg_date": "Sun, 28 Oct 2001 14:34:49 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: UNICODE"
},
{
"msg_contents": "On Sun, Oct 28, 2001 at 02:34:49PM +0100, Jean-Michel POURE wrote:\n> \n> >psql uses your input literally - so is your console/xterm in\n> >UNICODE/UTF8?\n> Client: \\encoding returns 'UNICODE'.\n> Server: \\list show databases. All databases are UNICODE (except TEMPLATE0 \n> and TEMPLATE1 which are ASCII of course). I use a Mandrake 8.1 distribution \n> and think my console is UNICODE.\n\nYou think? Try this:\n\n\t$ echo \"accept�\" | od -c\n\nIf your term is in utf you should get:\n\n\t0000000 a c c e p t 303 251 \\n\n\t0000011\n\nIf in iso-8859-1:\n\n\t0000000 a c c e p t 351 \\n\n\t0000010\n\nIt may be in some other 8bit encoding too, then the last number\nmay be different.\n\n> >> As for me, I typed INSERT INTO source_content VALUES ('Permis de conduire\n> >> accept�') in Psql.\n> >As I said - psql does not do any conversion.\n> The faulty query is: INSERT INTO test (source_content) VALUES ('Permis de \n> conduire accept�');\n\nHmm. It may be a bug in input routines. You give PostgreSQL a\n1byte '�', it expects 2 byte char and overflows somewhere. Can\nyou reproduce it on 7.1.3? Maybe its fixed there, I cant\nreproduce it.\n\n> I just can't believe that Psql is not UTF-8 compatible. It seems unreal as \n> Psql is PostgreSQL #1 helper application. Should I use PostgreSQL MULE \n> encoding to have automatic trans coding. What are the guidelines, I am \n> completely lost.\n\npsql & pg_dump are fine. Your problem is that you dont give to\npsql and pg_exec/PHP utf-8 strings, but some iso-8859-*.\n\n> >> Psql does not insert the data and I have to kill it manually. Can you\n> >> reproduce this?\n> >No. If it hangs this is serious problem. Or did you simply\n> >forgot final ';' ? It btw does not seem valid sql to me,\n> >considering you previously provided table structure.\n> Is it possible that my database is corrupted? I have used pg_dump several \n> times to dump data from production server to development servers and \n> conversely. 
Does pg_dump produce UTF8 output? What are the guidelines when \n> using UTF-8: forget psql and pg_dump?\n\nAs I said, psql & pg_dump are fine, they do not touch your data\nwhen it passes through them.\n\nIt may be that all of your database is in latin1, as you\ninserted strings in this encoding, not utf8. Basically\nPostgreSQL server also does not touch your data, only its\ncompare functions do not work, as the strings are not in the\nencoding you tell they are.\n\nThe solution to this is to dump your data, use the iconv utility\nto convert it to utf8 and reload.\n\n\nTo see this you should do:\n\n\t$ psql -c \"SELECT source_content FROM table where ...\" \\\n\t\t| od -c\n\nAnd then look whether the weird characters are represented in\n1 or 2 bytes.\n",
"msg_date": "Sun, 28 Oct 2001 17:09:45 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: UNICODE"
},
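The two od dumps above (octal 303 251 versus 351) can be reproduced directly; a small Python sketch, not part of the original thread:

```python
# 'é' in UTF-8 is the two bytes 0xC3 0xA9 (octal 303 251, as od -c shows
# on a UTF-8 terminal); in ISO-8859-1 it is the single byte 0xE9 (octal 351).
e_acute = "\u00e9"  # 'é'
print(e_acute.encode("utf-8"))          # b'\xc3\xa9'
print(e_acute.encode("iso-8859-1"))     # b'\xe9'
print(oct(0xC3), oct(0xA9), oct(0xE9))  # 0o303 0o251 0o351
```

A lone 0xE9 byte sent to a server that expects UTF-8 is exactly the malformed input Marko suspects in the hang report above.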
{
"msg_contents": "At 17:09 28/10/01 +0200, you wrote:\n>On Sun, Oct 28, 2001 at 02:34:49PM +0100, Jean-Michel POURE wrote:\n> >\n> > >psql uses your input literally - so is your console/xterm in\n> > >UNICODE/UTF8?\n> > Client: \\encoding returns 'UNICODE'.\n> > Server: \\list show databases. All databases are UNICODE (except TEMPLATE0\n> > and TEMPLATE1 which are ASCII of course). I use a Mandrake 8.1 \n> distribution\n> > and think my console is UNICODE.\n>\n>You think? Try this:\n>\n> $ echo \"accept�\" | od -c\n>\n>If your term is in utf you should get:\n>\n> 0000000 a c c e p t 303 251 \\n\n> 0000011\n>\n>If in iso-8859-1:\n>\n> 0000000 a c c e p t 351 \\n\n> 0000010\n>\n>It may be in some other 8bit encoding too, then the last number\n>may be different.\nIt is:\n 0000000 a c c e p t � \\n\n 0000010\n\n\n>Hmm. It may be a bug in input routines. You give PostgreSQL a\n>1byte '�', it expects 2 byte char and overflows somewhere. Can\n>you reproduce it on 7.1.3? Maybe its fixed there, I cant\n>reproduce it.\n\nI noticed some longer routines with \"�\" worked without any problem.\nI cannot reproduce it as I converted my database to plain ASCII.\nWill try UNICODE on 7.2 beta when adding Japanese text to my database.\n\nThank you very much for your help.\nBest regards, Jean-Michel POURE\n\n\n",
"msg_date": "Sun, 28 Oct 2001 16:37:48 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: UNICODE"
},
{
"msg_contents": "On Sun, Oct 28, 2001 at 04:37:48PM +0100, Jean-Michel POURE wrote:\n> At 17:09 28/10/01 +0200, you wrote:\n> > $ echo \"accept�\" | od -c\n> It is:\n> 0000000 a c c e p t � \\n\n> 0000010\n\nHuh. Then try 'od -t x1'. Also what the commend 'locale'\nprints.\n\n\n> >Hmm. It may be a bug in input routines. You give PostgreSQL a\n> >1byte '�', it expects 2 byte char and overflows somewhere. Can\n> >you reproduce it on 7.1.3? Maybe its fixed there, I cant\n> >reproduce it.\n> \n> I noticed some longer routines with \"�\" worked without any problem.\n> I cannot reproduce it as I converted my database to plain ASCII.\n> Will try UNICODE on 7.2 beta when adding Japanese text to my database.\n\nOk. I still suggest you try to understand what was going on,\notherwise you will be in trouble again. The logic around\nencodings will be same in 7.2.\n\n-- \nmarko\n\n",
"msg_date": "Sun, 28 Oct 2001 17:59:56 +0200",
"msg_from": "Marko Kreen <marko@l-t.ee>",
"msg_from_op": false,
"msg_subject": "Re: UNICODE"
},
{
"msg_contents": "I'm questioning whether anyone has done benchmarks on various hardware for\nPGSQL and MySQL. I'm either thinking dual P3-866's, Dual AMD-1200's, etc.\nI'm looking for benchmarks of large queries on striped -vs- non-striped\nvolumes, different processor speeds, etc.\n\nAny thoughts people?\n\n",
"msg_date": "Sun, 28 Oct 2001 13:07:58 -0400",
"msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Ultimate DB Server"
},
{
"msg_contents": "Ram plays a big factor in queries, most queries are stored in ram. Also\ndepends on which platform as well.\n\nThank you,\n \nTodd Williamsen, MCSE\nhome: 847.265.4692\nCell: 847.867.9427\n\n\n-----Original Message-----\nFrom: Mike Rogers [mailto:temp6453@hotmail.com] \nSent: Sunday, October 28, 2001 11:08 AM\nTo: mysql@lists.mysql.com; pgsql-hackers@postgresql.org;\npgsql-admin@postgresql.org\nSubject: Ultimate DB Server\n\n\nI'm questioning whether anyone has done benchmarks on various hardware\nfor PGSQL and MySQL. I'm either thinking dual P3-866's, Dual\nAMD-1200's, etc. I'm looking for benchmarks of large queries on striped\n-vs- non-striped volumes, different processor speeds, etc.\n\nAny thoughts people?\n\n\n---------------------------------------------------------------------\nBefore posting, please check:\n http://www.mysql.com/manual.php (the manual)\n http://lists.mysql.com/ (the list archive)\n\nTo request this thread, e-mail <mysql-thread89232@lists.mysql.com>\nTo unsubscribe, e-mail\n<mysql-unsubscribe-todd=williamsen.net@lists.mysql.com>\nTrouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php\n\n",
"msg_date": "Sun, 28 Oct 2001 11:36:36 -0600",
"msg_from": "\"Todd Williamsen\" <todd@williamsen.net>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "Hi Jean-Micehl,\n\n* Jean-Michel POURE <jm.poure@freesurf.fr> [011028 18:23]:\n> \n> >psql uses your input literally - so is your console/xterm in\n> >UNICODE/UTF8?\n> Client: \\encoding returns 'UNICODE'.\n> Server: \\list show databases. All databases are UNICODE (except\n> TEMPLATE0 and TEMPLATE1 which are ASCII of course). I use a Mandrake\n> 8.1 distribution and think my console is UNICODE.\n\nI don't know the details for the Mandrake distribution, but I would\nrather think the default terminal to be iso-8859-15 or iso-8859-1\nencoded (I use myself a linux debian sid, customised to be mixed\niso-8859-15/utf-8 :) ).\n\nIn that case, it's likely to cause problems.\nOne thing is to check your current locale (before running psql), by\ntyping \"locale charmap\" on your terminal :\n\nUnicode :\n\nasterix:~$ locale charmap\nUTF-8\n\nlatin-9 (fr_FR@euro) :\n\nasterix:~$ locale charmap\nISO-8859-15\n\nThen, if you really have a Unicode term, then you may run into other\nproblems. Psql uses readline, and readline is not yet \"utf-8\" enabled\nby default. There are patches for that, but I don't know why they\ndon't integrate the support into the code... whatever the reason, it\nmeans that for example Backspace won't work over characters with more\nthan one byte, and that includes everything which is not ASCII.\n\nSo, if while typing in psql, you try to do some text editing over the\n\"ᅵ\", then it's likely to mangle your input to psql (without\nnecessarily be visible in your terminal), and anything from a bad\ncommandline, to psql waiting for more input... When you've finished\ntyping your line, check if psql prompt is displaying an \"=\" sign :\n\ntests=#\n\nThird, depending on how your data is entered vs queried, it may have\nsome differences. For example, if you use an application which\nconverts UTF-8 data to D-normalisation before submitting to\nPostgreSQL, then the \"ᅵ\" will be stored as \"e\"+\"combining mark acute\naccent\". 
Then, when you do your query, you have to submit in the same\nformat, as \"é\" (directly typed from the keyboard) and \"e\"+\"comb.acute\naccent\" are two different things (I plan to add support in PostgreSQL\nfor this kind of stuff for 7.3, if I manage to go a bit faster on my\nother projects...).\n\nAnyway, I have been trying a query like yours, using a UTF-8 xterm,\nwith a UNICODE encoding, both psql and database :\n\nmy table :\n\ntests=# insert into matable values ('un texte accentué', 12);\nINSERT 70197 1\ntests=# insert into matable values ('ça accentue le problème', 14);\nINSERT 70198 1\n\ntests=# select * from matable;\n montext | valeur \n-------------------------+--------\n un texte accentué | 12\n ça accentue le problème | 14\n(2 rows)\n\n[note that the \"é\", \"ç\" and \"è\" are not combining forms here...]\n\ntests=# select * from matable where montext ilike '%accentué%';\n montext | valeur \n-------------------+--------\n un texte accentué | 12\n(1 row)\n\nIt works fine for me.\n\n> >> As for me, I typed INSERT INTO source_content VALUES ('Permis de\n> >> conduire accepté') in Psql.\n> >As I said - psql does not do any conversion.\n> The faulty query is: INSERT INTO test (source_content) VALUES\n> ('Permis de conduire accepté');\n> \n> I just can't believe that Psql is not UTF-8 compatible. It seems\n> unreal as Psql is PostgreSQL #1 helper application. Should I use\n> PostgreSQL MULE encoding to have automatic trans coding. What are\n> the guidelines, I am completely lost.\n\nPsql is UTF-8 compatible. However, the terminal support of UTF-8 may\nbe a little shaky for now (no dead keys, no compose key) and that will\nbe fixed in Xfree-4.2, and readline support of UTF-8 is deficient (as\nis bash's, where readline comes from). I don't know when *that* will\nbe fixed. I know http://www.li18nux.org/ has some patches, but I\nhaven't tried them yet.\n\n> >> Psql does not insert the data and I have to kill it manually. Can\n> >> you reproduce this?\n> >No. 
If it hangs this is serious problem. Or did you simply forgot\n> >final ';' ? It btw does not seem valid sql to me, considering you\n> >previously provided table structure.\n> Is it possible that my database is corrupted? I have used pg_dump\n> several times to dump data from production server to development\n> servers and conversely. Does pg_dump produce UTF8 output? What are\n> the guidelines when using UTF-8: forget psql and pg_dump?\n\nOne thing you really have to be careful about is the locale you're\nrunning your terminal into (cf above with \"locale charmap\"). A lot of\ntools are sensitive to that, as soon as they set the locale, and also\nthe terminal itself is sensitive to that (if you run an xterm, a\ngnome-terminal or other, make sure they are started themselves with\nthe correct locale, rather than the locale being set by a .bashrc or\n.profile AFTER the xterm is launched. One way to be sure is to launch\nan Xterm from the command line in an other xterm ;) ).\n\n> >In the end: are the strings/queries you give to psql/pg_exec UTF-8\n> >- this is now main thing, as you have _configured_ everything\n> >correctly.\n> Everything is configured correctly server-side (PostgreSQL, Psql).\n> \n> Thank you very much for your support Marko,\n> Best regards,\n> Jean-Michel\n\nIt's possible to work with psql and UTF-8, I'm using it :) But support\nfor utf-8 is not complete yet, and it's not seamless. Also, support in\nPostgresql is not yet complete for UTF-8 (normalisation forms,\ncollation, regexes...), but it'll come :)\n\nPatrice.\n\n-- \nPatrice Hédé\nemail: patrice hede à islande org\nwww : http://www.islande.org/\n",
"msg_date": "Sun, 28 Oct 2001 19:03:05 +0100",
"msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>",
"msg_from_op": false,
"msg_subject": "Re: UNICODE"
},
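Patrice's point about composed vs. decomposed forms — an "é" entered as one code point vs. stored as "e" plus a combining acute accent — is easy to demonstrate. A minimal Python sketch (using only the standard `unicodedata` module; this is an illustration of the concept, not PostgreSQL behaviour):

```python
import unicodedata

composed = "accentu\u00e9"     # "accentué" with é as a single code point (NFC form)
decomposed = "accentue\u0301"  # "accentue" + U+0301 combining acute accent (NFD form)

# The two strings render identically but compare unequal code-point-for-code-point:
print(composed == decomposed)  # False

# Normalizing both to the same form makes them compare equal again:
print(unicodedata.normalize("NFC", decomposed) == composed)    # True
print(unicodedata.normalize("NFD", composed) == decomposed)    # True
```

This is why a query typed in composed form can miss rows stored in decomposed form: without normalization on one side or the other, the database sees two different strings.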
{
"msg_contents": "At 13:07 28/10/01 -0400, you wrote:\n>I'm questioning whether anyone has done benchmarks on various hardware for\n>PGSQL and MySQL. I'm either thinking dual P3-866's, Dual AMD-1200's, etc.\n>I'm looking for benchmarks of large queries on striped -vs- non-striped\n>volumes, different processor speeds, etc.\n\nHello Mike,\n\nIMHO, you should consider *simple* software optimization first.\n\nHardware can bring a 2x gain whereas software optimization can boost an \napplication by 10x. Until now, I have never heard or read about a real *software \noptimization* benchmark between MySQL and PostgreSQL.\n\nSoftware optimization includes the use of views, triggers, rules, PL/pgSQL \nserver side programming. By definition, it is hard to compare MySQL with \nPostgreSQL because MySQL *does not include* these important features (and \nprobably never will).\n\nI see at least two easy cases where PostgreSQL beats MySQL:\n1) Create a simple relational DB with triggers storing values instead of \nperforming LEFT JOINS. Increase the number of simultaneous queries. MySQL \nwill die at x queries and PostgreSQL will still be working at 5x queries.\n2) Use PL/pgSQL to perform complex jobs normally devoted to an application \nserver (Java, PHP) on a separate platform. In some cases (recursive loops \nfor example), network traffic can be divided by 100. As a result, \nPostgreSQL can be 10x faster because everything is performed server-side.\n\nThis is to say that, in some circumstances, PostgreSQL running on an i586 \nwith IDE drive beats MySQL on a double Pentium. In real life, applications \nare always optimized at software level first before hardware level. This is \nwhy PostgreSQL is *by nature* better than MySQL.\n\nUnless MySQL gets better, there is no real challenge in comparing both systems.\n\nCheers,\nJean-Michel POURE\n\n",
"msg_date": "Sun, 28 Oct 2001 20:18:41 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: Ultimate DB Server"
},
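Jean-Michel's first case — a trigger storing denormalized values so later reads need no LEFT JOIN — can be sketched in a few lines. SQLite (via Python's standard `sqlite3` module) stands in for PostgreSQL here purely for illustration, and the table and column names are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE category (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT,
                       category_id INTEGER, category_name TEXT);
-- Denormalize: copy the category name into the customer row on insert,
-- so later reads need no join at all.
CREATE TRIGGER customer_denorm AFTER INSERT ON customer
BEGIN
    UPDATE customer
    SET category_name = (SELECT name FROM category WHERE id = NEW.category_id)
    WHERE id = NEW.id;
END;
""")
conn.execute("INSERT INTO category VALUES (1, 'retail')")
conn.execute("INSERT INTO customer (name, category_id) VALUES ('acme', 1)")

# The join-free read that the trigger makes possible:
row = conn.execute("SELECT name, category_name FROM customer").fetchone()
print(row)  # ('acme', 'retail')
```

In PostgreSQL the trigger body would be a PL/pgSQL function instead, but the trade-off is the same: a little extra work on the rare writes buys join-free reads on the frequent SELECTs.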
{
"msg_contents": "> Hardware can bring a 2x gain whereas software optimization can boost an\n> application by 10x. Until now, I never heard or read about a real *software\n> optimization* benchmark between MySQL and PostgreSQL.\n\nIt has been my experience that a knowledgeable, SQL-savvy engineer cannot use\nMySQL. You have to have no basic knowledge of SQL to be able to work within its\nlimitations. On every project where I have tried MySQL, I have always found\nmyself trying to work around what I can't do with it. In that respect, it is\nlike working on Windows.\n\n> I see at least two easy cases where PostgreSQL beats MySQL:\n> 1) Create a simple relational DB with triggers storing values instead of\n> performing LEFT JOINS. Increase the number of simultaneous queries. MySQL\n> will die at x queries and PostgreSQL will still be working at 5x queries.\n> 2) Use PL/pgSQL to perform complex jobs normally devoted to an application\n> server (Java, PHP) on a separate platform. In some case (recursive loops\n> for example), network traffic can be divided by 100. As a result,\n> PostgreSQL can be 10x faster because everything is performed server-side.\n\nServer side programming is a double-edged sword. PostgreSQL is not a\ndistributed database, thus you are limited to the throughput of a single\nsystem. Moving processing off to PHP or Java on a different system can reduce\nthe load on your server by distributing processing to other systems. If you can\ncut query execution time by moving work off to other systems, you can\neffectively increase the capacity of your database server.\n\nTypically, on a heavily used database, you should try to limit server side\nprogramming to that which reduces the database work load. 
If you are moving\nwork, which can be done on the client, back to the server, you will bottleneck\nat the server while the client is sitting idle.\n\n> \n> This is to say that, in some circomstances, PostgreSQL running on an i586\n> with IDE drive beats MySQL on a double Pentium. In real life, applications\n> are always optimized at software level first before hardware level. This is\n> why PostsgreSQL is *by nature* better than MySQL.\n\nOne of the reasons why PostgreSQL beats MySQL, IMHO, is that it has the SQL\nfeatures that allow you to control and reduce the database work load by doing\nthings smarter.\n\n> \n> Unless MySQL gets better, there is no real challenge in comparing both systems.\n\nIt is funny, I know guys that love MySQL. Even when I show them the cool things\nthey can do with Postgres, they just don't seem to get it. It is sort of like\ntalking to an Amiga user.\n",
"msg_date": "Sun, 28 Oct 2001 17:19:28 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "MySQL and PostgreSQL are starting to move together as far as I can see.\nMySQL has the _option_ of transactional database formats (you can use both\nnormal MyISAM tables and transactional tables). MySQL 4.0 has all those\nvarious features you speak of. On all too many applications, MySQL kicks\nass. Admittedly, if you do massive complex database applications, PostgreSQL\ncan smoke it when done right, but MySQL works great for most tasks. It's\nnot even a matter of which is better or how to compare them. It is a\nquestion of 'what is your purpose for the database' and then deciding based\non the intended purpose.\n I did mention that it would be running BOTH MySQL and PostgreSQL, and\nnot just one. I use them both for various purposes, depending on the need\nand am trying to move it to a separate server to increase the speed of\nqueries on BOTH database systems. It's not a question of which is better,\nbut a question of what will maximize output for cost.\n I think you may have misinterpreted the question.\n--\nMike\n\n----- Original Message -----\nFrom: \"Jean-Michel POURE\" <jm.poure@freesurf.fr>\nTo: <pgsql-hackers@postgresql.org>\nCc: \"Mike Rogers\" <temp6453@hotmail.com>\nSent: Sunday, October 28, 2001 3:18 PM\nSubject: Re: [HACKERS] Ultimate DB Server\n\n\n> At 13:07 28/10/01 -0400, you wrote:\n> >I'm questioning whether anyone has done benchmarks on various hardware\nfor\n> >PGSQL and MySQL. I'm either thinking dual P3-866's, Dual AMD-1200's,\netc.\n> >I'm looking for benchmarks of large queries on striped -vs- non-striped\n> >volumes, different processor speeds, etc.\n>\n> Hello Mike,\n>\n> IMHO, you should consider *simple* software optimization first.\n>\n> Hardware can bring a 2x gain whereas software optimization can boost an\n> application by 10x. 
Until now, I never heard or read about a real\n*software\n> optimization* benchmark between MySQL and PostgreSQL.\n>\n> Software optimization includes the use of views, triggers, rules, PL/pgSQL\n> server side programming. By definition, it is hard to compare MySQL with\n> PostgreSQL because MySQL *does not include* these important features (and\n> probably will never do).\n>\n> I see at least two easy cases where PostgreSQL beats MySQL:\n> 1) Create a simple relational DB with triggers storing values instead of\n> performing LEFT JOINS. Increase the number of simultaneous queries. MySQL\n> will die at x queries and PostgreSQL will still be working at 5x queries.\n> 2) Use PL/pgSQL to perform complex jobs normally devoted to an application\n> server (Java, PHP) on a separate platform. In some case (recursive loops\n> for example), network traffic can be divided by 100. As a result,\n> PostgreSQL can be 10x faster because everything is performed server-side.\n>\n> This is to say that, in some circomstances, PostgreSQL running on an i586\n> with IDE drive beats MySQL on a double Pentium. In real life, applications\n> are always optimized at software level first before hardware level. This\nis\n> why PostsgreSQL is *by nature* better than MySQL.\n>\n> Unless MySQL gets better, there is no real challenge in comparing both\nsystems.\n>\n> Cheers,\n> Jean-Michel POURE\n>\n>\n",
"msg_date": "Sun, 28 Oct 2001 18:41:03 -0400",
"msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "mlw wrote:\n> \n..\n> It is funny, I know guys that love MySQL. Even when I show them the cool things\n> they can do with Postgres, they just don't seem to get it. It is sort of like\n> talking to an Amiga user.\n\nHey. As someone who learned 68000 assembly on the Amiga back in '86,\nI take that personally. There's nothing like writing a pixel editor\nin 4096-color HAM mode in 68000 off of a floppy-based Commodore\nMacro Assembler. Sadly, however, I don't yet see the Amiga 1000 on\nthe PostgreSQL ports list. ;-)\n\nMike Mascari\nmascarm@mascari.com\n",
"msg_date": "Sun, 28 Oct 2001 17:58:10 -0500",
"msg_from": "Mike Mascari <mascarm@mascari.com>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "Mike Mascari wrote:\n> \n> mlw wrote:\n> >\n> ..\n> > It is funny, I know guys that love MySQL. Even when I show them the cool things\n> > they can do with Postgres, they just don't seem to get it. It is sort of like\n> > talking to an Amiga user.\n> \n> Hey. As someone who learned 68000 assembly on the Amiga back in '86,\n> I take that personally. There's nothing like writing a pixel editor\n> in 4096-color HAM mode in 68000 off of a floppy-based Commodore\n> Macro Assembler. Sadly, however, I don't yet see the Amiga 1000 on\n> the PostgreSQL ports list. ;-)\n\nSorry, I like to needle Amiga users. One of my closest friends is, what can\nonly be described as, a complete Amiga zealot. Most of the time it is pretty\nfun to get him going, I hope you know it is all good natured fun. \n\nI built my first robot using an RCA 1802 back in the late '70s. I still think\nthe P.C. was the worst computer design. Going from any platform to the\n8080~8088 was such a let down. If someone had ported CP/M to the RCA 1802 back\nin the '70s, computers may have been different today.\n",
"msg_date": "Sun, 28 Oct 2001 18:11:40 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "I'm not sure what you are expecting but...\n\n> SELECT * FROM test WHERE source_content ILIKE '%accept%';\n ---> returns a record\n\n> SELECT * FROM test WHERE source_content ILIKE '%accepté%'\n ---> returns a record\n\n> SELECT * FROM test WHERE source_content ILIKE '%accepte%'\n ---> returns 0 records\n\nSo all of the above seems to work fine for me.\n\n$ pg_config --configure\n--prefix=/usr/local/pgsql --enable-multibyte=EUC_JP --enable-unicode-conversion --with-tcl --with-perl --enable-syslog --enable-debug --with-CXX --with-java\n$ pg_config --version\nPostgreSQL 7.1.3\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 29 Oct 2001 09:57:05 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: UNICODE"
},
{
"msg_contents": "> In that case, it's likely to cause problems.\n> One thing is to check your current locale (before running psql), by\n> typing \"locale charmap\" on your terminal :\n> \n> Unicode :\n> \n> asterix:~$ locale charmap\n> UTF-8\n\nJust curious. Are there any working charmaps for UTF-8? I mean, the\ncharmap contains not only ISO 8859-* but also other languages defined\nin UNICODE 2.0 at least. I couldn't find such a thing around me. Also,\ndoes it handle Unicode combined characters?\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 29 Oct 2001 09:58:25 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: UNICODE"
},
{
"msg_contents": "> MySQL and PostgreSQL are starting to move together as far as I can see.\n> MySQL has the _option_ of transactional database formats (you can use both\n> normal MyISAM tables and transactional tables). MySQL 4.0 has all those\n> various features you speak of.\n\nNo, it doesn't.\n\nIt supports the UNION statement (thank god!)\n\nAnd this is for 4.1:\n\n-------\nMySQL 4.1, the following development release\n\nInternally, through a new .frm file format for table definitions, MySQL 4.0\nlays the foundation for the new features of MySQL 4.1, such as nested\nsubqueries, stored procedures, and foreign key integrity rules, which form\nthe top of the wish list for many of our customers. Along with those, we\nwill also include simpler additions, such as multi-table UPDATE statements.\n\nAfter those additions, critics of MySQL have to be more imaginative than\never in pointing out deficiencies in the MySQL Database Management System.\nFor long already known for its stability, speed, and ease of use, MySQL will\nthen match the requirement checklist of very demanding buyers.\n--------\n\nI don't get how you can have different tables being transactional in your\ndatabase??\n\nie. What on earth does this do? (pseudo)\n\ncreate table blah not_transactional;\ncreate table hum not_transactional;\n\nbegin;\ninsert into blah values (1);\ninsert into hum values (2);\nrollback;\n\n?????\n\n\nOn all too many applications, MySQL kicks\n> ass. Admitedly, if you do massive complex database applications,\n> PostgreSQL\n> can smoke it when done right, but MySQL works great for most tasks. It's\n> not even a matter of which is better or how to compare them. It is a\n> question of 'what is your purpose for the database' and then\n> deciding based\n> on the intended purpose.\n> I did mention that it would be running BOTH MySQL and PostgreSQL, and\n> not just one. 
I use them both for various purposes, depending on the need\n> and am trying to move it to a seperate server to increase the speed of\n> queries on BOTH database systems. It's not a question of which is better,\n> but a question of what will maximize output for cost.\n> I think you may have misinterpreted the question\n> --\n> Mike\n>\n> ----- Original Message -----\n> From: \"Jean-Michel POURE\" <jm.poure@freesurf.fr>\n> To: <pgsql-hackers@postgresql.org>\n> Cc: \"Mike Rogers\" <temp6453@hotmail.com>\n> Sent: Sunday, October 28, 2001 3:18 PM\n> Subject: Re: [HACKERS] Ultimate DB Server\n>\n>\n> > At 13:07 28/10/01 -0400, you wrote:\n> > >I'm questioning whether anyone has done benchmarks on various hardware\n> for\n> > >PGSQL and MySQL. I'm either thinking dual P3-866's, Dual AMD-1200's,\n> etc.\n> > >I'm looking for benchmarks of large queries on striped -vs- non-striped\n> > >volumes, different processor speeds, etc.\n> >\n> > Hello Mike,\n> >\n> > IMHO, you should consider *simple* software optimization first.\n> >\n> > Hardware can bring a 2x gain whereas software optimization can boost an\n> > application by 10x. Until now, I never heard or read about a real\n> *software\n> > optimization* benchmark between MySQL and PostgreSQL.\n> >\n> > Software optimization includes the use of views, triggers,\n> rules, PL/pgSQL\n> > server side programming. By definition, it is hard to compare MySQL with\n> > PostgreSQL because MySQL *does not include* these important\n> features (and\n> > probably will never do).\n> >\n> > I see at least two easy cases where PostgreSQL beats MySQL:\n> > 1) Create a simple relational DB with triggers storing values instead of\n> > performing LEFT JOINS. Increase the number of simultaneous\n> queries. MySQL\n> > will die at x queries and PostgreSQL will still be working at\n> 5x queries.\n> > 2) Use PL/pgSQL to perform complex jobs normally devoted to an\n> application\n> > server (Java, PHP) on a separate platform. 
In some case (recursive loops\n> > for example), network traffic can be divided by 100. As a result,\n> > PostgreSQL can be 10x faster because everything is performed\n> server-side.\n> >\n> > This is to say that, in some circomstances, PostgreSQL running\n> on an i586\n> > with IDE drive beats MySQL on a double Pentium. In real life,\n> applications\n> > are always optimized at software level first before hardware level. This\n> is\n> > why PostsgreSQL is *by nature* better than MySQL.\n> >\n> > Unless MySQL gets better, there is no real challenge in comparing both\n> systems.\n> >\n> > Cheers,\n> > Jean-Michel POURE\n> >\n> >\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Mon, 29 Oct 2001 09:52:35 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "Doh! I messed up my example!\n\nThe first table was supposed to be transactional.\n\n> I don't get how you can have different tables being transactional in your\n> database??\n> \n> ie. What on earth does this do? (pseudo)\n> \n> create table blah transactional;\n> create table hum not_transactional;\n> \n> begin;\n> insert into blah values (1);\n> insert into hum values (2);\n> rollback;\n> \n> ?????\n\nChris\n\n",
"msg_date": "Mon, 29 Oct 2001 10:40:20 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "> Not it doesn't.\n>\n> It supports the UNION statement (thank god!)\n>\n> And this is for 4.1:\n\nThis crap shouldn't be on the hackers list, please take it elsewhere.\nThe hackers list is for people developing postgresql, not for people\narguing about the merits of postgresql vs mysql.\n\n\nPlease go elsewhere.\n\n- Brandon\n\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Sun, 28 Oct 2001 21:52:50 -0500 (EST)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "What that does is very simple: it rolls back the one that is keeping track\nof its transactions. Think of the overhead if someone doesn't have\ntransactional statements. The idea is, in PGSQL, all inserts and updates\nare essentially logged so that they can be rolled back. Here is the MySQL\nconcept:\n Have a log table that logs all transactions (let's say, failed or not)\n 1. begin transaction\n 2. insert into non-transactional table 'user did this, status-\nunprocessed'\n 3. insert into payment table\n 4. insert into product table\n 5. update to processed\n 6. insert into shipping\n 7. update to 'pending shipping'\n Perfectly common transaction that happens. Now! What if you want the\nentry inserted and dealt with as a status and what happens, but you don't\nwant all the evidence of that to disappear when you hit rollback. It means\nyou can have some things roll back and others don't. In PGSQL, that would\nhave to be begin/rollback for only transactional entries.\n--\nMike\n\n----- Original Message -----\nFrom: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nTo: \"Mike Rogers\" <temp6453@hotmail.com>; <pgsql-hackers@postgresql.org>;\n\"Jean-Michel POURE\" <jm.poure@freesurf.fr>\nSent: Sunday, October 28, 2001 9:52 PM\nSubject: RE: [HACKERS] Ultimate DB Server\n\n\n> > MySQL and PostgreSQL are starting to move together as far as I can see.\n> > MySQL has the _option_ of transactional database formats (you can use\nboth\n> > normal MyISAM tables and transactional tables). 
MySQL 4.0 has all those\n> > various features you speak of.\n>\n> Not it doesn't.\n>\n> It supports the UNION statement (thank god!)\n>\n> And this is for 4.1:\n>\n> -------\n> MySQL 4.1, the following development release\n>\n> Internally, through a new .frm file format for table definitions, MySQL\n4.0\n> lays the foundation for the new features of MySQL 4.1, such as nested\n> subqueries, stored procedures, and foreign key integrity rules, which form\n> the top of the wish list for many of our customers. Along with those, we\n> will also include simpler additions, such as multi-table UPDATE\nstatements.\n>\n> After those additions, critics of MySQL have to be more imaginative than\n> ever in pointing out deficiencies in the MySQL Database Management System.\n> For long already known for its stability, speed, and ease of use, MySQL\nwill\n> then match the requirement checklist of very demanding buyers.\n> --------\n>\n> I don't get how you can have different tables being transactional in your\n> database??\n>\n> ie. What on earth does this do? (pseudo)\n>\n> create table blah not_transactional;\n> create table hum not_transactional;\n>\n> begin;\n> insert into blah values (1);\n> insert into hum values (2);\n> rollback;\n>\n> ?????\n>\n>\n> On all too many applications, MySQL kicks\n> > ass. Admitedly, if you do massive complex database applications,\n> > PostgreSQL\n> > can smoke it when done right, but MySQL works great for most tasks.\nIt's\n> > not even a matter of which is better or how to compare them. It is a\n> > question of 'what is your purpose for the database' and then\n> > deciding based\n> > on the intended purpose.\n> > I did mention that it would be running BOTH MySQL and PostgreSQL,\nand\n> > not just one. I use them both for various purposes, depending on the\nneed\n> > and am trying to move it to a seperate server to increase the speed of\n> > queries on BOTH database systems. 
It's not a question of which is\nbetter,\n> > but a question of what will maximize output for cost.\n> > I think you may have misinterpreted the question\n> > --\n> > Mike\n> >\n> > ----- Original Message -----\n> > From: \"Jean-Michel POURE\" <jm.poure@freesurf.fr>\n> > To: <pgsql-hackers@postgresql.org>\n> > Cc: \"Mike Rogers\" <temp6453@hotmail.com>\n> > Sent: Sunday, October 28, 2001 3:18 PM\n> > Subject: Re: [HACKERS] Ultimate DB Server\n> >\n> >\n> > > At 13:07 28/10/01 -0400, you wrote:\n> > > >I'm questioning whether anyone has done benchmarks on various hardwar\ne\n> > for\n> > > >PGSQL and MySQL. I'm either thinking dual P3-866's, Dual AMD-1200's,\n> > etc.\n> > > >I'm looking for benchmarks of large queries on striped -vs-\nnon-striped\n> > > >volumes, different processor speeds, etc.\n> > >\n> > > Hello Mike,\n> > >\n> > > IMHO, you should consider *simple* software optimization first.\n> > >\n> > > Hardware can bring a 2x gain whereas software optimization can boost\nan\n> > > application by 10x. Until now, I never heard or read about a real\n> > *software\n> > > optimization* benchmark between MySQL and PostgreSQL.\n> > >\n> > > Software optimization includes the use of views, triggers,\n> > rules, PL/pgSQL\n> > > server side programming. By definition, it is hard to compare MySQL\nwith\n> > > PostgreSQL because MySQL *does not include* these important\n> > features (and\n> > > probably will never do).\n> > >\n> > > I see at least two easy cases where PostgreSQL beats MySQL:\n> > > 1) Create a simple relational DB with triggers storing values instead\nof\n> > > performing LEFT JOINS. Increase the number of simultaneous\n> > queries. MySQL\n> > > will die at x queries and PostgreSQL will still be working at\n> > 5x queries.\n> > > 2) Use PL/pgSQL to perform complex jobs normally devoted to an\n> > application\n> > > server (Java, PHP) on a separate platform. In some case (recursive\nloops\n> > > for example), network traffic can be divided by 100. 
As a result,\n> > > PostgreSQL can be 10x faster because everything is performed\n> > server-side.\n> > >\n> > > This is to say that, in some circomstances, PostgreSQL running\n> > on an i586\n> > > with IDE drive beats MySQL on a double Pentium. In real life,\n> > applications\n> > > are always optimized at software level first before hardware level.\nThis\n> > is\n> > > why PostsgreSQL is *by nature* better than MySQL.\n> > >\n> > > Unless MySQL gets better, there is no real challenge in comparing both\n> > systems.\n> > >\n> > > Cheers,\n> > > Jean-Michel POURE\n> > >\n> > >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n>\n",
"msg_date": "Sun, 28 Oct 2001 23:05:36 -0400",
"msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "(reply- s/hackers/general/)\n\nWell I put that sort of thing into my application logs, personal preference\nfor now.\n\nIt seems to me mixed transaction tables are likely to create an error prone\nenvironment for little real gain.\n\nNo transactions doesn't necessarily mean faster (or slower). IMO MySQL has\nthat feature for backwards compatibility. Little to do with performance.\n\nIt would be good for claims regarding overheads/performance issues to be\nbacked by reproducible experiments, or at least interesting\nstatements/arguments.\n\nI've used MySQL and it was far better than Postgresql when Postgresql was\nPostgres95 (yuckyuckyuck!). While Postgresql has really gone a long way,\nMySQL seems to have been stuck in denial till recently.\n\nWill be interesting to see if MySQL can pull off the Windows vs Mac trick :).\n\nI hope it will be a good clean fight :).\n\nCheerio,\nLink.\n\nAt 11:05 PM 28-10-2001 -0400, Mike Rogers wrote:\n>What that does is very simple: it rolls back the one that is keeping track\n>of it's transactions. Think of the overhead if someone doesn't have\n>transactional statements. The idea is, in PGSQL, all inserts and updates\n>are essentially logged so that they can be rolled back. Here is the MySQL\n>concept:\n> Have a log table that logs all transactions (lets say, failed or not)\n> 1. begin transaction\n> 2. insert into non-transactional table 'user did this, status-\n>unprocessed'\n> 3. insert into payment table\n> 4. insert into product table\n> 5. update to processed\n> 6. insert into shipping\n> 7. update to 'pending shipping'\n> Perfectly common transaction that happens. Now! What if you want the\n>entry inserted and dealt with as a status and what happens, but you don't\n>want all the evidence of that to disappear when you hit rollback. It means\n>you can have some things roll back and others don't. In PGSQL, that would\n>have to be begin/rollback for only transactional entries.\n\n\n",
"msg_date": "Mon, 29 Oct 2001 13:36:58 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "Mike Rogers wrote:\n> \n> What that does is very simple: it rolls back the one that is keeping track\n> of it's transactions. Think of the overhead if someone doesn't have\n> transactional statements. The idea is, in PGSQL, all inserts and updates\n> are essentially logged so that they can be rolled back. Here is the MySQL\n> concept:\n> Have a log table that logs all transactions (lets say, failed or not)\n> 1. begin transaction\n> 2. insert into non-transactional table 'user did this,\n> status - unprocessed'\n> 3. insert into payment table\n> 4. insert into product table\n> 5. update to processed\n> 6. insert into shipping\n> 7. update to 'pending shipping'\n> Perfectly common transaction that happens. Now! What if you want the\n> entry inserted and dealt with as a status and what happens, but you don't\n> want all the evidence of that to disappear when you hit rollback. \n> It means you can have some things roll back and others don't. In PGSQL,\n> that would have to be begin/rollback for only transactional entries.\n\nOr you would run two parallel transactions (currently you need two\nconnections \nfor this) - one for logging and one for work.\n\nI agree that having non_transactional (i.e. logging) tables may sometimes\nbe desirable. I've been told that some of Oracle's debugging/logging\nfacilities \nare almost useless due to the fact that they disappear at rollback.\n\n------------------\nHannu\n",
"msg_date": "Mon, 29 Oct 2001 09:44:39 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Ultimate DB Server"
},
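Hannu's suggestion — a second, independent connection whose log writes are committed immediately and therefore survive the main transaction's rollback — can be sketched like this. SQLite (via Python's standard `sqlite3` module) stands in for PostgreSQL purely for illustration, and the table names are invented:

```python
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "demo.db")

log = sqlite3.connect(path)   # dedicated logging connection
work = sqlite3.connect(path)  # main work connection

log.execute("CREATE TABLE audit (msg TEXT)")
log.execute("CREATE TABLE payment (amount INTEGER)")
log.commit()

# Log on the separate connection and commit right away...
log.execute("INSERT INTO audit VALUES ('user attempted payment')")
log.commit()

# ...then do the real work, which fails and is rolled back:
work.execute("INSERT INTO payment VALUES (42)")
work.rollback()

print(work.execute("SELECT COUNT(*) FROM audit").fetchone()[0])    # 1 - the log entry survived
print(work.execute("SELECT COUNT(*) FROM payment").fetchone()[0])  # 0 - the work was rolled back
```

The same pattern applies to PostgreSQL of the era: because each connection gets its own transaction, the audit row committed on the logging connection is untouched by the rollback on the work connection.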
{
"msg_contents": "\n>This crap shouldn't be on the hackers list, please take it else where.\n>The hackers lists is for people developing postgresql, not for people\n>auguing about the merits of postgresql vs mysql.\n\nAgreed. Should be on pgsql-general@postgresql.org.\n\n\n",
"msg_date": "Mon, 29 Oct 2001 08:50:36 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: Ultimate DB Server"
},
{
"msg_contents": "If you answer, please email to pgsql-general@postgresql.org\n\n**************************************************************************** \n***************************\n>Server side programming is a double edged sword. PostgreSQL is not a\n>distributed database, thus you are limited to the throughput of a single\n>system. Moving processing off to PHP or Java on a different system can reduce\n>the load on your server by distributing processing to other systems. If \n>you can\n>cut query execution time by moving work off to other systems, you can\n>effectively increase the capacity of your database server.\n\nYes, but for the Web, SQL queries are SELECTs with LEFT JOINs to get display \nvalues of OIDs.\nIf you store LEFT JOIN results using triggers, you divide complexity by a \nfactor of 10.\n\nMySQL\nA simple example would be:\nSELECT customer_name, category_name FROM customer_table\nLEFT JOIN customer_category ON customer_oidcategory = category_oid\nWHERE customer_oid = xxx;\n\nPostgreSQL\nBecause Categories do not change a lot, it is possible to create a \ncategory_name_tg field in\ntable customer_table and store the value using a trigger. As UPDATEs account \nfor 5% of all queries, it is not a real overhead.\n\nTo maintain consistency, you also add a customer_timestamp to \ncustomer_table. When a Category value changes, all you need to do is: UPDATE \ncustomer_table SET customer_timestamp = 'now' WHERE customer_oidcategory = yyy;\n\nUnder PostgreSQL, your query becomes\nSELECT customer_name, category_name_tg FROM customer_table WHERE \ncustomer_oid = xxx\n\n>Typically, on a heavily used database, you should try to limit server side\n>programming to that which reduces the database work load. If you are moving\n>work, which can be done on the client, back to the server, you will bottleneck\n>at the server while the client is sitting idle.\n\nI do not always agree server-side programming should be limited. It \ndepends. In some cases yes, in some cases no. Optimization is a progressive \ntask, where you start with basic things and end up with more complex \narchitecture. From what I have noticed, 95% of applications were not truly \noptimized.\n\n> > This is to say that, in some circumstances, PostgreSQL running on an i586\n> > with IDE drive beats MySQL on a double Pentium. In real life, applications\n> > are always optimized at software level first before hardware level. This is\n> > why PostgreSQL is *by nature* better than MySQL.\n>\n>One of the reasons why PostgreSQL beats MySQL, IMHO, is that it has the SQL\n>features that allow you to control and reduce the database work load by doing\n>things smarter.\n\nAgreed, this is what I meant when I said PostgreSQL beats MySQL.\n\n> > Unless MySQL gets better, there is no real challenge in comparing both \n> systems.\n>\n>It is funny, I know guys that love MySQL. Even when I show them the cool \n>things\n>they can do with Postgres, they just don't seem to get it. It is sort of like\n>talking to an Amiga user.\n\nOn heavy workload systems MySQL cannot compare to PostgreSQL. It's funny to \nread these mails\nof people doing benchmarks.\n",
"msg_date": "Mon, 29 Oct 2001 09:10:04 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Ultimate DB Server"
}
] |
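The trigger-based denormalisation described in the last message of this thread can be made concrete. The sketch below is only an illustration of the idea, not code from the thread: it reuses the table and column names from the example (customer_table, customer_category, category_name_tg, customer_timestamp), while the PL/pgSQL function body itself is an assumption.

```sql
-- Sketch: cache the category's display name on the customer row at write
-- time, so the common read path needs no join. All names follow the example
-- above; the function body is hypothetical.
CREATE FUNCTION fill_category_name() RETURNS opaque AS '
BEGIN
    SELECT category_name INTO NEW.category_name_tg
      FROM customer_category
     WHERE category_oid = NEW.customer_oidcategory;
    NEW.customer_timestamp := now();
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER customer_cache_category
    BEFORE INSERT OR UPDATE ON customer_table
    FOR EACH ROW EXECUTE PROCEDURE fill_category_name();
```

With this in place, the read query degenerates to `SELECT customer_name, category_name_tg FROM customer_table WHERE customer_oid = xxx`, at the cost of one extra lookup on every write.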
[
{
"msg_contents": "I got an interesting question, and I can probably see both sides of any debate,\nbut.....\n\nSay you have a fairly large table, several million records. In this table you\nhave a key that has a fairly good number of duplicate rows. It is a users\nfavorites table, each user will have a number of entries.\n\nMy problem is, if you do a select by the user name, it does an index scan. If\nyou do a select from the whole table, ordered by the user name, it does a\nsequential scan not an index scan. It is arguable that this may be a faster\nquery, but at the cost of many more resources and a very long delay before any\nresults are returned. Is this the best behavior?\n\nAnyone have any opinions?\n\n\ncdinfo=# select count(*) from favorites ;\n count\n---------\n 4626568\n(1 row)\n\ncdinfo=# explain select * from favorites where scene_name_full = 'someone' ;\nNOTICE: QUERY PLAN:\n\nIndex Scan using fav_snf on favorites (cost=0.00..258.12 rows=64 width=73)\n\nEXPLAIN\ncdinfo=# explain select * from favorites order by scene_name_full ;\nNOTICE: QUERY PLAN:\n\nSort (cost=1782827.92..1782827.92 rows=4724736 width=73)\n -> Seq Scan on favorites (cost=0.00..113548.36 rows=4724736 width=73)\n\nEXPLAIN\ncdinfo=# set enable_seqscan=FALSE ;\nSET VARIABLE\ncdinfo=# explain select * from favorites order by scene_name_full ;\nNOTICE: QUERY PLAN:\n\nIndex Scan using fav_snf on favorites (cost=0.00..18682771.23 rows=4724736\nwidth=73)\n\nEXPLAIN\n",
"msg_date": "Sun, 28 Oct 2001 09:53:45 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Query planner, 7.2b1 select ... order by"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> My problem is, if you do a select by the user name, it does an index\n> scan. If you do a select from the whole table, ordered by the user\n> name, it does a sequential scan not an index scan. It is arguable that\n> this may be a faster query, but at the cost of many more resources and\n> a very long delay before any results are returned. Is this the best\n> behavior?\n\nUnless you use a LIMIT, preferring the cheapest total cost still seems\nlike a win to me. (libpq, at least, doesn't give back any results till\nthe whole query has been retrieved, so the claims of \"higher cost\" and\n\"long delay till first result\" are both specious.)\n\nIf you do use a LIMIT, that affects the plan choice.\n\nIf you use a cursor, things get more interesting, since the planner\nhas no way to know how much of the query you intend to retrieve,\nnor whether you'd be willing to sacrifice total time for fast initial\nresponse on the first few rows. Currently it's set to optimize\nplans for cursors on the basis of assuming that 10% of the total rows\nwill be fetched. Given the more-than-10X discrepancy between seqscan\nand indexscan costs in your example, that'll probably still give you\nthe seqscan choice. Hiroshi suggested making this fraction be a\nuser-settable parameter, which seems like a good idea to me but we\nhaven't gotten around to it yet.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 28 Oct 2001 12:58:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Query planner, 7.2b1 select ... order by "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > My problem is, if you do a select by the user name, it does an index\n> > scan. If you do a select from the whole table, ordered by the user\n> > name, it does a sequential scan not an index scan. It is arguable that\n> > this may be a faster query, but at the cost of many more resources and\n> > a very long delay before any results are returned. Is this the best\n> > behavior?\n> \n> Unless you use a LIMIT, preferring the cheapest total cost still seems\n> like a win to me. (libpq, at least, doesn't give back any results till\n> the whole query has been retrieved, so the claims of \"higher cost\" and\n> \"long delay till first result\" are both specious.)\n\nThe table is pretty big, and I was performing the query with a binary cursor.\nIt really did take a little while to get results. I was using the query to\nperform data analysis on a table. (If you are familiar with NetPerceptions,\nthink something like that)\n\nThe application framework, without any extra processing, executed the entire\nquery with a sequential scan in about 4 minutes, it performed the index scan in\nabout 34 minutes. The analysis app, takes about two hours to run with the\nsequential scan.\n\nSo you are very right, it is much more efficient to run the sequential scan for\nthe whole table.\n",
"msg_date": "Wed, 31 Oct 2001 08:47:03 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Query planner, 7.2b1 select ... order by"
}
] |
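Tom's point about LIMIT can be sketched against the same table. This is illustrative SQL only (the plans are described in the thread above, not re-captured here): the planner's choice flips once it is told how many rows will actually be consumed.

```sql
-- Whole result wanted: total cost rules, so seqscan + sort wins.
EXPLAIN SELECT * FROM favorites ORDER BY scene_name_full;

-- Only the first few rows wanted: a fast-start index scan becomes cheaper.
EXPLAIN SELECT * FROM favorites ORDER BY scene_name_full LIMIT 5;

-- With a cursor the planner must guess; as described above, 7.2 assumes
-- roughly 10% of the rows will be fetched when costing the alternatives.
BEGIN;
DECLARE fav CURSOR FOR SELECT * FROM favorites ORDER BY scene_name_full;
FETCH 5 FROM fav;
CLOSE fav;
COMMIT;
```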
[
{
"msg_contents": "\nThis is using an almost up-to-date CVS version.\n\nSorry for the convoluted example:\n\n Create table t1(n text, f1 int, f2 int);\n create table g1(n text, t1n text);\n create table s1(k1 text, f1a int, f1b int, f2 int, x int, d timestamp);\n\n create view v1 as select k1, d, \n\t(select g1.n from g1, t1 where g1.t1n=t1.n and t1.f1 = s1.f1a and t1.f2 =\ns1.f2 limit 1) as a, \n\t(select g1.n from g1, t1 where g1.t1n=t1.n and t1.f1 = s1.f1b and t1.f2 =\ns1.f2 limit 1) as b,\n\tx\n from\n s1\n ;\n\n explain select coalesce(a, b, 'other') as name, k1, sum(x) as tot \n from v1 where\n d>'28-oct-2001 12:00' and d<current_timestamp \n group by 1,2 order by tot desc limit 40;\n ERROR: Sub-SELECT uses un-GROUPed attribute s1.f2 from outer query\n\nMaybe I am asking too much of views?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 29 Oct 2001 14:13:02 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Odd error in complex query (7.2): Sub-SELECT uses un-GROUPed..."
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Sorry for the convoluted example:\n\nA simplified example is \n\n\tcreate table t1(n text, f1 int);\n\tcreate table s1(f1a int, x int);\n\tcreate view v1 as select x,\n\t (select t1.n from t1 where t1.f1 = s1.f1a) as a\n\tfrom s1;\n\tselect a from v1 group by 1;\n\tERROR: Sub-SELECT uses un-GROUPed attribute s1.f1a from outer query\n\nThe expanded-out equivalent of the problem query is\n\n\tselect (select t1.n from t1 where t1.f1 = s1.f1a) as a from s1\n\tgroup by 1;\n\nwhich I believe is indeed illegal. But it seems like it ought to be\nlegal with the view in between ... ie, a view isn't purely a macro.\n\nThe implementation issue here is how to decide not to pull up the view\nsubquery (ie, not to flatten the query into the illegal form). We\nalready do that for certain conditions; we just have to figure out what\nadditional restriction should be used to preclude this case. The\nrestriction should be as tight as possible to avoid losing the ability\nto optimize queries using views.\n\nA simplistic idea is to not pull up views that contain subselects in\nthe targetlist, but I have a feeling that's not the right restriction.\nOr maybe it is --- maybe the point is that the view targetlist is\nlogically evaluated *before* the outer query executes, and we can't do\na pullup if evaluating it later would change the results.\n\nComments? I suspect this is trickier than it looks :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Oct 2001 14:36:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Odd error in complex query (7.2): Sub-SELECT uses un-GROUPed... "
},
{
"msg_contents": "At 14:36 29/10/01 -0500, Tom Lane wrote:\n>The expanded-out equivalent of the problem query is\n>\n>\tselect (select t1.n from t1 where t1.f1 = s1.f1a) as a from s1\n>\tgroup by 1;\n>\n>which I believe is indeed illegal. But it seems like it ought to be\n>legal with the view in between ... ie, a view isn't purely a macro.\n\nFWIW, MS SQL/Server won't even allow the view to be defined\n\nDec/RDB does, and it allows the query as well, with the following planner\noutput:\n\n Reduce Sort\n Cross block of 2 entries\n Cross block entry 1\n Get Retrieval sequentially of relation S1\n Cross block entry 2\n Aggregate Conjunct Get\n Retrieval sequentially of relation T1\n\nIt also allows:\n\n select (select t1.n from t1 where t1.f1 = s1.f1a) as a from s1\n group by (select t1.n from t1 where t1.f1 = s1.f1a);\n\nwith the same plan. Which does not, on the face of it, seem illegal to me.\n\nRDB usually rewrites column-select-expressions as cross-joins (with\nappropriate checking for multiple/no rows). Which seems to work well with\nmy expectations for both queries, although I presume this is not what the\nspec says?\n\n>The implementation issue here is how to decide not to pull up the view\n>subquery (ie, not to flatten the query into the illegal form).\n\nIt's not clear to me that it should be illegal - for every row in s1, it\nshould return the result of the column-select (which may be NULL) - or is\nthat what 'not flattening the query' does?\n\n>We\n>already do that for certain conditions; we just have to figure out what\n>additional restriction should be used to preclude this case. The\n>restriction should be as tight as possible to avoid losing the ability\n>to optimize queries using views.\n\nHow about whenever it will throw this error? ;-)\n\n>A simplistic idea is to not pull up views that contain subselects in\n>the targetlist, but I have a feeling that's not the right restriction.\n\nThat does seem excessive. I'm way over my head here, but can a column\nselect be implemented as a special JOIN that always returns 1 row (maybe\nNULL), and throws an error if more than one row? \n\n>Or maybe it is --- maybe the point is that the view targetlist is\n>logically evaluated *before* the outer query executes,\n\nThis is very nasty, and would really hurt the utility of views.\n\n> and we can't do\n>a pullup if evaluating it later would change the results.\n\nHuh?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 30 Oct 2001 11:49:28 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Odd error in complex query (7.2): Sub-SELECT"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 14:36 29/10/01 -0500, Tom Lane wrote:\n>> The expanded-out equivalent of the problem query is\n>> select (select t1.n from t1 where t1.f1 = s1.f1a) as a from s1\n>> group by 1;\n>> which I believe is indeed illegal.\n\n> Dec/RDB ... allows the query\n> It also allows:\n> select (select t1.n from t1 where t1.f1 = s1.f1a) as a from s1\n> group by (select t1.n from t1 where t1.f1 = s1.f1a);\n> with the same plan. Which does not, on the face of it, seem illegal to me.\n\nHmm. Maybe the query is legal, and the problem is just one of an\nincorrect check for ungrouped vars in subselects. Need to think more.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 29 Oct 2001 19:56:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Odd error in complex query (7.2): Sub-SELECT uses un-GROUPed... "
},
{
"msg_contents": "At 14:36 29/10/01 -0500, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> Sorry for the convoluted example:\n>\n>A simplified example is \n\nAnd here's a simpler one that seems to avoid views altogether:\n\n create table lkp(f1 int);\n create table t1(f1 int, x int);\n\n Select\n case when Exists(Select * From lkp where lkp.f1 = t1.f1) then\n 'known'\n else\n 'unknown'\n end as status, \n sum(x)\n from t1\n group by 1;\n\nIt's pretty similar to the sample you gave, but also presents the sort of\noperation people may well want to perform. \n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 30 Oct 2001 14:35:43 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Odd error in complex query (7.2): Sub-SELECT"
},
{
"msg_contents": "Philip Warner wrote:\n> \n> At 14:36 29/10/01 -0500, Tom Lane wrote:\n> >Philip Warner <pjw@rhyme.com.au> writes:\n> >> Sorry for the convoluted example:\n> >\n> >A simplified example is\n> \n> And here's a simpler one that seems to avoid views altogether:\n> \n> create table lkp(f1 int);\n> create table t1(f1 int, x int);\n> \n> Select\n> case when Exists(Select * From lkp where lkp.f1 = t1.f1) then\n> 'known'\n> else\n> 'unknown'\n> end as status,\n> sum(x)\n> from t1\n> group by 1;\n> \n\nA bit off-topic question, but is our optimiser smart enough to \nrecognize the query inside EXISTS as a LIMIT 1 query?\n\n------------\nHannu\n",
"msg_date": "Tue, 30 Oct 2001 10:43:36 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Odd error in complex query (7.2): Sub-SELECT"
},
{
"msg_contents": "At 10:43 30/10/01 +0200, Hannu Krosing wrote:\n>\n>A bit off-tppic question, but is our optimiser smart enough to \n>recognize the query inside exists as LIMIT 1 query ?\n>\n\nYep.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 30 Oct 2001 20:16:04 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Odd error in complex query (7.2): Sub-SELECT"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Select\n> case when Exists(Select * From lkp where lkp.f1 = t1.f1) then\n> 'known'\n> else\n> 'unknown'\n> end as status, \n> sum(x)\n> from t1\n> group by 1;\n\nOkay, I'm convinced: the problem is that the test for ungrouped vars\nused inside subselects is too simplistic. I think it's failing to\nconsider that if the whole subselect can be considered a grouped\nexpression, we shouldn't object to ungrouped individual vars within it.\nWill work on it.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Oct 2001 10:01:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Odd error in complex query (7.2): Sub-SELECT uses un-GROUPed... "
},
{
"msg_contents": "On Tue, 30 Oct 2001 11:49:28 +1100\nPhilip Warner wrote:\n\n> \n> It's not clear to me that it should be illegal - for every row in s1, it\n> should return the result of the column-select (which may be NULL) - or is\n> that what 'not flattening the query' does?\n> \n> >We\n> >already do that for certain conditions; we just have to figure out what\n> >additional restriction should be used to preclude this case. The\n> >restriction should be as tight as possible to avoid losing the ability\n> >to optimize queries using views.\n> \n> How about whenever it will throw this error? ;-)\n> \n> >A simplistic idea is to not pull up views that contain subselects in\n> >the targetlist, but I have a feeling that's not the right restriction.\n> \n> That does seem excessive. I'm way over my head here, but can a column\n> select be implemented as a special JOIN that always returns 1 row (maybe\n> NULL), and throws an error if more than one row? \n> \n\nHi,\n\nI wouldn't think most people need a query like this, but I also\nhad been puzzled as to how not to pull up. Finally the \nproblem could be solved by adding an ORDER BY statement.\nTherefore, if you add an ORDER BY to a view of your complex\nquery, it will work correctly. \n\nAnd, as long as each of the correlated subselects in the \ncolumn list always returns one row, I feel it is legal \nrather than illegal that its subselects can be GROUPed.\n\n\n\n-- on 7.1.2\n\ncreate table t1(n text, f1 int, f2 int);\ncreate table g1(n text, t1n text);\ncreate table s1(k1 text, f1a int, f1b int, f2 int, x int, d timestamp);\n\n\ncreate view v1 as\nselect k1, d, \n (select g1.n from g1, t1 \n where g1.t1n=t1.n and t1.f1 = s1.f1a and t1.f2 = s1.f2 limit 1) as a, \n (select g1.n from g1, t1 \n where g1.t1n=t1.n and t1.f1 = s1.f1b and t1.f2 = s1.f2 limit 1) as b,\n x\n from s1\n order by 1 -- *** an additional statement ***\n;\n\n\nexplain\nselect coalesce(a, b, 'other') as name, k1, sum(x) as tot \n from v1 \n where d > '28-oct-2001 12:00' and d < current_timestamp \n group by 1,2 \n order by tot desc limit 40;\n\n\n\n\nRegards,\nMasaru Sugawara\n\n",
"msg_date": "Wed, 31 Oct 2001 02:49:42 +0900",
"msg_from": "Masaru Sugawara <rk73@echna.ne.jp>",
"msg_from_op": false,
"msg_subject": "Re: Odd error in complex query (7.2): Sub-SELECT"
},
{
"msg_contents": "> Okay, I'm convinced: the problem is that the test for ungrouped vars\n> used inside subselects is too simplistic.\n\nNot only was that true, but the handling of GROUP BY expressions was\npretty grotty in general: they'd be re-evaluated at multiple levels of\nthe resulting plan tree. Which is not too bad for \"GROUP BY a+b\",\nbut it's unpleasant when a complex subselect is involved.\n\nI've committed fixes to CVS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Oct 2001 15:05:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Odd error in complex query (7.2): Sub-SELECT uses un-GROUPed... "
}
] |
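For what it's worth, Philip's EXISTS example can also be rewritten by hand as an outer join, which keeps the correlated subselect out of the grouping expression entirely. This is a sketch of a workaround, not the committed fix; the DISTINCT guards against duplicate lkp.f1 values, which a plain LEFT JOIN would multiply (unlike EXISTS).

```sql
-- Equivalent of the CASE WHEN EXISTS ... query above, as an outer join:
SELECT CASE WHEN k.f1 IS NOT NULL THEN 'known' ELSE 'unknown' END AS status,
       sum(t1.x)
  FROM t1
  LEFT JOIN (SELECT DISTINCT f1 FROM lkp) AS k ON k.f1 = t1.f1
 GROUP BY 1;
```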
[
{
"msg_contents": "\nThis executes quickly (as expected):\n\n explain select * from flow_stats where src_addr='1.1.1.1' \n order by log_date desc limit 5;\n NOTICE: QUERY PLAN:\n\n Limit (cost=1241.77..1241.77 rows=5 width=116)\n -> Sort (cost=1241.77..1241.77 rows=307 width=116)\n -> Index Scan using flow_stats_ix6 on flow_stats\n(cost=0.00..1229.07 rows=307 width=116)\n\nBut this executes slowly:\n\n explain select * from flow_stats where src_addr='1.1.1.1' order by\nlog_date desc limit 3;\n NOTICE: QUERY PLAN:\n\n Limit (cost=0.00..796.61 rows=3 width=116)\n -> Index Scan Backward using flow_stats_ix4 on flow_stats\n(cost=0.00..81594.14 rows=307 width=116)\n\nWhere \n\nflow_stats_ix4 is (log_date)\nflow_stats_ix6 is (src_addr,log_date)\n\nThe reason for the slowness is that the given source address does not\nexist, and it has to scan through the entire index to determine that the\nrequested value does not exist (same is true for rare values).\n\nCan the optimizer/planner be told to do an 'Index Scan Backward' on\nflow_stats_ix6, or even just an 'Index Scan' & Sort? Or are backward scans\nof secondary index segments not implemented?\n\n\n\n\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 29 Oct 2001 16:12:23 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "planner/optimizer question"
},
{
"msg_contents": "Philip Warner wrote:\n> \n> This executes quickly (as expected):\n> \n> explain select * from flow_stats where src_addr='1.1.1.1'\n> order by log_date desc limit 5;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=1241.77..1241.77 rows=5 width=116)\n> -> Sort (cost=1241.77..1241.77 rows=307 width=116)\n> -> Index Scan using flow_stats_ix6 on flow_stats\n> (cost=0.00..1229.07 rows=307 width=116)\n> \n> Bue this executes slowly:\n> \n> explain select * from flow_stats where src_addr='1.1.1.1' order by\n> log_date desc limit 3;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=0.00..796.61 rows=3 width=116)\n> -> Index Scan Backward using flow_stats_ix4 on flow_stats\n> (cost=0.00..81594.14 rows=307 width=116)\n> \n> Where\n> \n> flow_stats_ix4 is (log_date)\n> flow_stats_ix6 is (src_addr,log_date)\n> \n> The reason for the slowness is that the given source address does not\n> exist, and it has to scan through the entire index to determine that the\n> requested value does not exist (same is true for rare values).\n> \n> Can the optimizer/planner be told to do an 'Index Scan Backward' on\n> flow_stats_ix6, or even just an 'Index Scan' & Sort? Or are backward scans\n> of secondary index segments not implemented?\n\nHow about the following ?\n\n explain select * from flow_stats where src_addr='1.1.1.1'\n order by src_addr desc, log_date desc limit 3;\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 29 Oct 2001 15:40:00 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: planner/optimizer question"
},
{
"msg_contents": "At 15:40 29/10/01 +0900, Hiroshi Inoue wrote:\n>Philip Warner wrote:\n>\n>How about the following ?\n>\n> explain select * from flow_stats where src_addr='1.1.1.1'\n> order by src_addr desc, log_date desc limit 3;\n>\n\nYep, that works. Thanks. \n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Mon, 29 Oct 2001 20:23:39 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: planner/optimizer question"
}
] |
[
{
"msg_contents": "Hi All,\n\nI was just mulling over how hard it would be to implement Java stored\nprocedures (mulling being the operative word) and was thinking how to\nbest implement postgresql <-> java communications. (ie shared memory\nvia JNI?) I have read the past posts regarding possible PL/Java\nimplementations and it's basically stopped at \"How to implement\nPostgreSQL <-> Java data passing?\".\n\nI'm not sure if I'd have time to (nor the skill to) actually implement\nanything, but ...\n\nBasically, I thought that pljava_init_all could fork and start the JVM\nif it hasn't already been started. Then using the single instance of\nJVM, run a base class (PLJavaLoader) which (loops and) checks postgresql\n(*) to see if there is a waiting procedure call. If there is, then it\nwould dynamically load the procedure's class (which is of base class\nPLJavaProcedure) and run it in a Thread. Each class instance would run\nin its own thread, so there would be only one JVM for all postgresql\nprocesses, but one thread for each called pljava procedure. For\nfunctions, the thread would then load the data for the function's\narguments (*). The java procedure / function would then do its work and\nthen return any data (*).\n\nInstead of using SPI to execute sql, the class would use jdbc to connect\nto the server. The PLJavaLoader class could cache JDBC connections to\nreduce the connection overhead (?).\n\nPlaces marked (*) are where java and postgresql would need to\ncommunicate. It may be possible to use a temporary table to pass stuff\nback and forth, but this may be problematic and slow (?). I was\nthinking JNI + shared memory or named pipes, but I could be way off\nbase. Portability would probably be an issue here.\n\nAnother thought was to use a SOAP interface to pass data between\npostgresql and the JVM. I think the overhead of SOAP using HTTP and\nalso the java class having a connection to postgresql may be too much (?).\n\nI understand that this is probably way too simplistic, but looking at\npltcl.c, most of it (looks as though it) could be done fairly simply.\n\nLoading pljava classes could be done like C functions: ie\n\nCREATE FUNCTION overpaid(int4, int4)\nRETURNS bool\nAS 'PGROOT/java/classes/funcs.class'\nLANGUAGE 'java'\nNAME 'MyClasses.CalcOverpaid';\n\nwhich would have to be javac'ed beforehand. The NAME bit points to the\nactual Java class that is run for the function.\n\npackage MyClasses;\npublic class CalcOverpaid extends PLJavaProcedure\n{\n public CalcOverpaid()\n {\n super(); //this calls getDataFromPgsql (JNI function)\n //initialize class\n }\n public void run()\n {\n // do work here\n }\n //PLJavaProcedure calls sendDataToPgsql (somehow)\n}\n\nThe actual Java structure would of course have to be worked out. It\nmight be better if the class to run wasn't itself a Thread'ed class\n(from PLJavaProcedure), but was run _in_ a thread from PLJavaLoader.\n(then the programmer wouldn't necessarily have to follow any real\nconvention, the Loader sets everything up, gets the necessary data from\npostgresql (argument data etc) and then loads and runs the class, which\nthen returns, and the Loader then returns the data to postgresql.\n\nI'm sure that using this sort of single instance of the JVM would be\nmore desirable than starting the JVM each and every time a java procedure\nwas run.\n\nI apologise if I have this all wrong. I'm not a master coder, and\ndefinitely don't know the internals of postgresql well, but I thought I\nmay as well put this out there for the hell of it.\n\nAshley Cambrell\n\n",
"msg_date": "Mon, 29 Oct 2001 17:35:35 +1100",
"msg_from": "Ashley Cambrell <ash@freaky-namuh.com>",
"msg_from_op": true,
"msg_subject": "Best way for PostgreSQL to pass info to java and back again?\n (PL/Java)"
},
{
"msg_contents": "----- Original Message ----- \nFrom: Ashley Cambrell <ash@freaky-namuh.com>\nSent: Monday, October 29, 2001 1:35 AM\n\n> I was just mulling over how hard it would be to implement Java stored\n> procedures ...\n\nThis is an interesting proposal. I was thinking of it also\nsome time ago when someone inquired whether PG has PL/Java or not...\n\n> (mulling being the operative word) and was thinking how to\n> best implement postgresql <-> java communications. (ie shared memory\n> via JNI?) I have read the past posts regarding possible PL/Java\n> implementations and it's basically stopped at \"How to implement\n> PostgreSQL <-> Java data passing?\".\n\nMaybe the same or a similar way as JDBC folks address it?\n\n> I'm not sure if I'd have time to (nor the skill to) actually implement\n> anything, but ...\n\nSame here, but besides having skilled people working on it one has\nto initiate the idea first, that's what you did :) and it might\nturn out to be something more tangible after a fair amount of discussion.\n\n> Instead of using SPI to execute sql, the class would use jdbc to connect\n> to the server. The PLJavaLoader class could cache JDBC connections to\n> reduce the connection overhead (?).\n\nI'm thinking not to use the JDBC directly, not only to reduce the\nconnection overhead but also calls to JDBC layer itself. Instead,\none can possibly reuse some of the JDBC code, me thinks.\n\nJust a couple of quick thoughts...\n\n-s\n\n",
"msg_date": "Mon, 29 Oct 2001 15:06:13 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: Best way for PostgreSQL to pass info to java and back again?\n\t(PL/Java)"
},
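Serguei's suggestion of reusing pieces of the JDBC code mostly amounts to the driver's low-level wire framing. As a self-contained illustration of that layer, here is a sketch of the big-endian integer framing it performs; the class and method names are invented for the example, not taken from the driver.

```java
// Sketch of backend-protocol integer framing: pack an int into `siz`
// big-endian bytes (low byte at the highest index, i.e. network byte
// order), plus the inverse. Names here are hypothetical.
public class PgIntCodec {
    // Encode val into siz big-endian bytes.
    public static byte[] encode(int val, int siz) {
        byte[] buf = new byte[siz];
        for (int i = siz - 1; i >= 0; i--) {
            buf[i] = (byte) (val & 0xff);
            val >>= 8;
        }
        return buf;
    }

    // Rebuild the int from big-endian bytes.
    public static int decode(byte[] buf) {
        int val = 0;
        for (byte b : buf) {
            val = (val << 8) | (b & 0xff);
        }
        return val;
    }

    public static void main(String[] args) {
        byte[] wire = encode(258, 4); // {0, 0, 1, 2}
        System.out.println(decode(wire)); // prints 258
    }
}
```

A shared "notify" socket as described below would only need framing like this on each side to agree on message lengths and argument sizes.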
{
"msg_contents": "Hi Serguei,\n\nSerguei Mokhov wrote:\n\n>----- Original Message ----- \n>From: Ashley Cambrell <ash@freaky-namuh.com>\n>Sent: Monday, October 29, 2001 1:35 AM\n>\n>>I was just mulling over how hard it would to implement Java stored\n>>procedures ...\n>>\n>\n>This is an interesting proposal. I was thiniking of it also\n>some time ago when someone inquired whether PG has PL/Java or not...\n>\n>>(mulling being the operative word) and was thinking how to\n>>best implement postgresql <-> java communications. (ie shared memory\n>>via JNI?) I have read the past posts regarding possible PL/Java\n>>implementations and it's basically stopped at \"How to implement\n>>Postrgesql <-> Java data passing?\".\n>>\n>\n>Maybe the same or a similar way as JDBC folks address it?\n>\nI have looked at the JDBC drivers, and it seems to be all native Java \ncode. They actually use socket and standard buffered i/o readers to to \nthe protcol transport. (no JNI anywhere [I think])\n\nThe main question is still how to communicate low level (pre jdbc \nconnection) to postgresql. How could the waiting JVM (and procedure \nrunner) get notified that a waiting procedure request was there. Maybe \n a very small network layer instead of shared memory. 
A single shared \n\"notify\" socket on postgresql's end tells the JVM that a request is \nthere [this request contains enough info to dynamically load the base \nprocedure class and start a thread]; the JVM then opens another socket \nfor 2 way comms (so as not to block the notify socket), it then gets the \nargs for the procedure, runs the procedure and then sends the results \nback down that comms socket.\n\n??\n\n>>I'm not sure if I'd have time to (nor the skill to) actually implement\n>>anything, but ...\n>>\n>\n>Same here, but besides having skilled people working on it one has\n>to initiate the idea first, that's what you did :) and it might\n>turn out to something more tangeable after fair amount of discussion.\n>\nMy thoughts exactly :-)\n\n>>Instead of using SPI to execute sql, the class would use jdbc to connect\n>>to the server. The PLJavaLoader class could cache JDBC connections to\n>>reduce the connection overhead (?).\n>>\n>\n>I'm thinking not to use the JDBC directly, not only to reduce the\n>connection overhead but also calls to JDBC layer itself. Instead,\n>one can possibly reuse some of the JDBC code, me thinks.\n>\nThe jdbc code is a very thin wrapper around a Java-implemented FE (?) \nprotocol. Stuff like:\n\n/**\n * Sends an integer to the back end\n *\n * @param val the integer to be sent\n * @param siz the length of the integer in bytes (size of structure)\n * @exception IOException if an I/O error occurs\n */\n public void SendInteger(int val, int siz) throws IOException\n {\n byte[] buf = bytePoolDim1.allocByte(siz);\n\n while (siz-- > 0)\n {\n buf[siz] = (byte)(val & 0xff);\n val >>= 8;\n }\n Send(buf);\n }\n\n\nThere doesn't seem to be any advantage to using anything under the jdbc \nprotocol. 
It would only make it more complex.\n\nUnless there was a reason to use SPI that I don't know about?\n\n>\n>Just a couple of quick thoughts...\n>\nThanks Serguei\n\n>\n>-s\n>\n>\nAshley Cambrell",
"msg_date": "Tue, 30 Oct 2001 08:56:32 +1100",
"msg_from": "Ashley Cambrell <ash@freaky-namuh.com>",
"msg_from_op": true,
"msg_subject": "Re: Best way for Postrgesql to pass info to java and back again?\n\t(PL/Java)"
}
] |
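The `SendInteger` helper quoted in this thread frames an integer as `siz` bytes in network (big-endian) order by filling the buffer from its tail. A minimal standalone sketch of that framing logic (hypothetical class and method names, not the actual JDBC driver source):

```java
// Hypothetical standalone rewrite of the SendInteger framing quoted
// above -- not the org.postgresql driver code itself.
public class IntFraming {
    // Encode `val` as `siz` bytes, most significant byte first
    // (network order): the lowest byte of val lands in the last
    // slot of the buffer, exactly as the quoted loop does.
    static byte[] encodeInteger(int val, int siz) {
        byte[] buf = new byte[siz];
        while (siz-- > 0) {
            buf[siz] = (byte) (val & 0xff);
            val >>= 8;
        }
        return buf;
    }

    public static void main(String[] args) {
        byte[] b = encodeInteger(0x0102, 2);
        System.out.println(b[0] + "," + b[1]); // high byte first: 1,2
    }
}
```

The same loop serves for the 2- and 4-byte integers the frontend/backend protocol exchanges; only `siz` changes.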
[
{
"msg_contents": "I found a nasty bug in psql which causes the regression test failed.\nThe bug is a illegal call to calloc with 0 element parameter. Note\nthat it only shows up with MB enabled. Also calloc on some platforms\nseem to accept such a parameters (I found the bug on AIX 5L).\n\nFor those who are doing the beta test, patches against beta1 included.",
"msg_date": "Mon, 29 Oct 2001 15:40:33 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "psql bug in 7.2 beta1"
}
] |
[
{
"msg_contents": "\n> Bue this executes slowly:\n> \n> explain select * from flow_stats where src_addr='1.1.1.1' order by\n> log_date desc limit 3;\n> NOTICE: QUERY PLAN:\n> \n> Limit (cost=0.00..796.61 rows=3 width=116)\n> -> Index Scan Backward using flow_stats_ix4 on flow_stats\n> (cost=0.00..81594.14 rows=307 width=116)\n> \n> Where \n> \n> flow_stats_ix4 is (log_date)\n> flow_stats_ix6 is (src_addr,log_date)\n\nThis would be a possible optimization, that other db's also seem to miss\n(at least in older versions). The trick with all ot them is to include\nthe =constant restricted column in the order by:\n\nselect * from flow_stats where src_addr='1.1.1.1' \norder by src_addr desc, log_date desc limit 3;\n\nNote, that because src_addr is fixed it won't change the result.\n\nAndreas\n",
"msg_date": "Mon, 29 Oct 2001 11:02:01 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: planner/optimizer question"
}
] |
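Why the rewrite above helps: a composite index on (src_addr, log_date) stores its keys in lexicographic order, so within one fixed src_addr value the entries are already sorted by log_date. `ORDER BY src_addr DESC, log_date DESC` therefore matches a backward scan of flow_stats_ix6 exactly and needs no separate sort step. A toy illustration of this ordering property (made-up data and hypothetical class name, not PostgreSQL internals):

```java
import java.util.*;

public class CompositeIndexOrder {
    // Mimic a composite btree's lexicographic key order on
    // (src_addr, log_date) with a TreeSet and a two-part comparator.
    static List<String> datesForBackwardScan(String src) {
        Comparator<String[]> cmp = Comparator
                .comparing((String[] k) -> k[0])
                .thenComparing((String[] k) -> k[1]);
        TreeSet<String[]> index = new TreeSet<>(cmp);
        index.add(new String[]{"1.1.1.1", "2001-10-01"});
        index.add(new String[]{"1.1.1.1", "2001-10-03"});
        index.add(new String[]{"2.2.2.2", "2001-10-02"});
        index.add(new String[]{"1.1.1.1", "2001-10-02"});

        // A backward scan visits keys in reverse lexicographic order;
        // rows with the fixed src_addr come out already sorted by
        // log_date descending, so no sort node is needed.
        List<String> dates = new ArrayList<>();
        for (String[] k : index.descendingSet())
            if (k[0].equals(src)) dates.add(k[1]);
        return dates;
    }

    public static void main(String[] args) {
        // Prints [2001-10-03, 2001-10-02, 2001-10-01]
        System.out.println(datesForBackwardScan("1.1.1.1"));
    }
}
```

With a LIMIT on top, the scan can stop after the first few matching keys, which is why the rewritten query avoids walking the whole single-column index.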
[
{
"msg_contents": "Hello,\n\nI discussed a problem concerning the speed of PostgreSQL compared to\nMS SQL server heavily on postgres-general list. The thread starts with\nmessage\n\n http://fts.postgresql.org/db/mw/msg.html?mid=1035557\n\nNow I tried a snapshot of version 7.2 and got an increase of speed of\nabout factor 2. But sorry this is really not enough. The very simple\ntest I pointed to in my mail is even much to slow and the issue would\nprobably spoil down the whole project which should be a complete open\nsource solution and would perhaps and in any M$ stuff. I�ve got under\nheavy preasur from my employer who was talking about the nice world\nof MS .net (while he is using MS-SQL exclusively). To make the thing\nclear the issue is the gal database of infectious diseases in Germany\nrunned by the Robert Koch-Institute. So the beast could be of some\nimportance for increasing the acceptance of PostgreSQL and Open Source\nin the field of medicine which is generally known for the money which\nis involved in. So I really hope that some skilled programmers would\nbe able to find a good way to solve the performance issue perhaps by\njust profiling the simple query\n\n SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz FROM Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20))\nGROUP BY Hauptdaten_Fall.MeldeKategorie ORDER BY Hauptdaten_Fall.MeldeKategorie;\n\nto the data set I put on\n\n http://www.foodborne-net.de/~tillea/ifsg/ifsgtest.dump.bz2\n\nIf this should take less than half a second on a modern PC I could\ncontinue to try mo realistic queries.\n\nI really hope that I could readjust the image of PostgreSQL in the\neyes of my M$-centered colleagues.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Mon, 29 Oct 2001 13:43:37 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": true,
"msg_subject": "Serious performance problem"
},
{
"msg_contents": "Seems that problem is very simple :))\nMSSql can do queries from indexes, without using actual table at all.\nPostgresql doesn't.\n\nSo mssql avoids sequental scanning of big table, and simply does scan of\nindex which is already in needed order and has very much less size.\n\nOn Mon, Oct 29, 2001 at 01:43:37PM +0100, Tille, Andreas wrote:\n> Hello,\n> \n> I discussed a problem concerning the speed of PostgreSQL compared to\n> MS SQL server heavily on postgres-general list. The thread starts with\n> message\n> \n> http://fts.postgresql.org/db/mw/msg.html?mid=1035557\n> \n> Now I tried a snapshot of version 7.2 and got an increase of speed of\n> about factor 2. But sorry this is really not enough. The very simple\n> test I pointed to in my mail is even much to slow and the issue would\n> probably spoil down the whole project which should be a complete open\n> source solution and would perhaps and in any M$ stuff. I've got under\n> heavy preasur from my employer who was talking about the nice world\n> of MS .net (while he is using MS-SQL exclusively). To make the thing\n> clear the issue is the gal database of infectious diseases in Germany\n> runned by the Robert Koch-Institute. So the beast could be of some\n> importance for increasing the acceptance of PostgreSQL and Open Source\n> in the field of medicine which is generally known for the money which\n> is involved in. 
So I really hope that some skilled programmers would\n> be able to find a good way to solve the performance issue perhaps by\n> just profiling the simple query\n> \n> SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz FROM Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20))\n> GROUP BY Hauptdaten_Fall.MeldeKategorie ORDER BY Hauptdaten_Fall.MeldeKategorie;\n> \n> to the data set I put on\n> \n> http://www.foodborne-net.de/~tillea/ifsg/ifsgtest.dump.bz2\n> \n> If this should take less than half a second on a modern PC I could\n> continue to try mo realistic queries.\n> \n> I really hope that I could readjust the image of PostgreSQL in the\n> eyes of my M$-centered colleagues.\n> \n> Kind regards\n> \n> Andreas.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Mon, 29 Oct 2001 15:45:45 +0200",
"msg_from": "Vsevolod Lobko <seva@sevasoft.kiev.ua>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "Andreas - \nI took a look at your problem, since I'm sort of in the field,\nand would liek to see free solutions spread, as well.\n\nHere's what I see: Your example touches on what can be an achilles\nheel for pgsql's current statistical analyzer: selection on data fields\nthat have a few common values. Often, the indices don't get used, since\na large fraction of the table needs to be scanned, in any case. In\nyour example, fully 68% of the table fits the where condition.\n\nHere's some timing results on my machine:\n\nYour dataset and query, as written:\n\nreal 0m25.272s\nuser 0m0.090s\nsys 0m0.050s\n\nCreating an index on meldekategorie, and forcing it's use with\n\"set enable_seqscan = off\"\n\nreal 0m14.743s\nuser 0m0.070s\nsys 0m0.050s\n\nSame, with index on istaktuell:\n\nreal 0m26.511s\nuser 0m0.050s\nsys 0m0.060s\n\nNow, with an index on both meldekategorie and istaktuell:\n\nreal 0m7.179s\nuser 0m0.060s\nsys 0m0.030s\n\nI think we have a winner. No it's not sub-second, but I improved the time\nby 3x just by trying some indices. Note that I _still_ had to force the\nuse of indices for this one. It's also the first time I've personally seen\na query/dataset that benefits this much from a two-key index.\n\nAs another poster replied to you, there is limitation with postgresql's\nuse of indices that arises from MVCC: even if the only data requested is\nthat stored in the index itself, the backend must visit the actual tuple\nin the table to ensure that it is 'visible' to the current transaction.\n\nHow realistic a representation of your real workload is this query? Realize\nthat more selective, complex queries are where pgsql shines compared to\nother RDBMS: the 'fast table scanner' type query that you proposed as your\ntest don't really let pgsql stretch it's legs. 
Do you have example timings\nfrom MS-SQL or others?\n\nRoss\n\nOn Mon, Oct 29, 2001 at 01:43:37PM +0100, Tille, Andreas wrote:\n> Hello,\n> \n> I discussed a problem concerning the speed of PostgreSQL compared to\n> MS SQL server heavily on postgres-general list. The thread starts with\n> message\n> \n> http://fts.postgresql.org/db/mw/msg.html?mid=1035557\n> \n> Now I tried a snapshot of version 7.2 and got an increase of speed of\n> about factor 2. But sorry this is really not enough. The very simple\n> test I pointed to in my mail is even much to slow and the issue would\n> probably spoil down the whole project which should be a complete open\n> source solution and would perhaps and in any M$ stuff. I?ve got under\n> heavy preasur from my employer who was talking about the nice world\n> of MS .net (while he is using MS-SQL exclusively). To make the thing\n> clear the issue is the gal database of infectious diseases in Germany\n> runned by the Robert Koch-Institute. So the beast could be of some\n> importance for increasing the acceptance of PostgreSQL and Open Source\n> in the field of medicine which is generally known for the money which\n> is involved in. 
So I really hope that some skilled programmers would\n> be able to find a good way to solve the performance issue perhaps by\n> just profiling the simple query\n> \n> SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz FROM Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20))\n> GROUP BY Hauptdaten_Fall.MeldeKategorie ORDER BY Hauptdaten_Fall.MeldeKategorie;\n> \n> to the data set I put on\n> \n> http://www.foodborne-net.de/~tillea/ifsg/ifsgtest.dump.bz2\n> \n> If this should take less than half a second on a modern PC I could\n> continue to try mo realistic queries.\n> \n> I really hope that I could readjust the image of PostgreSQL in the\n> eyes of my M$-centered colleagues.\n> \n> Kind regards\n> \n> Andreas.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nRoss Reedstrom, Ph.D. reedstrm@rice.edu\nExecutive Director phone: 713-348-6166\nGulf Coast Consortium for Bioinformatics fax: 713-348-6182\nRice University MS-39\nHouston, TX 77005\n",
"msg_date": "Mon, 29 Oct 2001 11:31:54 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Mon, 29 Oct 2001, Ross J. Reedstrom wrote:\n\n> Here's what I see: Your example touches on what can be an achilles\n> heel for pgsql's current statistical analyzer: selection on data fields\n> that have a few common values. Often, the indices don't get used, since\n> a large fraction of the table needs to be scanned, in any case. In\n> your example, fully 68% of the table fits the where condition.\n> ...\n>\n> I think we have a winner. No it's not sub-second, but I improved the time\n> by 3x just by trying some indices. Note that I _still_ had to force the\n> use of indices for this one. It's also the first time I've personally seen\n> a query/dataset that benefits this much from a two-key index.\nThis is true for this example and I also played with indices as you. I also\nenforced the index scan and compared with forbidding the index scan. The\nresult was on my more realistic examples that both versions performed quite\nthe same. There was no *real* difference. For sure in this simple query there\nis a difference but the real examples showed only 2% - 5% speed increase\n(if not slower with enforcing index scans!).\n\n> As another poster replied to you, there is limitation with postgresql's\n> use of indices that arises from MVCC: even if the only data requested is\n> that stored in the index itself, the backend must visit the actual tuple\n> in the table to ensure that it is 'visible' to the current transaction.\nAny possibility to switch of this temporarily for certain queries like this\nif the programmer could make sure that it is not necessary? Just a stupid\nidea from a bloody uneducated man in database-engeniering.\n\n> How realistic a representation of your real workload is this query? Realize\n> that more selective, complex queries are where pgsql shines compared to\n> other RDBMS: the 'fast table scanner' type query that you proposed as your\n> test don't really let pgsql stretch it's legs. 
Do you have example timings\n> from MS-SQL or others?\nUnfortunately the four tests we did here all seemed to suffer from the\nsame problem. The situation is that there is a given database structure\nwhich was developed over more than a year on MS-SQL and has an Access GUI.\nNow parts of the UI should be made public via the web (I want to use Zope),\nand I just imported the data and did some example queries with the\nterribly slow results.\n\nKind regards and thanks for your ideas\n\n Andreas.\n",
"msg_date": "Tue, 30 Oct 2001 10:55:50 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Mon, 29 Oct 2001, Vsevolod Lobko wrote:\n\n> Seems that problem is very simple :))\n> MSSql can do queries from indexes, without using actual table at all.\n> Postgresql doesn't.\n>\n> So mssql avoids sequental scanning of big table, and simply does scan of\n> index which is already in needed order and has very much less size.\nHmmm, could anyone imagine a simple or not *solution* of the Problem.\nI�m thinking of some switch the database programmer could use if he\nreally knows what he is doing.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Tue, 30 Oct 2001 10:59:10 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "\"Tille, Andreas\" wrote:\n> \n> On Mon, 29 Oct 2001, Ross J. Reedstrom wrote:\n> \n> > Here's what I see: Your example touches on what can be an achilles\n> > heel for pgsql's current statistical analyzer: selection on data fields\n> > that have a few common values. Often, the indices don't get used, since\n> > a large fraction of the table needs to be scanned, in any case. In\n> > your example, fully 68% of the table fits the where condition.\n> > ...\n> >\n> > I think we have a winner. No it's not sub-second, but I improved the time\n> > by 3x just by trying some indices. Note that I _still_ had to force the\n> > use of indices for this one. It's also the first time I've personally seen\n> > a query/dataset that benefits this much from a two-key index.\n> This is true for this example and I also played with indices as you. I also\n> enforced the index scan and compared with forbidding the index scan. The\n> result was on my more realistic examples that both versions performed quite\n> the same. There was no *real* difference. 
For sure in this simple query there\n> is a difference but the real examples showed only 2% - 5% speed increase\n> (if not slower with enforcing index scans!).\n\nI studied his dataset and found that a simple count(*) on whole table \ntook 1.3 sec on my Celeron 375 so I'm sure that the more complex query, \nwhich has to visit 2/3 of tuples will not be able to execute under 1 sec\n\nMy playing with indexes / subqueries and query rewriting got the example \nquery (actually a functional equivalent) to run in ~5 sec with simple \naggregate(group(indexscan))) plan and I suspect that this is how fast \nit will be on my hardware\n\nIt could probably be soon possible to make it run in ~ 1.5 by using an\naggregate \nfunction that does a sequential scan and returns a rowset.\n\n> > As another poster replied to you, there is limitation with postgresql's\n> > use of indices that arises from MVCC: even if the only data requested is\n> > that stored in the index itself, the backend must visit the actual tuple\n> > in the table to ensure that it is 'visible' to the current transaction.\n> Any possibility to switch of this temporarily for certain queries like this\n> if the programmer could make sure that it is not necessary? Just a stupid\n> idea from a bloody uneducated man in database-engeniering.\n\nThere have been plans to set aside a bit in index that would mark the\ndeleted \ntuple. Unfortunately this helps only in cases when there are many\ndeleted tuples\nand all live tuples have to be checked anyway ;(\n\n--------------\nHannu\n",
"msg_date": "Tue, 30 Oct 2001 12:20:57 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "\"Tille, Andreas\" wrote:\n> \n> Hello,\n> \n> I discussed a problem concerning the speed of PostgreSQL compared to\n> MS SQL server heavily on postgres-general list. The thread starts with\n> message\n> \n> http://fts.postgresql.org/db/mw/msg.html?mid=1035557\n> \n> Now I tried a snapshot of version 7.2 and got an increase of speed of\n> about factor 2. But sorry this is really not enough. The very simple\n> test I pointed to in my mail is even much to slow and the issue would\n> probably spoil down the whole project which should be a complete open\n> source solution and would perhaps and in any M$ stuff. I�ve got under\n> heavy preasur from my employer who was talking about the nice world\n> of MS .net (while he is using MS-SQL exclusively). To make the thing\n> clear the issue is the gal database of infectious diseases in Germany\n> runned by the Robert Koch-Institute. So the beast could be of some\n> importance for increasing the acceptance of PostgreSQL and Open Source\n> in the field of medicine which is generally known for the money which\n> is involved in. 
So I really hope that some skilled programmers would\n> be able to find a good way to solve the performance issue perhaps by\n> just profiling the simple query\n> \n> SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz FROM Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20))\n> GROUP BY Hauptdaten_Fall.MeldeKategorie ORDER BY Hauptdaten_Fall.MeldeKategorie;\n> \n> to the data set I put on\n> \n> http://www.foodborne-net.de/~tillea/ifsg/ifsgtest.dump.bz2\n> \n> If this should take less than half a second on a modern PC I could\n> continue to try mo realistic queries.\n\n\nI tried some more on optimizing the query on my work computer \n(AMD ATHLON 850, 512MB, PostgreSQL 7.1.3 with default memory settings)\n\n\nSELECT MeldeKategorie, \n Count(ID) AS Anz\n FROM Hauptdaten_Fall \n WHERE IstAktuell=20\nGROUP BY MeldeKategorie \nORDER BY MeldeKategorie;\n\nreal 0m9.675s\n\ncreate index i1 on Hauptdaten_Fall(IstAktuell,MeldeKategorie);\n\n----------------------------\nset enable_seqscan = off;\nSELECT MeldeKategorie, \n Count(ID) AS Anz\n FROM Hauptdaten_Fall \n WHERE IstAktuell=20\nGROUP BY MeldeKategorie \nORDER BY MeldeKategorie;\n\nAggregate (cost=4497.30..4510.18 rows=258 width=16)\n -> Group (cost=4497.30..4503.74 rows=2575 width=16)\n -> Sort (cost=4497.30..4497.30 rows=2575 width=16)\n -> Index Scan using i1 on hauptdaten_fall \n(cost=0.00..4351.40 rows=2575 width=16)\n\nreal 0m7.131s\n\n---------------------------\n\nset enable_seqscan = off;\nSELECT MeldeKategorie, \n Count(ID) AS Anz\n FROM Hauptdaten_Fall \n WHERE IstAktuell=20\nGROUP BY IstAktuell,MeldeKategorie\nORDER BY IstAktuell,MeldeKategorie;\n\nAggregate (cost=4497.30..4510.18 rows=258 width=16)\n -> Group (cost=4497.30..4503.74 rows=2575 width=16)\n -> Index Scan using i1 on hauptdaten_fall (cost=0.00..4351.40\nrows=2575 width=16)\n\nreal 0m3.223s\n\n-- same after doing\n\ncluster i1 on Hauptdaten_Fall;\n\nreal 1.590 -- 1.600\n\n\n\nselect count(*) from 
Hauptdaten_Fall;\n\nreal 0m0.630s\n\n---------------------------\n\nThe following query is marginally (about 0.1 sec) faster, though the \nplan looks the same down to cost estimates.\n\nSET ENABLE_SEQSCAN = OFF;\nSELECT MeldeKategorie, \n Count(*) AS Anz\n FROM (select IstAktuell,MeldeKategorie from Hauptdaten_Fall where\nIstAktuell=20) sub\nGROUP BY IstAktuell,MeldeKategorie\nORDER BY IstAktuell,MeldeKategorie;\n\nAggregate (cost=0.00..4370.72 rows=258 width=16)\n -> Group (cost=0.00..4364.28 rows=2575 width=16)\n -> Index Scan using i1 on hauptdaten_fall (cost=0.00..4351.40\nrows=2575 width=16)\n\nreal 0m1.438s - 1.506s\n\n---------------------------\n\nnow I make the dataset bigger keeping the number of rows returned by\nquery the same\n\ninsert into hauptdaten_fall (istaktuell, meldekategorie)\nselect istaktuell + 20, meldekategorie \nfrom hauptdaten_fall ;\n\nINSERT 0 257530\n\ninsert into hauptdaten_fall (istaktuell, meldekategorie)\nselect istaktuell + 40, meldekategorie \nfrom hauptdaten_fall ;\n\nINSERT 0 515060\nifsgtest=# select count(*) from hauptdaten_fall;\n count \n---------\n 1030120\n(1 row)\n\ncluster i1 on Hauptdaten_Fall;\nvacuum analyze;\n\n\n-- The query time is still the same 1.44 - 1.5 sec\n\nSET ENABLE_SEQSCAN = OFF;\nSELECT MeldeKategorie, \n Count(*) AS Anz\n FROM (select IstAktuell,MeldeKategorie from Hauptdaten_Fall where\nIstAktuell=20) sub\nGROUP BY IstAktuell,MeldeKategorie\nORDER BY IstAktuell,MeldeKategorie;\n\nAggregate (cost=0.00..4370.72 rows=258 width=16)\n -> Group (cost=0.00..4364.28 rows=2575 width=16)\n -> Index Scan using i1 on hauptdaten_fall (cost=0.00..4351.40\nrows=2575 width=16)\n\nreal 0m1.438s - 1.506s\n\n----------------------------\n\nnow back to original data distribution, just 4 times bigger\n\nifsgtest=# update hauptdaten_fall\nifsgtest-# set istaktuell = case when istaktuell % 20 = 0 then 20 else\n10 end\nifsgtest-# ;\nUPDATE 1030120\nifsgtest=# vacuum analyze;\nVACUUM\n\nSET ENABLE_SEQSCAN = OFF;\nSELECT 
MeldeKategorie, \n Count(*) AS Anz\n FROM (select IstAktuell,MeldeKategorie from Hauptdaten_Fall where\nIstAktuell=20) sub\nGROUP BY IstAktuell,MeldeKategorie\nORDER BY IstAktuell,MeldeKategorie;\n\nreal 0m6.077 -- 6.606s\n\nand after clustering:\ncluster i1 on Hauptdaten_Fall;\n\nreal 0m5.683 - 5.750s\n\nso it's linear growth here\n\n----------------------------\n\nHannu\n",
"msg_date": "Wed, 31 Oct 2001 12:25:00 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "I have been thinking about this query, I downloaded all your info and I read\nyour reply to a previous post. \n\nAt issue, you say MSSQL outperforms PGSQL, this may be true for a finite set of\nquery types, it may even be true for your entire application, but for how long?\nWhat will be the nature of your data next year? \n\nYour query is a prime example of where application optimization needs to\nhappen. Regardless if MSSQL can currently execute that query quickly, at some\npoint there will be a volume of data which is too large to process quickly.\nThis table, you say, is created periodically with a cron job. How hard would it\nbe to append a couple SQL statements to create a summary table for the high\nspeed queries?\n\nPersonally, I think your approach needs to be modified a bit. The fact that\nyour query runs well on one SQL database and poorly on another indicates to me\nthat you will be tied to one database forever. If you use standard database\noptimization techniques in your design, you can choose any database by the\nwhole of important criteria, such as reliability, speed, support,\nadministration, and price, rather than just speed.\n\nAlso, if you use MSSQL you will need to have some version of MS-Windows on\nwhich to run it, that alone indicates to me you will have reliability problems.\n\n\"Tille, Andreas\" wrote:\n> \n> Hello,\n> \n> I discussed a problem concerning the speed of PostgreSQL compared to\n> MS SQL server heavily on postgres-general list. The thread starts with\n> message\n> \n> http://fts.postgresql.org/db/mw/msg.html?mid=1035557\n> \n> Now I tried a snapshot of version 7.2 and got an increase of speed of\n> about factor 2. But sorry this is really not enough. The very simple\n> test I pointed to in my mail is even much to slow and the issue would\n> probably spoil down the whole project which should be a complete open\n> source solution and would perhaps and in any M$ stuff. 
I�ve got under\n> heavy preasur from my employer who was talking about the nice world\n> of MS .net (while he is using MS-SQL exclusively). To make the thing\n> clear the issue is the gal database of infectious diseases in Germany\n> runned by the Robert Koch-Institute. So the beast could be of some\n> importance for increasing the acceptance of PostgreSQL and Open Source\n> in the field of medicine which is generally known for the money which\n> is involved in. So I really hope that some skilled programmers would\n> be able to find a good way to solve the performance issue perhaps by\n> just profiling the simple query\n> \n> SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz FROM Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20))\n> GROUP BY Hauptdaten_Fall.MeldeKategorie ORDER BY Hauptdaten_Fall.MeldeKategorie;\n> \n> to the data set I put on\n> \n> http://www.foodborne-net.de/~tillea/ifsg/ifsgtest.dump.bz2\n> \n> If this should take less than half a second on a modern PC I could\n> continue to try mo realistic queries.\n> \n> I really hope that I could readjust the image of PostgreSQL in the\n> eyes of my M$-centered colleagues.\n> \n> Kind regards\n> \n> Andreas.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n5-4-3-2-1 Thunderbirds are GO!\n------------------------\nhttp://www.mohawksoft.com\n",
"msg_date": "Wed, 31 Oct 2001 08:14:02 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
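mlw's summary-table suggestion above can be sketched concretely: the cron job that already rebuilds Hauptdaten_Fall could also materialize the per-MeldeKategorie counts once, so the web query reads a tiny precomputed table instead of rescanning two thirds of the rows on every request. A rough sketch of the aggregation that cron step would precompute (hypothetical names and made-up rows, not the actual schema):

```java
import java.util.*;

public class SummaryTable {
    // The O(rows) work the GROUP BY query repeats on every request:
    // filter IstAktuell = 20 and count per MeldeKategorie. Run once
    // from cron, the result can be stored as a small summary table.
    static Map<String, Integer> precomputeCounts(List<String[]> rows) {
        Map<String, Integer> counts = new TreeMap<>(); // sorted like ORDER BY
        for (String[] r : rows)
            if (r[0].equals("20"))              // r[0] = IstAktuell
                counts.merge(r[1], 1, Integer::sum); // r[1] = MeldeKategorie
        return counts;
    }

    public static void main(String[] args) {
        List<String[]> rows = Arrays.asList(
                new String[]{"20", "Salmonellose"},
                new String[]{"10", "Salmonellose"},
                new String[]{"20", "Influenza"},
                new String[]{"20", "Salmonellose"});
        // The web query then reads this tiny map-sized table, so its
        // cost no longer grows with the size of Hauptdaten_Fall.
        System.out.println(precomputeCounts(rows));
    }
}
```

The trade-off is staleness between cron runs, which is acceptable here since the base table itself is only rebuilt periodically.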
{
"msg_contents": "On Wed, 31 Oct 2001, Hannu Krosing wrote:\n\n> I tried some more on optimizing the query on my work computer\n> (AMD ATHLON 850, 512MB, PostgreSQL 7.1.3 with default memory settings)\n>\n>\n> SELECT MeldeKategorie,\n> Count(ID) AS Anz\n> FROM Hauptdaten_Fall\n> WHERE IstAktuell=20\n> GROUP BY MeldeKategorie\n> ORDER BY MeldeKategorie;\n>\n> real 0m9.675s\n>\n> create index i1 on Hauptdaten_Fall(IstAktuell,MeldeKategorie);\n>\n> ----------------------------\n> set enable_seqscan = off;\n> SELECT MeldeKategorie,\n> Count(ID) AS Anz\n> FROM Hauptdaten_Fall\n> WHERE IstAktuell=20\n> GROUP BY MeldeKategorie\n> ORDER BY MeldeKategorie;\n>\n> Aggregate (cost=4497.30..4510.18 rows=258 width=16)\n> -> Group (cost=4497.30..4503.74 rows=2575 width=16)\n> -> Sort (cost=4497.30..4497.30 rows=2575 width=16)\n> -> Index Scan using i1 on hauptdaten_fall\n> (cost=0.00..4351.40 rows=2575 width=16)\n>\n> real 0m7.131s\n>\n> ---------------------------\n>\n> set enable_seqscan = off;\n> SELECT MeldeKategorie,\n> Count(ID) AS Anz\n> FROM Hauptdaten_Fall\n> WHERE IstAktuell=20\n> GROUP BY IstAktuell,MeldeKategorie\n> ORDER BY IstAktuell,MeldeKategorie;\n>\n> Aggregate (cost=4497.30..4510.18 rows=258 width=16)\n> -> Group (cost=4497.30..4503.74 rows=2575 width=16)\n> -> Index Scan using i1 on hauptdaten_fall (cost=0.00..4351.40\n> rows=2575 width=16)\n>\n> real 0m3.223s\nHmmm, could you please explain the theory behind that for quite a\nbeginner like me (perhaps on -general if you feel it apropriate)\n\nThe change in the second select is that you included IstAktuell in the\nGROUP BY/ORDER BY clause and this gives a speed increas by factor 2.\nIt seems that the \"Sort\" can be left out in this case if I look at the\nplan, but why that? 
The WHERE clause should select just all IstAktuell=20\ndata sets and so the GROUP BY/ORDER BY clauses should every time have the\nsame work - as for my humble understanding.\n\n>\n> -- same after doing\n>\n> cluster i1 on Hauptdaten_Fall;\n>\n> real 1.590 -- 1.600\nThat's also interesting. In reality the table Hauptdaten_Fall has many fields\nwith many indices. If I understand things right it makes no sense to have\nmore than one clustered index, right? A further speed increase of a factor of two\nwould be welcome. Could I expect this if I would find out the \"sensitive\"\nindex of my table for certain tasks? Or is my understanding wrong and it\nmakes sense to cluster more than one index? Unfortunately clustering the\nindex of a huge table takes some time. Could I speed this up by some\ntricks?\n\n> select count(*) from Hauptdaten_Fall;\n>\n> real 0m0.630s\n>\n> ---------------------------\n>\n> The following query is marginally (about 0.1 sec) faster, though the\n> plan looks the same down to cost estimates.\n>\n> SET ENABLE_SEQSCAN = OFF;\n> SELECT MeldeKategorie,\n> Count(*) AS Anz\n> FROM (select IstAktuell,MeldeKategorie from Hauptdaten_Fall where\n> IstAktuell=20) sub\n> GROUP BY IstAktuell,MeldeKategorie\n> ORDER BY IstAktuell,MeldeKategorie;\n>\n> Aggregate (cost=0.00..4370.72 rows=258 width=16)\n> -> Group (cost=0.00..4364.28 rows=2575 width=16)\n> -> Index Scan using i1 on hauptdaten_fall (cost=0.00..4351.40\n> rows=2575 width=16)\n>\n> real 0m1.438s - 1.506s\nHmm, perhaps this is nearly nothing or is there any theory that a\ncount(*) is faster than a count(<fieldname>)?\n\n> ...\n> real 0m6.077 -- 6.606s\n>\n> and after clustering:\n> cluster i1 on Hauptdaten_Fall;\n>\n> real 0m5.683 - 5.750s\n>\n> so it's linear growth here\nThis is what my colleague was afraid of: We would have linear growth\ncompared to the log(n) growth which is to be expected on MS SQL server\n(for this certain type of queries and for sure up to a far limit of\ndata where other 
constraints could get influence, but we are far from\nthis limit). This would not convince him :-(.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Thu, 1 Nov 2001 16:02:16 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
}
] |
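Hannu's index trick in the thread above can be condensed into two statements. This is a sketch against the schema quoted in the thread; the key observation is that when the GROUP BY/ORDER BY list begins with the leading column of the index that serves the WHERE clause, the planner can consume the index scan's output in already-sorted order and drop the explicit Sort node:

```sql
-- Composite index: the WHERE column first, the grouping column second.
CREATE INDEX i1 ON Hauptdaten_Fall (IstAktuell, MeldeKategorie);

-- Repeating the leading index column in GROUP BY/ORDER BY lets the
-- planner feed the index scan straight into the Group node, no Sort.
SET enable_seqscan = off;  -- forces the index plan on 7.1.x, as in the thread
SELECT MeldeKategorie, Count(ID) AS Anz
  FROM Hauptdaten_Fall
 WHERE IstAktuell = 20
 GROUP BY IstAktuell, MeldeKategorie
 ORDER BY IstAktuell, MeldeKategorie;
```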
[
{
"msg_contents": "Has anyone successfully hacked an external database table as a view?\r\n\r\nI was thinking that this may be possible using a C function and the Rules architecture but I don't have much experience with PostgreSQL so I thought I\r\nwould check with the list to see what if anything others had attempted.",
"msg_date": "Mon, 29 Oct 2001 11:27:20 -0500",
"msg_from": "\"Sean K. Sell\" <sean@nist.gov>",
"msg_from_op": true,
"msg_subject": "External Database Connection"
},
{
"msg_contents": "\"Sean K. Sell\" wrote:\n\n> Has anyone successfully hacked an external database table as a view.\n>\n> I was thinking that this may be possible using a C function and the Rules architecture but I don't have much experience with PostgreSQL so I thought I\n> would check with the list to see what if anything others had attempted.\n>\n\nUsing C functions, you could probably do something like this:\n\ncreate view dbname_table as select ext_query('select * from table', 'dbname'), extq_col_varchar('foo') as foo, extq_col_numeric('bar') as bar\n\nIt would not be a generic solution, but it could be done with today's sources.\n\n\n\n",
"msg_date": "Mon, 29 Oct 2001 12:30:15 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: External Database Connection"
},
{
"msg_contents": "----- Original Message ----- \nFrom: Sean K. Sell <sean@nist.gov>\nSent: Monday, October 29, 2001 11:27 AM\n\n> Has anyone successfully hacked an external database table as a view.\n> \n> I was thinking that this may be possible using a C function and the Rules\n> architecture but I don't have much experience with PostgreSQL so I thought I\n> would check with the list to see what if anything others had attempted.\n\nThere is an upcoming implementation of schemas in PostgreSQL\nhopefully for 7.3, and some work is being done already towards\nit (see Bill Studenmund's proposed implementation of packages as a step\nforward to the schemas and the discussion arising from it).\nBut for now, there is no such hack to my knowledge.\n\n-s\n\n\n",
"msg_date": "Mon, 29 Oct 2001 14:45:55 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: External Database Connection"
},
{
"msg_contents": "Sean K. Sell wrote:\n\n> Has anyone successfully hacked an external database table as a view.\n> \n> I was thinking that this may be possible using a C function and the Rules architecture but I don't have much experience with PostgreSQL so I thought I\n> would check with the list to see what if anything others had attempted.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\nSort of (it's ugly, but works for simple needs) . . . see dblink under \ncontrib in the 7.2 beta. Be sure to read the README.\n\n-- Joe\n\n\n",
"msg_date": "Mon, 29 Oct 2001 21:38:44 -0800",
"msg_from": "Joe Conway <joseph.conway@home.com>",
"msg_from_op": false,
"msg_subject": "Re: External Database Connection"
}
] |
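For reference, a minimal sketch of the dblink approach Joe points to. This uses the present-day dblink calling convention, not necessarily the exact API of the contrib/dblink shipped with the 7.2 beta (its README documents the real functions); the connection string and table/column names here are placeholders:

```sql
-- Query a table in another database, giving the result a column list.
SELECT *
  FROM dblink('dbname=otherdb', 'SELECT id, name FROM remote_table')
       AS t(id integer, name varchar);

-- Wrapping the call in a view makes the external table look local.
CREATE VIEW remote_table_v AS
SELECT *
  FROM dblink('dbname=otherdb', 'SELECT id, name FROM remote_table')
       AS t(id integer, name varchar);
```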
[
{
"msg_contents": "Anandtech did an article on the dual athlon box that runs their forum.\nIt's not MySQL or PgSQL, but you might check it out.\n\nhttp://www.anandtech.com/showdoc.html?i=1514\n\n\n-----Original Message-----\nFrom: Mike Rogers [mailto:temp6453@hotmail.com]\nSent: Sunday, October 28, 2001 10:08 AM\nTo: mysql@lists.mysql.com; pgsql-hackers@postgresql.org;\npgsql-admin@postgresql.org\nSubject: Ultimate DB Server\n\n\nI'm questioning whether anyone has done benchmarks on various hardware for\nPGSQL and MySQL. I'm either thinking dual P3-866's, Dual AMD-1200's, etc.\nI'm looking for benchmarks of large queries on striped -vs- non-striped\nvolumes, different processor speeds, etc.\n\nAny thoughts people?\n\n\n---------------------------------------------------------------------\nBefore posting, please check:\n http://www.mysql.com/manual.php (the manual)\n http://lists.mysql.com/ (the list archive)\n\nTo request this thread, e-mail <mysql-thread89232@lists.mysql.com>\nTo unsubscribe, e-mail\n<mysql-unsubscribe-Eric.George=peterson.af.mil@lists.mysql.com>\nTrouble unsubscribing? Try: http://lists.mysql.com/php/unsubscribe.php\n",
"msg_date": "Mon, 29 Oct 2001 16:29:59 -0000",
"msg_from": "George Eric R Contr AFSPC/CVYZ <Eric.George@PETERSON.af.mil>",
"msg_from_op": true,
"msg_subject": "Re: Ultimate DB Server"
}
] |
[
{
"msg_contents": "Hello Andreas,\n\nA possible solution would be:\nCREATE TABLE foo AS\nSELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz \nFROM Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20))\nGROUP BY Hauptdaten_Fall.MeldeKategorie ORDER BY \nHauptdaten_Fall.MeldeKategorie;\n\nThis is suitable if your data does not change often. To get automatic updates:\n1) Define iPrecision, the precision that you need (integer).\n2) Create a trigger which increases a counter when a record is updated or \ninserted. When the counter reaches iPrecision, do a DROP TABLE foo + CREATE \nTABLE foo AS SELECT Hauptdaten_Fall.MeldeKategorie, \nCount(Hauptdaten_Fall.ID)... This will take a few seconds but only once. \nRun a batch script within a time frame (1 hour, 4 hours, 1 day ?) so a \nhuman user has very little chance to reach iPrecision.\n\nOn 300.000 records, you will get instant results. There are plenty of \ntricks like this one. If you employ them, you will ***never*** reach the \nlimits of a double Pentium III computer with U3W discs.\n\nIf you need to answer this message, please reply on \npgsql-general@postgresql.org.\n\nCheers,\nJean-Michel POURE\n\n\nAt 13:43 29/10/01 +0100, you wrote:\n>Hello,\n>\n>I discussed a problem concerning the speed of PostgreSQL compared to\n>MS SQL server heavily on postgres-general list. The thread starts with\n>message\n>\n> http://fts.postgresql.org/db/mw/msg.html?mid=1035557\n>\n>Now I tried a snapshot of version 7.2 and got an increase of speed of\n>about factor 2. But sorry this is really not enough. The very simple\n>test I pointed to in my mail is even much too slow and the issue would\n>probably spoil down the whole project which should be a complete open\n>source solution and would perhaps end in any M$ stuff. I've got under\n>heavy pressure from my employer who was talking about the nice world\n>of MS .net (while he is using MS-SQL exclusively). 
To make the thing\n>clear the issue is the gal database of infectious diseases in Germany\n>run by the Robert Koch-Institute. So the beast could be of some\n>importance for increasing the acceptance of PostgreSQL and Open Source\n>in the field of medicine which is generally known for the money which\n>is involved in. So I really hope that some skilled programmers would\n>be able to find a good way to solve the performance issue perhaps by\n>just profiling the simple query\n>\n> SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS \n> Anz FROM Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20))\n>GROUP BY Hauptdaten_Fall.MeldeKategorie ORDER BY \n>Hauptdaten_Fall.MeldeKategorie;\n>\n>to the data set I put on\n>\n> http://www.foodborne-net.de/~tillea/ifsg/ifsgtest.dump.bz2\n>\n>If this should take less than half a second on a modern PC I could\n>continue to try more realistic queries.\n>\n>I really hope that I could readjust the image of PostgreSQL in the\n>eyes of my M$-centered colleagues.\n>\n>Kind regards\n>\n> Andreas.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 5: Have you checked our extensive FAQ?\n>\n>http://www.postgresql.org/users-lounge/docs/faq.html\n\n",
"msg_date": "Mon, 29 Oct 2001 20:10:32 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Mon, 29 Oct 2001, Jean-Michel POURE wrote:\n\n> A possible solution would be:\n> CREATE TABLE foo AS\n> SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz\n> FROM Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20))\n> GROUP BY Hauptdaten_Fall.MeldeKategorie ORDER BY\n> Hauptdaten_Fall.MeldeKategorie;\nSorry, this is NO solution to my problem.\n\n> On 300.000 records, you will get instant results. There are plenty of\n> tricks like this one. If you employ them, you will ***never*** reach the\n> limits of a double Pentium III computer with U3W discs.\nIt is really no help if I solve the speed issue of this *very simple,\nzeroth order try*. I repeat, I have plenty of queries which do much\nmore complicated stuff than this. This is just a crude strip-down of the\nproblem fit for debugging/profiling issues of the database *server*. Simple\ntricks on a simple example do not help.\n\n> If you need to answer this message, please reply on\n> pgsql-general@postgresql.org.\nNo, because ...\n\n> >I discussed a problem concerning the speed of PostgreSQL compared to\n> >MS SQL server heavily on postgres-general list. The thread starts with\n> >message\n> >\n> > http://fts.postgresql.org/db/mw/msg.html?mid=1035557\nI did so and got the explicit advice of Tom to ask here.\n\nConsider the problem as a benchmark. I would love to see postgresql\nas the winner.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Tue, 30 Oct 2001 10:42:16 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "\n> > CREATE TABLE foo AS\n> > SELECT Hauptdaten_Fall.MeldeKategorie, Count(Hauptdaten_Fall.ID) AS Anz\n> > FROM Hauptdaten_Fall WHERE (((Hauptdaten_Fall.IstAktuell)=20))\n> > GROUP BY Hauptdaten_Fall.MeldeKategorie ORDER BY\n> > Hauptdaten_Fall.MeldeKategorie;\n>Sorry, this is NO solution of my problem.\n\nAllo Andreas,\n\nFor every problem there is a solution. That is what software optimization \nis all about. Do you make use of PL/pgSQL stored queries in your database? \nIf not, you will probably end up with terrible nested queries that will \neat-up server time and power.\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Tue, 30 Oct 2001 11:17:51 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Serious performance problem"
},
{
"msg_contents": "On Tue, 30 Oct 2001, Jean-Michel POURE wrote:\n\n> >Sorry, this is NO solution to my problem.\n ^^^^\n> For every problem there is a solution. That is what software optimization\nFor sure and as I said I'm really sure that it is a server-side programming\nissue (and others thought this as well on the hackers list).\nBut your proposal was not the solution.\n\n> is all about. Do you make use of PL/pgSQL stored queries in your database?\n> If not, you will probably end up with terrible nested queries that will\n> eat-up server time and power.\nAlso stored procedures are not the solution and I see no reason to discuss\nthis topic here on the general list (moreover you leave people unclear about\nthe original question). It was discussed here in two long threads and\nbelongs to the PostgreSQL programmers. I hope not to violate Tom Lane's\nprivacy if I quote here one sentence from him in a private posting:\n \"Performance issues can be discussed in -hackers.\"\nWe do not talk about the performance of the database application but\nabout the server itself.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Tue, 30 Oct 2001 13:17:18 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Serious performance problem"
}
] |
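Jean-Michel's summary-table idea from this thread can be sketched as follows. The summary-table name is illustrative, and the refresh mechanics (trigger-driven counter vs. periodic batch job) are left open exactly as in the thread:

```sql
-- Materialize the expensive aggregate once:
CREATE TABLE kategorie_summary AS
SELECT MeldeKategorie, Count(ID) AS Anz
  FROM Hauptdaten_Fall
 WHERE IstAktuell = 20
 GROUP BY MeldeKategorie;

-- Readers then pay only for a scan of the small summary table:
SELECT * FROM kategorie_summary ORDER BY MeldeKategorie;

-- A trigger or batch job rebuilds it (DROP + CREATE TABLE ... AS)
-- whenever enough rows have changed.
```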
[
{
"msg_contents": "Sorry to barge in with this but I'm getting lost in all the discussion\non schemas, packages, and namespaces. I checked the archives and todo\nlist but didn't see tablespaces mentioned since earlier this year and\nlast year. I seem to remember a message from Tom Lane - which has got\naway from me - about tablespaces. Did these get into 7.2, or will they be\nin 7.3?\n I'm doing some hardware and performance planning that will be\naffected (or so I think) by the ability to place stuff on specific\ndevices.\n\n\nTIA,\nRod\n-- \n Let Accuracy Triumph Over Victory\n\n Zetetic Institute\n \"David's Sling\"\n Marc Stiegler\n\n",
"msg_date": "Mon, 29 Oct 2001 13:19:40 -0800 (PST)",
"msg_from": "\"Roderick A. Anderson\" <raanders@tincan.org>",
"msg_from_op": true,
"msg_subject": "Tablespaces"
}
] |
[
{
"msg_contents": "I have observed some disturbing behavior with the latest (7.1.3) version of\nPostgreSQL.\n\nIn an application that I am working on with a team of folks, there is a\nspecific need to execute a series of SQL statements similar to those used in\nthe 'loaddb.pl' script included below. Without getting into better ways to\nincrement rowid's (this code is part of another tool that we are using), I'd\nlike to know why I get the following results with PostgreSQL and MySQL.\n\nIn 3 separate runs I get the following PostgreSQL results:\n\n o 1 - 2000 records inserted in 12 seconds.\n o 2001 - 4000 records inserted in 16 seconds.\n o 4001 - 6000 records inserted in 20 seconds.\n \nYou see, there is a clear performance degradation here that is associated\nwith the number of records in the database. It appears that the main culprit\nis the update statement that is issued (see 'loaddb.pl' script below). This\nperformance behavior is not expected. Especially with so few rows in such a\nsmall table. \n\nIn 3 separate runs I get the following MySQL results:\n\n o 1 - 2000 records inserted in 6 seconds.\n o 2001 - 4000 records inserted in 5 seconds.\n o 4001 - 6000 records inserted in 6 seconds.\n\nYou see, MySQL performs as expected. There is no performance degradation\nhere that is related to the number of records in the database tables.\n\nI have been a huge fan and advocate of PostgreSQL. I was stunned to see this\nbehavior. I am hoping that it is either a bug that has been fixed, or that I\ncan alter my PostgreSQL configuration to eliminate this behavior.\n\nI have an urgent need to resolve this situation. If I cannot solve the\nproblem soon, I will be forced to drop PostgreSQL in favor of MySQL. 
This is\nnot something that I wish to do.\n\nPlease help.\n\nThanks in advance.\n\n- Jim\n\n\n########################################################################\n#!/usr/bin/perl -w\n#\n## setupdb.pl \n#\n## Simple perl script that creates the 'problemtest' db.\n#\n## Usage: ./setupdb.pl <db-type>\n#\n## Assumes that the 'problemtest' PostgreSQL and MySQL databases exist.\n## and that there is a user 'problemtest' with proper privileges.\n#\n########################################################################\n\nuse strict;\nuse DBI;\n\nmy $dbd;\n\nif (@ARGV) {\n if (uc($ARGV[0]) eq 'POSTGRESQL') {\n $dbd = 'Pg';\n } elsif (uc($ARGV[0]) eq 'MYSQL') {\n $dbd = 'mysql';\n } else {\n &DoUsage();\n }\n} else {\n &DoUsage();\n}\n\nmy $dsn = \"DBI:$dbd:dbname=problemtest\";\nmy $usr = 'problemtest';\nmy $pwd = 'problemtest';\n\nmy $dbh = DBI->connect($dsn,$usr,$pwd,\n { AutoCommit => 1, RaiseError => 1 });\n\n$dbh->do(<<END);\ndrop table foo\nEND\n\n$dbh->do(<<END);\ndrop table control\nEND\n\n$dbh->do(<<END);\ncreate table foo (\n id integer not null,\n primary key (id),\n name varchar(100))\nEND\n\n$dbh->do(<<END);\ncreate table control (\n next_id integer not null)\nEND\n\n$dbh->do(<<END);\ninsert into control (next_id) values(1)\nEND\n\n$dbh->disconnect();\n\n\nsub DoUsage {\n print \"\\n\\tUsage: ./setupdb.pl <db-type>\\n\";\n print \"\\tWhere db-type is 'PostgreSQL' or 'MySQL'\\n\\n\";\n exit 0;\n}\n\n########################################################################\n#!/usr/bin/perl -w\n#\n## loaddb.pl \n#\n## Simple perl script to illustrate the performance degradation\n## of the update statement with PostgreSQL as compared to MySQL.\n#\n## Usage: ./loaddb.pl <db-type> <range-start> <range-end>\n#\n########################################################################\n\nuse strict;\nuse DBI;\n\nmy $dbd;\n\nif (@ARGV == 3) {\n if (uc($ARGV[0]) eq 'POSTGRESQL') {\n $dbd = 'Pg';\n } elsif (uc($ARGV[0]) eq 'MYSQL') {\n $dbd = 'mysql';\n } else {\n 
&DoUsage();\n }\n} else {\n &DoUsage();\n}\n\nmy $dsn = \"DBI:$dbd:dbname=problemtest\";\nmy $usr = 'problemtest';\nmy $pwd = 'problemtest';\n\nmy $dbh = DBI->connect($dsn,$usr,$pwd,\n { AutoCommit => 1, RaiseError => 1 });\n\nmy $inc_id = $dbh->prepare(\"update control set next_id = next_id + 1\");\nmy $get_id = $dbh->prepare(\"select next_id from control\");\nmy $insert = $dbh->prepare(\"insert into foo (id,name) values(?,?)\");\n\nmy $start = time;\nforeach($ARGV[1]..$ARGV[2]){\n $inc_id->execute();\n $get_id->execute();\n my $id = $get_id->fetchrow_array();\n $insert->execute($id,\"name$id\");\n}\nmy $duration = time - $start;\nprint \"duration = $duration\\n\";\n\n$inc_id->finish();\n$get_id->finish();\n$insert->finish();\n\n$dbh->disconnect();\n\nsub DoUsage {\n print \"\\n\\tUsage: ./loaddb.pl <db-type> <range-start> <range-end>.\\n\";\n print \"\\tWhere db-type is 'PostgreSQL' or 'MySQL'.\\n\\n\";\n exit 0;\n}\n\n\n\n\n\n",
"msg_date": "Mon, 29 Oct 2001 17:02:03 -0500",
"msg_from": "James Patterson <jpatterson@amsite.com>",
"msg_from_op": true,
"msg_subject": "Performance problems???"
},
{
"msg_contents": "James Patterson <jpatterson@amsite.com> writes:\n\n> I have observed some disturbing behavior with the latest (7.1.3) version of\n> PotgreSQL.\n> \n> In an application that I am working on with a team of folks, there is a\n> specific need to execute a series of SQL statements similar to those used in\n> the 'loaddb.pl' script included below. Without getting into better ways to\n> increment rowid's (this code is part of another tool that we are using), I'd\n> like to know why I get the following results with PostgreSQL and MySQL.\n> \n> In 3 separate runs I get the following PostgreSQL results:\n> \n> o 1 - 2000 records inserted in 12 seconds.\n> o 2001 - 4000 records inserted in 16 seconds.\n> o 4001 - 6000 records inserted in 20 seconds.\n> \n> You see, there is a clear performance degradation here that is associated\n> with the number of records in the database. It appears that the main culprit\n> is the update statement that is issued (see 'loaddb.pl' script below). This\n> performance behavior is not expected. Especially with so few rows in such a\n> small table. \n\nOne thing you should definitely do is wrap the entire load loop\n((update/select/insert) * N) in a transaction. This will give you a\nhuge speedup. Otherwise you are forcing a disk sync after every SQL\nstatement.\n\nYou may still see some degradation as the table size grows, but actual \ntimes should be more comparable to MySQL. \n\n> In 3 separate runs I get the following MySQL results:\n> \n> o 1 - 2000 records inserted in 6 seconds.\n> o 2001 - 4000 records inserted in 5 seconds.\n> o 4001 - 6000 records inserted in 6 seconds.\n> \n> You see, MySQL performs as expected. There is no performance degradation\n> here that is related to the number of records in the database tables.\n> \n> I have been a huge fan and advocate of PostgreSQL. I was stunned to see this\n> behavior. 
I am hoping that it is either a bug that has been fixed, or that I\n> can alter my PostgreSQL configuration to eliminate this behavior.\n> \n> I have an urgent need to resolve this situation. If I cannot solve the\n> problem soon, I will be forced to drop PostgreSQL in favor of MySQL. This is\n> not something that I wish to do.\n\nI think the main problem, or one of them, is that you're not using the\nproper mechanism for generating sequential numbers. If you used a\nreal SEQUENCE instead of a one-row table you wouldn't get the MVCC\npenalty from updating that table thousands of times, which is part of\nyour problem I think.\n\nI understand your issue with not wanting to change existing code, but\nthe fact is that a sequence is the right way to do this in PostgreSQL.\nUpdating a one-row table as you're doing requires a new copy of the\nrow to be created each time it's updated (because of MVCC) which slows\nthings down until VACUUM is run.\n\nTry using a sequence along with wrapping everything in a transaction\n(turn off autocommit and use BEGIN and COMMIT) and I think you'll be\npleasantly surprised. \n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "30 Oct 2001 15:07:41 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems???"
},
{
"msg_contents": "James Patterson <jpatterson@amsite.com> writes:\n> I have observed some disturbing behavior with the latest (7.1.3) version of\n> PotgreSQL.\n\nTry vacuuming the \"control\" table every so often --- you're accumulating\nhuge numbers of dead rows in it.\n\nEven better, replace \"control\" with a sequence object.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Oct 2001 15:32:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems??? "
},
{
"msg_contents": "James Patterson wrote:\n> \n> I have observed some disturbing behavior with the latest (7.1.3) version of\n> PotgreSQL.\n> \n> In an application that I am working on with a team of folks, there is a\n> specific need to execute a series of SQL statements similar to those used in\n> the 'loaddb.pl' script included below. Without getting into better ways to\n> increment rowid's (this code is part of another tool that we are using), I'd\n> like to know why I get the following results with PostgreSQL and MySQL.\n> \n> In 3 separate runs I get the following PostgreSQL results:\n> \n> o 1 - 2000 records inserted in 12 seconds.\n> o 2001 - 4000 records inserted in 16 seconds.\n> o 4001 - 6000 records inserted in 20 seconds.\n> \n> You see, there is a clear performance degradation here that is associated\n> with the number of records in the database. It appears that the main culprit\n> is the update statement that is issued (see 'loaddb.pl' script below). This\n> performance behavior is not expected. Especially with so few rows in such a\n> small table.\n> \n> In 3 separate runs I get the following MySQL results:\n> \n> o 1 - 2000 records inserted in 6 seconds.\n> o 2001 - 4000 records inserted in 5 seconds.\n> o 4001 - 6000 records inserted in 6 seconds.\n> \n> You see, MySQL performs as expected. There is no performance degradation\n> here that is related to the number of records in the database tables.\n> \n> I have been a huge fan and advocate of PostgreSQL. I was stunned to see this\n> behavior. I am hoping that it is either a bug that has been fixed, or that I\n> can alter my PostgreSQL configuration to eliminate this behavior.\n> \n> I have an urgent need to resolve this situation. If I cannot solve the\n> problem soon, I will be forced to drop PostgreSQL in favor of MySQL. 
This is\n> not something that I wish to do.\n\nYou really should use a sequence.\n\nYou will most likely need to change the way you create sequence numbers\neven for MySQL,\nas the following is not safe on a non-transactional DB.\n\n> my $inc_id = $dbh->prepare(\"update control set next_id = next_id + 1\");\n> my $get_id = $dbh->prepare(\"select next_id from control\");\n\nif two backends happen to interleave their queries\n\n1> my $inc_id = $dbh->prepare(\"update control set next_id = next_id +\n1\");\n2> my $inc_id = $dbh->prepare(\"update control set next_id = next_id +\n1\");\n1> my $get_id = $dbh->prepare(\"select next_id from control\");\n2> my $get_id = $dbh->prepare(\"select next_id from control\");\n\nthen both will get the same next_id which is probably not what you want.\n\n-------------\nHannu\n",
"msg_date": "Wed, 31 Oct 2001 10:28:55 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Performance problems???"
}
] |
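The advice in this thread (a real SEQUENCE instead of the one-row control table, plus batching the inserts in a transaction) can be sketched like this, keeping the table names from the scripts above; the sequence name is illustrative:

```sql
-- A sequence hands out distinct ids without the MVCC dead-row buildup
-- of repeated UPDATEs, and is safe under concurrent sessions.
CREATE SEQUENCE foo_id_seq;

-- Batch many inserts per transaction to avoid a disk sync per statement.
BEGIN;
INSERT INTO foo (id, name)
VALUES (nextval('foo_id_seq'), 'name' || currval('foo_id_seq'));
-- ... repeat for the rest of the batch ...
COMMIT;
```

Note that currval() returns the value nextval() just produced in the same session, so each row's name matches its id even while other sessions insert concurrently.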
[
{
"msg_contents": "I get no mail from pgsql-committers since Oct 19. Does anybody know\nwhat's going on?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 30 Oct 2001 10:01:56 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "pgsql-committers?"
},
{
"msg_contents": "\nokay, this is most odd ... I know I've been receiving ... but tom has also\nbeen reporting intermittent problems ... we're using delivery_rules in\nmajordomo to pump all the messages through a dedicated server ...\n\nMichael ... could there be a problem with delivery_rules? maybe where the\nnetwork is dropping between ServerA and ServerB, but not being compensated\nfor?\n\n\n On Tue, 30 Oct 2001, Tatsuo Ishii wrote:\n\n> I get no mail from pgsql-committers since Oct 19. Does anybody know\n> what's going on?\n> --\n> Tatsuo Ishii\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n",
"msg_date": "Mon, 29 Oct 2001 21:27:01 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers?"
},
{
"msg_contents": ">>>>> \"MGF\" == Marc G Fournier <scrappy@hub.org> writes:\n\nMGF> Michael ... could there be a problem with delivery_rules? maybe\nMGF> where the network is drop'ng between ServerA and ServerB, but not\nMGF> being compensated for?\n\nIt is always possible, but the delivery and SMTP code has been stable\nfor some time now and we will retry for ages. Before giving up the\nsystem will contact all of the fallback hosts you list and then finally\nlocalhost. And then if it still can't get through it will put several\nalarming messages in your logs.\n\nThe logs of the various MTAs involved should tell you what's going on.\n(And if that doesn't help, you can turn the debug level up to 1000 and\nget a log of the entire SMTP transaction from Majordomo's standpoint.)\n\n - J<\n",
"msg_date": "29 Oct 2001 21:14:36 -0600",
"msg_from": "Jason L Tibbitts III <tibbs@math.uh.edu>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers?"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> okay, this is most odd ... I know I've been receiving ... but tom has also\n> been reporting intermident problems ... we're using delivery_rules in\n> majordomo to pump all the messages through a dedicated server ...\n\nYeah, I'm still seeing intermittent loss of committer messages; for\nexample, I never saw a commit for Bruce's first pgindent run. (I just\nchecked my mail logs to verify this.) But I'm not missing all of them\nas Tatsuo reports. Anyone else seeing problems?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Oct 2001 00:18:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > okay, this is most odd ... I know I've been receiving ... but tom has also\n> > been reporting intermident problems ... we're using delivery_rules in\n> > majordomo to pump all the messages through a dedicated server ...\n> \n> Yeah, I'm still seeing intermittent loss of committer messages; for\n> example, I never saw a commit for Bruce's first pgindent run. (I just\n> checked my mail logs to verify this.) But I'm not missing all of them\n> as Tatsuo reports. Anyone else seeing problems?\n\nI've seen no mail from pgsql-committers since Oct 19.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 30 Oct 2001 14:46:50 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers?"
},
{
"msg_contents": "At 00:18 30/10/01 -0500, Tom Lane wrote:\n>as Tatsuo reports. Anyone else seeing problems?\n\nNothing since 19-Oct, and before then it was getting patchy.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Tue, 30 Oct 2001 16:48:36 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers? "
},
{
"msg_contents": "that's strange, fts.postgresql.org has a lot of committer messages\n(select hackers list)\n\n\tOleg\nOn Tue, 30 Oct 2001, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > okay, this is most odd ... I know I've been receiving ... but tom has also\n> > been reporting intermident problems ... we're using delivery_rules in\n> > majordomo to pump all the messages through a dedicated server ...\n>\n> Yeah, I'm still seeing intermittent loss of committer messages; for\n> example, I never saw a commit for Bruce's first pgindent run. (I just\n> checked my mail logs to verify this.) But I'm not missing all of them\n> as Tatsuo reports. Anyone else seeing problems?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Tue, 30 Oct 2001 12:37:57 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers? "
},
{
"msg_contents": "...\n> Yeah, I'm still seeing intermittent loss of committer messages; for\n> example, I never saw a commit for Bruce's first pgindent run. (I just\n> checked my mail logs to verify this.) But I'm not missing all of them\n> as Tatsuo reports. Anyone else seeing problems?\n\nSame as everyone else, though I haven't tracked down dates. I did not\nsee Bruce's commit...\n\n - Thomas\n",
"msg_date": "Tue, 30 Oct 2001 13:24:26 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers?"
},
{
"msg_contents": "\nokay, just sent a rather long one to mj2-dev that I didn't bother to send\nto here ... according to the logs on rs.postgresql.org, we've sent 35\nmessages successfully to t-ishii@sra.co.jp since midnight ... the question\nis, how many should have been sent over to him ... ?\n\nOn Tue, 30 Oct 2001, Thomas Lockhart wrote:\n\n> ...\n> > Yeah, I'm still seeing intermittent loss of committer messages; for\n> > example, I never saw a commit for Bruce's first pgindent run. (I just\n> > checked my mail logs to verify this.) But I'm not missing all of them\n> > as Tatsuo reports. Anyone else seeing problems?\n>\n> Same as everyone else, though I haven't tracked down dates. I did not\n> see Bruce's commit...\n>\n> - Thomas\n>\n\n",
"msg_date": "Tue, 30 Oct 2001 08:29:24 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers?"
},
{
"msg_contents": "> okay, just sent a rather long one to mj2-dev that I didn't bother to send\n> to here ... according to the logs on rs.postgresql.org, we've sent 35\n> messages successfully to t-ishii@sra.co.jp since midnight ... the question\n> is, how many should have been sent over to him ... ?\n\nI have received 30 or so emails from postgresql.org on Oct 30, mostly\nfrom the pgsql-hackers list. Anyway, I haven't received any mail from\npgsql-committers for a while. My co-worker says she has not received\nany either...\n\nCan you send me a test mail from the pgsql-committers list?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 30 Oct 2001 22:57:55 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: pgsql-committers?"
},
{
"msg_contents": "\none sent\n\nOn Tue, 30 Oct 2001, Tatsuo Ishii wrote:\n\n> > okay, just sent a rather long one to mj2-dev that I didn't bother to send\n> > to here ... according to the logs on rs.postgresql.org, we've sent 35\n> > messages successfully to t-ishii@sra.co.jp since midnight ... the question\n> > is, how many should have been sent over to him ... ?\n>\n> I have received 30 or so emails from postgresql.org in Oct 30, mostly\n> from pgsql-hackers list. Anyway I haven't received any mail from\n> pgsql-committers for a while. My co-worker saids she has not received\n> either...\n>\n> Can you send me a test mail from the pgsql-committers list?\n> --\n> Tatsuo Ishii\n>\n\n",
"msg_date": "Tue, 30 Oct 2001 09:27:35 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers?"
},
{
"msg_contents": "On Tue, 30 Oct 2001, Tom Lane wrote:\n\n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > okay, this is most odd ... I know I've been receiving ... but tom has also\n> > been reporting intermident problems ... we're using delivery_rules in\n> > majordomo to pump all the messages through a dedicated server ...\n>\n> Yeah, I'm still seeing intermittent loss of committer messages; for\n> example, I never saw a commit for Bruce's first pgindent run. (I just\n> checked my mail logs to verify this.) But I'm not missing all of them\n> as Tatsuo reports. Anyone else seeing problems?\n\n3 on the 15th from you, 1 on the 19th from Hiroshi and nothing since.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 30 Oct 2001 10:12:39 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers? "
},
{
"msg_contents": "\nLet's see if *that* fixes it ... Michael sent me a command to run that\nallows me to see how delivery is working for a specific email ... turns\nout that I was doing the delivery_rules wrong, where I had all three hosts\nlisted, instead of the one I wanted for delivery with two backups ...\n\nThe command also showed me that I had the backups mis-configured, so\nrelaying was being denied for those ones, which happened to be Tatsuo's\naddresses ...\n\nI just sent a test through that *looks* like it went through properly ...\n*cross fingers*\n\n\nOn Tue, 30 Oct 2001, Vince Vielhaber wrote:\n\n> On Tue, 30 Oct 2001, Tom Lane wrote:\n>\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > okay, this is most odd ... I know I've been receiving ... but tom has also\n> > > been reporting intermident problems ... we're using delivery_rules in\n> > > majordomo to pump all the messages through a dedicated server ...\n> >\n> > Yeah, I'm still seeing intermittent loss of committer messages; for\n> > example, I never saw a commit for Bruce's first pgindent run. (I just\n> > checked my mail logs to verify this.) But I'm not missing all of them\n> > as Tatsuo reports. Anyone else seeing problems?\n>\n> 3 on the 15th from you, 1 on the 19th from Hiroshi and nothing since.\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\n",
"msg_date": "Tue, 30 Oct 2001 12:35:04 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers? "
},
{
"msg_contents": "> ...\n> > Yeah, I'm still seeing intermittent loss of committer messages; for\n> > example, I never saw a commit for Bruce's first pgindent run. (I just\n> > checked my mail logs to verify this.) But I'm not missing all of them\n> > as Tatsuo reports. Anyone else seeing problems?\n> \n> Same as everyone else, though I haven't tracked down dates. I did not\n> see Bruce's commit...\n\nThe file list was huge. A pgindent commit is hard to miss. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Oct 2001 16:53:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers?"
}
] |
[
{
"msg_contents": "Hi,\n\nThere seem to be no docs about the statistics collector. In particular, I see the\nfollowing in guc:\n\n#ifdef BTREE_BUILD_STATS\n#show_btree_build_stats = false\n#endif\n\nWhat is it? No configure option, no ifdef in pg_config.h...\n--\nTatsuo Ishii\n\n",
"msg_date": "Tue, 30 Oct 2001 13:35:23 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "statistics collector"
},
{
"msg_contents": "> Hi,\n> \n> There seems no docs about statistics collector. Especially I see\n> following in guc:\n> \n> #ifdef BTREE_BUILD_STATS\n> #show_btree_build_stats = false\n> #endif\n> \n> What is it? No conifgure option, no ifdef in pg_config.h...\n\nPlease ignore this. I noticed after posting that it has nothing to do with\nthe new statistics collector.\n\nAnyway, I see no docs for the statistics collector...\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 30 Oct 2001 14:03:19 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: statistics collector"
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> There seems no docs about statistics collector. Especially I see\n> following in guc:\n\n> #ifdef BTREE_BUILD_STATS\n> #show_btree_build_stats = false\n> #endif\n\n> What is it? No conifgure option, no ifdef in pg_config.h...\n\nThat's not new stats-collector stuff, that's ancient undocumented btree\ncruft (possibly dating back to Berkeley). Jan deserves blame for the\npoor state of documentation of the stats collector, but not for this ;-)\n\nWhat I've been able to deduce about the stats collector is documented\nnow in the Admin Guide...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Oct 2001 00:13:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: statistics collector "
},
{
"msg_contents": "> What I've been able to deduce about the stats collector is documented\n> now in the Admin Guide...\n\nGreat!\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 30 Oct 2001 15:20:05 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: statistics collector "
}
] |
[
{
"msg_contents": " > For a 'standalone' view, this is fine, but if the view is used in \nanother view or a function then that will break (I think I'm teaching my \nGrandmother to suck eggs here Jean-Michel!).\n > 1) Attempt to create a view with the new definition to ensure it's valid.\n > 2) Drop the old view.\n > 3) Create the new view.\n > 4) Re-apply any comments and ACLs.\n > 5) Query pg_class for the updated OID.\n\nDear Friends,\n\nI did not get this email on pgadmin-hackers. We need view dependency \nchecking, otherwise there is no chance that I can one day migrate from \npgAdmin I to pgAdmin II. Hopefully, updating a view is not too difficult:\n\n- Attempt to create a view with the new definition to ensure it's valid.\n- Open transaction (in locking mode as we may drop triggers in many tables).\n- Drop dependent views in OID order. Keep CREATE SQL strings for future usage.\n- Drop dependent triggers. Keep CREATE SQL strings for future usage.\n- Drop dependent rules. Keep CREATE SQL strings for future usage.\n- Drop the old view and create the new view.\n- Create dependent views, triggers and rules.\n- Re-apply any comments and ACLs.\n- Commit transaction.\n- Query pg_class for the updated OID.\n\nAny feedback?\n\nAnother issue is that views get very complex when committed. An example \nwould be:\nCREATE VIEW \"view_data_source\"\nAS SELECT * FROM table1\nLEFT JOIN table2 ON (xx=ccc)\nLEFT JOIN table3 ON (xx=ccc)\n\nWhen committed, this view becomes a nightmare because it can hardly be \nread. Another subsequent problem is that views with SELECT * FROM table1 \nneed updating when fields are added/dropped in tables. In the end we always \ncome up with the conclusion that changes should be applied internally to \nPostgreSQL.\n\nI am going to have a look at updating views within a single transaction. \nAre there special guidelines for compiling pgSchema?\n\nBest regards,\nJean-Michel\n",
"msg_date": "Tue, 30 Oct 2001 08:56:34 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: DROP/CREATE"
}
] |
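The rebuild procedure discussed in this thread (drop dependent views first, re-create them in the reverse order) is essentially a dependency-ordered traversal. A minimal Python sketch of that ordering logic, with hypothetical view names and a made-up `depends_on` map; this is not pgAdmin or PostgreSQL code:

```python
# Hypothetical sketch of the drop/re-create ordering described above:
# dependents must be dropped before the views they are built on, and the
# whole set re-created in the reverse order. Names are illustrative only.

def rebuild_order(depends_on):
    """Return (drop_order, create_order) for a view dependency graph.

    depends_on maps each view name to the set of views it selects from.
    """
    create_order = []
    visited = set()

    def visit(view):
        if view in visited:
            return
        visited.add(view)
        for base in sorted(depends_on.get(view, ())):
            visit(base)              # a view's bases must exist before it
        create_order.append(view)

    for view in sorted(depends_on):
        visit(view)
    # drop in the opposite order: most-dependent views go first
    return list(reversed(create_order)), create_order

# view_c selects from view_b, which selects from view_a
drop, create = rebuild_order({"view_a": set(),
                              "view_b": {"view_a"},
                              "view_c": {"view_b"}})
```

The hard part, as the thread notes, is not the ordering but reliably discovering the dependencies in the first place.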
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr] \n> Sent: 30 October 2001 07:57\n> To: dpage@vale-housing.co.uk\n> Cc: pgadmin-hackers@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: RE: DROP/CREATE\n> \n> \n> > For a 'standalone' view, this is fine, but if the view is used in \n> another view or a function then that will break (I think I'm \n> teaching my \n> Grandmother to suck eggs here Jean-Michel!).\n> > 1) Attempt to create a view with the new definition to \n> ensure it's valid. > 2) Drop the old view. > 3) Create the \n> new view. > 4) Re-apply any comments and ACLs. > 5) Query \n> pg_class for the updated OID.\n> \n> Dear Friends,\n> \n> I did not get this email on pgadmin-hackers. We need view dependency \n> checking, otherwise there is no chance that I can one day \n> migrate from \n> pgAdmin I to pgAdmin II. Hopefully, updating a view is not \n> too difficult:\n> \n> - Attempt to create a view with the new definition to ensure \n> it's valid.\n> - Open transaction (in locking mode as we may drop triggers \n> in many tables).\n> - Drop dependent views in OID order. Keep CREATE SQL strings \n> for future usage.\n> - Drop dependent triggers. Keep CREATE SQL strings for future usage.\n> - Drop dependent rules. Keep CREATE SQL strings for future usage.\n> - Drop the old view and create the new view.\n> - Create dependent views, triggers and rules.\n> - Re-apply any comments and ACLs.\n> - Commit transaction.\n> - Query pg_class for the updated OID.\n> \n> Any feedback?\n\nWell, I would point out that pgAdmin I doesn't do all this, but I'll concede\nthat it does do more than pgAdmin II at the moment.\n\nI don't think rules are an issue are they? Can you create them on Views\n(certainly pgAdmin won't let you - should it?) - scrub that, (typing as I\nthink!) how else would you create an updateable view using rules? Does the\nsame apply to triggers i.e. 
can you create them on views?\n\n> Another issue is that views get very complex when commited. \n> An example \n> would be:\n> CREATE VIEW \"view_data_source\"\n> AS SELECT * FROM table 1\n> LEFT JOIN table 2 ON (xx=ccc)\n> LEFT JOIN table 3 ON (xx=ccc)\n> \n> When committed, this view becomes a nightmare because it can \n> hardly be \n> read. Another subsequent problem is that views with SELECT * \n> FROM table1 \n> need updating when fields are added/dropped in tables. In the \n> end we always \n> come up with the conclusion that changes should be applied \n> internally to \n> PostgreSQL.\n\nI'm beginning to think this is correct. I see the work you did in pgAdmin I\nas a kind of proof of concept. The more we discuss these things, the more I\nthink of problems like this that would be seriously hard work to do client\nside. To get around the problem here for example, you need to have a full\nblown parser to figure out the tables involved. What if the view calls some\nfunctions as well? What if that function takes an entire tuple from a\n(modified) table as an argument (or returns it) - then things get really\nhairy.\n \nI think the only way we can reliably do this is with the addition of either\nsafe CREATE OR REPLACE sql commands, or addition of a suitable\npg_dependencies table which is maintained by PostgreSQL itself.\n\n> I am going to have a look at updating views within a single \n> transaction. \n> Are there special guidelines for compiling phSchema?\n\nNo, just that if you break compatibility you may need to run buildall.bat(?)\nto recompile everything. Please don't commit anything to do with this until\nI've taken a look either - I don't want to add any more features now until\nafter the first full release.\n\nCheers, Dave.\n",
"msg_date": "Tue, 30 Oct 2001 08:25:17 -0000",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: DROP/CREATE"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr] \n> Sent: 30 October 2001 09:21\n> To: Dave Page\n> Cc: pgadmin-hackers@postgresql.org\n> Subject: RE: DROP/CREATE\n>\n> What if that \n> >function takes an entire tuple from a\n> >(modified) table as an argument (or returns it) - then \n> things get really\n> >hairy.\n> >\n> >I think the only way we can reliably do this is with the addition of \n> >either safe CREATE OR REPLACE sql commands, or addition of a \n> suitable \n> >pg_dependencies table which is maintained by PostgreSQL itself.\n> \n> A third solution would be to work with PL/pgSQL and \n> development tables (i.e \n> code repository).\n> The notion of Code repository is interesting because it is \n> not linked to \n> PostgreSQL internals.\n> A code repository can be located anywhere on the planet. Cool \n> feature for \n> development teams.\n\nYes (and I agree that it would be a good feature), but that will still\nrequire full client side parsing of the code to figure out the dependencies\n- I for one, do not wish to try to recreate (and keep up-to-date) the\nPostgreSQL parser in VB. Besides which, if we take it that far then we might\njust as well use reverse engineered SQL to scan for dependencies. I know you\ndon't like reverse engineered code, but bear in mind that the important bits\nare reported directly from PostgreSQL (e.g. pg_proc.prosrc).\n\n> With PL/pgSQL we can ***easily*** track and rebuild objects. \n> Before that, \n> we need a PL/pgSQL wizard in pgAdmin.\n> PostgreSQL might incorporate PL/pgSQL as a standard feature \n> when protection \n> for infinite loops is added.\n\nI think that's unlikely from the responses you got from pgsql-hackers\nrecently.\n \n> Code repositories would be a nice solution as completely \n> independent from \n> PgAdmin. This means PhpPgAdmin would also benefit from it. 
\n> Ultimately, when \n> Postgresql gets PL/pgSQL infinite loop protection, \n> repositories could get \n> included in Postgresql. So why not go for it?\n\nI've no problem with working with the phpPgAdmin people, that can only be a\ngood thing.\n\n> > > I am going to have a look at updating views within a single \n> > > transaction. Are there special guidelines for compiling phSchema?\n> >\n> >No, just that if you break compatibility you may need to run \n> >buildall.bat(?) to recompile everything. Please don't commit \n> anything \n> >to do with this until I've taken a look either - I don't want to add \n> >any more features now until after the first full release.\n> \n> OK, I will not upload pgSchema to CVS if modified. On my \n> side, I have to \n> consider migration from pgAdmin I to pgAdmin II to comply \n> with PostgreSQL \n> 7.2. Without rebuilding, I cannot work and maintain 100 \n> tables, 50 views, \n> 30 triggers and 200 functions.\n\nNo, I can see your problem. Remember though that the code in pgAdmin I is\nfar from foolproof, as you've said before, we need absolute confidence that\n*every* dependency is found and dealt with, something the pgAdmin I code\nmakes a good stab at but could be fooled.\n\nI really believe that the only truly reliable way to do this is for\nPostgreSQL to provide either a pg_dependencies table or a function that\ntells us the dependencies for a given object. If this email actually makes\nit to the pgsql-hackers list perhaps someone can comment on whether this is\nlikely to happen?\n\n> What are your plans? If you don't mind, I would prefer to go \n> for a PL/pgSQL \n> repository feature. This would be more advanced that in \n> pgAdmin I, testing \n> the new features on my side only. Please advise me for \n> pgShema compilation \n> guidelines.\n\nI'm happy for you to look at code repositories, though I think they should\nallow use of PL/Perl and PL/TCL as well. 
This shouldn't be a problem of\ncourse because the PL code isn't 'compiled' by PostgreSQL like SQL functions\nor Views are.\n\nAs far as pgSchema goes, compile it as I said, but pay attention to the\nexisting design and try to match the style/layout of the classes. For an\nexample of 'bolted on' functionality (as opposed to the core object\nhierarchy), look at the History/Graveyard stuff.\n\nCheers, Dave.\n",
"msg_date": "Tue, 30 Oct 2001 10:02:00 -0000",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: DROP/CREATE"
}
] |
[
{
"msg_contents": "\n>Yes (and I agree that it would be a good feature), but that will still\n>require full client side parsing of the code to figure out the dependencies\n>- I for one, do not wish to try to recreate (and keep up-to-date) the\n>PostgreSQL parser in VB. Besides which, if we take it that far then we might\n>just as well use reverse engineered SQL to scan for dependencies. I know you\n>don't like reverse engineered code, but bear in mind that the important bits\n>are reported directly from PostgreSQL (e.g. pg_proc.prosrc).\n\nIMHO view modification can be achieved within one transaction, without a \ndevelopment table or PL/pgSQL.\n\nCould you give me your feedback again on view modification:\n- Attempt to create a view with the new definition to ensure it's valid.\n- Open transaction (in locking mode as we may drop triggers in many tables).\n- Drop dependent views in OID order. Keep CREATE SQL strings for future usage.\n- Drop dependent triggers. Keep CREATE SQL strings for future usage.\n- Drop dependent rules. Keep CREATE SQL strings for future usage.\n- Drop the old view and create the new view.\n- Create dependent views, triggers and rules.\n- Re-apply any comments and ACLs.\n- Commit transaction.\n- Query pg_class for the updated OID.\n\nThis would allow migration from pgAdmin I to pgAdmin II.\n\n/Later,\nJean-Michel\n",
"msg_date": "Tue, 30 Oct 2001 11:38:48 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: DROP/CREATE"
}
] |
[
{
"msg_contents": "On Mon, 29 Oct 2001, Vsevolod Lobko wrote:\n\n> Seems that problem is very simple :))\n> MSSql can do queries from indexes, without using actual table at all.\n> Postgresql doesn't.\n>\n> So mssql avoids sequental scanning of big table, and simply does scan of\n> index which is already in needed order and has very much less size.\nI forwarded this information to my colleague and he replied the following\n(I'm translating from German into English):\n\nhc> I expected this problem. But what is the purpose of an index: Not\nhc> to look into the table itself. Moreover this means that the expense\nhc> grows linearly with the table size - no good prospect at all (the\nhc> good thing is it is not exponential :-)).\nI have to explain that we are in the *beginning* of the production process.\nWe expect a lot more data.\n\nhc> In case of real index usage the expense grows only with log(n).\nhc> No matter about the better philosophy of database servers, MS-SQL-Server\nhc> has consistent index usage and so it is very fast at many queries.\nhc> When performing a query on a field without an index, I get a slow\nhc> table scan. This is like measuring the speed of the harddisk and\nhc> the cleverness of the cache.\n\nThe consequence for my problem is now: If it is technically possible\nto implement index scans without table lookups, please implement it. If\nnot, we just have to look for another database engine which does so,\nbecause our application really needs the speed on this type of queries.\nI repeat from my initial posting: The choice of the server for our\napplication could have importance for many projects in the field of\nmedicine in Germany. I really hope that there is a reasonable solution\nwhich perhaps could give a balance between safety and speed. For\nexample I can assure in my application that the index, once created,\nwill be valid, because I just want to read in a new set of data once\na day (from the MS-SQL Server which collects data over the day). So\nI could recreate all indices after the import and the database is\nread-only until the next cron job. Is there any chance to speed up\nthose applications?\n\nKind regards\n\n Andreas.\n\n",
"msg_date": "Tue, 30 Oct 2001 11:44:16 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On 30 Oct 2001 at 11:44 (+0100), Tille, Andreas wrote:\n| On Mon, 29 Oct 2001, Vsevolod Lobko wrote:\n| \n| > Seems that problem is very simple :))\n| > MSSql can do queries from indexes, without using actual table at all.\n| > Postgresql doesn't.\n| >\n| > So mssql avoids sequental scanning of big table, and simply does scan of\n| > index which is already in needed order and has very much less size.\n| I forewarded this information to my colleague and he replied the following\n| (im translating from German into English):\n| \n| hc> I expected this problem. But what is the purpose of an index: Not\n| hc> to look into the table itself. Moreover this means that the expense\n| hc> grows linear with the table size - no good prospect at all (the\n| hc> good thing is it is not exponential :-)).\n| I have to explain that we are in the *beginning* of production process.\n| We expect a lot more of data.\n| \n| hc> In case of real index usage the expense grows only with log(n).\n| hc> No matter about the better philosophy of database servers, MS-SQL-Server\n| hc> has a consequent index usage and so it is very fast at many queries.\n| hc> When performing a query to a field without index, I get a slow\n| hc> table scan. This is like measuring the speed of the harddisk and\n| hc> the cleverness of the cache.\n| \n| The consequence for my problem is now: If it is technically possible\n| to implement index scans without table lookups please implement it. If\n| not we just have to look for another database engine which does so,\n| because our applictaion really need the speed on this type of queries.\n| I repeat from my initial posting: The choice of the server for our\n| application could have importance for many projects in the field of\n| medicine in Germany. I really hope that there is a reasonable solution\n| which perhaps could give a balance between safety and speed. 
For\n| example I can assure in my application that the index, once created\n| will be valid, because I just want to read in a new set of data once\n| a day (from the MS-SQL Server which collects data over the day). So\n| I could recreate all indices after the import and the database is\n| readonly until the next cron job. Is there any chance to speed up\n| those applications?\n\nCREATE INDEX idx_meldekategorie_hauptdaten_f\n ON hauptdaten_fall(meldekategorie);\nCLUSTER idx_meldekategorie_hauptdaten_f ON hauptdaten_fall;\n\nAggregate (cost=5006.02..5018.90 rows=258 width=16)\n -> Group (cost=5006.02..5012.46 rows=2575 width=16)\n -> Sort (cost=5006.02..5006.02 rows=2575 width=16)\n -> Seq Scan on hauptdaten_fall (cost=0.00..4860.12 rows=2575 width=16)\n\nThis looks much nicer, but is still quite slow. I'm quite sure the\nslowness is in the sort(), since all queries that don't sort, return\nquickly. I hoped the clustered index would speed up the sort, but \nthat is not the case. \n\nIt _seems_ a simple optimization would be to not (re)sort the tuples \nwhen using a clustered index.\n\nif( the_column_to_order_by_is_clustered ){\n if( order_by_is_DESC )\n // reverse the tuples to handle\n}\n\nI haven't looked at the code to see if this is even feasible, but I\ndo imagine there is enough info available to avoid an unnecessary\nsort on the CLUSTERED index. The only problem I see with this is\nif the CLUSTERed index is not kept in a CLUSTERed state as more\nrecords are added to this table.\n\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Tue, 30 Oct 2001 06:48:40 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "AFAIK, sorting is necessary even when you have CLUSTERed a table using an index.\n\nSomewhere in the docs I read something like \"CLUSTER reorders the table on disk so that entries\ncloser on the index are closer on the disk\" (obviously written in better English ;-)\n\nBut if you INSERT a single row later, it will NOT get inserted in the right place. So\na SORT is still necessary.\n\nMAYBE (but I am not sure at all) the sort may take less \"real\" time than if\nthe table were not CLUSTERed, as the table is \"nearly\" sorted.\n\nHackers, is the sorting algorithm capable of exiting at the very moment the table is\nsorted, or are some extra passes always performed?\n\nGood luck!\n\nAntonio\n\nBrent Verner wrote:\n\n> On 30 Oct 2001 at 11:44 (+0100), Tille, Andreas wrote:\n> | On Mon, 29 Oct 2001, Vsevolod Lobko wrote:\n> |\n> | > Seems that problem is very simple :))\n> | > MSSql can do queries from indexes, without using actual table at all.\n> | > Postgresql doesn't.\n> | >\n> | > So mssql avoids sequental scanning of big table, and simply does scan of\n> | > index which is already in needed order and has very much less size.\n> | I forewarded this information to my colleague and he replied the following\n> | (im translating from German into English):\n> |\n> | hc> I expected this problem. But what is the purpose of an index: Not\n> | hc> to look into the table itself. 
Moreover this means that the expense\n> | hc> grows linear with the table size - no good prospect at all (the\n> | hc> good thing is it is not exponential :-)).\n> | I have to explain that we are in the *beginning* of production process.\n> | We expect a lot more of data.\n> |\n> | hc> In case of real index usage the expense grows only with log(n).\n> | hc> No matter about the better philosophy of database servers, MS-SQL-Server\n> | hc> has a consequent index usage and so it is very fast at many queries.\n> | hc> When performing a query to a field without index, I get a slow\n> | hc> table scan. This is like measuring the speed of the harddisk and\n> | hc> the cleverness of the cache.\n> |\n> | The consequence for my problem is now: If it is technically possible\n> | to implement index scans without table lookups please implement it. If\n> | not we just have to look for another database engine which does so,\n> | because our applictaion really need the speed on this type of queries.\n> | I repeat from my initial posting: The choice of the server for our\n> | application could have importance for many projects in the field of\n> | medicine in Germany. I really hope that there is a reasonable solution\n> | which perhaps could give a balance between safety and speed. For\n> | example I can assure in my application that the index, once created\n> | will be valid, because I just want to read in a new set of data once\n> | a day (from the MS-SQL Server which collects data over the day). So\n> | I could recreate all indices after the import and the database is\n> | readonly until the next cron job. 
Is there any chance to speed up\n> | those applications?\n>\n> CREATE INDEX idx_meldekategorie_hauptdaten_f\n> ON hauptdaten_fall(meldekategorie);\n> CLUSTER idx_meldekategorie_hauptdaten_f ON hauptdaten_fall;\n>\n> Aggregate (cost=5006.02..5018.90 rows=258 width=16)\n> -> Group (cost=5006.02..5012.46 rows=2575 width=16)\n> -> Sort (cost=5006.02..5006.02 rows=2575 width=16)\n> -> Seq Scan on hauptdaten_fall (cost=0.00..4860.12 rows=2575 width=16)\n>\n> This looks much nicer, but is still quite slow. I'm quite sure the\n> slowness is in the sort(), since all queries that don't sort, return\n> quickly. I hoped the clustered index would speed up the sort, but\n> that is not the case.\n>\n> It _seems_ a simple optimization would be to not (re)sort the tuples\n> when using a clustered index.\n>\n> if( the_column_to_order_by_is_clustered ){\n> if( order_by_is_DESC )\n> // reverse the tuples to handle\n> }\n>\n> I haven't looked at the code to see if this is even feasible, but I\n> do imagine there is enough info available to avoid an unnecessary\n> sort on the CLUSTERED index. The only problem I see with this is\n> if the CLUSTERed index is not kept in a CLUSTERed state as more\n> records are added to this table.\n>\n> brent\n>\n> --\n> \"Develop your talent, man, and leave the world something. Records are\n> really gifts from people. To think that an artist would love you enough\n> to share his music with anyone is a beautiful thing.\" -- Duane Allman\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n\n",
"msg_date": "Tue, 30 Oct 2001 14:53:29 +0100",
"msg_from": "Antonio Fiol =?iso-8859-1?Q?Bonn=EDn?= <fiol@w3ping.com>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Tue, 30 Oct 2001, Antonio Fiol Bonn�n wrote:\n\n> AFAIK, sorting is necessary even when you have CLUSTERed a table using an index.\nSorting is not the performance constraint in my example. Just leave out\nthe sorting and see what happens ...\n\n> But if you INSERT a single row later, it will NOT get inserted to the right place. So\n> SORT is still necessary.\nWell rearanging the database in a cronjob after inserting new data once a day\nover night would be possible - but I doubt that it makes a big difference.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Tue, 30 Oct 2001 15:09:00 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Tue, 30 Oct 2001, Antonio Fiol [iso-8859-1] Bonn�n wrote:\n\n> > | > Seems that problem is very simple :))\n> > | > MSSql can do queries from indexes, without using actual table at all.\n> > | > Postgresql doesn't.\n> > | >\n> > | > So mssql avoids sequental scanning of big table, and simply does scan of\n> > | > index which is already in needed order and has very much less size.\n<snip>\n> > | The consequence for my problem is now: If it is technically possible\n> > | to implement index scans without table lookups please implement it. If\nThe feature you are looking for is called 'index coverage'. Unfortunately,\nit is not easy to implement with Postgresql, and it is one of few\noutstanding 'nasties'. The reason you can't do it is follows: Postgres\nuses MVCC, and stores 'when' the tuple is alive inside the tuple. So, even\nif index contains all the information you need, you still need to access\nmain table to check if the tuple is valid. \n\nPossible workaround: store tuple validity in index, that way, a lot more\nspace is wasted (16 more bytes/tuple/index), and you will need to update\nall indices when the base table is updated, even if indexed information\nhave not changed.\n\nFundamentally, this may be necessary anyway, to make index handlers aware\nof transactions and tuple validity (currently, if you have unique index,\nyou may have conflicts when different transactions attempt to insert\nconflicting data, _at the time of insert, not at time of commit_).\n\n-alex\n\n",
"msg_date": "Tue, 30 Oct 2001 10:24:17 -0500 (EST)",
"msg_from": "Alex Pilosov <alex@pilosoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Tue, 30 Oct 2001, Alex Pilosov wrote:\n\n> The feature you are looking for is called 'index coverage'. Unfortunately,\n> it is not easy to implement with Postgresql, and it is one of few\n> outstanding 'nasties'. The reason you can't do it is follows: Postgres\n> uses MVCC, and stores 'when' the tuple is alive inside the tuple. So, even\n> if index contains all the information you need, you still need to access\n> main table to check if the tuple is valid.\nWell, I do not fully understand that stuff, but I get a feeling of the\nproblem. Thanks for the explanation.\n\n> Possible workaround: store tuple validity in index, that way, a lot more\n> space is wasted (16 more bytes/tuple/index), and you will need to update\n> all indices when the base table is updated, even if indexed information\n> have not changed.\nThis would be acceptable for *my* special application but I�m afraid\nthis could be a problem for others.\n\n> Fundamentally, this may be necessary anyway, to make index handlers aware\n> of transactions and tuple validity (currently, if you have unique index,\n> you may have conflicts when different transactions attempt to insert\n> conflicting data, _at the time of insert, not at time of commit_).\nAs I said all this wouln�t be a problem for my application. I just\nrun a sequential insert of data each night. Then the database is read only.\n\nDoes anybody see chances that 'index coverage' would be implemented into\n7.2. This would be a cruxial feature for my application. If it will\nnot happen in a reasonable time frame I would have to look for\nalternative database server. Anybody knows something about MySQL or\nInterbase?\n\nKind regards\n\n Andreas.\n",
"msg_date": "Tue, 30 Oct 2001 17:13:46 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "\n> Does anybody see chances that 'index coverage' would be implemented into\n> 7.2. This would be a cruxial feature for my application. If it will\n> not happen in a reasonable time frame I would have to look for\n> alternative database server. Anybody knows something about MySQL or\n> Interbase?\n\nSince I don't remember anyone mentioning working on it here and 7.2 just\nwent into beta, I don't think it's likely. If you want to push, you may\nbe able to convince someone for 7.3.\n\n\n",
"msg_date": "Tue, 30 Oct 2001 08:51:50 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Wednesday 31 October 2001 03:13, you wrote:\n> On Tue, 30 Oct 2001, Alex Pilosov wrote:\n\n> As I said all this wouln�t be a problem for my application. I just\n> run a sequential insert of data each night. Then the database is read\n> only.\n>\n> Does anybody see chances that 'index coverage' would be implemented into\n> 7.2. This would be a cruxial feature for my application. If it will\n\nAndreas,\n\nI have the feeling that your problem is solved best by taking a different \napproach. \nAs A. Pilosovs posting pointed out, index coverage is a problem intrinsic to \nthe MVCC implementation (IMHO a small price to pay for a priceless feature). \nI can't see why much effort should go into a brute force method to implement \nindex coverage, if your problem can be solved more elegant in a different way.\n\nWith the example you posted, it is essentially only simple statistics you \nwant to run on tables where the *majority* of records would qualify in your \nquery.\nWhy not create an extra \"statistics\" table which is updated automatically \nthrough triggers in your original table? That way, you will always get \nup-to-date INSTANT query results no matter how huge your original table is.\n\nAnd, don't forget that the only way MS SQL can achieve the better performance \nhere is through mercilessly hogging ressources. In a complex database \nenvironment with even larger tables, the performance gain in MS SQL would be \nminimal (my guess).\n\nHorst\n",
"msg_date": "Wed, 31 Oct 2001 14:37:29 +1100",
"msg_from": "Horst Herb <hherb@malleenet.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "\n>Why not create an extra \"statistics\" table which is updated automatically\n>through triggers in your original table? That way, you will always get\n>up-to-date INSTANT query results no matter how huge your original table is.\n>\n>And, don't forget that the only way MS SQL can achieve the better performance\n>here is through mercilessly hogging ressources. In a complex database\n>environment with even larger tables, the performance gain in MS SQL would be\n>minimal (my guess).\n\nDefinitely. This is a design optimization problem not an index problem.\n",
"msg_date": "Wed, 31 Oct 2001 06:41:56 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "Alex Pilosov wrote:\n> \n> On Tue, 30 Oct 2001, Antonio Fiol [iso-8859-1] Bonnᅵn wrote:\n> \n> > > | > Seems that problem is very simple :))\n> > > | > MSSql can do queries from indexes, without using actual table at all.\n> > > | > Postgresql doesn't.\n> > > | >\n> > > | > So mssql avoids sequental scanning of big table, and simply does scan of\n> > > | > index which is already in needed order and has very much less size.\n> <snip>\n> > > | The consequence for my problem is now: If it is technically possible\n> > > | to implement index scans without table lookups please implement it. If\n> The feature you are looking for is called 'index coverage'. Unfortunately,\n> it is not easy to implement with Postgresql, and it is one of few\n> outstanding 'nasties'. The reason you can't do it is follows: Postgres\n> uses MVCC, and stores 'when' the tuple is alive inside the tuple. So, even\n> if index contains all the information you need, you still need to access\n> main table to check if the tuple is valid.\n> \n> Possible workaround: store tuple validity in index, that way, a lot more\n> space is wasted (16 more bytes/tuple/index), and you will need to update\n> all indices when the base table is updated, even if indexed information\n> have not changed.\n\nAFAIK you will need to update all indexes anyway as MVCC changes the\nlocation \nof the new tuple.\n\n-------------\nHannu\n",
"msg_date": "Wed, 31 Oct 2001 10:34:32 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "\"Tille, Andreas\" wrote:\n> \n> On Tue, 30 Oct 2001, Alex Pilosov wrote:\n> \n> > The feature you are looking for is called 'index coverage'. Unfortunately,\n> > it is not easy to implement with Postgresql, and it is one of few\n> > outstanding 'nasties'. The reason you can't do it is follows: Postgres\n> > uses MVCC, and stores 'when' the tuple is alive inside the tuple. So, even\n> > if index contains all the information you need, you still need to access\n> > main table to check if the tuple is valid.\n> Well, I do not fully understand that stuff, but I get a feeling of the\n> problem. Thanks for the explanation.\n> \n> > Possible workaround: store tuple validity in index, that way, a lot more\n> > space is wasted (16 more bytes/tuple/index), and you will need to update\n> > all indices when the base table is updated, even if indexed information\n> > have not changed.\n> This would be acceptable for *my* special application but Iᅵm afraid\n> this could be a problem for others.\n> \n> > Fundamentally, this may be necessary anyway, to make index handlers aware\n> > of transactions and tuple validity (currently, if you have unique index,\n> > you may have conflicts when different transactions attempt to insert\n> > conflicting data, _at the time of insert, not at time of commit_).\n> As I said all this woulnᅵt be a problem for my application. I just\n> run a sequential insert of data each night. Then the database is read only.\n> \n> Does anybody see chances that 'index coverage' would be implemented into\n> 7.2. This would be a cruxial feature for my application. If it will\n> not happen in a reasonable time frame I would have to look for\n> alternative database server. 
Anybody knows something about MySQL or\n> Interbase?\n\nIf it is static data and simple queries then there is fairly good chance \nthat MySQL is a good choice.\n\nAs for the other two opensource databases (Interbase and SAPDB (a\nmodified \nversion of ADABAS released under GPL by SAP - http://www.sapdb.com/) I\nhave \nno direct experience. \n\nI occasionally read sapdb mailing list, and I've got an impression that\nit \nis quite usable and stable DB once you have set it up. Setting up seems \norder(s) of magnitude harder than for PostgreSQL or MySQL.\n\nWhether it actually runs full-table aggregates faster than PG is a thing \nI can't comment on, but you could get some of their people to do the \nbenchmarking for you if you send them an advocacy-urging request, like\nI'd \nswitch if you show me that your db is fast enough ;)\n\n-------------------\nHannu\n",
"msg_date": "Wed, 31 Oct 2001 10:47:00 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Tuesday 30 October 2001 21:24, Alex Pilosov wrote:\n> > > | The consequence for my problem is now: If it is technically possible\n> > > | to implement index scans without table lookups please implement it. \n> > > | If\n>\n> The feature you are looking for is called 'index coverage'. Unfortunately,\n> it is not easy to implement with Postgresql, and it is one of few\n> outstanding 'nasties'. The reason you can't do it is follows: Postgres\n> uses MVCC, and stores 'when' the tuple is alive inside the tuple. So, even\n> if index contains all the information you need, you still need to access\n> main table to check if the tuple is valid.\n>\n> Possible workaround: store tuple validity in index, that way, a lot more\n> space is wasted (16 more bytes/tuple/index), and you will need to update\n> all indices when the base table is updated, even if indexed information\n> have not changed.\n\nWhat is the problem to implement this index as a special index type for \npeople who need this? Just add a flag keyword to index creation clause.\n\nActually I would like to hear Tom's opinion on this issue. This issue is of \nmy interest too.\n\nAlso I saw sometime ago in hackers that there is a patch implementing this...\nOr I am wrong here?\n\n--\nDenis\n\n",
"msg_date": "Thu, 1 Nov 2001 00:26:10 +0600",
"msg_from": "Denis Perchine <dyp@perchine.com>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Wed, 31 Oct 2001, Horst Herb wrote:\n\n> I have the feeling that your problem is solved best by taking a different\n> approach.\n> As A. Pilosovs posting pointed out, index coverage is a problem intrinsic to\n> the MVCC implementation (IMHO a small price to pay for a priceless feature).\nCould somebody explain MVCC to such an uneducated man like me. Is this a\ncertain feature (which perhaps MS SQL) doesn�t have and which might be\nimportant in the future?\n> I can't see why much effort should go into a brute force method to implement\n> index coverage, if your problem can be solved more elegant in a different way.\n>\n> With the example you posted, it is essentially only simple statistics you\n> want to run on tables where the *majority* of records would qualify in your\n> query.\n> Why not create an extra \"statistics\" table which is updated automatically\n> through triggers in your original table? That way, you will always get\n> up-to-date INSTANT query results no matter how huge your original table is.\nMy problem is to convince my colleague. I�m afraid that he would consider\nthose optimizing stuff as \"tricks\" to work around constraints of the\ndatabase server. He might argue that if it comes to the point that also\nMS SQL server needs some speed improvement and he has to do the same\nperformance tuning things MS SQL does outperform PostgreSQL again and we\nare at the end with our wisdom. I repeat: I for myself see the strength\nof OpenSource (Horst, you know me ;-) ) and I would really love to use\nPostgreSQL. But how to prove those arguing wrong? *This* is my problem.\nWe have to do a design decision. My colleague is a mathematician who\nhas prefered MS SQL server some years ago over Oracle and had certain\nreasons for it based on estimations of our needs. 
He had no problems\nwith UNIX or something else and he theoretically is on my side that OpenSource\nis the better way and would accept it if it would give the same results\nas his stuff.\nBut he had never had some performance problems with his databases and\nknows people who claim to fill Zillions of Megabytes of MS SQL server.\nSo he doubt on the quality of PostgreSQL server if it has problems in\nthe first run. I have to admit that his point of view is easy to\nunderstand. I would have to prove (!) that we wouldn't have trouble\nwith bigger databases and that those things are no \"dirty workarounds\"\nof a weak server.\n\n> And, don't forget that the only way MS SQL can achieve the better performance\n> here is through mercilessly hogging ressources. In a complex database\n> environment with even larger tables, the performance gain in MS SQL would be\n> minimal (my guess).\nUnfortunately it is not enough to guess. He has enough experiences that\nI knows that the MS SQL server is fit for the task he wants to solve. If\nI tell him: \"*Perhaps* you could run into trouble.\", he would just laugh\nabout me because I'm in trouble *now* and can't prove that I won't be\nagain.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Thu, 1 Nov 2001 16:24:48 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "\n> My problem is to convince my colleague. I�m afraid that he would consider\n> those optimizing stuff as \"tricks\" to work around constraints of the\n> database server. He might argue that if it comes to the point that also\n> MS SQL server needs some speed improvement and he has to do the same\n> performance tuning things MS SQL does outperform PostgreSQL again and we\n> are at the end with our wisdom. I repeat: I for myself see the strength\n> of OpenSource (Horst, you know me ;-) ) and I would really love to use\n> PostgreSQL. But how to prove those arguing wrong? *This* is my problem.\n> We have to do a design decision. My colleague is a mathematician who\n> has prefered MS SQL server some years ago over Oracle and had certain\n> reasons for it based on estimations of our needs. He had no problems\n> with UNIX or something else and he theoretically is on my side that OpenSource\n> is the better way and would accept it if it would give the same results\n> as his stuff.\n> But he had never had some performance problems with his databases and\n> knows people who claim to fill Zillions of Megabytes of MS SQL server.\n> So he doubt on the quality of PostgreSQL server if it has problems in\n> the first run. I have to admit that his point of view is easy to\n> understand. I would have to prove (!) that we wouldn�t have trouble\n> with bigger databases and that those things are no \"dirty workarounds\"\n> of a weak server.\n>\n> > And, don't forget that the only way MS SQL can achieve the better performance\n> > here is through mercilessly hogging ressources. In a complex database\n> > environment with even larger tables, the performance gain in MS SQL would be\n> > minimal (my guess).\n> Unfortunately it is not enough to guess. He has enough experiences that\n> I knows that the MS SQL server is fit for the task he wants to solve. 
If\n> I tell him: \"*Perhaps* you could run into trouble.\", he would just laugh\n> about me because I'm in trouble *now* and can't prove that I won't be\n> again.\n\nThe only way to know for certain is to try both at various sizes to see.\nGetting numbers for one type of query on one size database tells very\nlittle. Load a test set that's 100, 1000, whatever times the current size\nand see what happens. ISTM anything short of this is fairly meaningless.\nWhat point does the other person expect to run into problems, how would\nhe solve them, how does postgres run at that point with and without\nspecial optimization.\n\nIt's perfectly possible that for the particular queries and load you're\nrunning that MSSQL will be better, there's nothing\nwrong with that. Conversely, it's entirely possible that one could find\nworkloads that postgres is better at.\n\n",
"msg_date": "Thu, 1 Nov 2001 08:37:12 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "\"Tille, Andreas\" <TilleA@rki.de> writes:\n\n> Could somebody explain MVCC to such an uneducated man like me. Is this a\n> certain feature (which perhaps MS SQL) doesn���t have and which might be\n> important in the future?\n\nhttp://www.us.postgresql.org/users-lounge/docs/7.1/postgres/mvcc.html\n\n(Or substitute your favorite mirror)\n\nOnly Oracle has anything like it AFAIK.\n\n-Doug\n-- \nLet us cross over the river, and rest under the shade of the trees.\n --T. J. Jackson, 1863\n",
"msg_date": "01 Nov 2001 12:11:05 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "I am contemplating including log4jme source code into the jdbc driver.\nWho would be the best person to contact wrt ironing out the licensing\nissues?\n\nDave\n\n",
"msg_date": "Thu, 1 Nov 2001 12:58:47 -0500",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "Licensing issues including another projects source code into the jdbc\n\tdriver"
},
{
"msg_contents": "\nI am contemplating including log4jme source code into the jdbc driver.\nWho would be the best person to contact wrt ironing out the licensing\nissues?\n\nDave\n\n\n",
"msg_date": "Thu, 1 Nov 2001 14:13:28 -0500",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Licensing issues including another projects source code into the jdbc\n\tdriver"
},
{
"msg_contents": "> \n> I am contemplating including log4jme source code into the jdbc driver.\n> Who would be the best person to contact wrt ironing out the licensing\n> issues?\n\nCan you tell us what license it uses?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Nov 2001 14:48:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Licensing issues including another projects source code"
},
{
"msg_contents": "It is using the apache licence\n\nhttp://www.qos.ch/log4jME/LICENSE.txt\n\nIt appears that they allow the code to be used in either binary or\nsource as long as their licence remains intact\n\nDave\n\n-----Original Message-----\nFrom: Bruce Momjian [mailto:pgman@candle.pha.pa.us] \nSent: November 1, 2001 2:49 PM\nTo: dave@fastcrypt.com\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Licensing issues including another projects\nsource code into the jdbc driver\n\n\n> \n> I am contemplating including log4jme source code into the jdbc driver.\n> Who would be the best person to contact wrt ironing out the licensing\n> issues?\n\nCan you tell us what license it uses?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 190\n\n",
"msg_date": "Thu, 1 Nov 2001 15:14:28 -0500",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Licensing issues including another projects source code into the\n\tjdbc driver"
},
{
"msg_contents": "Dave Cramer writes:\n\n> It is using the apache licence\n>\n> http://www.qos.ch/log4jME/LICENSE.txt\n>\n> It appears that they allow the code to be used in either binary or\n> source as long as their licence remains intact\n\nThe apache license has an advertising clause, which is not acceptable.\nGetting someone to relicense the software is difficult to impossible if\nthere is a multitude of outside contributors.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 4 Nov 2001 14:04:54 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Licensing issues including another projects source"
},
{
"msg_contents": "Peter,\n\nI presume you are referring to the 3rd clause? What is the issue with\nthis clause?\n\nDave\n\n-----Original Message-----\nFrom: Peter Eisentraut [mailto:peter_e@gmx.net] \nSent: November 4, 2001 8:05 AM\nTo: Dave Cramer\nCc: 'Bruce Momjian'; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Licensing issues including another projects\nsource code into the jdbc driver\n\n\nDave Cramer writes:\n\n> It is using the apache licence\n>\n> http://www.qos.ch/log4jME/LICENSE.txt\n>\n> It appears that they allow the code to be used in either binary or\n> source as long as their licence remains intact\n\nThe apache license has an advertising clause, which is not acceptable.\nGetting someone to relicense the software is difficult to impossible if\nthere is a multitude of outside contributors.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n\n",
"msg_date": "Sun, 4 Nov 2001 09:21:20 -0500",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Licensing issues including another projects source code into the\n\tjdbc driver"
},
{
"msg_contents": "\n\nAlex Pilosov wrote:\n> \n> On Tue, 30 Oct 2001, Antonio Fiol [iso-8859-1] Bonn�n wrote:\n> \n> > > | > Seems that problem is very simple :))\n> > > | > MSSql can do queries from indexes, without using actual table at all.\n> > > | > Postgresql doesn't.\n> > > | >\n> > > | > So mssql avoids sequental scanning of big table, and simply does scan of\n> > > | > index which is already in needed order and has very much less size.\n> <snip>\n> > > | The consequence for my problem is now: If it is technically possible\n> > > | to implement index scans without table lookups please implement it. If\n> The feature you are looking for is called 'index coverage'. Unfortunately,\n> it is not easy to implement with Postgresql, and it is one of few\n> outstanding 'nasties'. The reason you can't do it is follows: Postgres\n> uses MVCC, and stores 'when' the tuple is alive inside the tuple. So, even\n> if index contains all the information you need, you still need to access\n> main table to check if the tuple is valid.\n> \n> Possible workaround: store tuple validity in index, that way, a lot more\n> space is wasted (16 more bytes/tuple/index), and you will need to update\n> all indices when the base table is updated, even if indexed information\n> have not changed.\n> \n\nMaybe just a silly idea, but would'nt it be possible (and useful)\nto store tuple validity in a separate bitmap file, that reports in every\nbit the validity of the corresponding tuple? It would grow linearly, but\nat least it would be very small compared to the actual data...\nBest regards\nAndrea Aime\n",
"msg_date": "Mon, 05 Nov 2001 12:03:23 +0100",
"msg_from": "\"Andrea Aime\" <aaime@comune.modena.it>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "Andrea Aime wrote:\n> \n> Alex Pilosov wrote:\n> >\n> > On Tue, 30 Oct 2001, Antonio Fiol [iso-8859-1] BonnО©╫n wrote:\n> >\n> > > > | > Seems that problem is very simple :))\n> > > > | > MSSql can do queries from indexes, without using actual table at all.\n> > > > | > Postgresql doesn't.\n> > > > | >\n> > > > | > So mssql avoids sequental scanning of big table, and simply does scan of\n> > > > | > index which is already in needed order and has very much less size.\n> > <snip>\n> > > > | The consequence for my problem is now: If it is technically possible\n> > > > | to implement index scans without table lookups please implement it. If\n> > The feature you are looking for is called 'index coverage'. Unfortunately,\n> > it is not easy to implement with Postgresql, and it is one of few\n> > outstanding 'nasties'. The reason you can't do it is follows: Postgres\n> > uses MVCC, and stores 'when' the tuple is alive inside the tuple. So, even\n> > if index contains all the information you need, you still need to access\n> > main table to check if the tuple is valid.\n> >\n> > Possible workaround: store tuple validity in index, that way, a lot more\n> > space is wasted (16 more bytes/tuple/index), and you will need to update\n> > all indices when the base table is updated, even if indexed information\n> > have not changed.\n> >\n> \n> Maybe just a silly idea, but would'nt it be possible (and useful)\n> to store tuple validity in a separate bitmap file, that reports in every\n> bit the validity of the corresponding tuple? It would grow linearly, but\n> at least it would be very small compared to the actual data...\n\nI see two problems with this approach:\n\n1. Tuple validity is different for different transactions running\nconcurrently.\n\nWe still could cache death-transaction_ids of tuples _in_memory_ quite\ncheaply \ntime-wize, but I'm not sure how big win it will be in general\n\n2. 
there is no easy way to know which bit corresponds to which tuple as\neach \n database page can contain an arbitrary number of tuples (this one is\neasier,\n as we can use a somewhat sparse bitmap that is less space-efficient)\n\n------------\nHannu\n",
"msg_date": "Mon, 05 Nov 2001 16:51:05 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "Umm ok,\n\nThis is dissappointing; kind of defeats the purpose of being able to\nleverage open source code for the good of all open source projects. It\nisn't that big a deal, we can write our own logging package.\n\nDave \n\n-----Original Message-----\nFrom: Peter Eisentraut [mailto:peter_e@gmx.net] \nSent: November 5, 2001 3:05 PM\nTo: Dave Cramer\nCc: 'Bruce Momjian'; pgsql-hackers@postgresql.org\nSubject: RE: [HACKERS] Licensing issues including another projects\nsource code into the jdbc driver\n\n\nDave Cramer writes:\n\n> I presume you are referring to the 3rd clause? What is the issue with\n> this clause?\n\nIt would require everyone that ships a product based on the JDBC driver\nto\nmention this acknowledgement in advertisements, which is annoying and\nimpractical. More generally, it would introduce a divergence in\nlicensing\nin PostgreSQL, which should be avoided.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n\n",
"msg_date": "Mon, 5 Nov 2001 14:59:43 -0500",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: Licensing issues including another projects source code into the\n\tjdbc driver"
},
{
"msg_contents": "Dave Cramer writes:\n\n> I presume you are referring to the 3rd clause? What is the issue with\n> this clause?\n\nIt would require everyone that ships a product based on the JDBC driver to\nmention this acknowledgement in advertisements, which is annoying and\nimpractical. More generally, it would introduce a divergence in licensing\nin PostgreSQL, which should be avoided.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 5 Nov 2001 21:04:46 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Licensing issues including another projects source"
}
] |
[
{
"msg_contents": "> I also\n> enforced the index scan and compared with forbidding the index scan.\nThe\n> result was on my more realistic examples that both versions performed\nquite\n> the same. There was no *real* difference. For sure in this simple\nquery there\n> is a difference but the real examples showed only 2% - 5% speed\nincrease\n> (if not slower with enforcing index scans!).\n\nYou could somewhat speed up the query if you avoid that the sort\nhits the disk. A simple test here showed, that you need somewhere\nnear sort_mem = 15000 in postgresql.conf.\n\nAndreas\n",
"msg_date": "Tue, 30 Oct 2001 12:29:50 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Tue, 30 Oct 2001, Zeugswetter Andreas SB SD wrote:\n\n> You could somewhat speed up the query if you avoid that the sort\n> hits the disk. A simple test here showed, that you need somewhere\n> near sort_mem = 15000 in postgresql.conf.\nWell this are the usual hints from pgsql-general. I did so and\nincreased step by step to:\n\n shared_buffers = 131072\n sort_mem = 65536\n\nThis lead to a double of speed in my tests but this are settings where\nan enhancement of memory doesn�t result in a speed increase any more.\n\nWhen I was posting my question here I was talking about this \"tuned\"\nPostgreSQL server. The default settings where even worse!\n\nKind regards\n\n Andreas.\n",
"msg_date": "Tue, 30 Oct 2001 13:46:22 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
}
] |
[
{
"msg_contents": "\n>For example I can assure in my application that the index, once created\n>will be valid, because I just want to read in a new set of data once\n>a day (from the MS-SQL Server which collects data over the day). So\n>I could recreate all indices after the import and the database is\n>readonly until the next cron job. Is there any chance to speed up\n>those applications?\n\nHello Andreas,\n\nIs your database read-only? Good point; sorry to insist, but your problem is \nsoftware optimization. In your case, the database may climb up to 200 \nmillion rows (1000 days x 200.000 rows). What are you going to do then? Buy \na 16-way Itanium computer with 10 GB RAM and an MS SQL Server licence? Have a \nclose look at your problem. How long does it take MS SQL Server to \nquery 200 million rows? The problem is not in choosing MS SQL or \nPostgreSQL ...\n\nIf you are adding 200.000 rows of data every day, consider using a combination \nof CREATE TABLE AS to create a result table with PL/pgSQL triggers to \nmaintain data consistency. You will then get instant results, even on 2 \nbillion rows, because you will always query the result table, not the \noriginal one. Large databases are always optimized this way because, even \nin case of smart indexes, there are things (like your problem) that need \n*smart* optimization.\n\nDo you need PL/pgSQL source code to perform a test on 2 billion rows? If \nso, please email me on pgsql-general and I will send you the code.\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Tue, 30 Oct 2001 13:06:44 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] Serious performance problem"
},
{
"msg_contents": "On Tue, 30 Oct 2001, Jean-Michel POURE wrote:\n\n> Is your database read-only?\nDaily update from MS-SQL server. Between updates it is read-only.\n\n> Good point, sorry to insist your problem is\n> software optimization. In your case, the database may climb up to 200\n> million rows (1000 days x 200.000 rows). What are you going to do then? Buy\n> a 16 Itanium computer with 10 Gb RAM and MS SQL Server licence. Have a\n> close look at your problem. How much time does it get MS SQL Server to\n> query 200 million rows ? The problem is not in choosing MS SQL or\n> PostgreSQL ...\nIt is, for sure. If one server is 10 to 30 times faster for the very\nsame tasks, and chances are high that it scales better for the next orders of\nmagnitude our data will fit into over the next years because of real\nindex usage (see postings on the hackers list), then the decision is easy.\nMy colleague made sure that MS SQL server is fit for the next years and\nI can only convince him if another server has a comparable speed *for the\nsame task*.\n\n> If you are adding 200.000 rows data everyday, consider using a combination\nI do not add this much.\n\n> of CREATE TABLE AS to create a result table with PL/pgSQL triggers to\n> maintain data consistency. You will then get instant results, even on 2\n> billion rows because you will always query the result table; not the\n> original one. Large databases are always optimized this way because, even\n> in case of smart indexes, there are things (like your problem) that need\n> *smart* optimization.\n>\n> Do you need PL/pgSQL source code to perform a test on 2 billion rows? If\n> so, please email me on pgsql-general and I will send you the code.\nI really believe that there are many problems in the world that fall under\nthis category and you are completely right. My colleague is a database\nexpert (I consider myself a beginner) and he made sure that performance is\nno issue for the next couple of years. So what? Spending hours on\noptimisation of things that work perfectly? Why not ask the\nPostgreSQL authors to optimize the server so that the very same task\nperforms comparably? If we afterwards need further database\noptimization because of further constraints, I'm the first who will start\nthis. But there must be server code in the world that is able to answer\nthe example query that fast. This is proven!\n\nKind regards\n\n Andreas.\n\n",
"msg_date": "Tue, 30 Oct 2001 15:09:32 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
}
] |
[
{
"msg_contents": "I am seeing the attached regression diffs on timetz, which passed\nlast week. It looks like all of these are related to the fact that\nunmarked timetz values are now presumed to be PST (-8) not PST (-7).\n\n\t\t\tregards, tom lane\n\n*** ./expected/timetz.out\tWed Oct 3 01:29:26 2001\n--- ./results/timetz.out\tTue Oct 30 12:51:38 2001\n***************\n*** 33,62 ****\n 00:01:00-07\n 01:00:00-07\n 02:03:00-07\n! (3 rows)\n \n SELECT f1 AS \"Seven\" FROM TIMETZ_TBL WHERE f1 > '05:06:07';\n Seven \n ----------------\n 07:07:00-08\n- 08:08:00-04\n 11:59:00-07\n 12:00:00-07\n 12:01:00-07\n 23:59:00-07\n 23:59:59.99-07\n! (7 rows)\n \n SELECT f1 AS \"None\" FROM TIMETZ_TBL WHERE f1 < '00:00';\n None \n! ------\n! (0 rows)\n \n SELECT f1 AS \"Ten\" FROM TIMETZ_TBL WHERE f1 >= '00:00';\n Ten \n ----------------\n- 00:01:00-07\n- 01:00:00-07\n 02:03:00-07\n 07:07:00-08\n 08:08:00-04\n--- 33,62 ----\n 00:01:00-07\n 01:00:00-07\n 02:03:00-07\n! 08:08:00-04\n! (4 rows)\n \n SELECT f1 AS \"Seven\" FROM TIMETZ_TBL WHERE f1 > '05:06:07';\n Seven \n ----------------\n 07:07:00-08\n 11:59:00-07\n 12:00:00-07\n 12:01:00-07\n 23:59:00-07\n 23:59:59.99-07\n! (6 rows)\n \n SELECT f1 AS \"None\" FROM TIMETZ_TBL WHERE f1 < '00:00';\n None \n! -------------\n! 00:01:00-07\n! 01:00:00-07\n! (2 rows)\n \n SELECT f1 AS \"Ten\" FROM TIMETZ_TBL WHERE f1 >= '00:00';\n Ten \n ----------------\n 02:03:00-07\n 07:07:00-08\n 08:08:00-04\n***************\n*** 65,71 ****\n 12:01:00-07\n 23:59:00-07\n 23:59:59.99-07\n! (10 rows)\n \n --\n -- TIME simple math\n--- 65,71 ----\n 12:01:00-07\n 23:59:00-07\n 23:59:59.99-07\n! (8 rows)\n \n --\n -- TIME simple math\n\n======================================================================\n\n",
"msg_date": "Tue, 30 Oct 2001 13:00:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "timetz regression test is showing several DST-related failures"
},
{
"msg_contents": "I said:\n> I am seeing the attached regression diffs on timetz, which passed\n> last week. It looks like all of these are related to the fact that\n> unmarked timetz values are now presumed to be PST (-8) not PST (-7).\n\nSigh, make that \"PST (-8) not PDT (-7)\"\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Oct 2001 15:16:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: timetz regression test is showing several DST-related failures "
},
{
"msg_contents": "> > I am seeing the attached regression diffs on timetz, which passed\n> > last week. It looks like all of these are related to the fact that\n> > unmarked timetz values are now presumed to be PST (-8) not PST (-7).\n> Sigh, make that \"PST (-8) not PDT (-7)\"\n\nWhatever. I'll look at them to see if I can formulate a test which fills\nin the blanks correctly.\n\nNothing like a useless data type which is *also* a pita :/\n\n - Thomas\n",
"msg_date": "Wed, 31 Oct 2001 05:58:46 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: timetz regression test is showing several DST-related "
},
{
"msg_contents": "> I am seeing the attached regression diffs on timetz, which passed\n> last week. It looks like all of these are related to the fact that\n> unmarked timetz values are now presumed to be PST (-8) not PST (-7).\n\nOK, I've updated the regression test by including an explicit time zone\nin the query constants. All tests pass on my Linux box.\n\n - Thomas\n",
"msg_date": "Wed, 31 Oct 2001 14:47:51 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: timetz regression test is showing several DST-related failures"
}
] |
[
{
"msg_contents": "\nCome on guys ... 275k in order to add three words of text?\n\n",
"msg_date": "Tue, 30 Oct 2001 13:58:01 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Postings from Bruce and Olivier rejected ..."
},
{
"msg_contents": "\nOops, sorry. Thanks for rejecting that.\n\n> \n> Come on guys ... 275k in order to add three words of text?\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Oct 2001 20:54:54 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Postings from Bruce and Olivier rejected ..."
}
] |
[
{
"msg_contents": "I have been talking to a company in New York City that wants to port\nPostgreSQL to some custom, high-performance hardware. They wish to hire\ndevelopers who are experienced in the backend PostgreSQL code. If you\nare interested, you can contact Ken Yip at 1-646-245-6909. I believe\nthe work requires you to be in the New York metropolitan area.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 30 Oct 2001 16:19:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Job: NYC database internals developers"
}
] |
[
{
"msg_contents": "=?iso-8859-2?Q?Mariusz_Czu=B3ada?= <manieq@wp.pl> writes:\n> 1. I have a FAT32 partition, which is r/w accessible from both OSes.\n> 2. I have same version of postgres installed on linux and on w2k\n> (with cygwin support).\n> 3. I have PGDATA set to same directory on the 'shared' disk.\n\n> Q.: Can I access my databases no matter which os I currently use?\n\nI think that will work, seeing as how it's the same architecture in\nboth cases and only one instance of Postgres will be running at a time.\nBut I recommend not trusting it till you've tested it ... keep backups!\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Oct 2001 18:14:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: portability of datafiles "
},
{
"msg_contents": "Hi!\n\nI am developing a simple HR app. I decided to implement it with\npostgresql and apache. I have both linux and w2k on my box,\nbut wanted to be able to develop independently of the OS.\nSo, I have an idea, and I'd like to ask you to verify its logic\nand feasibility.\n\n1. I have a FAT32 partition, which is r/w accessible from both OSes.\n2. I have same version of postgres installed on linux and on w2k\n (with cygwin support).\n3. I have PGDATA set to same directory on the 'shared' disk.\n\nQ.: Can I access my databases no matter which os I currently use?\n\nOr, if not, is it possible to easily patch postgres to ensure binary\ncompatibility of datafiles between different systems, at least on\nthe same hardware platform (like Linux and Windows on the same PC)?\n\nTIA,\n\nMariusz\n\n\n",
"msg_date": "Tue, 30 Oct 2001 23:14:35 -0800",
"msg_from": "=?iso-8859-2?Q?Mariusz_Czu=B3ada?= <manieq@wp.pl>",
"msg_from_op": false,
"msg_subject": "portability of datafiles"
}
] |
[
{
"msg_contents": "We have had a few discussions about the meaning of \"iscachable,\" and I'd like\nto nag and post this again.\n\nThe current meaning of \"iscachable\" is that the result can last forever in some\npersistent cache somewhere that doesn't yet exist; in practice this seems to be\njust some basic transaction level. A function without \"iscachable\" is called\nevery time it is used.\n\nIt seems there should be 3 core function cache levels:\n\n1) \"noncacheable,\" this should always be called every time it is used.\n2) \"cachable,\" this should mean that it will be called only once per unique set\nof parameters within a transaction.\n3) \"persistent,\" this could mean it never needs to be called twice.\n\nWith the above definitions, it would make sense to have \"iscacheable\" as the\ndefault for a function.\n\nDoes this make sense?\n",
"msg_date": "Tue, 30 Oct 2001 18:41:31 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "with (iscachable)"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> It seems there should be 3 core function cache levels:\n\nThere should be 3, but not defined like this:\n\n> 1) \"noncacheable,\" this should always be called every time it is used.\n> 2) \"cachable,\" this should mean that it will be called only once per unique set\n> of parameters within a transaction.\n> 3) \"persistent,\" this could mean it never needs to be called twice.\n\nWe will *not* implement function caching as implied by #2. What we want\nis a definition that says that it's okay to omit redundant calls, not\none that promises we will not make any redundant calls.\n\nReasonable definitions would be:\n\n1. noncachable: must be called every time; not guaranteed to return same\nresult for same parameters even within a query. random(), timeofday(),\nnextval() are examples.\n\n2. fully cachable: function guarantees same result for same parameters\nno matter when invoked. This setting allows a call with constant\nparameters to be constant-folded on sight.\n\n3. query cachable: function guarantees same result for same parameters\nwithin a single query, or more precisely within a single\nCommandCounterIncrement interval. This corresponds to the actual\nbehavior of functions that execute SELECTs, and it's sufficiently strong\nto allow the function result to be used in an indexscan, which is what\nwe really care about.\n\nI'm by no means wedded to those names ... maybe someone can think of\nbetter terminology.\n\n> With the above definitions, it would make sense to have \"iscacheable\" as the\n> default for a function.\n\nI'd still vote for noncachable as the default; unsurprising behavior is\nto be preferred over maximum performance IMHO.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Oct 2001 19:37:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: with (iscachable) "
},
{
"msg_contents": "Tom Lane wrote:\n> Reasonable definitions would be:\n> \n> 1. noncachable: must be called every time; not guaranteed to return same\n> result for same parameters even within a query. random(), timeofday(),\n> nextval() are examples.\n> \n> 2. fully cachable: function guarantees same result for same parameters\n> no matter when invoked. This setting allows a call with constant\n> parameters to be constant-folded on sight.\nSomething like strlower() or metaphone() are perfect examples.\n> \n> 3. query cachable: function guarantees same result for same parameters\n> within a single query, or more precisely within a single\n> CommandCounterIncrement interval. This corresponds to the actual\n> behavior of functions that execute SELECTs, and it's sufficiently strong\n> to allow the function result to be used in an indexscan, which is what\n> we really care about.\n\nThis is IMHO the important one. I would not presume to argue the scope, but as\nsomeone who has spent the last year building entire systems on Postgres (and\nloving every minute of it) I would really like a fairly reliable definition\nfrom which someone can understand and predict the behavior.\n\nCurrently, it seems fairly reliable that \"iscachable\" functions are called once\nper unique parameter within the scope of a select, and I like this behavior.\nWhen/if it breaks I will have some work to do. (As observed with fairly simple\nqueries.)\n\n> \n> I'm by no means wedded to those names ... maybe someone can think of\n> better terminology.\n\nI kinda like the terminology I presented, \"cacheable, non-cacheable,\npersistent\" but hey, they are only words. 
I don't think people will be\nconfused, \"persistent\" and \"non-cacheable\" are pretty obvious.\n> \n> > With the above definitions, it would make sense to have \"iscacheable\" as the\n> > default for a function.\n> \n> I'd still vote for noncachable as the default; unsurprising behavior is\n> to be preferred over maximum performance IMHO.\n\nI've thought about this one, a lot, and I can totally see your point of view,\nbut it isn't clear to me that it would be surprising that a function would only\nbe called once per unique set of parameters per select. What do other databases\ndo?\n",
"msg_date": "Tue, 30 Oct 2001 21:19:46 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: with (iscachable)"
}
] |
[
{
"msg_contents": "\nSome warnings in current CVS build in BSD4.3:\n\nxact.c:590: warning: implicit declaration of function `select'\ndynloader.c:85: warning: unused variable `buf'\n/usr/include/grp.h:58: warning: parameter names (without types) in function\ndeclaration\npgc.c:1244: warning: label `find_rule' defined but not used\npgc.c:3091: warning: `yy_flex_realloc' defined but not used\nodbcapi.c:140: warning: no previous prototype for `SQLDataSources'\npg_restore.c:166: warning: implicit declaration of function `getopt'\npl_scan.c:1004: warning: label `find_rule' defined but not used\npl_scan.c:2295: warning: `yy_flex_realloc' defined but not used\n\nAlso some odd tsort messages. eg.\n\n\"/usr/local/pgsql-7.2dev/etc\"' -c -o pqsignal.o pqsignal.c\nar cr libpq.a `lorder fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o\nfe-lobj.o pqexpbuffer.o dllist.o md5.o pqsignal.o | tsor\nt`\ntsort: cycle in data\ntsort: fe-connect.o\ntsort: fe-exec.o\ntsort: cycle in data\ntsort: fe-auth.o\ntsort: fe-connect.o\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 31 Oct 2001 19:09:31 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Warnings in CVS build"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Some warnings in current CVS build in BSD4.3:\n\n> xact.c:590: warning: implicit declaration of function `select'\n\nWhere is select() declared in your system headers? Evidently we're\nmissing a #include, but I dunno which.\n\n> dynloader.c:85: warning: unused variable `buf'\n\nAssuming that this is freebsd, I've suppressed that warning.\n\n> /usr/include/grp.h:58: warning: parameter names (without types) in function\n> declaration\n\nThis one probably ought to be directed to the BSD maintainers.\n\n> pgc.c:1244: warning: label `find_rule' defined but not used\n> pgc.c:3091: warning: `yy_flex_realloc' defined but not used\n> pl_scan.c:1004: warning: label `find_rule' defined but not used\n> pl_scan.c:2295: warning: `yy_flex_realloc' defined but not used\n\nAs Thomas pointed out, we can't do much about these without control\nof the flex sources. They've irritated me for a long time, since\nthey're the only build warnings I get. It's interesting though that\nour other flex files don't provoke these warnings. Perhaps the\nproblem only occurs if the flex source file uses yylineno?\n\n> odbcapi.c:140: warning: no previous prototype for `SQLDataSources'\n\nI think Hiroshi fixed this already.\n\n> pg_restore.c:166: warning: implicit declaration of function `getopt'\n\nThis one's yours to fix ...\n\n> Also some odd tsort messages. eg.\n\nThese are just tsort being noisy. I'm not sure why we bother with\ntsorting library member files any more anyway --- do any supported\nplatforms actually need it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Nov 2001 00:52:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Warnings in CVS build "
},
{
"msg_contents": "At 00:52 1/11/01 -0500, Tom Lane wrote:\n>\n>> xact.c:590: warning: implicit declaration of function `select'\n>\n>Where is select() declared in your system headers? Evidently we're\n>missing a #include, but I dunno which.\n>\n\n\nunistd.h:\n\nint select __P((int, fd_set *, fd_set *, fd_set *, struct timeval *));\n\n(it is FreeBSD).\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 01 Nov 2001 17:11:36 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Warnings in CVS build "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n>> Where is select() declared in your system headers? Evidently we're\n>> missing a #include, but I dunno which.\n\n> unistd.h:\n\nOkay, added. That select() has been in xact.c since before 7.1, so\nI'm surprised this wasn't reported before ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Nov 2001 01:18:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Warnings in CVS build "
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: 30 October 2001 05:18\n> To: Marc G. Fournier\n> Cc: Tatsuo Ishii; mj2-dev@csf.colorado.edu; \n> pgsql-hackers@postgresql.org\n> Subject: Re: pgsql-committers? \n> \n> \n> \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > okay, this is most odd ... I know I've been receiving ... \n> but tom has \n> > also been reporting intermident problems ... we're using \n> > delivery_rules in majordomo to pump all the messages through a \n> > dedicated server ...\n> \n> Yeah, I'm still seeing intermittent loss of committer \n> messages; for example, I never saw a commit for Bruce's first \n> pgindent run. (I just checked my mail logs to verify this.) \n> But I'm not missing all of them as Tatsuo reports. Anyone \n> else seeing problems?\n\nI've seen problems posting to pgadmin-hackers@postgresql.org &\npgsql-www@postgresql.org. I did report them to Marc & Chris.\n\nI know the posts get to the archives, but neither I or someone else I have\nspoken to privately have received them back from the lists.\n\nIf examples are needed again Marc, please let me know.\n\nRegards, Dave.\n",
"msg_date": "Wed, 31 Oct 2001 08:28:36 -0000",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: pgsql-committers? "
},
{
"msg_contents": "\nI believe the latest changes to the delivery_rules should fix the problems\nwith pgadmin-* also ...\n\nOn Wed, 31 Oct 2001, Dave Page wrote:\n\n>\n>\n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> > Sent: 30 October 2001 05:18\n> > To: Marc G. Fournier\n> > Cc: Tatsuo Ishii; mj2-dev@csf.colorado.edu;\n> > pgsql-hackers@postgresql.org\n> > Subject: Re: pgsql-committers?\n> >\n> >\n> > \"Marc G. Fournier\" <scrappy@hub.org> writes:\n> > > okay, this is most odd ... I know I've been receiving ...\n> > but tom has\n> > > also been reporting intermident problems ... we're using\n> > > delivery_rules in majordomo to pump all the messages through a\n> > > dedicated server ...\n> >\n> > Yeah, I'm still seeing intermittent loss of committer\n> > messages; for example, I never saw a commit for Bruce's first\n> > pgindent run. (I just checked my mail logs to verify this.)\n> > But I'm not missing all of them as Tatsuo reports. Anyone\n> > else seeing problems?\n>\n> I've seen problems posting to pgadmin-hackers@postgresql.org &\n> pgsql-www@postgresql.org. I did report them to Marc & Chris.\n>\n> I know the posts get to the archives, but neither I or someone else I have\n> spoken to privately have received them back from the lists.\n>\n> If examples are needed again Marc, please let me know.\n>\n> Regards, Dave.\n>\n\n",
"msg_date": "Wed, 31 Oct 2001 04:58:13 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: pgsql-committers? "
}
] |
[
{
"msg_contents": "\nSome warnings in current CVS build in Linux (SuSE 7.2):\n\npgc.c: In function `yylex':\npgc.c:1243: warning: label `find_rule' defined but not used\npgc.l: At top level:\npgc.c:3090: warning: `yy_flex_realloc' defined but not used\nodbcapi.c:140: warning: no previous prototype for `SQLDataSources'\npl_scan.c: In function `plpgsql_base_yylex':\npl_scan.c:1003: warning: label `find_rule' defined but not used\nscan.l: At top level:\npl_scan.c:2294: warning: `yy_flex_realloc' defined but not used\nIn file included from plperl.c:82:\n/usr/lib/perl5/5.6.1/i586-linux/CORE/perl.h:2155: warning: `DEBUG' redefined\n../../../src/include/utils/elog.h:22: warning: this is the location of the\nprevious definition\nIn file included from SPI.xs:41:\n/usr/lib/perl5/5.6.1/i586-linux/CORE/perl.h:2155: warning: `DEBUG' redefined\n../../../src/include/utils/elog.h:22: warning: this is the location of the\nprevious definition\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Wed, 31 Oct 2001 20:02:32 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Warnings in CVS build (Linux)"
},
{
"msg_contents": "> Some warnings in current CVS build in Linux (SuSE 7.2):\n> pgc.c: In function `yylex':\n> pgc.c:1243: warning: label `find_rule' defined but not used\n> pgc.l: At top level:\n> pgc.c:3090: warning: `yy_flex_realloc' defined but not used\n\nThese are normal; the code automatically generated by lex defines these\nroutines.\n\n> odbcapi.c:140: warning: no previous prototype for `SQLDataSources'\n\nHmm. This is a stubbed-out routine; can someone add the prototype in the\nappropriate place? Maybe isql.h?\n\n> In file included from plperl.c:82:\n> /usr/lib/perl5/5.6.1/i586-linux/CORE/perl.h:2155: warning: `DEBUG' redefined\n> ../../../src/include/utils/elog.h:22: warning: this is the location of the\n> previous definition\n> In file included from SPI.xs:41:\n> /usr/lib/perl5/5.6.1/i586-linux/CORE/perl.h:2155: warning: `DEBUG' redefined\n> ../../../src/include/utils/elog.h:22: warning: this is the location of the\n> previous definition\n\nYuck.\n\n - Thomas\n",
"msg_date": "Wed, 31 Oct 2001 14:39:08 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Warnings in CVS build (Linux)"
}
] |
[
{
"msg_contents": "\nOnce again, a slightly convoluted question, but it seems that PG may be\ndoing a little more work than is necessary when selecting from views with\nsub-selects. It seems that every time a view field is being referenced in\nan outer select expression, the view field is being re-evaluated. Is there\nany way to get PG to know that it only needs to do the aggregate once?\n\neg.\n\n create table b(f1 int, f2 int);\n create table r(f1 int);\n\n create view bv as select f1,f2,\n exists(select * from r where r.f1=b.f1) as has_f1,\n exists(select * from r where r.f1=b.f2) as has_f2\n from b;\n\n explain select f1,f2,\n case when has_f1 and has_f2 then 'both' \n when has_f1 then 'f1_only' \n when has_f2 then 'f2_only' \n else 'none' \n end as status\n from bv;\n\n Seq Scan on b (cost=0.00..20.00 rows=1000 width=8)\n SubPlan\n -> Seq Scan on r (cost=0.00..22.50 rows=5 width=4)\n -> Seq Scan on r (cost=0.00..22.50 rows=5 width=4)\n -> Seq Scan on r (cost=0.00..22.50 rows=5 width=4)\n -> Seq Scan on r (cost=0.00..22.50 rows=5 width=4)\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Thu, 01 Nov 2001 15:16:06 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Another planner/optimizer question..."
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Is there any way to get PG to know that it only needs to do the\n> aggregate once?\n\nIt'd probably be possible to look for duplicated aggrefs being assigned\nto the same Agg plan node during planning. However, I'm not entirely\nconvinced that it's worth the trouble --- the individual transition\nfunction calls are not usually all that expensive.\n\nBut ... the example you are offering has nothing to do with aggregates.\nSubplans are a different and much messier deal. The best I could offer\nyou (short of a complete redesign of subqueries) would be to not pull up\nviews that have any subqueries, which would probably be a net loss.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Nov 2001 10:22:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Another planner/optimizer question... "
},
{
"msg_contents": "At 10:22 1/11/01 -0500, Tom Lane wrote:\n>The best I could offer\n>you (short of a complete redesign of subqueries) would be to not pull up\n>views that have any subqueries, which would probably be a net loss.\n\nThat's probably true 90% percent of the time; it would be interesting to be\nable to turn this on & off on a per-query basis (or even a per-view basis).\nIs this hard?\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 02 Nov 2001 11:02:55 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Another planner/optimizer question... "
}
] |
[
{
"msg_contents": "A few clarifications so you have more to go on: \n\nTo create the problem, we used the sql command with the \"';\" following the [CR] at the end of the typed characters as shown here:\n\nalter user yyyyy with password 'xxxxxx\n';\n\nThen we attempted to log in as that user from another user's local session using the psql -username=yyyyy command.\n\nNone of the users had a 'validuntil' date (it was null). But setting one didn't help either.\n\nWe are using 7.1.3 on a Solaris machine. We noticed the problem when we examined the pg_pwd file and saw that the validuntil date we entered was preceded by what looked like an early line wrap.\n\nI was quite surprised that such a small input error could cause the backend to shut down. Should psql remove [CR]s that are contained within ''? (at least for this command)?\n\nThanks for looking into this problem.\n\nTom\n\n>>> Tom Lane <tgl@sss.pgh.pa.us> 10/31/01 21:43 PM >>>\n\"Thomas Yackel\" <yackelt@ohsu.edu> writes:\n> I got the error: \"Bad abstime external representation ''\" when attempted to start psql as a particular user and the postmaster shutdown.\n\n> The problem, we discovered, is that this user had a carriage return contained within his password. Changing the password to remove the CR avoided the system shutdown.\n\nHmm. I can see how a linefeed in a password would create a problem (it\nbreaks the line-oriented formatting of the pg_pwd file). However, I\ncan't reproduce a postmaster crash here. Either I'm not testing the\nright combination of circumstances, or current sources are more robust\nabout this than 7.1. That's not unlikely given that Bruce rewrote the\npassword-file-parsing code a couple months ago.\n\nIn any case it seems like it'd be a good idea to forbid nonprinting\ncharacters in passwords. Comments anyone?\n\n\t\t\tregards, tom lane\n\n",
"msg_date": "Wed, 31 Oct 2001 22:42:16 -0800",
"msg_from": "\"Thomas Yackel\" <yackelt@ohsu.edu>",
"msg_from_op": true,
"msg_subject": "Re: user authentication crash by Erik Luke (20-08-2001;"
}
] |
[
{
"msg_contents": "Hi all,\n\nat the moment import/export of large objects on server-side only can be \nactivated for all users by editing config.h due to security reasons.\n\nMy idea is, to enable in for everyone, when using s apecial directory (e.g. \n/tmp). What do you think about this?\n\nRegards, Klaus\n\n\n-- \nTWC GmbH\nSchlossbergring 9\n79098 Freiburg i. Br.\nhttp://www.twc.de\n",
"msg_date": "Thu, 1 Nov 2001 08:02:01 +0100",
"msg_from": "Klaus Reger <K.Reger@twc.de>",
"msg_from_op": true,
"msg_subject": "import/export of large objects on server-side"
},
{
"msg_contents": "Klaus Reger <K.Reger@twc.de> writes:\n> at the moment import/export of large objects on server-side only can be \n> activated for all users by editing config.h due to security reasons.\n> My idea is, to enable in for everyone, when using s apecial directory (e.g. \n> /tmp). What do you think about this?\n\nIt'd still be a security hole, and not significantly smaller (consider\nsymlinks).\n\nUse the client-side LO import/export functions, instead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Nov 2001 10:14:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: import/export of large objects on server-side "
},
{
"msg_contents": "> Klaus Reger <K.Reger@twc.de> writes:\n>> at the moment import/export of large objects on server-side only can\n>> be activated for all users by editing config.h due to security\n>> reasons. My idea is, to enable in for everyone, when using s apecial\n>> directory (e.g. /tmp). What do you think about this?\n>\n> It'd still be a security hole, and not significantly smaller (consider\n> symlinks).\n>\n> Use the client-side LO import/export functions, instead.\n\nok, i've read the config.h and the sources. I agree that this can be a\nsecurity hole. But for our application we need lo-access from\nPL/PGSQL-Procedures (explicitly on the server). We have to check out\ndocuments, work with them and then check the next version in.\n\nWhats about an configuration-file entry, in the matter\nLO_DIR=/directory or none (which is the default).\nFor our product we want to be compatible with the original sources of Pg,\navoiding own patches in every new version.\n\nWhat do you think about this idea? Do you have any other suggestions for\nserverside lo-ing, without granting every user superuser-privileges?\n\nRegards, Klaus\n\n\n\n\n\n",
"msg_date": "Fri, 2 Nov 2001 10:40:11 +0100 (CET)",
"msg_from": "\"Klaus Reger\" <K.Reger@twc.de>",
"msg_from_op": false,
"msg_subject": "Re: import/export of large objects on server-side"
}
] |
[
{
"msg_contents": "hi, there!\n\nIf I have two tables, first with primary key, second references to first\nand in one transaction I insert row in any of these tables and when try to\ndelete it, I receive : \n\nERROR: triggered data change violation on relation 'xxx'\n\nHere is an example:\n\ntemp=# CREATE TABLE prim (i int primary key);\nNOTICE: CREATE TABLE/PRIMARY KEY will create implicit index 'prim_pkey'\nfor table 'prim'\nCREATE\ntemp=# create table fore (j int references prim);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY\ncheck(s)\nCREATE\ntemp=# insert into prim values(1);\nINSERT 85836 1\ntemp=# begin;\nBEGIN\ntemp=# INSERT INTO fore values (1);\nINSERT 85837 1\ntemp=# delete from fore;\nERROR: triggered data change violation on relation \"fore\"\ntemp=# rollback;\nROLLBACK\ntemp=# begin;\nBEGIN\ntemp=# INSERT INTO prim VALUES (2);\nINSERT 85880 1\ntemp=# DELETE from prim where i = 2;\nERROR: triggered data change violation on relation \"prim\"\ntemp=# rollback;\nROLLBACK\ntemp=# select version();\n version \n-------------------------------------------------------------\n PostgreSQL 7.1.3 on i686-pc-linux-gnu, compiled by GCC 2.96\n(1 row)\n\n\n/Constantin\n\n",
"msg_date": "Thu, 1 Nov 2001 17:41:51 +0600 (NOVT)",
"msg_from": "\"Constantin S. Svintsoff\" <cs@newst.net>",
"msg_from_op": true,
"msg_subject": "ERROR: Triggered data change violation."
}
] |
[
{
"msg_contents": "\nStarting at about 4:30pm ADT this afternoon, as was previously alluded to,\nthe server will be going down for several hours while we migrate it to our\nnew server ...\n\nThe only thing that should be affected by this downtime is the mailing\nlists themselves, as www.postgresql.org is already on a different server,\nand the mirror sites are all still active ...\n\nDue to the virtual machine nature of the server, nothing will be changed,\nmoved or lost, and no permissions will be changed, in the move ... the IP\nwill change though, so if you DNS doesn't pick up the change fast enough,\nit will become:\n\n\t\t\t64.49.215.8\n\n\n\n",
"msg_date": "Thu, 1 Nov 2001 09:56:17 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Server going down for several hours ..."
}
] |
[
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\n\nI've had two overnight crashes with postgresql-7.2b1. Neither logged any\nuseful info in the logfile created by pg_ctl, syslog or messages.\n\nThe server has one user database with 1148 records. It is however queried\nfor each incoming email. It failed 2 regressions test the time timetz\nwhich was discussed and geometry which it always fails. Postgresql-7.1.x\nwas running rather smoothly on this box.\n\nThe box is an i586, running linux 2.4 with gcc-3.0.2, binutils-2.11.2,\nglibc-2.2.x. If you need anymore info or have any suggestions that might\nhelp debug this feel free to mail me.\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: Made with pgp4pine 1.76\n\niEYEARECAAYFAjvhZakACgkQwtU6L/A4vVBrRQCgkQSxwwkX2QfNm+tdAW+UxDGm\nT3IAniTGEcImI1i/Ggbbhy9dfGfUPBDQ\n=cgeK\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Thu, 1 Nov 2001 10:09:20 -0500 (EST)",
"msg_from": "\"Mr. Shannon Aldinger\" <god@yinyang.hjsoft.com>",
"msg_from_op": true,
"msg_subject": "Posgresql 7.2b1 crashes"
},
{
"msg_contents": "\"Mr. Shannon Aldinger\" <god@yinyang.hjsoft.com> writes:\n> I've had two overnight crashes with postgresql-7.2b1. Neither logged any\n> useful info in the logfile created by pg_ctl, syslog or messages.\n\nPlease define \"crash\". If it was a coredump, how about a stack\nbacktrace? Can you determine what query it was executing?\n\nWhile I'd like to help you, you have not provided one single bit of\ninformation that could possibly be used to identify the problem ...\n\n> glibc-2.2.x.\n\n... except perhaps that. If you compiled with --enable-locale, an\nupdate to glibc 2.2.3 is strongly advised. There's a nasty bug in\nstrcoll() in 2.2.x.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Nov 2001 11:49:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Posgresql 7.2b1 crashes "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Thu, 1 Nov 2001, Tom Lane wrote:\n\n> \"Mr. Shannon Aldinger\" <god@yinyang.hjsoft.com> writes:\n> > I've had two overnight crashes with postgresql-7.2b1. Neither logged any\n> > useful info in the logfile created by pg_ctl, syslog or messages.\n>\n> Please define \"crash\". If it was a coredump, how about a stack\n> backtrace? Can you determine what query it was executing?\n>\nI don't have a core file, it died overnight both times so i don't know\nexactly but I can give you the general query it performs. By crash i mean\nthe postmaster process is gone along with it's sub-processes or threads.\n\nIt runs several hundred of these queries per day:\nselect error from accessdb where lower(email)=lower('%s') limit 1;\n%s is usually replaced with an email address, domain name or ip address.\n\n\n> While I'd like to help you, you have not provided one single bit of\n> information that could possibly be used to identify the problem ...\n>\nSorry, but i've been unable to gather all that much about the problem.\nI started postmaster with -B 512 -N 64 -i, I'm going to try to up the\ndebugging level and see if it gives anymore incite into why it crashed.\n\n> > glibc-2.2.x.\n>\n> ... except perhaps that. If you compiled with --enable-locale, an\n> update to glibc 2.2.3 is strongly advised. There's a nasty bug in\n> strcoll() in 2.2.x.\n>\nI think i'm running 2.2.3, but i'm not 100% sure.\nfrom config.status:\n./configure --enable-multibyte --with-maxbackends=128 --with-openssl\n- --enable-odbc --with-CXX --with-gnu-ld --enable-syslog\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: Made with pgp4pine 1.76\n\niEYEARECAAYFAjvhkZQACgkQwtU6L/A4vVDD+gCfeOlPaEgRtdtRtjy6Ku7l2/jh\nM/0An2OT5vNFrfx2vc5FjpzccAiBi2sg\n=Ry99\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Thu, 1 Nov 2001 13:16:43 -0500 (EST)",
"msg_from": "\"Mr. Shannon Aldinger\" <god@yinyang.hjsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Posgresql 7.2b1 crashes "
},
{
"msg_contents": "\"Mr. Shannon Aldinger\" <god@yinyang.hjsoft.com> writes:\n> I don't have a core file, it died overnight both times so i don't know\n> exactly but I can give you the general query it performs. By crash i mean\n> the postmaster process is gone along with it's sub-processes or threads.\n\nPostmaster dies too? Wow. If you aren't seeing a core file, perhaps\nit's because you are starting the postmaster under \"ulimit -c 0\".\nYou need the process context to be \"ulimit -c unlimited\" to allow cores\nto be dropped. Might be worth running with -d 2 to enable query logging\nas well.\n\n>> ... except perhaps that. If you compiled with --enable-locale, an\n>> update to glibc 2.2.3 is strongly advised. There's a nasty bug in\n>> strcoll() in 2.2.x.\n>> \n> I think i'm running 2.2.3, but i'm not 100% sure.\n> from config.status:\n> ./configure --enable-multibyte --with-maxbackends=128 --with-openssl\n> - --enable-odbc --with-CXX --with-gnu-ld --enable-syslog\n\nSince you didn't use --enable-locale, it's irrelevant; AFAIK we don't\ncall strcoll() unless that option's been selected. The known forms of\nthe strcoll problem wouldn't cause a postmaster crash anyway, only\nbackend crashes. So you've got something new. Please keep us posted.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Nov 2001 13:51:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Posgresql 7.2b1 crashes "
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Thu, 1 Nov 2001, Tom Lane wrote:\n\n> \"Mr. Shannon Aldinger\" <god@yinyang.hjsoft.com> writes:\n> > I don't have a core file, it died overnight both times so i don't know\n> > exactly but I can give you the general query it performs. By crash i mean\n> > the postmaster process is gone along with it's sub-processes or threads.\n>\n> Postmaster dies too? Wow. If you aren't seeing a core file, perhaps\n> it's because you are starting the postmaster under \"ulimit -c 0\".\n> You need the process context to be \"ulimit -c unlimited\" to allow cores\n> to be dropped. Might be worth running with -d 2 to enable query logging\n> as well.\n>\n I'm not sure about the ulimit, I restarted postmaster with -d 1 a few\nminutes ago, I'll check the ulimit and restart it again and hopefully it\ndies with some useful info this time.\n\n> Since you didn't use --enable-locale, it's irrelevant; AFAIK we don't\n> call strcoll() unless that option's been selected. The known forms of\n> the strcoll problem wouldn't cause a postmaster crash anyway, only\n> backend crashes. So you've got something new. Please keep us posted.\n>\n\nI probably won't have any new info till tommorow morning EST, it died once\naround 4am, the other at 5:25am so it's kinda hard to tell what made it go\nbelly up at this point.\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: Made with pgp4pine 1.76\n\niEYEARECAAYFAjvhmvkACgkQwtU6L/A4vVDjuwCdG0UhoVvE4weow0P1wPxAZuha\nhioAoK9SttvMZadAyoAJxKQgFTHHWmb8\n=Hjr9\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Thu, 1 Nov 2001 13:56:48 -0500 (EST)",
"msg_from": "\"Mr. Shannon Aldinger\" <god@yinyang.hjsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Posgresql 7.2b1 crashes "
},
{
"msg_contents": "\"Mr. Shannon Aldinger\" <god@yinyang.hjsoft.com> writes:\n> I probably won't have any new info till tommorow morning EST, it died once\n> around 4am, the other at 5:25am so it's kinda hard to tell what made it go\n> belly up at this point.\n\nOkay. One thing to keep in mind is that the postmaster will drop core\nin whatever directory you are in when you start it, whereas individual\nbackends drop core in the $PGDATA/base/dbnumber/ subdirectory of the\ndatabase they are attached to.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Nov 2001 14:10:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Posgresql 7.2b1 crashes "
},
{
"msg_contents": "\n\n",
"msg_date": "Thu, 01 Nov 2001 17:15:37 -0200",
"msg_from": "Sergio Okida <sokida@organox.com.br>",
"msg_from_op": false,
"msg_subject": "Re: Posgresql 7.2b1 crashes"
},
{
"msg_contents": "On Thu, 1 Nov 2001, Mr. Shannon Aldinger wrote:\n\n> > > glibc-2.2.x.\n> >\n> > ... except perhaps that. If you compiled with --enable-locale, an\n> > update to glibc 2.2.3 is strongly advised. There's a nasty bug in\n> > strcoll() in 2.2.x.\n\n> I think i'm running 2.2.3, but i'm not 100% sure.\n\nTry:\n$ echo /lib/libc-2.2*.so\n\nMatthew.\n\n",
"msg_date": "Thu, 1 Nov 2001 19:29:07 +0000 (GMT)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": false,
"msg_subject": "Re: Posgresql 7.2b1 crashes"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Mr. Shannon Aldinger\" <god@yinyang.hjsoft.com> writes:\n> > I don't have a core file, it died overnight both times so i don't know\n> > exactly but I can give you the general query it performs. By crash i mean\n> > the postmaster process is gone along with it's sub-processes or threads.\n> \n> Postmaster dies too? Wow. If you aren't seeing a core file, perhaps\n> it's because you are starting the postmaster under \"ulimit -c 0\".\n> You need the process context to be \"ulimit -c unlimited\" to allow cores\n> to be dropped. Might be worth running with -d 2 to enable query logging\n> as well.\n\nI have seen the same thing, and I have been trying to reproduce it. I know for\na fact that it was in the middle of : (In a C application using libpq.)\n\ndeclare temp_curs binary cursor for select scene_name_full, track from\nfavorites where rating > 6 and track < 1000000000 order by scene_name_full\n\nPerforming a loop on:\n\nfetch 1000 from temp_curs\n\nI have lots of memory, lots of disk, I'm pretty sure it isn't a resource issue.\n(It could be a shared memory issue?) I have not been able to reproduce it. Hope\nthis helps.\n\n\ncdinfo=# explain select scene_name_full, track from favorites where rating >\n6 and track < 1000000000 order by scene_name_full;\nNOTICE: QUERY PLAN:\n\nSort (cost=517675.50..517675.50 rows=2091003 width=18)\n -> Seq Scan on favorites (cost=0.00..135699.52 rows=2091003 width=18)\n\nEXPLAIN\n",
"msg_date": "Thu, 01 Nov 2001 14:54:10 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Posgresql 7.2b1 crashes"
},
{
"msg_contents": "On Thu, Nov 01, 2001 at 01:51:21PM -0500, Tom Lane wrote:\n\n> > I think i'm running 2.2.3, but i'm not 100% sure.\n> > from config.status:\n> > ./configure --enable-multibyte --with-maxbackends=128 --with-openssl\n> > - --enable-odbc --with-CXX --with-gnu-ld --enable-syslog\n> \n> Since you didn't use --enable-locale, it's irrelevant; AFAIK we don't\n> call strcoll() unless that option's been selected. The known forms of\n> the strcoll problem wouldn't cause a postmaster crash anyway, only\n> backend crashes. So you've got something new. Please keep us posted.\n\n May be try compile it --enable-cassert.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Fri, 2 Nov 2001 09:48:21 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: Posgresql 7.2b1 crashes"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nOn Thu, 1 Nov 2001, Tom Lane wrote:\n\n> Postmaster dies too? Wow. If you aren't seeing a core file, perhaps\n> it's because you are starting the postmaster under \"ulimit -c 0\".\n> You need the process context to be \"ulimit -c unlimited\" to allow cores\n> to be dropped. Might be worth running with -d 2 to enable query logging\n> as well.\n>\nThere were no core files dropped under the data directory. However I did\nget a core file in ~postgres, presumably it's from the postmaster. I also\nput up the logfile which doesn't really contain anything too intresting at\nthe end.\n\nThe core and logfile can be found at:\nhttp://yinyang.hjsoft.com/core.gz\nhttp://yinyang.hjsoft.com/logfile.gz\n\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: Made with pgp4pine 1.76\n\niEYEARECAAYFAjvioVkACgkQwtU6L/A4vVBE8gCePeyyZbUBaNKE+qtSqi+BbZmp\nxDgAn24Win7EAkAWoRrq98keiMqHPAzx\n=0TDa\n-----END PGP SIGNATURE-----\n\n\n",
"msg_date": "Fri, 2 Nov 2001 08:36:12 -0500 (EST)",
"msg_from": "\"Mr. Shannon Aldinger\" <god@yinyang.hjsoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Posgresql 7.2b1 crashes "
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> May be try compile it --enable-cassert.\n\nExcellent recommendation.\n\n(Actually, I'd recommend --enable-cassert for anyone working with beta\ncode, whether you're currently chasing a problem or not. I'm not sure\nit's appropriate for production servers, because it turns what might be\nrelatively harmless errors into database restarts; but for development\nand testing it's essential.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Nov 2001 10:26:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Posgresql 7.2b1 crashes "
}
] |
[
{
"msg_contents": "The password-file cache implemented by src/backend/libpq/crypt.c is\nnow dysfunctional, because it is only loaded when a password check is\nrequested, which is after the postmaster's child process has forked\naway from the postmaster. The cache is always empty in the postmaster,\nand every new backend will read up and cache the whole file before\nprobing the cache ... once.\n\nOne fairly reasonable solution would be to have the postmaster load\nthe cache when receiving SIGHUP (when it also reloads its other config\nfiles). Then we could remove the password-file-reload-flag-file\nmechanism in favor of just kill(getppid(), SIGHUP), a mechanism we\nalready use in other places.\n\nIf we don't do that, I am strongly inclined to remove the password cache\nmechanism and just allow the code to reread pg_pwd when checking a\npassword.\n\nIf we do keep the cache, I think I will also tweak crypt.c to store\nthe cache in PostmasterContext palloc space, rather than malloc space,\nso that it will be freed when entering a new backend.\n\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Nov 2001 13:22:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Password-file caching is broken"
},
{
"msg_contents": "> The password-file cache implemented by src/backend/libpq/crypt.c is\n> now dysfunctional, because it is only loaded when a password check is\n> requested, which is after the postmaster's child process has forked\n> away from the postmaster. The cache is always empty in the postmaster,\n> and every new backend will read up and cache the whole file before\n> probing the cache ... once.\n\nYikes.\n\n> One fairly reasonable solution would be to have the postmaster load\n> the cache when receiving SIGHUP (when it also reloads its other config\n> files). Then we could remove the password-file-reload-flag-file\n> mechanism in favor of just kill(getppid(), SIGHUP), a mechanism we\n> already use in other places.\n\nI like kill() much better. I never liked that file-flag thing.\n\n> If we don't do that, I am strongly inclined to remove the password cache\n> mechanism and just allow the code to reread pg_pwd when checking a\n> password.\n> \n> If we do keep the cache, I think I will also tweak crypt.c to store\n> the cache in PostmasterContext palloc space, rather than malloc space,\n> so that it will be freed when entering a new backend.\n\nGood idea.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Nov 2001 15:12:38 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Password-file caching is broken"
}
] |
[
{
"msg_contents": "\"Thomas Yackel\" <yackelt@ohsu.edu> writes:\n> I was quite surprised that such a small input error could cause the\n> backend to shutdown. Should psql remove [CR]s that are contained\n> within ''? (at least for this command)?\n\nI have committed changes that forbid linefeeds and tabs within passwords\nand usernames. This should be sufficient to prevent the pg_pwd parser\nfrom becoming confused.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 01 Nov 2001 13:37:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: user authentication crash by Erik Luke (20-08-2001; 1.3kb) "
}
] |
[
{
"msg_contents": "My spam filter was misconfigured and I bounced back some messages I\nshouldn't have. Sorry.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Nov 2001 13:49:00 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Sorry for blocking email"
}
] |
[
{
"msg_contents": "Can someone look at this compiler warning I am seeing in ODBC:\n\n---------------------------------------------------------------------------\n\ngcc -O2 -pipe -m486 -Wall -Wmissing-prototypes -Wmissing-declarations -g -Wall -\nO1 -Wmissing-prototypes -Wmissing-declarations -fpic -I. -I../../../src/include \n-I/usr/local/include/readline -I/usr/contrib/include -DODBCINSTDIR='\"/usr/local/\npgsql/etc\"' -c -o info.o info.c\ninfo.c: In function `PGAPI_ForeignKeys':\ninfo.c:2901: warning: `pkey_text' might be used uninitialized in this function\ninfo.c:2903: warning: `fkey_text' might be used uninitialized in this function\ninfo.c:2905: warning: `pkt_text' might be used uninitialized in this function\ninfo.c:2907: warning: `fkt_text' might be used uninitialized in this function\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Nov 2001 15:11:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "compiler warnings in ODBC"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Can someone look at this compiler warning I am seeing in ODBC:\n> \n> ---------------------------------------------------------------------------\n> \n> gcc -O2 -pipe -m486 -Wall -Wmissing-prototypes -Wmissing-declarations -g -Wall -\n> O1 -Wmissing-prototypes -Wmissing-declarations -fpic -I. -I../../../src/include\n> -I/usr/local/include/readline -I/usr/contrib/include -DODBCINSTDIR='\"/usr/local/\n> pgsql/etc\"' -c -o info.o info.c\n> info.c: In function `PGAPI_ForeignKeys':\n> info.c:2901: warning: `pkey_text' might be used uninitialized in this function\n> info.c:2903: warning: `fkey_text' might be used uninitialized in this function\n> info.c:2905: warning: `pkt_text' might be used uninitialized in this function\n> info.c:2907: warning: `fkt_text' might be used uninitialized in this function\n> \n\nHmm you seem to be compiling it with multibyte enabled.\nOK I would suppress the warnings.\n\nBTW why are people configuring with --enable-odbc ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 02 Nov 2001 09:48:38 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "> Bruce Momjian wrote:\n> > \n> > Can someone look at this compiler warning I am seeing in ODBC:\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > gcc -O2 -pipe -m486 -Wall -Wmissing-prototypes -Wmissing-declarations -g -Wall -\n> > O1 -Wmissing-prototypes -Wmissing-declarations -fpic -I. -I../../../src/include\n> > -I/usr/local/include/readline -I/usr/contrib/include -DODBCINSTDIR='\"/usr/local/\n> > pgsql/etc\"' -c -o info.o info.c\n> > info.c: In function `PGAPI_ForeignKeys':\n> > info.c:2901: warning: `pkey_text' might be used uninitialized in this function\n> > info.c:2903: warning: `fkey_text' might be used uninitialized in this function\n> > info.c:2905: warning: `pkt_text' might be used uninitialized in this function\n> > info.c:2907: warning: `fkt_text' might be used uninitialized in this function\n> > \n> \n> Hmm you seem to be compiling it with multibyte enabled.\n> OK I would suppress the warnings.\n\nThanks.\n\n> BTW why are people configuring with --enable-odbc ?\n\nI enable all I can so I can check more of the code during a compile.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 1 Nov 2001 20:19:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian wrote:\n> > >\n> > > Can someone look at this compiler warning I am seeing in ODBC:\n> > >\n> > > \n> > >\n> \n> > BTW why are people configuring with --enable-odbc ?\n> \n> I enable all I can so I can check more of the code during a compile.\n> \n\nISTM neither you nor Philip Warner would use the driver\nin reality. I'm suspicious if --enable-odbc has a meaning\nwithout the environment. We could have 3 kind of ODBC\ndrivers under unix now.\n1) stand-alone driver made with --enable-odbc.\n2) iODBC driver made with --with-iodbc. \n3) unixODBC driver made with --with-unixODBC.\n\nBecause they are exclusive, it seems to have little meaning\nto make 1) in advance. In addition it seems misleading if\npeople would regard 1) as the standard PG driver.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 02 Nov 2001 15:15:32 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "> ISTM neither you nor Philip Warner would use the driver\n> in reality. I'm suspicious if --enable-odbc has a meaning\n> without the environment. We could have 3 kind of ODBC\n> drivers under unix now.\n> 1) stand-alone driver made with --enable-odbc.\n> 2) iODBC driver made with --with-iodbc. \n> 3) unixODBC driver made with --with-unixODBC.\n> \n> Because they are exclusive, it seems to have little meaning\n> to make 1) in advance. In addition it seems misleading if\n> people would regard 1) as the standard PG driver.\n\nI never run the code, just compile it. In fact, I don't use PostgreSQL\nat all except for PostgreSQL development, and never have.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 2 Nov 2001 07:47:25 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can someone look at this compiler warning I am seeing in ODBC:\n\nFixed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Nov 2001 12:19:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Hiroshi Inoue writes:\n> \n> > ISTM neither you nor Philip Warner would use the driver\n> > in reality. I'm suspicious if --enable-odbc has a meaning\n> > without the environment. We could have 3 kind of ODBC\n> > drivers under unix now.\n> > 1) stand-alone driver made with --enable-odbc.\n> > 2) iODBC driver made with --with-iodbc.\n> > 3) unixODBC driver made with --with-unixODBC.\n> >\n> > Because they are exclusive, it seems to have little meaning\n> > to make 1) in advance. In addition it seems misleading if\n> > people would regard 1) as the standard PG driver.\n> \n> It probably doesn't make the greatest possible sense, but it's backward\n> compatible and consistent with typical configure options.\n> \n> Btw., to get the iODBC driver, you need both options --with-iodbc and\n> --enable-odbc. If you only use the former, you get nothing at all.\n> Again, this could conceivably be done differently.\n\n--enable-odbc=standalone\n--enable-odbc=iODBC\n--enable-odbc=unixODBC\n\ncould be the logical way to do it ?\n\n----------------\nHannu\n",
"msg_date": "Sun, 04 Nov 2001 15:41:33 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "Hiroshi Inoue writes:\n\n> ISTM neither you nor Philip Warner would use the driver\n> in reality. I'm suspicious if --enable-odbc has a meaning\n> without the environment. We could have 3 kind of ODBC\n> drivers under unix now.\n> 1) stand-alone driver made with --enable-odbc.\n> 2) iODBC driver made with --with-iodbc.\n> 3) unixODBC driver made with --with-unixODBC.\n>\n> Because they are exclusive, it seems to have little meaning\n> to make 1) in advance. In addition it seems misleading if\n> people would regard 1) as the standard PG driver.\n\nIt probably doesn't make the greatest possible sense, but it's backward\ncompatible and consistent with typical configure options.\n\nBtw., to get the iODBC driver, you need both options --with-iodbc and\n--enable-odbc. If you only use the former, you get nothing at all.\nAgain, this could conceivably be done differently.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 4 Nov 2001 14:04:20 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "> --enable-odbc=standalone\n> --enable-odbc=iODBC\n> --enable-odbc=unixODBC\n>\n> could be the logical way to do it ?\n>\n\nThis seems to make sense to me. Also; it would be nice if someone would write \na couple of m4 macros for detecting defaults (i.e. Is unixODBC being used on \nthe system? Is iODBC being used on the system?).\n\n-- \nPeter Harvey\nCodeByDesign - http://www.codebydesign.com\nDataArchitect - http://www.codebydesign.com/DataArchitect\n",
"msg_date": "Sun, 4 Nov 2001 12:05:30 -0800",
"msg_from": "Peter Harvey <pharvey@codebydesign.com>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Hiroshi Inoue writes:\n> \n> > ISTM neither you nor Philip Warner would use the driver\n> > in reality. I'm suspicious if --enable-odbc has a meaning\n> > without the environment. We could have 3 kind of ODBC\n> > drivers under unix now.\n> > 1) stand-alone driver made with --enable-odbc.\n> > 2) iODBC driver made with --with-iodbc.\n> > 3) unixODBC driver made with --with-unixODBC.\n> >\n> > Because they are exclusive, it seems to have little meaning\n> > to make 1) in advance. In addition it seems misleading if\n> > people would regard 1) as the standard PG driver.\n> \n> It probably doesn't make the greatest possible sense, but it's backward\n> compatible and consistent with typical configure options.\n\nThere seems to be pretty many users who only compile\nthe driver but how many real users are there ?\nThe driver hasn't been easy to use with iODBC and\nunfortunately I remember no response from PG users\nto the postings on ML like .. I can't connect to ..\nIMHO users shouldn't specify the option --enable-odbc\naimlessly and should choose either iODBC, unixODBC\nor stand-alone consciouly.\n\n> \n> Btw., to get the iODBC driver, you need both options --with-iodbc and\n> --enable-odbc. If you only use the former, you get nothing at all.\n\nReally ?\nI see the following in ./configure.\n\nif test \"$with_unixodbc\" = yes || test \"$with_iodbc\" = yes; then\n enable_odbc=yes\nfi\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 05 Nov 2001 11:41:45 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "> There seems to be pretty many users who only compile\n> the driver but how many real users are there ?\n> The driver hasn't been easy to use with iODBC and\n> unfortunately I remember no response from PG users\n> to the postings on ML like .. I can't connect to ..\n> IMHO users shouldn't specify the option --enable-odbc\n> aimlessly and should choose either iODBC, unixODBC\n> or stand-alone consciouly.\n\nThe version I use works great with unixODBC. Also; I would imagine that \n*most* people using ODBC would want to use a Driver Manager.\n\n-- \nPeter Harvey\nCodeByDesign - http://www.codebydesign.com\nDataArchitect - http://www.codebydesign.com/DataArchitect\n",
"msg_date": "Sun, 4 Nov 2001 21:06:34 -0800",
"msg_from": "Peter Harvey <pharvey@codebydesign.com>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Hannu Krosing writes:\n> \n> > > Btw., to get the iODBC driver, you need both options --with-iodbc and\n> > > --enable-odbc. If you only use the former, you get nothing at all.\n> > > Again, this could conceivably be done differently.\n> >\n> > --enable-odbc=standalone\n> > --enable-odbc=iODBC\n> > --enable-odbc=unixODBC\n> >\n> > could be the logical way to do it ?\n> \n> Logical maybe, but not consistent with typical configure options. If you\n> want to build your package while using some other package for it, then the\n> option is --with-package. I could imagine making --enable-odbc optional\n> in that case, though.\n>\n\nMy understanding was that different ODBC variants were mutually\nexclusive \nwhile other packages are not. Thus a separate option. it could be also\n\n--with-odbc=xxxODBC or --enable-odbc=xxxODBC\n\n------------\nHannu\n",
"msg_date": "Tue, 06 Nov 2001 01:00:39 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "Hannu Krosing writes:\n\n> > Btw., to get the iODBC driver, you need both options --with-iodbc and\n> > --enable-odbc. If you only use the former, you get nothing at all.\n> > Again, this could conceivably be done differently.\n>\n> --enable-odbc=standalone\n> --enable-odbc=iODBC\n> --enable-odbc=unixODBC\n>\n> could be the logical way to do it ?\n\nLogical maybe, but not consistent with typical configure options. If you\nwant to build your package while using some other package for it, then the\noption is --with-package. I could imagine making --enable-odbc optional\nin that case, though.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 5 Nov 2001 21:05:03 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "Peter Harvey writes:\n\n> Also; it would be nice if someone would write a couple of m4 macros\n> for detecting defaults (i.e. Is unixODBC being used on the system? Is\n> iODBC being used on the system?).\n\nThis is sort of my long-term idea, i.e., just use \"the\" ODBC driver that's\navailable. However, I'm not sure if it's entirely appropriate to do that,\nfor a number of reasons, one of which is that unixODBC and iODBC aren't\npretending to be compatible, another is that if this were the way to go\nthen we wouldn't really have a place for the standalone driver. For now,\nI think giving people the explicit choice is a good way to see where this\nis going at all.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Mon, 5 Nov 2001 21:05:13 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
},
{
"msg_contents": "On Monday 05 November 2001 12:05, Peter Eisentraut wrote:\n> Peter Harvey writes:\n> > Also; it would be nice if someone would write a couple of m4 macros\n> > for detecting defaults (i.e. Is unixODBC being used on the system? Is\n> > iODBC being used on the system?).\n>\n> This is sort of my long-term idea, i.e., just use \"the\" ODBC driver that's\n> available. However, I'm not sure if it's entirely appropriate to do that,\n> for a number of reasons, one of which is that unixODBC and iODBC aren't\n> pretending to be compatible, another is that if this were the way to go\n> then we wouldn't really have a place for the standalone driver. For now,\n> I think giving people the explicit choice is a good way to see where this\n> is going at all.\n\nunixODBC and iODBC have actually been working towards the same goal. The \nspecification is clear enough. The difference is that they have had different \nlevels of resources and different priorities.\n\nHaving said that; I am just happy to see people taking the time to work these \nissues out :)\n\n-- \nPeter Harvey\nCodeByDesign - http://www.codebydesign.com\nDataArchitect - http://www.codebydesign.com/DataArchitect\n",
"msg_date": "Mon, 5 Nov 2001 19:47:45 -0800",
"msg_from": "Peter Harvey <pharvey@codebydesign.com>",
"msg_from_op": false,
"msg_subject": "Re: compiler warnings in ODBC"
}
] |
[
{
"msg_contents": "Hi all.\n\nI had a lot of problems upgrading from 7.1 to 7.2 with fields of\ntype oid. I ended up hand editing the dump to change them all to\ninteger. In my case, they should have been integer anyway, but there\nare legitimate uses for oid fields. Sorry if I can't be any more\nexplicit about the problem I havn't had time to delve deep. But\nit didn't seem to like nulls in oid columns. It also may or may\nnot be related to having an index on those oid columns. I suggest\nsomeone may want to make sure that an oid column in 7.2 that contains\nsome nulls and possibly has an index can be properly dumped and\nrestored, and preferably also check that it can be dumped from 7.1\nand restored 7.2.\n\n",
"msg_date": "Fri, 02 Nov 2001 13:51:33 +1100",
"msg_from": "Chris Bitmead <chris@bitmead.com>",
"msg_from_op": true,
"msg_subject": "OID problem 7.2"
}
] |
[
{
"msg_contents": "\n> > so it's linear growth here\n> This is what my colleague was afraid of: We would have linear growth\n> compared to the log(n) growth which is to be expected on MS SQL server\n\nThis is not true, since the index scan also neads to read the leaf pages\nin MS Sql. The number of leaf pages grows linear with number of rows\nthat qualify the where restriction.\n\nR = number of rows that qualify\n--> O(R + log(R))\n\nThe pg measurements showed, that PostgreSQL query performance can be\nexpected\nto stay nearly the same regardless of number of rows in the table as\nlong as \nthe number of rows that qualify the where restriction stays constant.\nThe response time is linear to the number of rows that qualify the where\n\nrestriction, but that linear behavior is also expected with MS Sql.\n\nAndreas\n",
"msg_date": "Fri, 2 Nov 2001 09:33:57 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
}
] |
[
{
"msg_contents": "At 10:22 1/11/01 -0500, Tom Lane wrote:\n>The best I could offer\n>you (short of a complete redesign of subqueries) would be to not pull up\n>views that have any subqueries, which would probably be a net loss.\n\nThat's probably true 90% percent of the time; it would be interesting to be\nable to turn this on & off on a per-query basis (or even a per-view basis).\nIs this hard?\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Fri, 02 Nov 2001 20:16:24 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Another planner/optimizer question... "
}
] |
[
{
"msg_contents": "Hi,\n\nWhat's wrong with the patch mailingslist ? I can't read the history (page not \nfound). I have subscribed to the list, it's confirmed by mail, but i get no \nmail. I've posted a little patch, but i don't see it in de mailingslist.\nHave i done something wrong or ...\n\nFerdinand Smit\n",
"msg_date": "Fri, 2 Nov 2001 14:22:43 +0100",
"msg_from": "Ferdinand Smit <ferdinand@telegraafnet.nl>",
"msg_from_op": true,
"msg_subject": "The patch mailingslist"
}
] |
[
{
"msg_contents": "\nIP may take a bit of time to propogate around, but the server is back up\nagain on the Rackspace server ...\n\n\n\n",
"msg_date": "Fri, 2 Nov 2001 08:24:22 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Back online ..."
}
] |
[
{
"msg_contents": "The regression tests now pass okay in PST season. However, I took what\nin hindsight is an obvious precaution: I set my system clock forward to\nnext summer and tried them. In PDT season we still have a problem (see\nattached diff). Will leave it to you to select the most appropriate fix.\n\nWhile I was at it, I tried setting the clock forward to winter and\nsummer of 2025, and got the same regression-test behavior as for now.\nSo at least we don't have any near-term \"oops, this date is now in the\npast\" dependencies, like the one that bit us last June.\n\n\t\t\tregards, tom lane\n\n\n*** ./expected/horology.out\tFri Oct 19 21:02:21 2001\n--- ./results/horology.out\tSun Jul 7 11:21:19 2002\n***************\n*** 555,561 ****\n + interval '1 month 04:01' as timestamp without time zone) AS time) AS \"07:31:00\";\n 07:31:00 \n ----------\n! 07:31:00\n (1 row)\n \n SELECT interval '04:30' - time with time zone '01:02-05' AS \"20:32:00-05\";\n--- 555,561 ----\n + interval '1 month 04:01' as timestamp without time zone) AS time) AS \"07:31:00\";\n 07:31:00 \n ----------\n! 08:31:00\n (1 row)\n \n SELECT interval '04:30' - time with time zone '01:02-05' AS \"20:32:00-05\";\n\n======================================================================\n\n",
"msg_date": "Fri, 02 Nov 2001 11:34:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Not there yet on regression-test DST independence"
},
{
"msg_contents": "> The regression tests now pass okay in PST season. However, I took what\n> in hindsight is an obvious precaution: I set my system clock forward to\n> next summer and tried them. In PDT season we still have a problem (see\n> attached diff). Will leave it to you to select the most appropriate fix.\n\nFixed. I had to water down the test, but it seems to pass (in November\n*and* June ;).\n\nSince this was horology.sql, some of the output templates will need to\nbe updated. All tests pass on my Linux box...\n\n - Thomas\n",
"msg_date": "Tue, 06 Nov 2001 16:39:41 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Not there yet on regression-test DST independence"
}
] |
[
{
"msg_contents": "\ndoes this get through?\n\n\n",
"msg_date": "Fri, 2 Nov 2001 12:57:15 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "checking things over ..."
},
{
"msg_contents": "On Fri, 2 Nov 2001, Marc G. Fournier wrote:\n\n>\n> does this get through?\n\nno, why?\n\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Fri, 2 Nov 2001 13:07:22 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: checking things over ..."
},
{
"msg_contents": "\nodd, looks like i go through to me ... *puzzled look*\n\nOn Fri, 2 Nov 2001, Vince Vielhaber wrote:\n\n> On Fri, 2 Nov 2001, Marc G. Fournier wrote:\n>\n> >\n> > does this get through?\n>\n> no, why?\n>\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\n",
"msg_date": "Fri, 2 Nov 2001 13:08:35 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: checking things over ..."
},
{
"msg_contents": "What exactly didn't get through?\nI got three emails since the shutdown: two from\nMarc and one from Vince... Perhaps \neveryone else in the list did too, I guess...\n\n-s\n\n----- Original Message ----- \nFrom: Marc G. Fournier <scrappy@hub.org>\nSent: Friday, November 02, 2001 1:08 PM\n\n> odd, looks like i go through to me ... *puzzled look*\n> \n> On Fri, 2 Nov 2001, Vince Vielhaber wrote:\n> \n> > On Fri, 2 Nov 2001, Marc G. Fournier wrote:\n> >\n> > >\n> > > does this get through?\n> >\n> > no, why?\n\n\n",
"msg_date": "Fri, 2 Nov 2001 16:16:54 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": false,
"msg_subject": "Re: checking things over ..."
},
{
"msg_contents": "Err... \n\nVince was being smart. Like, as he replied, then of course it went\nthrough.\n\n:)\n\n+ Justin\n\n\nSerguei Mokhov wrote:\n> \n> What exactly didn't get through?\n> I got three emails since the shutdown: two from\n> Marc and one from Vince... Perhaps\n> everyone else in the list did too, I guess...\n> \n> -s\n> \n> ----- Original Message -----\n> From: Marc G. Fournier <scrappy@hub.org>\n> Sent: Friday, November 02, 2001 1:08 PM\n> \n> > odd, looks like i go through to me ... *puzzled look*\n> >\n> > On Fri, 2 Nov 2001, Vince Vielhaber wrote:\n> >\n> > > On Fri, 2 Nov 2001, Marc G. Fournier wrote:\n> > >\n> > > >\n> > > > does this get through?\n> > >\n> > > no, why?\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sat, 03 Nov 2001 12:40:57 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: checking things over ..."
},
{
"msg_contents": "At 12:57 PM 11/2/01 -0500, Marc G. Fournier wrote:\n>\n>does this get through?\n>\n\n\"Can you hear me at the back?\"\n\nCrowd at back yells: \"NOOOOO!\"\n\n:)\n\nLink.\n\n\n",
"msg_date": "Sat, 03 Nov 2001 10:31:53 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: checking things over ..."
}
] |
[
{
"msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql\nChanges by:\ttgl@postgresql.org\t01/11/02 13:39:57\n\nModified files:\n\tdoc/src/sgml : client-auth.sgml runtime.sgml \n\tsrc/backend/commands: user.c \n\tsrc/backend/libpq: crypt.c \n\tsrc/backend/postmaster: postmaster.c \n\tsrc/include/libpq: crypt.h \n\nLog message:\n\tFix pg_pwd caching mechanism, which was broken by changes to fork\n\tpostmaster children before client auth step. Postmaster now rereads\n\tpg_pwd on receipt of SIGHUP, the same way that pg_hba.conf is handled.\n\tNo cycles need be expended to validate password cache validity during\n\tconnection startup.\n\n",
"msg_date": "Fri, 2 Nov 2001 13:39:57 -0500 (EST)",
"msg_from": "tgl@postgresql.org",
"msg_from_op": true,
"msg_subject": "pgsql/ oc/src/sgml/client-auth.sgml oc/src/sgm ..."
},
{
"msg_contents": "> CVSROOT:\t/cvsroot\n> Module name:\tpgsql\n> Changes by:\ttgl@postgresql.org\t01/11/02 13:39:57\n> \n> Modified files:\n> \tdoc/src/sgml : client-auth.sgml runtime.sgml \n> \tsrc/backend/commands: user.c \n> \tsrc/backend/libpq: crypt.c \n> \tsrc/backend/postmaster: postmaster.c \n> \tsrc/include/libpq: crypt.h \n> \n> Log message:\n> \tFix pg_pwd caching mechanism, which was broken by changes to fork\n> \tpostmaster children before client auth step. Postmaster now rereads\n> \tpg_pwd on receipt of SIGHUP, the same way that pg_hba.conf is handled.\n> \tNo cycles need be expended to validate password cache validity during\n> \tconnection startup.\n\nTom, does a client do a kill() to its parent on password change?\n\nIf this is true, people can't depend on editing pg_hba.conf and having\nthe change take affect _only_ when they sighup the postmaster. If\nsomeone changes a password, pg_hba.conf gets reread anyway, right? Not\na problem but something we should be aware of.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 2 Nov 2001 17:30:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/client-auth.sgml oc/src/sgm ..."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Fix pg_pwd caching mechanism, which was broken by changes to fork\n>> postmaster children before client auth step. Postmaster now rereads\n>> pg_pwd on receipt of SIGHUP, the same way that pg_hba.conf is handled.\n\n> Tom, does a client do a kill() to its parent on password change?\n\nRight, it's basically the same as the way we handle checkpoint and\nSI-overrun signaling:\n\n\t/*\n\t * Signal the postmaster to reload its password-file cache.\n\t */\n\tif (IsUnderPostmaster)\n\t\tkill(getppid(), SIGHUP);\n\n> If this is true, people can't depend on editing pg_hba.conf and having\n> the change take affect _only_ when they sighup the postmaster.\n\nTrue. But recall that in all previous releases it's been completely\nunsafe to edit pg_hba.conf in place, so I don't regard this as a big\nstep backwards.\n\nWe could possibly set up the password-file-reload action to occur on\nsome other, presently unused signal. But there aren't a lot of spare\nsignal numbers left, and I'm not eager to use one up for this...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Nov 2001 17:56:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/client-auth.sgml oc/src/sgm ... "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> Fix pg_pwd caching mechanism, which was broken by changes to fork\n> >> postmaster children before client auth step. Postmaster now rereads\n> >> pg_pwd on receipt of SIGHUP, the same way that pg_hba.conf is handled.\n> \n> > Tom, does a client do a kill() to its parent on password change?\n> \n> Right, it's basically the same as the way we handle checkpoint and\n> SI-overrun signaling:\n> \n> \t/*\n> \t * Signal the postmaster to reload its password-file cache.\n> \t */\n> \tif (IsUnderPostmaster)\n> \t\tkill(getppid(), SIGHUP);\n> \n> > If this is true, people can't depend on editing pg_hba.conf and having\n> > the change take affect _only_ when they sighup the postmaster.\n> \n> True. But recall that in all previous releases it's been completely\n> unsafe to edit pg_hba.conf in place, so I don't regard this as a big\n> step backwards.\n> \n> We could possibly set up the password-file-reload action to occur on\n> some other, presently unused signal. But there aren't a lot of spare\n> signal numbers left, and I'm not eager to use one up for this...\n\nI think your solution is fine. I just wanted to make it clear so we\ndon't encourage people to edit those files and wait around thinking they\ncan control when the reload happens. I will check the docs to make sure\nI didn't add any suggestion of that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 2 Nov 2001 18:16:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/client-auth.sgml oc/src/sgm ..."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think your solution is fine. I just wanted to make it clear so we\n> don't encourage people to edit those files and wait around thinking they\n> can control when the reload happens. I will check the docs to make sure\n> I didn't add any suggestion of that.\n\nI already looked and didn't see anyplace claiming it was safe to do\nthat. However, there's also not any explicit statement pointing out\nthat it's not safe.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 02 Nov 2001 18:19:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/client-auth.sgml oc/src/sgm ... "
},
{
"msg_contents": "Hi,\n\nI am looking at changing the jdbc driver to be asynchronous, in other\nwords retrieve the result set when asked as opposed to retrieving it all\nbefore returning.\n\nThis will involve understanding the client/backend protocol completely.\nSo far I have found one reference to this. Does anyone have a flow chart\nor can point me to any other references.\n\nDave\n\n",
"msg_date": "Sat, 3 Nov 2001 11:22:31 -0500",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Backend Protocol"
},
{
"msg_contents": "\"Dave Cramer\" <dave@fastcrypt.com> writes:\n> This will involve understanding the client/backend protocol completely.\n> So far I have found one reference to this. Does anyone have a flow chart\n> or can point me to any other references.\n\nThe only documentation I know of is the protocol chapter in the PG\nDeveloper's Guide:\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/protocol.html\n\nThe development-sources version of this has additional entries for the\nnew authentication methods added in 7.2. It's still missing any\ndiscussion of SSL encryption :-(\n\nIt would probably be useful to add something flowchart-ish to section\n4.2.2 to show the typical sequence of messages for various commands.\nHowever, I'd caution you against wiring in more assumptions than you\nabsolutely must about message order. It's best to build the client\nlibrary as a state machine that will accept any message type at any\ntime that the message makes any sense.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Nov 2001 13:02:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Backend Protocol "
},
{
"msg_contents": "Tom Lane writes:\n\n> > Tom, does a client do a kill() to its parent on password change?\n>\n> Right, it's basically the same as the way we handle checkpoint and\n> SI-overrun signaling:\n>\n> \t/*\n> \t * Signal the postmaster to reload its password-file cache.\n> \t */\n> \tif (IsUnderPostmaster)\n> \t\tkill(getppid(), SIGHUP);\n\nBut does this mean that postgresql.conf will be reread globally (i.e., by\nthe postmaster), when the user signals HUP only to a single backend? I\nguess this is actually a good idea, but it would be a change of\nfunctionality. Admittedly, the window of utility of signalling only a\nsingle backend is small, but let's make sure the hupping behavior is\ncorrectly documented, because this is possibly not expected.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://funkturm.homeip.net/~peter\n\n",
"msg_date": "Sun, 4 Nov 2001 14:05:24 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/client-auth.sgml"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> But does this mean that postgresql.conf will be reread globally (i.e., by\n> the postmaster), when the user signals HUP only to a single backend?\n\nNo. What it does mean is that ADD/DROP/ALTER USER will cause a global\nreread of the conf files. That is kinda annoying, I agree.\n\nWe could avoid this by using a different signal number, but there's a\nshortage of available signals. I was toying with the notion of unifying\nall three of the existing reasons for signalling the postmaster\n(SIGUSR1, SIGUSR2, SIGHUP) into a single child-to-parent signal number,\nsay SIGUSR1. A flag array in shared memory could be used to indicate\nwhat the reason(s) are for the most recent signal. This would actually\nfree up one signal number, which seems like a good idea in the long run.\nComments?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Nov 2001 11:13:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/client-auth.sgml oc/src/sgm ... "
},
{
"msg_contents": "> Peter Eisentraut <peter_e@gmx.net> writes:\n> > But does this mean that postgresql.conf will be reread globally (i.e., by\n> > the postmaster), when the user signals HUP only to a single backend?\n> \n> No. What it does mean is that ADD/DROP/ALTER USER will cause a global\n> reread of the conf files. That is kinda annoying, I agree.\n> \n> We could avoid this by using a different signal number, but there's a\n> shortage of available signals. I was toying with the notion of unifying\n> all three of the existing reasons for signalling the postmaster\n> (SIGUSR1, SIGUSR2, SIGHUP) into a single child-to-parent signal number,\n> say SIGUSR1. A flag array in shared memory could be used to indicate\n> what the reason(s) are for the most recent signal. This would actually\n> free up one signal number, which seems like a good idea in the long run.\n> Comments?\n\nWhile is not ideal, I am not too concerned that USER commands will\nreread all config files. Maybe we should wait to see if anyone reports\na problem with this behavior before adding code to correct it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Nov 2001 13:00:17 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/client-auth.sgml oc/src/sgm"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> While is not ideal, I am not too concerned that USER commands will\n> reread all config files. Maybe we should wait to see if anyone reports\n> a problem with this behavior before adding code to correct it.\n\nBut Peter's already complaining ;-) ... and you were concerned about it\ntoo, yesterday.\n\nI went ahead and made the changes, because I think we'd have been forced\ninto it sooner or later anyway. We didn't have any spare signal numbers\nin the postmaster, so the next new reason for child processes to signal\nthe postmaster would've forced creation of this mechanism anyhow.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Nov 2001 15:15:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/client-auth.sgml oc/src/sgm ... "
},
{
"msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > While is not ideal, I am not too concerned that USER commands will\n> > reread all config files. Maybe we should wait to see if anyone reports\n> > a problem with this behavior before adding code to correct it.\n> \n> But Peter's already complaining ;-) ... and you were concerned about it\n> too, yesterday.\n\nOh, I didn't know he was complaining. I was asking only so I understood\nthe behavior and could help people if it surprised them.\n\n> I went ahead and made the changes, because I think we'd have been forced\n> into it sooner or later anyway. We didn't have any spare signal numbers\n> in the postmaster, so the next new reason for child processes to signal\n> the postmaster would've forced creation of this mechanism anyhow.\n\nVery true.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Nov 2001 15:19:03 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/ oc/src/sgml/client-auth.sgml oc/src/sgm"
},
{
"msg_contents": "Dave,\n\nThis is documented in the Developer's Guide (Chapter 4 Frontend/Backend \nProtocol of the 7.1 docs).\n\nA few days ago I added this same item the the jdbc todo list. I don't \nthink the implementation will need to do anything different at the FE/BE \nprotocol level. Instead my thoughts on this are to wrap all jdbc \nStatements/Prepared Statements doing selects in a sql cursor, and then \nissuing fetch statements to set sets of rows instead of the entire \nresult set.\n\nThus 'select foo from bar'\n\nWhould get executed by the driver as:\n\ndeclare barcursor cursor as select foo from bar;\n\nfollowed by:\n\nfetch forward 10 from barcursor;\n\n(to select the first 10 rows)\n\nadditional fetches as needed...\n\nfinally:\n\nclose barcursor;\n\nJust when this should be done (i.e. you probably don't want to overhead \nof using a cursor for every select statement), how many rows to fetch at \na time (perhaps you might want to fetch 100 rows per call to minimize \nnetwork roundtrips) would need to be configurable somehow.\n\nAlso not the the jdbc2 spec has methods for setCursorName(), etc. and \njust how they would play into this I don't know.\n\nthanks,\n--Barry\n\n\n\nDave Cramer wrote:\n\n> Hi,\n> \n> I am looking at changing the jdbc driver to be asynchronous, in other\n> words retrieve the result set when asked as opposed to retrieving it all\n> before returning.\n> \n> This will involve understanding the client/backend protocol completely.\n> So far I have found one reference to this. Does anyone have a flow chart\n> or can point me to any other references.\n> \n> Dave\n> \n> \n\n\n",
"msg_date": "Mon, 05 Nov 2001 19:11:38 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: Backend Protocol"
}
] |
[
{
"msg_contents": "\nAnother mild planning oddity; this time, the query does not seem to rem,ove\nan unreferenced column from the plan. No big deal, but for larger queries\nit can significantly increase the cost.\n\ncreate table g(n text, rn text); \ncreate table r(n text, p int);\ncreate table t(p int, x int);\n\n-- Basically LOJ t->r->g, and return 'n' from g if found.\ncreate view tv as select\n\tt.p,\n\tg.n as gn,\n\tx\nfrom \n\tt left outer join r on (r.p=t.p)\n\tleft outer join g on (g.rn = r.n)\n\t;\n\nexplain select \n\t(select r.n from r where r.p=tv.p), -- no reference to gn!\n\tsum(x)\nFrom\n\ttv\nGroup by 1\n;\n\nAggregate (cost=3378.54..3503.54 rows=2500 width=76)\n -> Group (cost=3378.54..3441.04 rows=25000 width=76)\n -> Sort (cost=3378.54..3378.54 rows=25000 width=76)\n -> Merge Join (cost=584.18..911.68 rows=25000 width=76)\n -> Sort (cost=514.35..514.35 rows=5000 width=44)\n -> Merge Join (cost=139.66..207.16 rows=5000\nwidth=44)\n -> Sort (cost=69.83..69.83 rows=1000\nwidth=8)\n -> Seq Scan on t (cost=0.00..20.00\nrows=1000 width=8)\n -> Sort (cost=69.83..69.83 rows=1000\nwidth=36)\n -> Seq Scan on r (cost=0.00..20.00\nrows=1000 width=36)\n -> Sort (cost=69.83..69.83 rows=1000 width=32)\n!!!!!! -> Seq Scan on g (cost=0.00..20.00 rows=1000\nwidth=32)\n SubPlan\n!? -> Seq Scan on r (cost=0.00..22.50 rows=5 width=32)\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sat, 03 Nov 2001 19:43:25 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Another planner oddity"
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> explain select \n> \t(select r.n from r where r.p=tv.p), -- no reference to gn!\n> \tsum(x)\n> From\n> \ttv\n\nWhat's your point? We can't omit the join to g, as that would change\nthe set of returned rows. (In general, anyway; in this case the\ndependency is that multiple matches in g would change sum(x) for\nany given r.n.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Nov 2001 10:53:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Another planner oddity "
},
{
"msg_contents": "At 10:53 3/11/01 -0500, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>> explain select \n>> \t(select r.n from r where r.p=tv.p), -- no reference to gn!\n>> \tsum(x)\n>> From\n>> \ttv\n>\n>What's your point? We can't omit the join to g, as that would change\n>the set of returned rows. (In general, anyway; in this case the\n>dependency is that multiple matches in g would change sum(x) for\n>any given r.n.)\n\nOops. Left out too much. Make each of the ref'd tables unique (so only one\nmatch for given t.p):\n\ncreate table g(n text, rn text unique); \ncreate table r(n text, p int primary key);\ncreate table t(p int, x int);\n\ncreate view tv as select\n\tt.p,\n\tg.n as gn,\n\tx\nfrom \n\tt left outer join r on (r.p=t.p)\n\tleft outer join g on (g.rn = r.n)\n\t;\n\nexplain select \n\t(select r.n from r where r.p=tv.p), -- no reference to gn!\n\tsum(x)\nFrom\n\ttv\nGroup by 1\n;\n\nAggregate (cost=308.49..313.49 rows=100 width=76)\n -> Group (cost=308.49..310.99 rows=1000 width=76)\n -> Sort (cost=308.49..308.49 rows=1000 width=76)\n -> Merge Join (cost=189.16..258.66 rows=1000 width=76)\n -> Index Scan using g_rn_key on g (cost=0.00..52.00\nrows=1000 width=32)\n -> Sort (cost=189.16..189.16 rows=1000 width=44)\n -> Merge Join (cost=69.83..139.33 rows=1000\nwidth=44)\n -> Index Scan using r_pkey on r\n(cost=0.00..52.00 rows=1000 width=36)\n -> Sort (cost=69.83..69.83 rows=1000\nwidth=8)\n -> Seq Scan on t (cost=0.00..20.00\nrows=1000 width=8)\n SubPlan\n -> Index Scan using r_pkey on r (cost=0.00..4.82\nrows=1 width=32)\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 04 Nov 2001 10:19:10 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Another planner oddity "
},
{
"msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 10:53 3/11/01 -0500, Tom Lane wrote:\n>> What's your point? We can't omit the join to g, as that would change\n>> the set of returned rows.\n\n> Oops. Left out too much. Make each of the ref'd tables unique (so only one\n> match for given t.p):\n\nHmm. That in combination with the LEFT OUTER JOIN might be sufficient\nto ensure that the output is the same with or without scanning g ...\nbut it seems far too fragile and specialized a chain of reasoning to\nconsider trying to get the planner to duplicate it.\n\nWe have to consider not only the potential benefit of any suggested\nplanner optimization, but also how often it's likely to win and how\nmany cycles we're likely to waste testing for the condition when it\ndoesn't hold. This seems very unpromising.\n\nMy thoughts here are probably colored by bad past experience: before\nabout 6.5, the planner would in fact discard unreferenced relations\nfrom its plan, with the result that it gave wrong answers for\nperfectly-reasonable queries like \"SELECT count(1) FROM foo\".\nI won't put back such an optimization without strong guarantees that\nit's correct, and that implies a lot of cycles expended to determine\nwhether the optimization applies.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Nov 2001 18:51:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Another planner oddity "
}
] |
[
{
"msg_contents": "Hi all,\n\nMy company has a pretty big Web-based application written in PL/SQL\nusing an Oracle 8 backend database. We're thinking about moving from\nOracle to Postgresql. I wondered if there's any quick way to do the\nporting:\n\n- Any automated tools to translate the PL/SQL code & to create tables?\n- What process do you recommend for doing the porting? \n- Other suggestions?\n\nThanks a lot.\n\nAndy\n",
"msg_date": "3 Nov 2001 02:02:45 -0800",
"msg_from": "angelflow@yahoo.com (Andy)",
"msg_from_op": true,
"msg_subject": "Porting Web application written in Oracle 8 PL/SQL to Postgresql"
},
{
"msg_contents": "Look at http://www.openacs.org in the openacs4 community.\n",
"msg_date": "3 Nov 2001 10:16:55 -0800",
"msg_from": "domingo@dad-it.com (Domingo Alvarez Duarte)",
"msg_from_op": false,
"msg_subject": "Re: Porting Web application written in Oracle 8 PL/SQL to Postgresql"
},
{
"msg_contents": "At 02:02 03/11/01 -0800, you wrote:\n>- Any auomated tools to translate the PL/SQL code & to create tables?\n>- What process do you recommend for doing the porting?\n>- Other suggestions?\n\nHello,\n\nTry having a look at http://pgadmin.postgresql.org\nIt features a PL/pgSQL function editor with migration wizard (database only).\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Mon, 05 Nov 2001 16:35:01 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Porting Web application written in Oracle 8"
},
{
"msg_contents": "Andy,\n\nPlease do NOT cross-post mail to 3 different lists in the future! It\nshows a lack of respect for the other list subscribers.\n\n> My company has a pretty big Web-based application written in PL/SQL\n> using an Oracle 8 backend database. We're thinking about moving from\n> Oracle to Postgresql. I wondered if there's any quick way to do the\n> porting:\n\nSee Roberto Mello's PL/SQL porting guide at TechDocs:\nhttp://techdocs.postgresql.org/\n\n-Josh\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n",
"msg_date": "Mon, 05 Nov 2001 08:49:09 -0800",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: Porting Web application written in Oracle 8 PL/SQL"
},
{
"msg_contents": "Hi Andy,\n\nI suggest you read the articles on:\n\nhttp://techdocs.postgresql.org/\n\nAs there are some Oracle->PostgreSQL documents there.\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Andy\n> Sent: Saturday, 3 November 2001 6:03 PM\n> To:\n> pgsql-hackers@postgresql.org.pgsql-general@postgresql.org.pgsql-sql@post\n> gresql.org\n> Subject: [HACKERS] Porting Web application written in Oracle 8 PL/SQL to\n> Postgresql\n> \n> \n> Hi all,\n> \n> My company has a pretty big Web-based application written in PL/SQL\n> using an Oracle 8 backend database. We're thinking about moving from\n> Oracle to Postgresql. I wondered if there's any quick way to do the\n> porting:\n> \n> - Any auomated tools to translate the PL/SQL code & to create tables?\n> - What process do you recommend for doing the porting? \n> - Other suggestions?\n> \n> Thanks a lot.\n> \n> Andy\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n",
"msg_date": "Tue, 6 Nov 2001 10:24:00 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Porting Web application written in Oracle 8 PL/SQL to Postgresql"
},
{
"msg_contents": "Just to complement Domingo's and Josh's good responses...\n\nOn Sat, Nov 03, 2001 at 02:02:45AM -0800, Andy wrote:\n> \n> My company has a pretty big Web-based application written in PL/SQL\n> using an Oracle 8 backend database. We're thinking about moving from\n> Oracle to Postgresql. I wondered if there's any quick way to do the\n> porting:\n\nNot a quick way. But there's a way :)\n \n> - Any auomated tools to translate the PL/SQL code & to create tables?\n> - What process do you recommend for doing the porting? \n> - Other suggestions?\n\nAs Domingo suggested, http://openacs.org/ is a pretty big web application\nframework and applications built for Oracle and PostgreSQL. You can find\nideas in its data model of how to cope with the differences. It has\nhundreds of PL/SQL (and PL/pgSQL) functions, and it's GPL'd.\n\nThe Porting Guide that Josh suggested is also included in the PostgreSQL\ndocumentation, in the Programmer's Guide.\n\n-Roberto\n-- \n+----| http://fslc.usu.edu USU Free Software & GNU/Linux Club |------+\n Roberto Mello - Computer Science, USU - http://www.brasileiro.net \n http://www.sdl.usu.edu - Space Dynamics Lab, Developer \nYou just watch yourself. I have the death sentence in twelve systems.\n",
"msg_date": "Wed, 7 Nov 2001 06:57:45 -0700",
"msg_from": "Roberto Mello <rmello@cc.usu.edu>",
"msg_from_op": false,
"msg_subject": "Re: Porting Web application written in Oracle 8 PL/SQL to Postgresql"
}
] |
[
{
"msg_contents": "Hello Dave,\n\nI upgraded code and binaries from CVS to latest versions. I am having a \nlook at a possible Cygwin plug-in (this is even more important than view \nand trigger pseudo modification and can be done quickly).\n\nThe plug-in menu works great from the binary version of pgAdmin2.\nIn the latest version of pgAdmin, I get error \"incompatible type\".\n\nAny idea?\n\nBest regards\nJean-Michel\n\n",
"msg_date": "Sat, 03 Nov 2001 11:47:04 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "pgAdmin2 plug-in"
}
] |
[
{
"msg_contents": "Hello my friends! Sorry for my english!\n\nI need to do a application that listen the port of Postgresql and return all commands actually in process in Postgresql. Like a monitor. I think that to do it i need to do a socket (sniffer) and understand the structure for Postgresql protocol. Somebody can help me?",
"msg_date": "Sat, 3 Nov 2001 11:23:17 -0300",
"msg_from": "=?iso-8859-1?Q?F=E1bio_Santana?= <fabio3c@terra.com.br>",
"msg_from_op": true,
"msg_subject": "LISTENING THE PORT"
},
{
"msg_contents": "> Hello my friends! Sorry for my english!\n> \n> I need to do a application that listen the port of Postgresql\n> and return all commands actually in process in Postgresql. Like\n> a monitor. I think that to do it i need to do a socket (sniffer)\n> and understand the structure for Postgresql protocol. Somebody\n> can help me?\n\nHave you tried pgmonitor at gborg.postgresql.org?\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Nov 2001 11:22:21 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LISTENING THE PORT"
}
] |
[
{
"msg_contents": "=?iso-8859-1?Q?F=E1bio_Santana?= <fabio3c@terra.com.br> writes:\n> I need to do a application that listen the port of Postgresql\n> and return all commands actually in process in Postgresql.\n\nIn 7.2 beta, try select * from pg_stat_activity (note you need\nto enable collection of query strings...)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 03 Nov 2001 12:53:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Fw: LISTENING THE PORT "
},
{
"msg_contents": "This page is having problems... (gborg.postgresql.org)\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Fábio Santana\" <fabio3c@terra.com.br>\nCc: \"Hackers Postgresql\" <pgsql-hackers@postgresql.org>\nSent: Saturday, November 03, 2001 1:22 PM\nSubject: Re: [HACKERS] LISTENING THE PORT\n\n\n> > Hello my friends! Sorry for my english!\n> >\n> > I need to do a application that listen the port of Postgresql\n> > and return all commands actually in process in Postgresql. Like\n> > a monitor. I think that to do it i need to do a socket (sniffer)\n> > and understand the structure for Postgresql protocol. Somebody\n> > can help me?\n>\n> Have you tried pgmonitor at gborg.postgresql.org?\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Sat, 3 Nov 2001 15:34:51 -0300",
"msg_from": "=?iso-8859-1?Q?F=E1bio_Santana?= <fabio3c@terra.com.br>",
"msg_from_op": false,
"msg_subject": "Fw: LISTENING THE PORT"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Jean-Michel POURE [mailto:jm.poure@freesurf.fr] \n> Sent: 03 November 2001 10:47\n> To: pgsql-hackers@postgresql.org\n> Cc: dpage@vale-housing.co.uk\n> Subject: pgAdmin2 plug-in\n> \n> \n> Hello Dave,\n> \n> I upgraded code and binaries from CVS to latest versions. I \n> am having a \n> look at a possible Cygwin plug-in (this is even more \n> important than view \n> and trigger pseudo modification and can done quickly).\n> \n> The plug-in menu works great from the binary version of \n> pgAdmin2. In the latest version of pgAdmin, I get error \n> \"incompatible type\".\n\nTry running buildall.bat - it may be that one of your plugins is compiled\nfor a different version of pgSchema than you have.\n\nRegards, Dave,\n",
"msg_date": "Sat, 3 Nov 2001 21:19:45 -0000 ",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: pgAdmin2 plug-in"
}
] |
[
{
"msg_contents": "We have seen very few bug reports since going beta. That means either\nno one is testing the beta, which I don't believe, or that the beta is\nquite stable. Maybe we should start thinking about a date for the final\n7.2 release, perhaps mid to end November.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Nov 2001 17:05:54 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Beta going well"
},
{
"msg_contents": "\nI don't think so ... let's talk beta2 in mid November, with a much broader\nannouncement than just -hackers, with maybe an rc1 around the end of\nNovember, scheduling a release for January 1st or thereabouts ...\n\nFew ppl ever jump onto Beta1 of anything ... there have been some changes\nsince Beta, so a Beta2 is warranted ... rc1 is when we've always stated\nthat we are confident with it for release, so that more ppl start to jump\non ...\n\n\n On Sat, 3 Nov 2001, Bruce Momjian wrote:\n\n> We have seen very few bug reports since going beta. That means either\n> no one is testing the beta, which I don't believe, or that the beta is\n> quite stable. Maybe we should start thinking about a date for the final\n> 7.2 release, perhaps mid to end November.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Sat, 3 Nov 2001 20:51:22 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Saturday 03 November 2001 07:51 pm, Marc G. Fournier wrote:\n> I don't think so ... let's talk beta2 in mid November, with a much broader\n> announcement then just -hackers, with maybe an rc1 around the end of\n> November, schedualing a release for January 1st or there abouts ...\n\nFYI, I went to the developers page on the web site and clicked on the link \n\"Beta versions of PostgreSQL\". That took me to \nhttp://developer.postgresql.org/beta.php which said, \"No beta software \ncurrently available\" Since beta 1 has been released, is the site just out of \ndate, or is it intentionally not listed there?\n",
"msg_date": "Sat, 3 Nov 2001 20:43:05 -0600",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "> \n> I don't think so ... let's talk beta2 in mid November, with a much broader\n> announcement then just -hackers, with maybe an rc1 around the end of\n> November, schedualing a release for January 1st or there abouts ...\n> \n> Few ppl ever jump onto Beta1 of anything ... there have been some changes\n> since Beta, so a Beta2 is warranted ... rc1 is when we've always stated\n> that we are confident with it for release, so that more ppl start to jump\n> on ...\n\nI am afraid you may be correct. It bothers me that we will spend\nanother two months doing little but waiting, and considering post-final,\nthere could be three months of downtime here. Yuck.\n\nFolks, can we shorten this up? If no one is reporting on beta1, let's\nroll a beta2, and if we don't get anything major in a week, can't we go\nto rc1?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Nov 2001 22:06:06 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Saturday 03 November 2001 07:51 pm, Marc G. Fournier wrote:\n> I don't think so ... let's talk beta2 in mid November, with a much broader\n> announcement then just -hackers, with maybe an rc1 around the end of\n> November, schedualing a release for January 1st or there abouts ...\n>\n> Few ppl ever jump onto Beta1 of anything ... there have been some changes\n> since Beta, so a Beta2 is warranted ... rc1 is when we've always stated\n> that we are confident with it for release, so that more ppl start to jump\n> on ...\n\nIn addition to my previous post, I have been trying unsuccessfully for the \nlast 10 - 15 mintues to find somewhere that I can download the beta software \nfrom. ftp.postgresql.org is at it's user limit and the message references \nthe mirror list found at \n\nhttp://www.postgresql.org/sites.html \n\nHowever when I go to this link I am redirected to \n\nhttp://www.postgresql.org/\n",
"msg_date": "Sat, 3 Nov 2001 21:08:54 -0600",
"msg_from": "\"Matthew T. O'Connor\" <matthew@zeut.net>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "\n\n\"Matthew T. O'Connor\" wrote:\n> \n<snip>\n> \n> http://www.postgresql.org/sites.html\n\nJust tried this link (using Netscape) and got a :\n\nNot Found\n\nThe requested URL /css/depts.css was not found on this server.\n\nSeems to be a missing CSS. :(\n\n+ Justin\n\n> \n> However when I go to this link I am redirected to\n> \n> http://www.postgresql.org/\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sun, 04 Nov 2001 14:31:39 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": ">\n> Folks, can we shorten this up? If no one is reporting on beta1, let's\n> roll a beta2, and if we don't get anything major in a week, can't we go\n> to rc1?\n\nI've asked it before and I'll ask it again, is there somewhere TO report\nsuccess or only failures?\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Sat, 3 Nov 2001 22:39:13 -0500 (EST)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Sat, Nov 03, 2001 at 09:08:54PM -0600, Matthew T. O'Connor wrote:\n> > [beta1 talk]\n> \n> In addition to my previous post, I have been trying unsuccessfully for the \n> last 10 - 15 mintues to find somewhere that I can download the beta software \n> from. ftp.postgresql.org is at it's user limit and the message references \n> the mirror list found at \n> \n> http://www.postgresql.org/sites.html \n> \n> However when I go to this link I am redirected to \n> \n> http://www.postgresql.org/\n\nWorse, that's been the case for at least a few weeks. I've only satisfied\nmy repeated downloads of 7.1.3 by having known about ftp.us.postgresql.org.\nThe web site has been looping, in various states of broken, as you say.\n\n-jeremy\n_____________________________________________________________________\njeremy wohl ..: http://igmus.org\n",
"msg_date": "Sat, 3 Nov 2001 19:43:44 -0800",
"msg_from": "Jeremy Wohl <jeremyw-pghackers@igmus.org>",
"msg_from_op": false,
"msg_subject": "Re: Broken downloads (was: Beta going well)"
},
{
"msg_contents": "> >\n> > Folks, can we shorten this up? If no one is reporting on beta1, let's\n> > roll a beta2, and if we don't get anything major in a week, can't we go\n> > to rc1?\n> \n> I've asked it before and I'll ask it again, is there somewhere TO report\n> success or only failures?\n\nThat is a good question. Right now we only get problem reports. \nHowever, considering we sort of stopped adding stuff around\nmid-September, we really have been in beta for 1.5 months now so it is\nno surprise things are looking very stable.\n\nI just hate to wait around if there is a low probability that something\nmajor will be reported.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 3 Nov 2001 22:51:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "bpalmer wrote:\n> \n> >\n> > Folks, can we shorten this up? If no one is reporting on beta1, let's\n> > roll a beta2, and if we don't get anything major in a week, can't we go\n> > to rc1?\n> \n> I've asked it before and I'll ask it again, is there somewhere TO report\n> success or only failures?\n\nWe have a place where we can report platforms which pass the regression\ntests, called the \"Regression Test Database\" :\n\nhttp://developer.postgresql.org/regress/\n\nIs this what you're after?\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> - Brandon\n> \n> ----------------------------------------------------------------------------\n> c: 646-456-5455 h: 201-798-4983\n> b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sun, 04 Nov 2001 15:49:01 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": ">> I don't think so ... let's talk beta2 in mid November, with a much broader\n>> announcement then just -hackers, with maybe an rc1 around the end of\n>> November, schedualing a release for January 1st or there abouts ...\n\n> I am afraid you may be correct. It bothers me that we will spend\n> another two months doing little but waiting, and considering post-final,\n> there could be three months of downtime here. Yuck.\n\n> Folks, can we shorten this up?\n\nIn the past we've usually targeted a one-month beta cycle, haven't we?\nOften it took longer, but with so few trouble reports I can't see a\njustification for suddenly changing the target to ten weeks.\n\nI would almost say that we could plan to do beta2 this week, rc1 the\nweek of Thanksgiving, and final about Dec 1 (barring major trouble\nreports of course).\n\nBut ... there are a couple of flies in the ointment. One is that with\nthe kinks still not completely worked out of the new server setup,\nI don't have much confidence that there are really a lot of people doing\nbeta testing. (If Marc ever put out an actual announcement of beta1,\nI didn't get it. And we know some people have been unable to download\nthe beta.) The other is that with the holiday season coming up, many\npeople will have less spare time than usual to spend on Postgres. So\nmaybe Marc's unaggressive schedule proposal is appropriate.\n\nOn the whole, though, I agree with Bruce. We've been in \"almost beta\"\nmode for two months now, we shouldn't need another two months to get to\nrelease.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Nov 2001 11:32:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "\"Matthew T. O'Connor\" <matthew@zeut.net> writes:\n> In addition to my previous post, I have been trying unsuccessfully for the \n> last 10 - 15 mintues to find somewhere that I can download the beta software \n> from.\n\nTry ftp://ftp.us.postgresql.org/beta/\n\n> ftp.postgresql.org is at it's user limit and the message references \n> the mirror list found at \n> http://www.postgresql.org/sites.html \n> However when I go to this link I am redirected to \n> http://www.postgresql.org/\n\nHmm, you're right; the index-of-mirrors page has disappeared from view.\nVince, can you straighten this out?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Nov 2001 11:37:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "On Sat, 3 Nov 2001, Matthew T. O'Connor wrote:\n\n> On Saturday 03 November 2001 07:51 pm, Marc G. Fournier wrote:\n> > I don't think so ... let's talk beta2 in mid November, with a much broader\n> > announcement then just -hackers, with maybe an rc1 around the end of\n> > November, schedualing a release for January 1st or there abouts ...\n>\n> FYI, I went to the developers page on the web site and clicked on the link\n> \"Beta versions of PostgreSQL\". That took me to\n> http://developer.postgresql.org/beta.php which said, \"No beta software\n> currently available\" Since beta 1 has been released, is the site just out of\n> date, or is it intentionally not listed there?\n\nNone of the above, I forgot to uncomment the listing in the php code!\nIt's there now.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 4 Nov 2001 12:22:58 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Sat, 3 Nov 2001, Matthew T. O'Connor wrote:\n\n> On Saturday 03 November 2001 07:51 pm, Marc G. Fournier wrote:\n> > I don't think so ... let's talk beta2 in mid November, with a much broader\n> > announcement then just -hackers, with maybe an rc1 around the end of\n> > November, schedualing a release for January 1st or there abouts ...\n> >\n> > Few ppl ever jump onto Beta1 of anything ... there have been some changes\n> > since Beta, so a Beta2 is warranted ... rc1 is when we've always stated\n> > that we are confident with it for release, so that more ppl start to jump\n> > on ...\n>\n> In addition to my previous post, I have been trying unsuccessfully for the\n> last 10 - 15 mintues to find somewhere that I can download the beta software\n> from. ftp.postgresql.org is at it's user limit and the message references\n> the mirror list found at\n>\n> http://www.postgresql.org/sites.html\n>\n> However when I go to this link I am redirected to\n>\n> http://www.postgresql.org/\n\nThe machine the database is on doesn't appear to be available. I guess\nthings will be disrupted until the game of musical machines is over :(\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 4 Nov 2001 12:25:11 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "> We have a place where we can report platforms which pass the regression\n> tests, called the \"Regression Test Database\" :\n>\n> http://developer.postgresql.org/regress/\n\nSure, can this list be reset for 7.2b1?\n\n- Brandon\n\n----------------------------------------------------------------------------\n c: 646-456-5455 h: 201-798-4983\n b. palmer, bpalmer@crimelabs.net pgp:crimelabs.net/bpalmer.pgp5\n\n",
"msg_date": "Sun, 4 Nov 2001 12:25:58 -0500 (EST)",
"msg_from": "bpalmer <bpalmer@crimelabs.net>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Sun, 4 Nov 2001, Justin Clift wrote:\n\n>\n>\n> \"Matthew T. O'Connor\" wrote:\n> >\n> <snip>\n> >\n> > http://www.postgresql.org/sites.html\n>\n> Just tried this link (using Netscape) and got a :\n>\n> Not Found\n>\n> The requested URL /css/depts.css was not found on this server.\n>\n> Seems to be a missing CSS. :(\n\nfixed. lynx seems to be acting differently than it was before and rather\nthan doing a redirect it's downloading the redirected source.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Sun, 4 Nov 2001 12:28:57 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "> In the past we've usually targeted a one-month beta cycle, haven't we?\n> Often it took longer, but with so few trouble reports I can't see a\n> justification for suddenly changing the target to ten weeks.\n\nThat was my reading of events.\n\n> I would almost say that we could plan to do beta2 this week, rc1 the\n> week of Thanksgiving, and final about Dec 1 (barring major trouble\n> reports of course).\n\nIt would be great if we could do this. If there were server problems,\nthat would be a good reason to get those all ironed out, if they aren't\nalready, and announce a big beta2, and tell people they have ~1 week and\nif we don't hear anything, we are going RC1. That will get their\nattention. :-)\n\n> But ... there are a couple of flies in the ointment. One is that with\n> the kinks still not completely worked out of the new server setup,\n> I don't have much confidence that there are really a lot of people doing\n> beta testing. (If Marc ever put out an actual announcement of beta1,\n> I didn't get it. And we know some people have been unable to download\n> the beta.) The other is that with the holiday season coming up, many\n> people will have less spare time than usual to spend on Postgres. So\n> maybe Marc's unaggressive schedule proposal is appropriate.\n\nActually, the Christmas holiday season is often busy with development,\nespecially the week between Christmas and New Years because most offices\nare slow during the period. Also, lots of people like to upgrade during\nthat period for the same reason, which makes releasing a final prior to\nChristmas a double win -- we get development time and they can upgrade\nduring a slow period.\n\n> On the whole, though, I agree with Bruce. 
We've been in \"almost beta\"\n> mode for two months now, we shouldn't need another two months to get to\n> release.\n\nIn previous betas, we have had the occasional \"Wow, I am glad that\ndidn't get into final\" bugs, but I haven't seen _any_ yet and I am\ndoubting if I will.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Nov 2001 13:08:09 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Sunday 04 November 2001 01:08 pm, Bruce Momjian wrote:\n> > In the past we've usually targeted a one-month beta cycle, haven't we?\n> > Often it took longer, but with so few trouble reports I can't see a\n> > justification for suddenly changing the target to ten weeks.\n\n> That was my reading of events.\n\nI am concerned in that I have yet to receive a request for RPMs of the beta.\n\nIn previous release cycles, those requests have come hot and heavy shortly \nafter the beta announcements. Unless the interest in RPM has drastically \nreduced in the last couple of months, my gut feel is that our beta1 has had \nvery little exposure outside this list -- and the netizens of this list \naren't typically RPM addicts.\n\nI have not had time in the past couple of weeks to do much with them, though. \nLooking for a break in the action this week, hopefully.\n\nI would, however, guard against rushing a release -- we _do_ have a 'solid \nrelease' reputation to protect.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Sun, 4 Nov 2001 19:40:49 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> ... my gut feel is that our beta1 has had \n> very little exposure outside this list\n\nProbably not, considering it has not been announced anywhere outside\nthis list. Ahem.\n\nSince we've made a number of fixes in the past two weeks, I think\nour next step should be to roll a beta2, and then actually announce it\n[as in pgsql-announce]. We can argue more about schedule after that's\nbeen out for a week or so.\n\nAnyone have stuff that they need to get in there before beta2?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Nov 2001 19:59:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "> But ... there are a couple of flies in the ointment. One is that with\n> the kinks still not completely worked out of the new server setup,\n> I don't have much confidence that there are really a lot of people doing\n> beta testing. (If Marc ever put out an actual announcement of beta1,\n> I didn't get it. And we know some people have been unable to download\n> the beta.) The other is that with the holiday season coming up, many\n> people will have less spare time than usual to spend on Postgres. So\n> maybe Marc's unaggressive schedule proposal is appropriate.\n\nI never got a beta notification, it's not mentioned on the PostgreSQL page,\nand I personally have no idea from where to download it. I looked through\nthe Australian mirror FTP site, but I could find no trace of a beta\ndownload.\n\nChris\n\n",
"msg_date": "Mon, 5 Nov 2001 09:48:19 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <announce@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "On 04 Nov 2001 at 19:59 (-0500), Tom Lane wrote:\n| Lamar Owen <lamar.owen@wgcr.org> writes:\n| > ... my gut feel is that our beta1 has had \n| > very little exposure outside this list\n| \n| Probably not, considering it has not been announced anywhere outside\n| this list. Ahem.\n| \n| Since we've made a number of fixes in the past two weeks, I think\n| our next step should be to roll a beta2, and then actually announce it\n| [as in pgsql-announce]. We can argue more about schedule after that's\n| been out for a week or so.\n| \n| Anyone have stuff that they need to get in there before beta2?\n\n I've an in-progress (er, stalled ATM) attempt to squash a bug\nin ALTER TABLE RENAME, where the args referenced in function\ncalls are updated to reflect the new name[1]. There is also the\noutstanding resolution to David Ford's libpq EINTR issues[2], on which \nI agree with your assessment of handling EINTR the same as EINPROGRESS\nfor connect(), though [AFAICT] David has not confirmed that your \nproposed solution solves his problem.\n\n I'll do my best to get a patch to the list tonight for the ALTER \nTABLE RENAME fixes. If I don't wrap this up tonight, it will be \nnext weekend before I'll have time to dive back in. I don't consider\nthis pending work any reason to hold up any beta2/rc1 scheduling, \nsince the change is very localized, but this fix should be in before \n7.2 is released.\n\ncheers,\n Brent\n\n1) http://fts.postgresql.org/db/mw/msg.html?mid=1038272\n2) http://fts.postgresql.org/db/mw/msg.html?mid=1041165\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Sun, 4 Nov 2001 20:48:59 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "> Probably not, considering it has not been announced anywhere outside\n> this list. Ahem.\n> \n> Since we've made a number of fixes in the past two weeks, I think\n> our next step should be to roll a beta2, and then actually announce it\n> [as in pgsql-announce]. We can argue more about schedule after that's\n> been out for a week or so.\n> \n> Anyone have stuff that they need to get in there before beta2?\n\nYes. It doesn't compile on AIX 5L. I would like to fix it before beta2\n(see attached patches below).\n\nHowever I'm not sure if it's a correct solution. Problem is, AIX 5L\nhas sys/inttypes.h where int8, int16, int32 and int64 are\ndefined. Should we detect them in configure? Also, I'm afraid it would\nbreak other AIX versions. Comments?\n--\nTatsuo Ishii\n\n*** include/c.h.orig\tMon Oct 29 11:58:33 2001\n--- include/c.h\tMon Oct 29 12:08:13 2001\n***************\n*** 205,213 ****\n--- 205,215 ----\n *\t\tfrontend/backend protocol.\n */\n #ifndef __BEOS__\t\t\t\t/* this shouldn't be required, but is is! */\n+ #if !defined(_AIX)\n typedef signed char int8;\t\t/* == 8 bits */\n typedef signed short int16;\t\t/* == 16 bits */\n typedef signed int int32;\t\t/* == 32 bits */\n+ #endif /* _AIX */\n #endif\t /* __BEOS__ */\n \n /*\n***************\n*** 275,281 ****\n--- 277,285 ----\n #else\n #ifdef HAVE_LONG_LONG_INT_64\n /* We have working support for \"long long int\", use that */\n+ #if !defined(_AIX)\n typedef long long int int64;\n+ #endif /* _AIX */\n typedef unsigned long long int uint64;\n \n #else\n\n",
"msg_date": "Mon, 05 Nov 2001 11:47:49 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> However I'm not sure if it's a correct solution. Problem is, AIX 5L\n> has sys/inttypes.h where int8, int16, int32 and int64 are\n> defined. Should we detect them in configure? Also, I'm afraid it would\n> break other AIX versions. Comments?\n\nPerhaps have configure test for the presence of <sys/inttypes.h>\nand then let c.h do\n\n\t#ifdef HAVE_SYS_INTTYPES_H\n\t#include <sys/inttypes.h>\n\t#else\n\ttypedef signed char int8;\n\t... etc\n\t#endif\n\nCould this substitute for the ugly #ifndef __BEOS__ as well?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Nov 2001 21:54:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "> I never got a beta notification, it's not mentioned on the PostgreSQL page,\n> and I personally have no idea from where to download it. I looked though\n> the Australian mirror FTP site, but I could find no trace of a beta\n> download.\n\nI think Tom's suggestion was best. Let's package beta2, publicise it,\nand see what things look like one week after that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Nov 2001 23:29:24 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "Thus spake Tatsuo Ishii\n> > Anyone have stuff that they need to get in there before beta2?\n> \n> Yes. doesn't compile on AIX 5L. I would like to fix it before beta2\n> (see attached pacthes below).\n> \n> However I'm not sure if it's a correct solution. Problem is, AIX 5L\n> has sys/inttypes.h where int8, int16, int32 and int64 are\n> defined. Should we detect them in configure? Also, I'm afraid it would\n> break other AIX versions. Comments?\n\nI see the same problem on rs6000-ibm-aix4.3.3.0. Your patch fixes it\nthere as well.\n\n-- \nD'Arcy J.M. Cain <darcy@{druid|vex}.net> | Democracy is three wolves\nhttp://www.druid.net/darcy/ | and a sheep voting on\n+1 416 425 1212 (DoD#0082) (eNTP) | what's for dinner.\n",
"msg_date": "Mon, 5 Nov 2001 08:30:46 -0500 (EST)",
"msg_from": "darcy@druid.net (D'Arcy J.M. Cain)",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": ">>>>> \"Tom\" == Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n Tom> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n >> However I'm not sure if it's a correct solution. Problem is,\n >> AIX 5L has sys/inttypes.h where int8, int16, int32 and int64\n >> are defined. Should we detect them in configure? Also, I'm\n >> afraid it would break other AIX versions. Comments?\n\n Tom> Perhaps have configure test for the presence of\n Tom> <sys/inttypes.h> and then let c.h do\n\n Tom> \t#ifdef HAVE_SYS_INTTYPES_H #include <sys/inttypes.h> #else\n Tom> typedef signed char int8; ... etc #endif\n\n Tom> Could this substitute for the ugly #ifndef __BEOS__ as well?\n\nHmm, AIX 4.2 and 4.1 define these in sys/ltypes.h. How that affects\nthe result of this discussion I have no idea.\n\nSincerely,\n\nAdrian Phillips\n\n-- \nYour mouse has moved.\nWindows NT must be restarted for the change to take effect.\nReboot now? [OK]\n",
"msg_date": "05 Nov 2001 15:43:28 +0100",
"msg_from": "Adrian Phillips <adrianp@powertech.no>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Sunday 04 November 2001 07:59 pm, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > ... my gut feel is that our beta1 has had\n> > very little exposure outside this list\n\n> Probably not, considering it has not been announced anywhere outside\n> this list. Ahem.\n\nMe needs another podondectomy..... :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 5 Nov 2001 09:45:51 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Mon, 5 Nov 2001, Christopher Kings-Lynne wrote:\n\n> > But ... there are a couple of flies in the ointment. One is that with\n> > the kinks still not completely worked out of the new server setup,\n> > I don't have much confidence that there are really a lot of people doing\n> > beta testing. (If Marc ever put out an actual announcement of beta1,\n> > I didn't get it. And we know some people have been unable to download\n> > the beta.) The other is that with the holiday season coming up, many\n> > people will have less spare time than usual to spend on Postgres. So\n> > maybe Marc's unaggressive schedule proposal is appropriate.\n>\n> I never got a beta notification, it's not mentioned on the PostgreSQL page,\n> and I personally have no idea from where to download it. I looked though\n> the Australian mirror FTP site, but I could find no trace of a beta\n> download.\n\nWhich \"PostgreSQL page\" and there is no official Australian mirror ftp\n(or web for all that matter) site so they wouldn't have a copy of the\nbeta.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 5 Nov 2001 10:14:35 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "On Sun, 4 Nov 2001, Lamar Owen wrote:\n\n> I am concerned in that I have yet to receive a request for RPMs of\n> the beta.\n\nSorry Lamar. I thought I saw a message saying you were putting the RPMs\ntogether so I was waiting to hear they were ready.\n\nI'd like the RPMs!!!\n\nI have a RHL7.1 system that I can use to test the RPMs and probably do\nthe regression tests and friends on also.\n\nWaiting for my new SDSL line so it will be a couple of days before I\nreport back as I have to download to a fast-connected system, move onto\ntape, and take home for loading and testing.\n\n\nRod\n-- \n Let Accuracy Triumph Over Victory\n\n Zetetic Institute\n \"David's Sling\"\n Marc Stiegler\n\n",
"msg_date": "Mon, 5 Nov 2001 07:35:56 -0800 (PST)",
"msg_from": "\"Roderick A. Anderson\" <raanders@tincan.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Monday 05 November 2001 10:35 am, Roderick A. Anderson wrote:\n> On Sun, 4 Nov 2001, Lamar Owen wrote:\n> > I am concerned in that I have yet to receive a request for RPMs of\n> > the beta.\n\n> Sorry Lamar. I thought I saw a message saying you were putting the RPMs\n> together so I was waiting to hear they were ready.\n\n> I'd like the RPMs!!!\n\nAh, a customer... :-)\n\nI'll wait for 7.2b2, which fixes some things that currently prevent RPM \nbuilding without patching from CVS.\n\nPlus, my time has been extremely tight the last couple of months, but a few \ndays off actually looks possible now -- and building this set shouldn't be \ntoo difficult, thanks to PeterE's patchwork.\n\nRPM's will be built at this time on RHL7.1 and RHL7.2 -- although the 7.1 \noption will disappear here when I upgrade my devel server to 7.2. As stated \nbefore, I don't currently have other machines (except LER's OpenUnix system) \non which to build and test -- so you having 7.1 will be nice further down the \ncycle.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 5 Nov 2001 11:23:33 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Mon, 5 Nov 2001, Lamar Owen wrote:\n\n> Ah, a customer... :-)\n\nToo bad it wasn't the millionth so I could win a door prize or\nsomething.\n\n> I'll wait for 7.2b2, which fixes some things that currently prevent RPM \n> building without patching from CVS.\n\nSounds good to me.\n\n> Plus, my time has been extremely tight the last couple of months, but a few \n> days off actually looks possible now -- and building this set shouldn't be \n> too difficult, thanks to PeterE's patchwork.\n\nWhen I get the DSL line in I'd like to help more with the RPM packaging.\nDialup works OK but only OK.\n\n> RPM's will be built at this time on RHL7.1and RHL7.2 -- although the 7.1 \n> option will disappear here when I upgrade my devel server to 7.2. As stated \n> before, I don't currently have other machines (except LER's OpenUnix system) \n> on which to built and test -- so you having 7.1 will be nice further down the \n> cycle.\n\nYeah I'm a late migrator. I can do 7.1 for quite awhile.\n\nLet us(me) know when and where.\n\n\nCheers,\nRod\n-- \n Let Accuracy Triumph Over Victory\n\n Zetetic Institute\n \"David's Sling\"\n Marc Stiegler\n\n",
"msg_date": "Mon, 5 Nov 2001 09:10:08 -0800 (PST)",
"msg_from": "\"Roderick A. Anderson\" <raanders@tincan.org>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "On Tuesday 06 November 2001 02:35, Roderick A. Anderson wrote:\n> On Sun, 4 Nov 2001, Lamar Owen wrote:\n> > I am concerned in that I have yet to receive a request for RPMs of\n> > the beta.\n>\n> Sorry Lamar. I thought I saw a message saying you were putting the RPMs\n> together so I was waiting to hear they were ready.\n>\n> I'd like the RPMs!!!\n>\n> I have a RHL7.1 system that I can use to test the RPMs and probably do\n> the regression tests and friends on also.\n\nAnd so would I! I have a RH7.2 system plus a Mandrake 8.1 system eager to \nstart testing. \n\nHorst\n",
"msg_date": "Tue, 6 Nov 2001 05:02:03 +1100",
"msg_from": "Horst Herb <hherb@malleenet.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well"
},
{
"msg_contents": "> Which \"PostgreSQL page\" and there is no official Australian mirror ftp\n> (or web for all that matter) site so they wouldn't have a copy of the\n> beta.\n\nOn www.postgresql.org and the Australian(-ish) mirror site is:\n\nhttp://postgresql.planetmirror.com/\n\nChris\n\n",
"msg_date": "Tue, 6 Nov 2001 10:26:04 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "On Tue, 6 Nov 2001, Christopher Kings-Lynne wrote:\n\n> > Which \"PostgreSQL page\" and there is no official Australian mirror ftp\n> > (or web for all that matter) site so they wouldn't have a copy of the\n> > beta.\n>\n> On www.postgresql.org and the Australian(-ish) mirror site is:\n>\n> http://postgresql.planetmirror.com/\n\nwww.postgresql.org is a list of mirror sites. As to the mirror you\nrefer to, I timestamp ALL mirrors:\n\nhttp://postgresql.planetmirror.com/timestamp.txt\n\nshows: Thu Oct 18 00:01:00 EDT 2001 not exactly up to date.\n\nhttp://www.at.postgresql.org/timestamp.txt is the Austrian mirror,\nit shows: Mon Nov 5 00:01:00 EST 2001\n\nThe stamp is written at midnite local time. As I said above, there\nis currently no official PostgreSQL mirror in Australia. Beta info\nis found on the Developer's website, I keep it separated because of\nour history of stable releases, many people come to expect it out of\nthe betas as well and aren't happy if it doesn't have release version\nstability. The developer's website is: http://developer.postgresql.org/\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 5 Nov 2001 21:44:00 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "> The stamp is written at midnite local time. As I said above, there\n> is currently no official PostgreSQL mirror in Australia. Beta info\n> is found on the Developer's website, I keep it separated because of\n> our history of stable releases, many people come to expect it out of\n> the betas as well and aren't happy if it doesn't have release version\n> stability. The developer's website is: http://developer.postgresql.org/\n\nMay I ask why there isn't an active Australian mirror? I, for one, would\nfind it quite helpful. I did notice when it was removed from the\nwww.postgresql.org 'flags' list tho.\n\nChris\n\n",
"msg_date": "Tue, 6 Nov 2001 10:57:18 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "On Tue, 6 Nov 2001, Christopher Kings-Lynne wrote:\n\n> > The stamp is written at midnite local time. As I said above, there\n> > is currently no official PostgreSQL mirror in Australia. Beta info\n> > is found on the Developer's website, I keep it separated because of\n> > our history of stable releases, many people come to expect it out of\n> > the betas as well and aren't happy if it doesn't have release version\n> > stability. The developer's website is: http://developer.postgresql.org/\n>\n> May I ask why there isn't an active Australian mirror? I, for one, would\n> find it quite helpful. I did notice when it was removed from the\n> www.postgresql.org 'flags' list tho.\n\nWe don't solicit mirrors. If someone wants to mirror, they contact us.\nRecently we changed the rsync host and requirements for mirror hosts,\nplanetmirror hasn't responded.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 5 Nov 2001 22:03:45 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The correct way would be to check for the existance of int8, int16, etc.\n\nGood in theory ... but ... are you sure you have included the correct\nset of system headers before checking this? (It's not at all clear\nto me that we know what \"correct\" is in this context.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Nov 2001 17:49:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "Tom Lane writes:\n\n> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > However I'm not sure if it's a correct solution. Problem is, AIX 5L\n> > has sys/inttypes.h where int8, int16, int32 and int64 are\n> > defined. Should we detect them in configure? Also, I'm afraid it would\n> > break other AIX versions. Comments?\n>\n> Perhaps have configure test for the presence of <sys/inttypes.h>\n> and then let c.h do\n>\n> \t#ifdef HAVE_SYS_INTTYPES_H\n> \t#include <sys/inttypes.h>\n> \t#else\n> \ttypedef signed char int8;\n> \t... etc\n> \t#endif\n\nThis is not correct, since we don't have any portable guarantees about the\ncontent of <sys/inttypes.h>.\n\nThe correct way would be to check for the existence of int8, int16, etc.\nThe Autoconf <=2.13 macros for existence of types have been deprecated in\n2.50 because they're broken. (If the type is missing the replacement type\nis #define'd, not typedef'd, which is claimed to be incorrect. I don't\nknow why offhand, but let's not start finding out now.)\n\nA possible workaround until we update our Autoconf (not now) would be\n\nAC_CHECK_SIZEOF(int8)\n\n#if SIZEOF_INT8 == 0\ntypedef signed char int8;\n#endif\n\nbecause the sizeof check results in 0 if the type doesn't exist.\n\nI have attached a patch to this effect which I ask the affected people to\ntry out.\n\nThis patch could theoretically be insufficient if 'long int' is 64 bits\nbut int64 exists and is actually 'long long int'. You would probably get\nwarnings about mismatched printf arguments, but I don't think there would\nbe actual problems.\n\n-- \nPeter Eisentraut peter_e@gmx.net",
"msg_date": "Tue, 6 Nov 2001 23:50:47 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "Tom Lane writes:\n\n> > The correct way would be to check for the existence of int8, int16, etc.\n>\n> Good in theory ... but ... are you sure you have included the correct\n> set of system headers before checking this?\n\nI'm sure I haven't, that's why someone is supposed to check this.\n\n> (It's not at all clear to me that we know what \"correct\" is in this\n> context.)\n\nIf the compiler is complaining that int8 is defined twice we need to check\nif it's already defined once and avoid a second declaration. The problem\nis setting up an appropriate environment to be relatively sure about the\nresult of \"already defined once\". That's the usual procedure in autoconf\nprogramming.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 7 Nov 2001 18:28:24 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
}
] |
[
{
"msg_contents": "Christopher wanted \"ADD\" used in:\n\n\tALTER TABLE / ADD PRIMARY-UNIQUE\n ^^^\n\nbut not in:\n\n\n\tCREATE TABLE / PRIMARY-UNIQUE\n\nThe problem is that the same analyze.c function is used for both cases. \nI have added code to conditionally use \"ADD\" only in the first case.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Nov 2001 00:40:00 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "\"ADD\" notice in CREATE/ALTER TABLE"
},
{
"msg_contents": "At 00:40 4/11/01 -0500, Bruce Momjian wrote:\n>Christopher wanted \"ADD\" used in:\n>\n>\tALTER TABLE / ADD PRIMARY-UNIQUE\n> ^^^\n>\n>but not in:\n>\n>\n>\tCREATE TABLE / PRIMARY-UNIQUE\n>\n>The problem is that the same analyze.c function is used for both cases. \n>I have added code to conditionally use \"ADD\" only in fhe first case.\n>\n\nWhat's this in relation to? Shouldn't the syntax be:\n\n\tALTER TABLE ADD CONSTRAINT....\n\nOr am I misreading the original message?\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n",
"msg_date": "Sun, 04 Nov 2001 18:00:46 +1100",
"msg_from": "Philip Warner <pjw@rhyme.com.au>",
"msg_from_op": false,
"msg_subject": "Re: \"ADD\" notice in CREATE/ALTER TABLE"
},
{
"msg_contents": "> At 00:40 4/11/01 -0500, Bruce Momjian wrote:\n> >Christopher wanted \"ADD\" used in:\n> >\n> >\tALTER TABLE / ADD PRIMARY-UNIQUE\n> > ^^^\n> >\n> >but not in:\n> >\n> >\n> >\tCREATE TABLE / PRIMARY-UNIQUE\n> >\n> >The problem is that the same analyze.c function is used for both cases. \n> >I have added code to conditionally use \"ADD\" only in the first case.\n> >\n> \n> What's this in relation to? Shouldn't the syntax be:\n> \n> \tALTER TABLE ADD CONSTRAINT....\n> \n> Or am I misreading the original message?\n\nUhh, this was a change of elog(NOTICE) display for Christopher. He\nadded alter table regression tests. The old message didn't have the\nADD, and I added that. My guess is that we don't spell out the word\nCONSTRAINT in the message, just the ALTER and ADD parts.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Nov 2001 02:04:10 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: \"ADD\" notice in CREATE/ALTER TABLE"
}
] |
[
{
"msg_contents": "On Fri, 2 Nov 2001, Zeugswetter Andreas SB SD wrote:\n\n> This is not true, since the index scan also needs to read the leaf pages\n> in MS Sql. The number of leaf pages grows linearly with the number of rows\n> that qualify the where restriction.\n>\n> R = number of rows that qualify\n> --> O(R + log(R))\n>\n> The pg measurements showed that PostgreSQL query performance can be\n> expected\n> to stay nearly the same regardless of number of rows in the table as\n> long as\n> the number of rows that qualify the where restriction stays constant.\n> The response time is linear to the number of rows that qualify the where\n>\n> restriction, but that linear behavior is also expected with MS Sql.\nWell, maybe you are right here but I talked once more with my colleague\nabout specifications. We can assure that the input of data is about 1GB.\nWe can be sure about that because what has to be stored is defined\nin the German law about infectious diseases. We have no online\nshop system or something else. <sarcastic>If the recent anthrax problem\nwere to increase exponentially we could be in trouble, but chances are\nlow.</sarcastic> So we have good chances to estimate the amount of data\nquite well. It is a linear growth of 1GB per year. If MS SQL server is\nnow fast enough we can grow with normal hardware performance increase\nover the year. This is a fact I have to accept.\n\nAn additional constraint is that the underlying data model with an\nAccess application is run by about 170 clients which have an amount\nof data of about 100 - 500 data sets which they export once a week into\nour central server. The developers tried hard to get the Access application\nand the MS SQL server solution in sync and having a third application\n(by rewriting some 500 queries) would be a lot of work. 
(I'm not afraid of\nthis work but I must be sure it would make sense before I start and so\nI hope for advice of people who perhaps did so.)\n\nI discussed the issue of using statistics tables to speed up certain\nqueries. He told me that this technique is known as OLAP cubes in\nMS SQL server and that there are tools to build such things. Is this\na valid comparison? He did not use it because it would disable the\naccess solution of our clients. Are there any tools for PostgreSQL for\nsuch stuff besides manually creating tables and triggers?\n\nCurrently I see two solutions to solve my problem:\n 1. Hoping that 'index coverage' is implemented (perhaps by a\n patch ... somebody asked about it but no response) in 7.2 or at\n least 7.3.\n In this case I would try to do my best with the statistic tables\n but I wouldn't cope with it if at some stage our data model would\n change and I would rework all such stuff.\n 2. Giving MySQL a trial because I expect it to solve my problem in\n the fashion I need. (Well - readonly is OK, surely no such features\n like MVCC and thus perhaps faster index scans.) I would definitely\n come back to PostgreSQL once 'index coverage' or any other method\n to speed up index search will be implemented.\n\nCould somebody give any advice on what would be the best strategy? (Perhaps\nI should switch back to pgsql-general for this question, but I definitely\nwant to hear a statement from the hackers about future implementation\nplans!)\n\nBy the way, in my former postings I forgot to mention a further problem\nwhich stayed unanswered in my questions on pgsql-general: the fact that\nwhile observing \"top\" while doing a query (over some 30 seconds) the\nmemory load from postgresql increases heavily when executing a query.\nI wonder if it could help if there were some mechanism to keep\nsome information of the database resident in memory. 
I surely know that\nmemory handling of Linux/UNIX is different from Win (and this is a great\nfeature ;-) ), but if I have plenty of free memory (2GB) and my box\nwasn't swapping at any time I wonder if it shouldn't be possible to\nhold some information in memory in favour of simply relying on the hard\ndisk cache of the OS. Any opinions?\n\nKind regards\n\n Andreas.\n\n\n\n\n",
"msg_date": "Sun, 4 Nov 2001 17:24:04 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": true,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Monday 05 November 2001 03:24, Tille, Andreas wrote:\n\n> I discussed the issue of using statistics tables to speed up certain\n> queries. He told me that those technique is known as OLAP tubes in\n> MS SQL server and that there are tools to build such things. Is this\n> a valid comparison? He did not use it because it would disable the\n> access solution of our clients. Are there any tools for PostgreSQL for\n> such stuff besides the manual creating tables and triggers?\n\nI still don't understand your guy. Knowing that the table (and with it the \nperformance demands) will grow, it is quite stubborn and certainly not \nelegant at all to insist on the blunt query instead of a smart solution. The \nsmart solution as outlined always returns results instantly and needs next to \nno memory or other ressources as compared to the blunt query, regardless of \nthe growth of your database. It would only impact the growth *rate* due to \nthe fired triggers, but then, your application does not seem to have a heavy \ninsert load anyway and you could always queue the inserts with middleware as \nyou have no realtime demands.\n\nBtw, what is wrong with creating a few tables and a few trigger functions \n\"manually\"? Writing, testing, and debugging them should not cost more than \na couple of days. Why would I want a tool for it? I might spend a couple of \nhours writing a python script if I would need similar triggers for many \ntables over and over again, but your problem does not seem to have the need \nfor this.\n\nHorst\n",
"msg_date": "Tue, 6 Nov 2001 02:05:42 +1100",
"msg_from": "Horst Herb <horst@hherb.com>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
},
{
"msg_contents": "On Monday 05 November 2001 03:24, Tille, Andreas wrote:\n\n> I discussed the issue of using statistics tables to speed up certain\n> queries. He told me that those technique is known as OLAP tubes in\n> MS SQL server and that there are tools to build such things. Is this\n> a valid comparison? He did not use it because it would disable the\n> access solution of our clients. Are there any tools for PostgreSQL for\n> such stuff besides the manual creating tables and triggers?\n\nI still don't understand your guy. Knowing that the table (and with it the \nperformance demands) will grow, it is quite stubborn and certainly not \nelegant at all to insist on the blunt query instead of a smart solution. The \nsmart solution as outlined always returns results instantly and needs next to \nno memory or other ressources as compared to the blunt query, regardless of \nthe growth of your database. It would only impact the growth *rate* due to \nthe fired triggers, but then, your application does not seem to have a heavy \ninsert load anyway and you could always queue the inserts with middleware as \nyou have no realtime demands.\n\nBtw, what is wrong with creating a few tables and a few trigger functions \n\"manually\"? Writing, testing, and debugging them should not cost more than \na couple of days. Why would I want a tool for it? I might spend a couple of \nhours writing a python script if I would need similar triggers for many \ntables over and over again, but your problem does not seem to have the need \nfor this.\n\nHorst\n",
"msg_date": "Tue, 6 Nov 2001 04:58:08 +1100",
"msg_from": "Horst Herb <hherb@malleenet.net.au>",
"msg_from_op": false,
"msg_subject": "Re: Serious performance problem"
}
] |
[
{
"msg_contents": "The JDBC driver's test suite with current CVS still has one\nfailure in the TimestampTest. This is with Liam's fixes of a\ncouple of weeks ago already applied.\n\nI did some debugging and tracing and I have a hard time\nexplaining what's going on. Perhaps someone can help me out\nhere.\n\nBelow is a detailed transcript of what's happening in the\nrelevant parts of testGetTimestamp() and testSetTimestamp().\nBoth client and server were running in the CET (+1:00) timezone.\n\nTest cases 1-3 construct a SQL string with a hard coded date\nwithout a timezone indication, so conversion from localtime to\nUTC is done by the backend. Test cases 4-6 go through\nStatement.setTimestamp() which converts to UTC in the driver.\n\nThe funny thing is that test cases 1 and 2/3 use the same code,\nwhile 1 succeeds and 2 and 3 fail. The only difference appears\nto be the actual date used in the test. The explanation may be\nin test cases 5 and 6, which succeed with the same dates but\nwith different code. For some reason, the 1970 date gets a 2\nhour time shift from CET (+1) to UTC, while the 1950 date gets a\n1 hour time shift as I expected.\n\nSo it appears that the time shift algorithm in the backend\ndiffers from the time shift algorithm used in setTimestamp() in\nthe driver. The driver gives the 1970 date a different time\nshift than the 1950 date, whereas the backend treats them both\nthe same. 
\n\nThis is the mapping table:\n\n Timestamp in CET (+1) In UTC\n\nBackend 1950-02-07 15:00:00 1950-02-07 14:00:00.0\n 1970-06-02 07:13:00 1970-06-02 06:13:00.0\n ^^\n\nDriver 1950-02-07 15:00:00.0 1950-02-07 14:00:00.0\n 1970-06-02 08:13:00.0 1970-06-02 06:13:00.0\n ^^\n\nDoes anyone understand why this is happening and which of the\ntwo algorithms is correct?\n\n\nTest case 1: passes\n-------------------\ntestGetTimestamp():\nstmt.executeUpdate(JDBC2Tests.insertSQL(\"testtimestamp\",\"'1950-02-07\n15:00:00'\"))\nSends to the backend: INSERT INTO testtimestamp VALUES\n('1950-02-07 15:00:00')\nBackend returns: 1950-02-07 15:00:00+01\nMatches: getTimestamp(1950, 2, 7, 15, 0, 0, 0)\n\nTest case 2: fails\n------------------\ntestGetTimestamp():\nstmt.executeUpdate(JDBC2Tests.insertSQL(\"testtimestamp\",\n\"'\"+getTimestamp(1970, 6, 2, 8, 13, 0, 0).toString() + \"'\"))\nSends to the backend: INSERT INTO testtimestamp VALUES\n('1970-06-02 08:13:00.0')\nBackend returns: 1970-06-02 08:13:00+01\nDoes not match: getTimestamp(1970, 6, 2, 8, 13, 0, 0)\n\nTest case 3: passes\n-------------------\ntestGetTimestamp():\nstmt.executeUpdate(JDBC2Tests.insertSQL(\"testtimestamp\",\"'1970-06-02\n08:13:00'\"))\t\t\tSends to the backend: INSERT\nINTO testtimestamp VALUES ('1970-06-02 08:13:00')\nBackend returns: 1970-06-02 08:13:00+01\nDoes not match: getTimestamp(1970, 6, 2, 8, 13, 0, 0)\n\nTest case 4: passes\n-------------------\npstmt.setTimestamp(1, getTimestamp(1950, 2, 7, 15, 0, 0, 0));\nSends to the backend: INSERT INTO testtimestamp VALUES\n('1950-02-07 14:00:00.0+00')\nBackend returns: 1950-02-07 15:00:00+01\nMatches: getTimestamp(1950, 2, 7, 15, 0, 0, 0)\n\nTest case 5: passes\n-------------------\npstmt.setTimestamp(1, getTimestamp(1970, 6, 2, 8, 13, 0, 0));\nSends to the backend: INSERT INTO testtimestamp VALUES\n('1970-06-02 06:13:00.0+00')\nBackend returns: 1970-06-02 07:13:00+01\nMatches: getTimestamp(1970, 6, 2, 8, 13, 0, 0)\n\nTest case 6: 
passes\n-------------------\npstmt.setTimestamp(1, getTimestamp(1970, 6, 2, 8, 13, 0, 0));\nSends to the backend: INSERT INTO testtimestamp VALUES\n('1970-06-02 06:13:00.0+00')\nBackend returns: 1970-06-02 07:13:00+01\nMatches: getTimestamp(1970, 6, 2, 8, 13, 0, 0)\n\nRegards,\nRené Pijlman <rene@lab.applinet.nl>\n",
"msg_date": "Sun, 04 Nov 2001 18:44:22 +0100",
"msg_from": "Rene Pijlman <rene@lab.applinet.nl>",
"msg_from_op": true,
"msg_subject": "Funny timezone shift causes failure in test suite"
},
{
"msg_contents": "On Sun, 04 Nov 2001 18:44:22 +0100, I wrote:\n>Test case 3: passes\n\nOops, test case 3 failed as well.\n\nRegards,\nRen� Pijlman <rene@lab.applinet.nl>\n",
"msg_date": "Sun, 04 Nov 2001 19:54:17 +0100",
"msg_from": "Rene Pijlman <rene@lab.applinet.nl>",
"msg_from_op": true,
"msg_subject": "Re: Funny timezone shift causes failure in test suite"
},
{
"msg_contents": "\nOk, after having stared at things for a while, I believe that the\nproblem is that Rene's backend (computer?) is not recognizing local\nsummer time (daylight savings).\n\nWhen a timestamp is inserted into a table, the time is changed to UTC.\nThe amount of the time shift is determined by whether or not the date to\nbe inserted is in normal time or local summer (daylight savings). If no\ntimezone is specified along with the timestamp, the current timezone is\ndetermined by various means (TZ, PGTZ, or SQL set time zone).\n\nThe dates of test cases 1 and 4 do not fall in daylight savings time so\nthere is no issue there. For cases 2, 3, 5, and 6, the date is June 2,\n1970 so daylight savings time is in effect (Rene's zone is now CEST [+2]\ninstead of CET [+1]). Now, for cases 2 and 3, the time is being shifted\nby the backend but the shift is one hour instead of two! So the date\nreturned by the backend, while correct given the one hour shift, is not\ncorrect when changed back to UTC during comparisons. It seems that\npostgresql is not realizing that the timezone that Rene is in observes\ndaylight savings time. Those two cases passed for me in Toronto because\npostgresql knows that EST observes daylight savings time by becoming\nEDT.\n\nRene, CET becomes CEST in summer, but does your locale actually observe\nit? (Like Saskatchewan, Canada, which is in Canada/Central but doesn't\nobserve daylight savings). \n\nWhy do tests 5 and 6 not fail? The setTimestamp method of\nPreparedStatement uses Java's internal date/time processing\nfunctionality to shift the date to UTC before sending to the backend.\nJava does know about CET and CEST so the shift is performed correctly.\nWhen the timestamps are retrieved from the database, they are retrieved\nin CET, but 1970-06-02 07:13:00+01 (what the backend returns) is\nthe same as 1970-06-02 08:13:00+02 so the tests pass. 
For these tests,\nthe backend should actually be returning 1970-06-02 08:13:00+02.\n\nThe JDBC interface is fine (on the assumption that Java does correct\nshifting).. The problem is with the backend and/or Rene's computer (or\nsome wackyness in timezone observances).\n\nLiam\n\nOn Sun, Nov 04, 2001 at 06:44:22PM +0100, Rene Pijlman wrote:\n> The JDBC driver's test suite with current CVS still has one\n> failure in the TimestampTest. This is with Liam's fixes of a\n> couple of weeks ago already applied.\n> \n> I did some debugging and tracing and I have a hard time\n> explaining what's going on. Perhaps someone can help me out\n> here.\n> \n> Below is a detailed transcript of what's happening in the\n> relevant parts of testGetTimestamp() and testSetTimestamp().\n> Both client and server were running in the CET (+1:00) timezone.\n> \n> Test cases 1-3 construct a SQL string with a hard coded date\n> without a timezone indication, so conversion from localtime to\n> UTC is done by the backend. Test cases 4-6 go through\n> Statement.setTimestamp() which converts to UTC in the driver.\n> \n> The funny thing is that test cases 1 and 2/3 use the same code,\n> while 1 succeeds and 2 and 3 fail. The only difference appears\n> to be the actual date used in the test. The explanation may be\n> in test cases 5 and 6, which succeed with the same dates but\n> with different code. For some reason, the 1970 date gets a 2\n> hour time shift from CET (+1) to UTC, while the 1950 date gets a\n> 1 hour time shift as I expected.\n> \n> So it appears that the time shift algorithm in the backend\n> differs from the time shift algorithm used in setTimestamp() in\n> the driver. The driver gives the 1970 date a different time\n> shift than the 1950 date, whereas the backend treats them both\n> the same. 
\n> \n> This is the mapping table:\n> \n> Timestamp in CET (+1) In UTC\n> \n> Backend 1950-02-07 15:00:00 1950-02-07 14:00:00.0\n> 1970-06-02 07:13:00 1970-06-02 06:13:00.0\n> ^^\n> \n> Driver 1950-02-07 15:00:00.0 1950-02-07 14:00:00.0\n> 1970-06-02 08:13:00.0 1970-06-02 06:13:00.0\n> ^^\n> \n> Does anyone understand why this is happening and which of the\n> two algorithms is correct?\n> \n> \n> Test case 1: passes\n> -------------------\n> testGetTimestamp():\n> stmt.executeUpdate(JDBC2Tests.insertSQL(\"testtimestamp\",\"'1950-02-07\n> 15:00:00'\"))\n> Sends to the backend: INSERT INTO testtimestamp VALUES\n> ('1950-02-07 15:00:00')\n> Backend returns: 1950-02-07 15:00:00+01\n> Matches: getTimestamp(1950, 2, 7, 15, 0, 0, 0)\n> \n> Test case 2: fails\n> ------------------\n> testGetTimestamp():\n> stmt.executeUpdate(JDBC2Tests.insertSQL(\"testtimestamp\",\n> \"'\"+getTimestamp(1970, 6, 2, 8, 13, 0, 0).toString() + \"'\"))\n> Sends to the backend: INSERT INTO testtimestamp VALUES\n> ('1970-06-02 08:13:00.0')\n> Backend returns: 1970-06-02 08:13:00+01\n> Does not match: getTimestamp(1970, 6, 2, 8, 13, 0, 0)\n> \n> Test case 3: passes\n> -------------------\n> testGetTimestamp():\n> stmt.executeUpdate(JDBC2Tests.insertSQL(\"testtimestamp\",\"'1970-06-02\n> 08:13:00'\"))\t\t\tSends to the backend: INSERT\n> INTO testtimestamp VALUES ('1970-06-02 08:13:00')\n> Backend returns: 1970-06-02 08:13:00+01\n> Does not match: getTimestamp(1970, 6, 2, 8, 13, 0, 0)\n> \n> Test case 4: passes\n> -------------------\n> pstmt.setTimestamp(1, getTimestamp(1950, 2, 7, 15, 0, 0, 0));\n> Sends to the backend: INSERT INTO testtimestamp VALUES\n> ('1950-02-07 14:00:00.0+00')\n> Backend returns: 1950-02-07 15:00:00+01\n> Matches: getTimestamp(1950, 2, 7, 15, 0, 0, 0)\n> \n> Test case 5: passes\n> -------------------\n> pstmt.setTimestamp(1, getTimestamp(1970, 6, 2, 8, 13, 0, 0));\n> Sends to the backend: INSERT INTO testtimestamp VALUES\n> ('1970-06-02 06:13:00.0+00')\n> Backend returns: 
1970-06-02 07:13:00+01\n> Matches: getTimestamp(1970, 6, 2, 8, 13, 0, 0)\n> \n> Test case 6: passes\n> -------------------\n> pstmt.setTimestamp(1, getTimestamp(1970, 6, 2, 8, 13, 0, 0));\n> Sends to the backend: INSERT INTO testtimestamp VALUES\n> ('1970-06-02 06:13:00.0+00')\n> Backend returns: 1970-06-02 07:13:00+01\n> Matches: getTimestamp(1970, 6, 2, 8, 13, 0, 0)\n> \n> Regards,\n> René Pijlman <rene@lab.applinet.nl>\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \nLiam Stewart :: Red Hat Canada, Ltd. :: liams@redhat.com\n",
"msg_date": "Mon, 5 Nov 2001 18:26:28 -0500",
"msg_from": "Liam Stewart <liams@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Funny timezone shift causes failure in test suite"
},
{
"msg_contents": "On Tue, Nov 06, 2001 at 05:27:44AM +0000, Thomas Lockhart wrote:\n> I glanced at the CET timezone database on my Linux box and it seems that\n> there is a DST gap between 1944 and 1977. Certainly I see entries in,\n> for example, PST8PDT for this time period whereas for CET I see no\n> entries at all.\n\nEurope/Amsterdam has a nice big gap between 1945 and 1977. Perhaps they\nstopped observing it for a while and Java's not that bright?\n\n> Is Java guaranteed to use the system timezone database? In any case,\n> istm that there is a discrepency between Rene's expectations of DST\n> behavior and the info in the zoneinfo database.\n\nI haven't looked the internals of any JRE, but I would think that the\nsystem's timezone database would not even be acknowledged. Java most\nlikely has its own internal database.\n\nLiam\n\n-- \nLiam Stewart :: Red Hat Canada, Ltd. :: liams@redhat.com\n",
"msg_date": "Tue, 6 Nov 2001 12:22:38 -0500",
"msg_from": "Liam Stewart <liams@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Funny timezone shift causes failure in test suite"
},
{
"msg_contents": "On Mon, 5 Nov 2001 18:26:28 -0500, you wrote:\n>Ok, after having stared at things for a while, I believe that the\n>problem is that Rene's backend (computer?) is not recognizing local\n>summer time (daylight savings).\n\nIts running Red Hat Linux 7.1. Is that buggy? ;-)\n\nlinuxconf says zone is \"Europe/Amsterdam\" and I remember\nselecting that when I installed it. date +%Z says \"CET\".\n\nThis is /etc/sysconfig/clock:\nZONE=\"Europe/Amsterdam\"\nUTC=false\nARC=false\n\nBy the way, according to a reliable local source there was no\nsummer time in the Netherlands between 1945 and 1977, but I'm\nnot sure if the timezone configs of Red Hat are aware of that\n:-)\n\n>Rene, CET becomes CEST in summer, but does your locale actually observe\n>it? \n\nEuh... how can I tell?\n\nThanks for your efforts.\n\nRegards,\nRen� Pijlman <rene@lab.applinet.nl>\n",
"msg_date": "Wed, 07 Nov 2001 23:01:13 +0100",
"msg_from": "Rene Pijlman <rene@lab.applinet.nl>",
"msg_from_op": true,
"msg_subject": "Re: Funny timezone shift causes failure in test suite"
},
{
"msg_contents": "On Wed, Nov 07, 2001 at 11:01:13PM +0100, Rene Pijlman wrote:\n> On Mon, 5 Nov 2001 18:26:28 -0500, you wrote:\n> >Ok, after having stared at things for a while, I believe that the\n> >problem is that Rene's backend (computer?) is not recognizing local\n> >summer time (daylight savings).\n> \n> Its running Red Hat Linux 7.1. Is that buggy? ;-)\n> \n> linuxconf says zone is \"Europe/Amsterdam\" and I remember\n> selecting that when I installed it. date +%Z says \"CET\".\n> \n> This is /etc/sysconfig/clock:\n> ZONE=\"Europe/Amsterdam\"\n> UTC=false\n> ARC=false\n> \n> By the way, according to a reliable local source there was no\n> summer time in the Netherlands between 1945 and 1977, but I'm\n> not sure if the timezone configs of Red Hat are aware of that\n> :-)\n\nThat's it. (At least) Sun's JRE seems to be braindead when it comes to\ntimezones.. I made a small test program that runs through a sequence of\nyears and for each year, it takes a date in the winter and a date in the\nsummer and checks whether or not the calendar is in daylight savings\ntime. From 1945 to 1977, it reports that in the summer, daylight savings\ntime is observed, which it shouldn't be. \n\nAfter poking around some, I think that Java isn't concerned with\nhistorical behaviour (ICU [Internation Components for Unicode] isn't\nAFAICS, and the ICU Java classes developped by Taligent were integrated\ninto Sun's JDK 1.1...)\n\nLiam\n\n-- \nLiam Stewart :: Red Hat Canada, Ltd. :: liams@redhat.com\n",
"msg_date": "Tue, 13 Nov 2001 16:28:57 -0500",
"msg_from": "Liam Stewart <liams@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: Funny timezone shift causes failure in test suite"
},
{
"msg_contents": "[about a failure with a historical date in the JDBC driver's\ntest suite, when ran in the CET timezone]\n\nOn Tue, 13 Nov 2001 16:28:57 -0500, you wrote:\n>At least) Sun's JRE seems to be braindead when it comes to\n>timezones.. I made a small test program that runs through a sequence of\n>years and for each year, it takes a date in the winter and a date in the\n>summer and checks whether or not the calendar is in daylight savings\n>time. From 1945 to 1977, it reports that in the summer, daylight savings\n>time is observed, which it shouldn't be. \n>\n>After poking around some, I think that Java isn't concerned with\n>historical behaviour (ICU [Internation Components for Unicode] isn't\n>AFAICS, and the ICU Java classes developped by Taligent were integrated\n>into Sun's JDK 1.1...)\n\nYes, you're right.\n\nThis bug description confirms that the JVM up to 1.3 does not\nhave a historically correct timezone implementation:\nhttp://developer.java.sun.com/developer/bugParade/bugs/4257314.html\n(registration required), and it says this is fixed in 1.4 aka\n'merlin'.\n\nI tested it with Sun's JDK 1.4 beta 3 and the Sun's J2EE 1.3\nreference implementation on Red Hat Linux 7.1 and indeed the\ntest suite ran fine with 0 failures (I had to tweak some\nunimplemented methods to be able to run the driver with this\nJVM/J2EE).\n\nCase closed. We don't need to do anything in the driver. Liam,\nthanks a lot for your help!\n\nRegards,\nRen� Pijlman <rene@lab.applinet.nl>\n",
"msg_date": "Sun, 25 Nov 2001 20:00:29 +0100",
"msg_from": "Rene Pijlman <rene@lab.applinet.nl>",
"msg_from_op": true,
"msg_subject": "Re: Funny timezone shift causes failure in test suite"
}
] |
[
{
"msg_contents": "Hi,\n\nWould it be possible to sync pgident runs\nso that the *.po files also get updated\nWRT the changes in the source code? It's\njust line numbers of particular messages change,\nso pgident can invoke gettext tools to update\n*.po files as well...\n\n--\nSerguei A. Mokhov\n \n\n",
"msg_date": "Sun, 4 Nov 2001 14:21:05 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": true,
"msg_subject": "sync pgident run with *.po updates"
},
{
"msg_contents": "> Hi,\n> \n> Would it be possible to sync pgident runs\n> so that the *.po files also get updated\n> WRT the changes in the source code? It's\n> just line numbers of particular messages change,\n> so pgident can invoke gettext tools to update\n> *.po files as well...\n\nGood question. I wish I knew.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Nov 2001 14:54:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: sync pgident run with *.po updates"
}
] |
[
{
"msg_contents": "Correct, not too ambitious, not too\nlong for an enhancement history item? :)\n\n--\nSerguei A. Mokhov",
"msg_date": "Sun, 4 Nov 2001 15:32:42 -0500",
"msg_from": "\"Serguei Mokhov\" <sa_mokho@alcor.concordia.ca>",
"msg_from_op": true,
"msg_subject": "NLS HISTORY.patch.txt"
},
{
"msg_contents": " \nVery nice to have more detail on this item.\n\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\n> Correct, not too ambitious, not too\n> long for an enhancement history item? :)\n> \n> --\n> Serguei A. Mokhov\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Nov 2001 22:10:56 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: NLS HISTORY.patch.txt"
}
] |
[
{
"msg_contents": "I dumped a 7.1devel database, and when reloading into postgres of 31 Oct\n2001 20:23 GMT I get:\n\npsql:foobar.db:60: ERROR: copy: line 2247, Bad timestamp external\nrepresentation 'Fri 01 Aug 00:00:00 1941 BDST'\npsql:foobar.db:60: lost synchronization with server, resetting connection\n\nMy zoneinfo files know about BDST, but src/backend/utils/adt/datetime.c\ndoesn't...\n\nThis may have been fixed within the last few days, and when I get to the\noffice I could see what changed in datetime.c, but I thought I would mention\nit now..\n\nCheers,\n\nPatrick\n",
"msg_date": "Sun, 4 Nov 2001 20:53:32 +0000",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": true,
"msg_subject": "British Double Summer Time"
},
{
"msg_contents": "> I dumped a 7.1devel database, and when reloading into postgres of 31 Oct\n> 2001 20:23 GMT I get:\n> psql:foobar.db:60: ERROR: copy: line 2247, Bad timestamp external\n> representation 'Fri 01 Aug 00:00:00 1941 BDST'\n> psql:foobar.db:60: lost synchronization with server, resetting connection\n> My zoneinfo files know about BDST, but src/backend/utils/adt/datetime.c\n> doesn't...\n\nIn 6 years of date/time work, this is the first I've heard of \"BDST\".\nCan you give me a definition? What is the \"double\" part of it?? Does it\nactually stand for \"British Daylight Savings Time\"?\n\n - Thomas\n",
"msg_date": "Mon, 05 Nov 2001 14:39:03 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: British Double Summer Time"
},
{
"msg_contents": "> My zoneinfo files know about BDST, but src/backend/utils/adt/datetime.c\n> doesn't...\n\nNot fixed yet, but I found the definition of BDST on line, and will\ncommit it soon.\n\nIf you need a fix earlier, here is a patch...\n\n - Thomas",
"msg_date": "Mon, 05 Nov 2001 14:44:55 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: British Double Summer Time"
},
{
"msg_contents": "On Mon, Nov 05, 2001 at 02:44:55PM +0000, Thomas Lockhart wrote:\n> > My zoneinfo files know about BDST, but src/backend/utils/adt/datetime.c\n> > doesn't...\n> \n> Not fixed yet, but I found the definition of BDST on line, and will\n> commit it soon.\n> \n> If you need a fix earlier, here is a patch...\n\nThat was quick! Thank you!\n\nPatrick\n",
"msg_date": "Mon, 5 Nov 2001 16:24:50 +0000",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": true,
"msg_subject": "Re: British Double Summer Time"
}
] |
[
{
"msg_contents": "In 31 Oct 2001 20:23 GMT source, doc/src/sgml/trigger.sgml mentions:\n\n#include \"executor/spi.h\" /* this is what you need to work with SPI */\n#include \"commands/trigger.h\" /* -\"- and triggers */\n\nfor writing triggers in C, yet:\n\n% cd src/include\n% gmake -n install\n/bin/sh ../../config/mkinstalldirs /usr/local/pgsql/include/libpq /usr/local/pgsql/include/internal/libpq /usr/local/pgsql/include/internal/lib\nfor file in fmgr.h postgres.h access/attnum.h commands/trigger.h \\\n executor/spi.h utils/elog.h utils/geo_decls.h utils/mcxt.h \\\n utils/palloc.h; do \\\n if cmp -s ./$file /usr/local/pgsql/include/$file; \\\n then \\\n : ; \\\n else \\\n rm -f /usr/local/pgsql/include/$file; \\\n fi ; \\\ndone\n...\n\n\nseems to actively want to get rid of those files (?!) Anyway, they are\ndefinitely not installed on my system. So, have things changed and the\ndocumentation lagged, or should those include files be installed?\n(I have never written a C trigger function - yet.)\n\nCheers,\n\nPatrick\n",
"msg_date": "Sun, 4 Nov 2001 22:01:29 +0000",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": true,
"msg_subject": "triggers and C include files"
},
{
"msg_contents": "Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> seems to actively want to get rid of those files (?!) Anyway, they are\n> definitely not installed on my system. So, have things changed and the\n> documentation lagged, or should those include files be installed?\n\n\"make install\" doesn't install headers for server-side development.\nDo \"make install-all-headers\" if you want the full include tree.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 04 Nov 2001 17:37:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: triggers and C include files "
},
{
"msg_contents": "On Sun, Nov 04, 2001 at 05:37:51PM -0500, Tom Lane wrote:\n> Patrick Welche <prlw1@newn.cam.ac.uk> writes:\n> > seems to actively want to get rid of those files (?!) Anyway, they are\n> > definitely not installed on my system. So, have things changed and the\n> > documentation lagged, or should those include files be installed?\n> \n> \"make install\" doesn't install headers for server-side development.\n> Do \"make install-all-headers\" if you want the full include tree.\n\nOops - user error.. Thank you!\n\nPatrick\n",
"msg_date": "Sun, 4 Nov 2001 22:38:53 +0000",
"msg_from": "Patrick Welche <prlw1@newn.cam.ac.uk>",
"msg_from_op": true,
"msg_subject": "Re: triggers and C include files"
}
] |
[
{
"msg_contents": "I have accepted employment with SRA in Tokyo, Japan. I am excited to be\nworking with them. As some of you know, I visited Japan last year and\nwas surprised to see how widespread PostgreSQL use was in that country. \nSRA is the leading PostgreSQL support company in Japan and I hope to\nassist them in continuing PostgreSQL's popularity.\n\nTatsuo Ishii, who many of you know, also works for SRA. He has been\ninvolved with PostgreSQL since its start in 1996.\n\nSRA has employed me to continue working and publicizing PostgreSQL, a\njob which I enjoy very much. I will be visiting Japan in early December\nto make several PostgreSQL presentations.\n\nI am looking forward to this new opportunity to work on PostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 4 Nov 2001 23:23:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "My new job with SRA"
},
{
"msg_contents": "\nMost cool, congratulations ...\n\nOn Sun, 4 Nov 2001, Bruce Momjian wrote:\n\n> I have accepted employment with SRA in Tokyo, Japan. I am excited to be\n> working with them. As some of you know, I visited Japan last year and\n> was surprised to see how widespread PostgreSQL use was in that country.\n> SRA is the leading PostgreSQL support company in Japan and I hope to\n> assist them in continuing PostgreSQL's popularity.\n>\n> Tatsuo Ishii, who many of you know, also works for SRA. He has been\n> involved with PostgreSQL since its start in 1996.\n>\n> SRA has employed me to continue working and publicizing PostgreSQL, a\n> job which I enjoy very much. I will be visiting Japan in early December\n> to make several PostgreSQL presentations.\n>\n> I am looking forward to this new opportunity to work on PostgreSQL.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n",
"msg_date": "Mon, 5 Nov 2001 08:01:19 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: My new job with SRA"
},
{
"msg_contents": "Congratulations!\n\n\n\nDave\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of Bruce Momjian\nSent: November 4, 2001 11:24 PM\nTo: PostgreSQL-development\nSubject: [HACKERS] My new job with SRA\n\n\nI have accepted employment with SRA in Tokyo, Japan. I am excited to be\nworking with them. As some of you know, I visited Japan last year and\nwas surprised to see how widespread PostgreSQL use was in that country. \nSRA is the leading PostgreSQL support company in Japan and I hope to\nassist them in continuing PostgreSQL's popularity.\n\nTatsuo Ishii, who many of you know, also works for SRA. He has been\ninvolved with PostgreSQL since its start in 1996.\n\nSRA has employed me to continue working and publicizing PostgreSQL, a\njob which I enjoy very much. I will be visiting Japan in early December\nto make several PostgreSQL presentations.\n\nI am looking forward to this new opportunity to work on PostgreSQL.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n",
"msg_date": "Mon, 5 Nov 2001 08:52:52 -0500",
"msg_from": "\"Dave Cramer\" <dave@fastcrypt.com>",
"msg_from_op": false,
"msg_subject": "Re: My new job with SRA"
}
] |
[
{
"msg_contents": "Hi,\n Can someone plz to do specify the features and more important the limitations in using\nPostgreSQL. More info regarding performace etc shall be of immense help\nRegards\nBv :-)\n\n\n\n\n\n\n\nHi,\n Can someone plz to do specify the features and more important \nthe limitations in using\nPostgreSQL. More info regarding performace etc shall be of immense \nhelp\nRegards\nBv :-)",
"msg_date": "Mon, 5 Nov 2001 12:04:40 +0530",
"msg_from": "\"Balaji Venkatesan\" <balaji.venkatesan@megasoft.com>",
"msg_from_op": true,
"msg_subject": "Limitations on PGSQL"
},
{
"msg_contents": "At 12:04 05/11/01 +0530, you wrote:\n>Hi,\n> Can someone plz to do specify the features and more important the \n> limitations in using\n>PostgreSQL. More info regarding performace etc shall be of immense help\n>Regards\n>Bv :-)\n\nHello Balaji,\n\nThere are no real limitations when using PostgreSQL smart programming \nfeatures: views, triggers, rules, types and plpgsql server-side language.\n\nFor example:\n\n1) FAST READINGS: triggers can store display values instead of performing \nseveral LEFT JOINS or calling PL/pgSQL functions. Similarly, you can use \ntriggers to perform complex initialization or maintain consistency when \nadding/modifying a record. Cron jobs and functions can perform queries and \nstore results for instant results (ex: statistics tables).This makes your \ndatabase very fast in complex readings (ex: web environment). This concept \nof storing values is the base of optimization.\n2) SAFETY: postgreSQL is a real transactional system. When using a \ncombination of views and rules, you can control data modification very \nneatly. Example: you can define a sub-select of a table and control the \nscope of queries. This is very important in a commercial environment when \nyou data is valuable and must not be deleted or modified given a set of rules.\n3) CODING: server-side coding is mainly performed in PL/pgSQL, a very easy \nand powerful server-side language.\n\nThis is paradise if you are a programmer. IMHO, the only few drawbacks are:\n\n1) TABLE DEFINITION: it is Impossible to delete a column or to \npromote/demote a column type. You have to drop the table and import old \nvalues into a new table. This makes life harder when working on large \ndatabases. You are always afraid of loosing your data. Even with backups, \nit is always 'heart breaking' to modify a table. You have to perform tests \nto ensure all data is there and safe.\n\n2) VIEWS/TRIGGERS cannot be modified. You have to drop them and create them \nagain. 
This makes programming a little bit tricky. Further more, if you \ncreate a view, let's say \"SELECT table1.*, table2.* FROM table1 a LEFT JOIN \ntable2 b on a.oid=b.oida\", the resulting view displays all fields, hence \nmaking it harder for a non programmer to read view content.\n\nThis is very little drawback compared to power and reliability of PostgreSQL.\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Mon, 05 Nov 2001 11:33:48 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Limitations on PGSQL"
},
{
"msg_contents": "IMHO Postgres' drawbacks are the following:\n\nSeverely limited access/grants system - postgres gives little or no control over anything beyond controlling access to whole tables. -Yes you can create views but views have a couple of drawbacks too... This is especially a problem with regard to functions (no trusted functions).\n\nLong connection time - if you are using the web you will have to use some sort of persistant scheme e.g. Apache::DBI otherwise you will handle around 5 requests per sec on a decent computer. I wonder whether it would be possible for it to either reconnect, keeping the connection to a new database or user, or reuse it's kids - like Apache.\n\nNo schema/tablespaces/cross-database access (- And it's listed on EXOTIC :()\n- You can emulate some of these features yet it's not the same.\n\nError messages take a long time to get used to and generally figuring things out may take some time (at least for me)\n\nIf you create a function/trigger/view/rule etc. 
which accesses a table, and then you drop that table, and recreate it, you may have to recreate the function etc.\n\nIts advantages are:\n\nRuns on practically any platform (I run OpenBSD so it matters).\n\nSupports triggers, rules (statement level triggers), views and stored procedures!\n\nFast - my queries - which may be quite complex at times, are generally fast, and if they are not I can always speed them up with EXPLAIN, indexes, triggers creating derived tables and so on.\n\nDid I say stored procedures?\n\nLicense - Do ANYTHING you want with it (more or less) not as communistic as the ubiquitous GPL.\n\nPrice - Depending on your internet connection generally less than $0.02...\n\nGreat community - Does not mind answering questions and seems to forgive quickly as well.\n\nWrite Ahead logging, and many other functions I haven't really exploited yet.\n\nRegards,\n\nAasmund\n\n\n\n\nOn Mon, 05 Nov 2001 11:33:48 +0100, Jean-Michel POURE <jm.poure@freesurf.fr> wrote:\n> At 12:04 05/11/01 +0530, you wrote:\n> \n> Hello Balaji,\n> \n> There are no real limitations when using PostgreSQL smart programming \n> features: views, triggers, rules, types and plpgsql server-side language.\n> \n> For example:\n> \n> 1) FAST READINGS: triggers can store display values instead of performing \n> several LEFT JOINS or calling PL/pgSQL functions. Similarly, you can use \n> triggers to perform complex initialization or maintain consistency when \n> adding/modifying a record. Cron jobs and functions can perform queries and \n> store results for instant results (ex: statistics tables).This makes your \n> database very fast in complex readings (ex: web environment). This concept \n> of storing values is the base of optimization.\n> 2) SAFETY: postgreSQL is a real transactional system. When using a \n> combination of views and rules, you can control data modification very \n> neatly. Example: you can define a sub-select of a table and control the \n> scope of queries.
This is very important in a commercial environment when \n> you data is valuable and must not be deleted or modified given a set of rules.\n> 3) CODING: server-side coding is mainly performed in PL/pgSQL, a very easy \n> and powerful server-side language.\n> \n> This is paradise if you are a programmer. IMHO, the only few drawbacks are:\n> \n> 1) TABLE DEFINITION: it is Impossible to delete a column or to \n> promote/demote a column type. You have to drop the table and import old \n> values into a new table. This makes life harder when working on large \n> databases. You are always afraid of loosing your data. Even with backups, \n> it is always 'heart breaking' to modify a table. You have to perform tests \n> to ensure all data is there and safe.\n> \n> 2) VIEWS/TRIGGERS cannot be modified. You have to drop them and create them \n> again. This makes programming a little bit tricky. Further more, if you \n> create a view, let's say \"SELECT table1.*, table2.* FROM table1 a LEFT JOIN \n> table2 b on a.oid=b.oida\", the resulting view displays all fields, hence \n> making it harder for a non programmer to read view content.\n> \n> This is very little drawback compared to power and reliability of PostgreSQL.\n> \n> Best regards,\n> Jean-Michel POURE\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\nAasmund Midttun Godal\n\naasmund@godal.com - http://www.godal.com/\n+47 40 45 20 46\n",
"msg_date": "Mon, 05 Nov 2001 11:10:27 GMT",
"msg_from": "\"Aasmund Midttun Godal\" <postgresql@envisity.com>",
"msg_from_op": false,
"msg_subject": "Re: Limitations on PGSQL"
},
{
"msg_contents": "Hi Jeff, Poure, Godal,\n Sorry Jeff i was not too sure about where to post.\nShall hereafter post questions like these in pgsql-general. Sorry for the\ninconvenience.\nAnd Poure and Godal really thnx for the timely info that u ppl have given\nme, hope this\nshall be of great help\nCheers\nBv :-)\n\n----- Original Message -----\nFrom: \"Jeff Davis\" <list-pgsql-hackers@dynworks.com>\nTo: \"Balaji Venkatesan\" <balaji.venkatesan@megasoft.com>\nSent: Monday, November 05, 2001 4:22 PM\nSubject: Re: [HACKERS] Limitations on PGSQL\n\n\n> You'll want to ask this type of question on pgsql-general not -hackers.\nMore\n> importantly, you should narrow your question because what you ask below\n> leaves too much for us to comment on.\n>\n> There are great docs at postgresql.org that cover most of what you need to\n> know. Try reading a few entries in the docs (not every word, just see if\nyou\n> can find some general answers to your questions) and then see what\nquestions\n> you still have. Another approach is to ask whether PostgreSQL will work\nfor\n> your needs (but nobody knows what those needs are).\n>\n> PostgreSQL is an awesome database. I hope it works out great for you.\n>\n> Regards,\n> Jeff Davis\n>\n> On Sunday 04 November 2001 10:34 pm, you wrote:\n> > Hi,\n> > Can someone plz to do specify the features and more important the\n> > limitations in using PostgreSQL. More info regarding performace etc\nshall\n> > be of immense help Regards\n> > Bv :-)\n\n",
"msg_date": "Mon, 5 Nov 2001 16:54:51 +0530",
"msg_from": "\"Balaji Venkatesan\" <balaji.venkatesan@megasoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Limitations on PGSQL"
},
{
"msg_contents": "\n>Long connection time - if you are using the web you will have to use some \n>sort of persistant scheme e.g. Apache::DBI otherwise you will handle \n>around 5 requests per sec on a decent computer. I wonder whether it would \n>be possible for it to either reconnect, keeping the connection to a new \n>database or user, or reuse it's kids - like Apache.\n\nPhp allows persistent connections. Don't you think?\n\nhttp://uk.php.net/manual/en/configuration.php#ini.sect.pgsql\nPostgres Configuration Directives\npgsql.allow_persistent boolean\nWhether to allow persistent Postgres connections.\npgsql.max_persistent integer\nThe maximum number of persistent Postgres connections per process.\npgsql.max_links integer\nThe maximum number of Postgres connections per process, including \npersistent connections.\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Mon, 05 Nov 2001 15:52:44 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: Limitations on PGSQL"
},
{
"msg_contents": "First time user of Postgresql....\nAfter created the database, how do I check what foreign keys (constraint\nreferences in Postgresql term) were created? I looked around using\n\"psql\" and \"pgaccess\", but no success,\n\nThanks for the help,\n\nSam,\n",
"msg_date": "Mon, 21 Jan 2002 12:44:42 -0600",
"msg_from": "Sam Cao <scao@verio.net>",
"msg_from_op": false,
"msg_subject": "Foreign Key?"
},
{
"msg_contents": "\nOn Mon, 21 Jan 2002, Sam Cao wrote:\n\n> First time user of Postgresql....\n> After created the database, how do I check what foreign keys (constraint\n> references in Postgresql term) were created? I looked around using\n> \"psql\" and \"pgaccess\", but no success,\n\nBest thing to look at probably is the \"Referential Integrity Tutorial &\nHacking the Referential Integrity Tables\" tutorial at\nhttp://techdocs.postgresql.org/\n\nI believe that includes a view definition that gets alot of that\ninformation out.\n\n\n",
"msg_date": "Mon, 21 Jan 2002 13:13:59 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Foreign Key?"
},
{
"msg_contents": "\nI'm running postgres 7.1.2 on a freebsd machine -- Celeron 500 with 128\nmegs of ram (256 swap). Not the best for a real gut wrenching machine, but\nwhat was around to get the feel of what was wanted.\n\nA question was asked which i through to the database to see how it was\nable to handle the question at hand and it failed . . . after 50 minutes\nof processing it flopped to the ground killed: out of swap space.\n\nGranted the query was a large one (explanations below) but a few\nquestions.. \n\nIs there a way to predict the requirements a system would need to handle a\nquery of specific size / complexity? (and how?)\n\nIs there a way to pull this type of query off on this system? (is there a\nsolution other than throw more ram / swap at it?) (one would easily be to\nhandle it in chunks, but other suggestions are welcome)\n\nWhat would this type of query need to execute? How about to execute well?\n\nTable and query explanations follow...\n\nThe query was joining three tables, which i know is not quite a good idea,\nbut didn't see much of another way. The question was posed to find all\nthe subcategories all customers have ordered from a company.\n\nThe history table (history of orders) contains the id, date, cost,\nand orderid and has 838500 records.\n\nThe ordered table (line items of orders) contains the orderid and a sku\nand has 2670000 records\n\nThe subcategories table has the sku and subcategory and has 20000 records.\n\neach customer can have many orders which can have many items which can\nhave many subcategories.\n\nthe query was posed as: \n SELECT history.id, sub \n FROM insub\n WHERE history.orderid = ordered.orderid\n AND ordered.items = insub.sku\n ORDER BY ID;\n\nAny help would be greatly appreciated.\nThanks in advance.\n\n.jtp \n\n\n",
"msg_date": "Thu, 28 Feb 2002 14:54:34 -0500 (EST)",
"msg_from": "jtp <john@akadine.com>",
"msg_from_op": false,
"msg_subject": "killed select?"
},
{
"msg_contents": "jtp <john@akadine.com> writes:\n> A question was asked which i through to the database to see how it was\n> able to handle the question at hand and it failed . . . after 50 minutes\n> of processing it flopped to the ground killed: out of swap space.\n\nMy guess is that what actually bombed out was psql, which tries to\nbuffer the entire result of a query. (Well, actually it's libpq not\npsql that does that, but anyway the client side is what's failing.)\n\nI suspect that your query is insufficiently constrained and will return\nmany millions of rows --- are you sure you have the WHERE clauses right?\n\nIf you actually do need to process a query that returns gazillions of\nrows, the best bet is to declare a cursor so you can fetch the result\nin bite-size chunks, say a few hundred rows at a time.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Feb 2002 16:48:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: killed select? "
},
{
"msg_contents": "\n\nHi, just a general design question and wondering how postgres would handle\neither situation.\n\nI have a gobb of information (400000+ records) on individual accounts. I\nneed to store all of their personal information (name, adress, etc) as\nwell as all of their more dynamic company information (last purchase,\ntimes ordered, etc). \n\nOne: All their dynamic information can be rebuilt from other tables,\nbut it will be called upon rather frequently, so the redundency so as to\nnot have to rebuild on every call seems acceptable by me. (smack me if i'm\nwrong)\n\nTwo: There is only a one to one ration between an account (personal\ninformation) and that account's account information (makes sense,\neh?). But does it make sense to keep this information in the same table or\nto break it up? I estimate about 20 fields in two separate tables or 40\nin one big one. The personal information will almost always be index\nsearched by name or zipcode. Whereas the other information they (they\nproverbial they) will probably want sorted in weirdass ways that the\ndesign was never intended for. Basically, it will be be subjected to more\nsequential scans than something with close to a half million records\nshould be. My basic question ends up being: does postgres handle\nsequntial scans across tables with fewer fields better? Is there any\nperformance increase by separating this into two tables?\n\nThanks for any hints you could give me.\n.jtp\n\n",
"msg_date": "Fri, 19 Apr 2002 15:14:03 -0400 (EDT)",
"msg_from": "jtp <john@akadine.com>",
"msg_from_op": false,
"msg_subject": "general design question"
},
{
"msg_contents": "On Fri, 19 Apr 2002, jtp wrote:\n\n> One: All their dynamic information can be rebuilt from other tables,\n> but it will be called upon rather frequently, so the redundency so as to\n> not have to rebuild on every call seems acceptable by me. (smack me if i'm\n> wrong)\n\nIt's quite reasonable to keep a summary table of information for\nfast reference. The only difficulty you have to deal with is how\nyou keep it up to date. (Update every time the summarized data\nchange? Update once an hour? Once a day? That kind of thing. It\ndepends on your application.)\n\n> My basic question ends up being: does postgres handle\n> sequntial scans across tables with fewer fields better?\n\nDefinitely. Given the same number of rows, a narrower table (fewer\ncolumns, shorter data types, that kind of thing) will always be\nscanned faster than a wider one simply because you need to read\nless data from the disk. This is database-independent, in fact.\n\nSince vacuuming also effectively involves a sequential scan, you'll\nalso vacuum faster on a narrower table. So it makes sense to separate\nfrequently updated data from less frequently updated data, and\nvacuum the frequently updated table more often, I would think.\n\nHowever, for tables that are already narrow, you may get little\nperformance gain, or in some cases performance may even get worse,\nnot to mention your data size blowing up bigger. Postgres has a\nquite high per-tuple overhead (31 bytes or more) so splitting small\ntables can actually cause growth and make things slower, if you\nfrequently access both tables.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Sat, 20 Apr 2002 11:55:58 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: general design question"
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> However, for tables that are already narrow, you may get little\n> performance gain, or in some cases performance may even get worse,\n> not to mention your data size blowing up bigger. Postgres has a\n> quite high per-tuple overhead (31 bytes or more) so splitting small\n> tables can actually cause growth and make things slower, if you\n> frequently access both tables.\n\nRight. The *minimum* row overhead in Postgres is 36 bytes (32-byte\ntuple header plus 4-byte line pointer). More, the actual data space\nwill be rounded up to the next MAXALIGN boundary, either 4 or 8 bytes\ndepending on your platform. On an 8-byte-MAXALIGN platform like mine,\na table containing a single int4 column will actually occupy 44 bytes\nper row. Ouch. So database designs involving lots of narrow tables\nare not to be preferred over designs with a few wide tables.\n\nAFAIK, all databases have nontrivial per-row overheads; PG might be\na bit worse than average, but this is a significant issue no matter\nwhich DB you use.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 19 Apr 2002 23:37:35 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: general design question "
},
{
"msg_contents": "On Fri, 19 Apr 2002, Tom Lane wrote:\n\n> Right. The *minimum* row overhead in Postgres is 36 bytes (32-byte\n> tuple header plus 4-byte line pointer).\n\nAh, right! The line pointer is four bytes because it includes the length\nof the tuple.\n\nBut I'm not sure why we need this length, possibly because I don't\nunderstand the function of the LP_USED and LP_DELETED flags in the line\npointer. (I'm guessing that if LP_USED is not set, the line pointer does\nnot point to any data, and that if LP_DELETED is set, it points to a\nchunk of free space.)\n\nWhy could we not just make all unallocated space be pointed to by\nLP_DELETED pointers, and then when we need space, use it from those\n(splitting and joining as necessary)? That gets rid of the need for\na length. Then we could declare that all tuples must be aligned on a\nfour-byte boundary, use the top 14 bits of a 16-bit line pointer as the\naddress, and the bottom two bits for the LP_USED and LP_DELETED flag.\nThis would slightly simplify the code for determining the flags, and\nincidently boost the maximum page size to 64K.\n\nIf you're willing to use a mask and shift to determine the address,\nrather than just a mask, you could make the maximum page size 128K,\nuse the top 15 bits of the line pointer as the address, and use the\nremaining bit as the LP_USED flag, since I don't see why we would then\nneed the LP_DELETED flag at all.\n\nOr am I smoking crack here?\n\n> AFAIK, all databases have nontrivial per-row overheads; PG might be\n> a bit worse than average, but this is a significant issue no matter\n> which DB you use.\n\nFor certain types of tables, such the sort of table joining two\nothers for which I forget the proper term:\n\n\tCREATE TABLE folder_contents (\n\t folder_id\tint NOT NULL,\n\t item_id\t\tint NOT NULL,\n\t PRIMARY KEY (folder_id, item_id))\n\nsome databases are much better. 
In MS SQL server, for example, since\nthere are no variable length columns, the tuple format will be:\n\n\t1 byte\t\tstatus bits A\n\t1 byte\t\tstatus bits B\n\t2 bytes\t\tfixed-length columns data length\n\t4 bytes\t\tDATA: folder_id\n\t4 bytes\t\tDATA: item_id\n\t2 bytes\t\tnumber of columns\n\t1 byte\t\tnull bitmap (unfortunately doesn't go away in SQL\n\t\t\tserver even when there are no nullable columns)\n\n(If there were variable length columns, you would have after this:\ntwo bytes for the number of columns, 2 bytes per column for the\ndata offsets within the tuple, and then the variable data.)\n\nSo in Postgres this would take, what, 44 bytes per tuple? But in\nSQL Server this takes 17 bytes per tuple (including the two byte\nline pointer in what they call the page's \"row offset array\"), or\nabout 40% of the space.\n\nNeedless to say, in my last job, where I was dealing with a table\nlike this with 85 million rows, I was happy for this to be a 1.3\nGB table instead of a 3.5 GB table. Not that this made much\nperformance difference in that application anyway, since, with a\nclustered index and typical folder sizes at a couple of dozen to\na hundred or so items, I was basically never going to read more\nthan one or two pages from disk to find the contents of a folder.\n\nHm. I guess this really should be on hackers, shouldn't it?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Sat, 20 Apr 2002 13:55:38 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: general design question "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> ... Then we could declare that all tuples must be aligned on a\n> four-byte boundary, use the top 14 bits of a 16-bit line pointer as the\n> address, and the bottom two bits for the LP_USED and LP_DELETED flag.\n> This would slightly simplify the code for determining the flags, and\n> incidently boost the maximum page size to 64K.\n\nHmm. Maybe, but the net effect would only be to reduce the minimum row\noverhead from 36 to 34 bytes. Not sure it's worth worrying about.\nEliminating redundancy from the item headers has its downside, too,\nin terms of ability to detect problems.\n\n> ... I don't see why we would then\n> need the LP_DELETED flag at all.\n\nI believe we do want to distinguish three states: live tuple, dead\ntuple, and empty space. Otherwise there will be cases where you're\nforced to move data immediately to collapse empty space, when there's\nnot a good reason to except that your representation can't cope.\n\n> Hm. I guess this really should be on hackers, shouldn't it?\n\nYup...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 Apr 2002 01:27:08 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: general design question "
},
{
"msg_contents": "On Sat, Apr 20, 2002 at 01:55:38PM +0900, Curt Sampson wrote:\n> > AFAIK, all databases have nontrivial per-row overheads; PG might be\n> > a bit worse than average, but this is a significant issue no matter\n> > which DB you use.\n> \n> For certain types of tables, such the sort of table joining two\n> others for which I forget the proper term:\n> \n> \tCREATE TABLE folder_contents (\n> \t folder_id\tint NOT NULL,\n> \t item_id\t\tint NOT NULL,\n> \t PRIMARY KEY (folder_id, item_id))\n> \n> some databases are much better. In MS SQL server, for example, since\n> there are no variable length columns, the tuple format will be:\n> \n> \t1 byte\t\tstatus bits A\n> \t1 byte\t\tstatus bits B\n> \t2 bytes\t\tfixed-length columns data length\n> \t4 bytes\t\tDATA: folder_id\n> \t4 bytes\t\tDATA: item_id\n> \t2 bytes\t\tnumber of columns\n> \t1 byte\t\tnull bitmap (unfortunately doesn't go away in SQL\n> \t\t\tserver even when there are no nullable columns)\n\nWhere is the information needed to determine visibility for transactions? In\nPostgres that's at least 16 bytes (cmin,cmax,xmin,xmax). How does SQL server\ndo that?\n\n> (If there were variable length columns, you would have after this:\n> two bytes for the number of columns, 2 bytes per column for the\n> data offsets within the tuple, and then the variable data.)\n\nIn postgres, variable length columns don't cost anything if you don't use\nthem. An int is always 4 bytes, even if there are variable length columns\nelsewhere. The only other overhead is 4 bytes for the OID and 6 bytes for\nthe CTID, which I guess may be unnecessary.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> Canada, Mexico, and Australia form the Axis of Nations That\n> Are Actually Quite Nice But Secretly Have Nasty Thoughts About America\n",
"msg_date": "Sat, 20 Apr 2002 17:11:13 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: general design question"
},
{
"msg_contents": "[I've moved this discussion about changing the line pointer from four\nbytes to two from -general to -hackers, since it's fairly technical.\nThe entire message Tom is responding to is appended to this one.]\n\nOn Sat, 20 Apr 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > ... Then we could declare that all tuples must be aligned on a\n> > four-byte boundary, use the top 14 bits of a 16-bit line pointer as the\n> > address, and the bottom two bits for the LP_USED and LP_DELETED flag.\n> > This would slightly simplify the code for determining the flags, and\n> > incidently boost the maximum page size to 64K.\n>\n> Hmm. Maybe, but the net effect would only be to reduce the minimum row\n> overhead from 36 to 34 bytes. Not sure it's worth worrying about.\n\nWell, unless the implementation is hideously complex, I'd say that\nevery byte is worth worrying about, given the amount of overhead that's\ncurrently there. 36 to 34 bytes could give something approaching a 5%\nperformance increase for tables with short rows. (Actually, do we prefer\nthe tables/rows or relations/tuples terminology here? I guess I kinda\ntend to use the latter for physical stuff.)\n\nIf we could drop the OID from the tuple when it's not being used,\nthat would be another four bytes, bringing the performance increase\nup towards 15% on tables with short rows.\n\nOf course I understand that all this is contingent not only on such\nchanges being acceptable, but someone actually caring enough to\nwrite them.\n\nWhile we're at it, would someone have the time to explain to me\nhow the on-disk CommandIds are used? A quick look at the code\nindicates that this is used for cursor consistency, among other\nthings, but it's still a bit mysterious to me.\n\n> > ... I don't see why we would then\n> > need the LP_DELETED flag at all.\n>\n> I believe we do want to distinguish three states: live tuple, dead\n> tuple, and empty space. 
Otherwise there will be cases where you're\n> forced to move data immediately to collapse empty space, when there's\n> not a good reason to except that your representation can't cope.\n\nI don't understand this. Why do you need to collapse empty space\nimmediately? Why not just wait until you can't find an empty fragment\nin the page that's big enough, and then do the collapse?\n\nOh, on a final unrelated note, <john@akadine.com>, you're bouncing\nmail from my host for reasons not well explained (\"550 Access\ndenied.\") I tried postmaster at your site, but that bounces mail\ntoo. If you want to work out the problem, drop me e-mail from some\naddress at which you can be responded to.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n------- Previous Message --------\n>From cjs@cynic.net Sat Apr 20 16:56:29 2002\nDate: Sat, 20 Apr 2002 13:55:38 +0900 (JST)\nFrom: Curt Sampson <cjs@cynic.net>\nTo: Tom Lane <tgl@sss.pgh.pa.us>\nCc: jtp <john@akadine.com>, pgsql-general@postgresql.org\nSubject: Re: [GENERAL] general design question\n\nOn Fri, 19 Apr 2002, Tom Lane wrote:\n\n> Right. The *minimum* row overhead in Postgres is 36 bytes (32-byte\n> tuple header plus 4-byte line pointer).\n\nAh, right! The line pointer is four bytes because it includes the length\nof the tuple.\n\nBut I'm not sure why we need this length, possibly because I don't\nunderstand the function of the LP_USED and LP_DELETED flags in the line\npointer. (I'm guessing that if LP_USED is not set, the line pointer does\nnot point to any data, and that if LP_DELETED is set, it points to a\nchunk of free space.)\n\nWhy could we not just make all unallocated space be pointed to by\nLP_DELETED pointers, and then when we need space, use it from those\n(splitting and joining as necessary)? That gets rid of the need for\na length. 
Then we could declare that all tuples must be aligned on a\nfour-byte boundary, use the top 14 bits of a 16-bit line pointer as the\naddress, and the bottom two bits for the LP_USED and LP_DELETED flag.\nThis would slightly simplify the code for determining the flags, and\nincidently boost the maximum page size to 64K.\n\nIf you're willing to use a mask and shift to determine the address,\nrather than just a mask, you could make the maximum page size 128K,\nuse the top 15 bits of the line pointer as the address, and use the\nremaining bit as the LP_USED flag, since I don't see why we would then\nneed the LP_DELETED flag at all.\n\nOr am I smoking crack here?\n\n> AFAIK, all databases have nontrivial per-row overheads; PG might be\n> a bit worse than average, but this is a significant issue no matter\n> which DB you use.\n\nFor certain types of tables, such the sort of table joining two\nothers for which I forget the proper term:\n\n\tCREATE TABLE folder_contents (\n\t folder_id\tint NOT NULL,\n\t item_id\t\tint NOT NULL,\n\t PRIMARY KEY (folder_id, item_id))\n\nsome databases are much better. In MS SQL server, for example, since\nthere are no variable length columns, the tuple format will be:\n\n\t1 byte\t\tstatus bits A\n\t1 byte\t\tstatus bits B\n\t2 bytes\t\tfixed-length columns data length\n\t4 bytes\t\tDATA: folder_id\n\t4 bytes\t\tDATA: item_id\n\t2 bytes\t\tnumber of columns\n\t1 byte\t\tnull bitmap (unfortunately doesn't go away in SQL\n\t\t\tserver even when there are no nullable columns)\n\n(If there were variable length columns, you would have after this:\ntwo bytes for the number of columns, 2 bytes per column for the\ndata offsets within the tuple, and then the variable data.)\n\nSo in Postgres this would take, what, 44 bytes per tuple? 
But in\nSQL Server this takes 17 bytes per tuple (including the two-byte\nline pointer in what they call the page's \"row offset array\"), or\nabout 40% of the space.\n\nNeedless to say, in my last job, where I was dealing with a table\nlike this with 85 million rows, I was happy for this to be a 1.3\nGB table instead of a 3.5 GB table. Not that this made much\nperformance difference in that application anyway, since, with a\nclustered index and typical folder sizes at a couple of dozen to\na hundred or so items, I was basically never going to read more\nthan one or two pages from disk to find the contents of a folder.\n\nHm. I guess this really should be on hackers, shouldn't it?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n",
"msg_date": "Sat, 20 Apr 2002 17:07:17 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "On-disk Tuple Size"
},
{
"msg_contents": "[Moved from general to -hackers.]\n\nOn Sat, 20 Apr 2002, Martijn van Oosterhout wrote:\n\n> > In MS SQL server, for example....\n>\n> Where is the information needed to determine visibility for transactions? In\n> Postgres that's at least 16 bytes (cmin,cmax,xmin,xmax). How does SQL server\n> do that?\n\nSQL Server doesn't use MVCC; it uses locking. (This is not necessarially\nless advanced, IMHO; it has the nice properties of saving a bunch of\nspace and ensuring that, when transaction isolation is serializable,\ncommits won't fail due to someone else doing updates. But it has costs,\ntoo, as we all know.)\n\n> > (If there were variable length columns, you would have after this:\n> > two bytes for the number of columns, 2 bytes per column for the\n> > data offsets within the tuple, and then the variable data.)\n>\n> In postgres, variable length columns don't cost anything if you don't use\n> them.\n\nRight; just as in SQL server. This was just sort of a side note\nfor those who are curious.\n\n> An int is always 4 bytes, even if there are variable length columns\n> elsewhere. The only other overhead is 4 bytes for the OID....\n\nWhich would be good to get rid of, if we can.\n\n> ...and 6 bytes for the CTID, which I guess may be unnecessary.\n\nReally? How would things work without it?\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n\n",
"msg_date": "Sat, 20 Apr 2002 17:22:20 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: On-Disk Tuple Size"
},
{
"msg_contents": "On Sat, Apr 20, 2002 at 05:22:20PM +0900, Curt Sampson wrote:\n> > ...and 6 bytes for the CTID, which I guess may be unnecessary.\n> \n> Really? How would things work without it?\n\nWell, from my examination of the on-disk data the CTID stored there is the\nsame as its location in the file, so it could just be filled in while\nreading.\n\nUnless I'm misunderstanding its purpose.\n\n-- \nMartijn van Oosterhout <kleptog@svana.org> http://svana.org/kleptog/\n> Canada, Mexico, and Australia form the Axis of Nations That\n> Are Actually Quite Nice But Secretly Have Nasty Thoughts About America\n",
"msg_date": "Sat, 20 Apr 2002 19:04:10 +1000",
"msg_from": "Martijn van Oosterhout <kleptog@svana.org>",
"msg_from_op": false,
"msg_subject": "Re: On-Disk Tuple Size"
},
{
"msg_contents": "Martijn van Oosterhout <kleptog@svana.org> writes:\n> Well, from my examination of the on-disk data the CTID stored there is the\n> same as its location in the file, so it could just be filled in while\n> reading.\n\nNope. CTID is used as a forward link from an updated tuple to its newer\nversion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 Apr 2002 11:27:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: On-Disk Tuple Size "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> While we're at it, would someone have the time to explain to me\n> how the on-disk CommandIds are used?\n\nTo determine visibility of tuples for commands within a transaction.\nJust as you don't want your transaction's effects to become visible\nuntil you commit, you don't want an individual command's effects to\nbecome visible until you do CommandCounterIncrement. Among other\nthings this solves the Halloween problem for us (how do you stop\nan UPDATE from trying to re-update the tuples it's already emitted,\nshould it chance to hit them during its table scan).\n\nThe command IDs aren't interesting anymore once the originating\ntransaction is over, but I don't see a realistic way to recycle\nthe space ...\n\n>> I believe we do want to distinguish three states: live tuple, dead\n>> tuple, and empty space. Otherwise there will be cases where you're\n>> forced to move data immediately to collapse empty space, when there's\n>> not a good reason to except that your representation can't cope.\n\n> I don't understand this.\n\nI thought more about this in the shower this morning, and realized the\nfundamental drawback of the scheme you are suggesting: it requires the\nline pointers and physical storage to be in the same order. (Or you\ncould make it work in reverse order, by looking at the prior pointer\ninstead of the next one to determine item size; that would actually\nwork a little better. But in any case line pointer order and physical\nstorage order are tied together.)\n\nThis is clearly a loser for index pages: most inserts would require\na data shuffle. But it is also a loser for heap pages, and the reason\nis that on heap pages we cannot change a tuple's index (line pointer\nnumber) once it's been created. If we did, it'd invalidate CTID\nforward links, index entries, and heapscan cursor positions for open\nscans. 
Indeed, pretty much the whole point of having the line pointers\nis to provide a stable ID for a tuple --- if we didn't need that we\ncould just walk through the physical storage.\n\nWhen VACUUM removes a dead tuple, it compacts out the physical space\nand marks the line pointer as unused. (Of course, it makes sure all\nreferences to the tuple are gone first.) The next time we want to\ninsert a tuple on that page, we can recycle the unused line pointer\ninstead of allocating a new one from the end of the line pointer array.\nHowever, the physical space for the new tuple should come from the\nmain free-space pool in the middle of the page. To implement the\npointers-without-sizes representation, we'd be forced to shuffle data\nto make room for the tuple between the two adjacent-by-line-number tuples.\n\nThe three states of a line pointer that I referred to are live\n(pointing at a good tuple), dead (pointing at storage that used\nto contain a good tuple, doesn't anymore, but hasn't been compacted\nout yet), and empty (doesn't point at storage at all; the space it\nused to describe has been merged into the middle-of-the-page free\npool). ISTM a pointers-only representation can handle the live and\ndead cases nicely, but the empty case is going to be a real headache.\n\nIn short, a pointers-only representation would give us a lot less\nflexibility in free space management. It's an interesting idea but\nI doubt that saving two bytes per row is worth the extra overhead.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 20 Apr 2002 11:57:11 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: On-disk Tuple Size "
},
{
"msg_contents": "On Sat, 20 Apr 2002, Tom Lane wrote:\n\n> Curt Sampson <cjs@cynic.net> writes:\n> > While we're at it, would someone have the time to explain to me\n> > how the on-disk CommandIds are used?\n>\n> To determine visibility of tuples for commands within a transaction.\n> Just as you don't want your transaction's effects to become visible\n> until you commit, you don't want an individual command's effects to\n> become visible until you do CommandCounterIncrement. Among other\n> things this solves the Halloween problem for us (how do you stop\n> an UPDATE from trying to re-update the tuples it's already emitted,\n> should it chance to hit them during its table scan).\n>\n> The command IDs aren't interesting anymore once the originating\n> transaction is over, but I don't see a realistic way to recycle\n> the space ...\n\nAh, I see. So basically, it's exactly parallel to the transaction IDs\nexcept it's for commands instead of transactions?\n\nSo this seems to imply to me that the insert command ID fields are of\ninterest only to the transaction that did the insert. In other words, if\nyour transaction ID is not the one listed in t_xmin, the t_cmin field is\nalways ignored. And the same goes for t_cmax and t_xmax, right?\n\nIf this is the case, would it be possible to number the commands\nper-transaction, rather than globally? Then the t_cmin for a particular\ntuple might be say, 7, but though there might be many transactions that\nhave processed or will process command number 7, we would know which\ntransaction this belongs to by the t_xmin field.\n\nDoes this work for cursors, which currently seem to rely on a global\ncommand ID? If you keep track of the transaction ID as well, I think so,\nright?\n\nHaving per-transaction command IDs might allow us to reduce the range of\nthe t_cmin and t_cmax fields. 
Unfortunately, probably by not all that\nmuch, since one doesn't want to limit the number of commands within a\nsingle transaction to something as silly as 65536.\n\nBut perhaps we don't need to increment the command ID for every command.\nIf I do an insert, but I know that the previous command was also an\ninsert, I know that there were no intervening reads in this transaction,\nso can I use the previous command's ID? Could it be that we need to\nincrement the command ID only when we switch from writing to reading\nor vice versa? There could still be transactions that would run into\nproblems, of course, but these might all be rather pathological cases.\n\nOr is everybody wishing they had some of whatever I'm smoking? :-)\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Sun, 21 Apr 2002 15:35:14 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: On-disk Tuple Size "
},
{
"msg_contents": "On Sat, 20 Apr 2002, Tom Lane wrote:\n\n> >> I believe we do want to distinguish three states: live tuple, dead\n> >> tuple, and empty space. Otherwise there will be cases where you're\n> >> forced to move data immediately to collapse empty space, when there's\n> >> not a good reason to except that your representation can't cope.\n>\n> > I don't understand this.\n>\n> I thought more about this in the shower this morning, and realized the\n> fundamental drawback of the scheme you are suggesting: it requires the\n> line pointers and physical storage to be in the same order.\n> ...But in any case line pointer order and physical storage order are\n> tied together.)\n\nI thought that for a while too. As you point out, you want an list of\nline pointers ordered by address in the block to build your map of space\nafter the free space in the middle of the page because you get the end\nof the current block from the start address of the next block. (We know\nwhere the free space in the middle is from the end of the line pointer\narray.)\n\nHowever, there's no reason it has to be stored in this order on the\ndisk. You can build a sorted list of line pointers in a separate area of\nmemory after you read the page.\n\nYes, this uses a bit more CPU, but I think it's going to be a pretty\ntrivial amount. It's a short list, and since you're touching the data\nanyway, it's going to be in the CPU cache. The real cost you'll pay is\nin the time to access the area of memory where you're storing the sorted\nlist of line pointers. 
But the potential saving here is up to 5% in I/O\ncosts (due to using less disk space).\n\n> The three states of a line pointer that I referred to are live\n> (pointing at a good tuple), dead (pointing at storage that used\n> to contain a good tuple, doesn't anymore, but hasn't been compacted\n> out yet), and empty (doesn't point at storage at all; the space it\n> used to describe has been merged into the middle-of-the-page free\n> pool).\n\nRight. I now realize that we still need the three states,\nwhich are in my case:\n\n live:\tpoints to tuple data in use\n\n free space:\tpoints to unused space in the page, i.e., a dead tuple.\n\n unused:\ta line pointer that doesn't point to anything at all.\n\n> ISTM a pointers-only representation can handle the live and\n> dead cases nicely, but the empty case is going to be a real headache.\n\nThis doesn't need a separate flag, since we can just have the line\npointer point to something obviously invalid, such as the page\nheader. (0 seems quite convenient for this.)\n\nIn the header, we need a count of the number of line pointers\n(line_id_count above), but we can drop the beginning/end of free\nspace pointers, since we know that data space starts after the last\nline pointer, and ends at the beginning of special space.\n\nSo here's an example of a page layout. 
Sizes are arbitrary ones I\npicked for the sake of the example, except for the line_id sizes.\n\n Address\tSize\tItem\n\n 0\t\t24\tpage header (line_id_count = 6)\n\n 24\t\t2\tline_id: 7751 (free space 1)\n 26\t\t2\tline_id: 7800 (tuple 1)\n 28\t\t2\tline_id: 0 (unused)\n 30\t\t2\tline_id: 7600 (tuple 2)\n 32\t\t2\tline_id: 8000 (tuple 3)\n 34\t\t2\tline_id: 7941 (free space 2)\n\n 36\t\t7564\tfree space in the middle of the page\n\n 7600\t150\ttuple 2\n 7750\t50\tfree space 1\n 7800\t100\ttuple 1\n 7940\t60\tfree space 2\n 8000\t96\ttuple 3\n 8096\t96\tspecial space\n\nNote above that the free space pointers have the LSB set to indicate\nthat they point to free space, not tuples. So the first line_id\nactually points to 7750.\n\nWhen I do an insert, the first thing I do is scan for a free line\npointer. Finding a free one at 28, I decide to re-use that. Then\nI look for the smallest block of free space that will hold the data\nthat I need to insert. If it fits exactly, I use it. If not, I\nneed to extend the line pointer array by one and make that point\nto the remaining free space in the block of free space I used.\n\nIf a big enough block of free space doesn't exist, I compact the\npage and try again.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Sun, 21 Apr 2002 16:46:22 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: On-disk Tuple Size "
},
{
"msg_contents": "> Having per-transaction command IDs might allow us to reduce the\nrange of\n> the t_cmin and t_cmax fields. Unfortunately, probably by not all\nthat\n> much, since one doesn't want to limit the number of commands within\na\n> single transaction to something as silly as 65536.\n\nIf you can figure out how to make that roll over sure, but thats a\nvery small number.\n\nConsider users who do most of their stuff via functions (one\ntransaction). Now consider the function that builds reports, stats,\netc. for some department. It's likley these work on a per account\nbasis.\n\nWe have a function making invoices that would wrap around that atleast\n10x.\n\n",
"msg_date": "Sun, 21 Apr 2002 09:28:08 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: On-disk Tuple Size "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> If this is the case, would it be possible to number the commands\n> per-transaction, rather than globally?\n\nThey are.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 Apr 2002 10:39:04 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: On-disk Tuple Size "
},
{
"msg_contents": "Curt Sampson <cjs@cynic.net> writes:\n> Yes, this uses a bit more CPU, but I think it's going to be a pretty\n> trivial amount. It's a short list, and since you're touching the data\n> anyway, it's going to be in the CPU cache. The real cost you'll pay is\n> in the time to access the area of memory where you're storing the sorted\n> list of line pointers. But the potential saving here is up to 5% in I/O\n> costs (due to using less disk space).\n\nAt this point you're essentially arguing that it's faster to recompute\nthe list of item sizes than it is to read it off disk. Given that the\nrecomputation would require sorting the list of item locations (with\nup to a couple hundred entries --- more than that if blocksize > 8K)\nI'm not convinced of that.\n\nAnother difficulty is that we'd lose the ability to record item sizes\nto the exact byte. What we'd reconstruct from the item locations are\nsizes rounded up to the next MAXALIGN boundary. I am not sure that\nthis is a problem, but I'm not sure it's not either.\n\nThe part of this idea that I actually like is overlapping the status\nbits with the low order part of the item location, using the assumption\nthat MAXALIGN is at least 4. That would allow us to support BLCKSZ up\nto 64K, and probably save a cycle or two in fetching/storing the item\nfields as well. The larger BLCKSZ limit isn't nearly as desirable\nas it used to be, because of TOAST, and in fact it could be a net loser\nbecause of increased WAL traffic. But it'd be interesting to try it\nand see.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 21 Apr 2002 15:10:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: On-disk Tuple Size "
},
{
"msg_contents": "On Sun, 21 Apr 2002, Tom Lane wrote:\n\n> At this point you're essentially arguing that it's faster to recompute\n> the list of item sizes than it is to read it off disk. Given that the\n> recomputation would require sorting the list of item locations (with\n> up to a couple hundred entries --- more than that if blocksize > 8K)\n> I'm not convinced of that.\n\nNo, not at all. What I'm arguing is that the I/O savings gained from\nremoving two bytes from the tuple overhead will more than compensate for\nhaving to do a little bit more computation after reading the block.\n\nHow do I know? Well, I have very solid figures. I know because I pulled\nthem straight out of my....anyway. :-) Yeah, it's more or less instinct\nthat says to me that this would be a win. If others don't agree, there's\na pretty reasonable chance that I'm wrong here. But I think it might\nbe worthwile spending a bit of effort to see what we can do to reduce\nour tuple overhead. After all, there is a good commerical DB that has\nmuch, much lower overhead, even if it's not really comparable because it\ndoesn't use MVCC. The best thing really would be to see what other good\nMVCC databases do. I'm going to go to the bookshop in the next few days\nand try to find out what Oracle's physical layout is.\n\n> Another difficulty is that we'd lose the ability to record item sizes\n> to the exact byte. What we'd reconstruct from the item locations are\n> sizes rounded up to the next MAXALIGN boundary. I am not sure that\n> this is a problem, but I'm not sure it's not either.\n\nWell, I don't see any real problem with it, but yeah, I might well be\nmissing something here.\n\n> The larger BLCKSZ limit isn't nearly as desirable as it used to be,\n> because of TOAST, and in fact it could be a net loser because of\n> increased WAL traffic. But it'd be interesting to try it and see.\n\nMmmm, I hadn't thought about the WAL side of things. 
In an ideal world,\nit wouldn't be a problem because WAL writes would be related only to\ntuple size, and would have nothing to do with block size. Or so it seems\nto me. But I have to go read the WAL code a bit before I care to make\nany real assertions there.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n",
"msg_date": "Mon, 22 Apr 2002 04:50:55 +0900 (JST)",
"msg_from": "Curt Sampson <cjs@cynic.net>",
"msg_from_op": false,
"msg_subject": "Re: On-disk Tuple Size "
}
] |
[
{
"msg_contents": "\n> > Anyone have stuff that they need to get in there before beta2?\n> \n> Yes. doesn't compile on AIX 5L. I would like to fix it before beta2\n> (see attached pacthes below).\n\nIIRC not all (old) versions of AIX define those, thus your patch would \nbreak those that havent :-( I am not sure we care about those old\nversions\nanymore though. \nAlso I actually see those defines as a bug in AIX, since a comment\nstates,\nthat BSD requires them, I certainly havent heard BSD'ers complain about \nredefines ?\n\nAlso I don't understand why your compiler stops ? Mine only give a\nwarning.\n\nThese defines are only included if _ALL_SOURCE is defined. I think this\ndefine \nstems from another system header file, that gets included somewhere\nelse.\nMaybe a better fix would be to #undef _ALL_SOURCE before including\ninttypes.h ?\n\nAndreas\n",
"msg_date": "Mon, 5 Nov 2001 10:45:54 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "> IIRC not all (old) versions of AIX define those, thus your patch would \n> break those that havent :-( I am not sure we care about those old\n> versions\n> anymore though. \n> Also I actually see those defines as a bug in AIX, since a comment\n> states,\n> that BSD requires them, I certainly havent heard BSD'ers complain about \n> redefines ?\n> \n> Also I don't understand why your compiler stops ? Mine only give a\n> warning.\n> \n> These defines are only included if _ALL_SOURCE is defined. I think this\n> define \n> stems from another system header file, that gets included somewhere\n> else.\n> Maybe a better fix would be to #undef _ALL_SOURCE before including\n> inttypes.h ?\n\nOk, let me see if it fixes the problems.\n--\nTatsuo Ishii\n",
"msg_date": "Mon, 05 Nov 2001 21:23:15 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> Maybe a better fix would be to #undef _ALL_SOURCE before including\n> inttypes.h ?\n\nIs it possible to avoid including inttypes.h altogether?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Nov 2001 10:43:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "> Also I don't understand why your compiler stops ? Mine only give a\n> warning.\n\nMaybe due to the difference of the compiler verion. xlc coming with\nAIX 5L stops. Note that gcc doesn't. What is your OS version?\n\n> These defines are only included if _ALL_SOURCE is defined. I think this\n> define \n> stems from another system header file, that gets included somewhere\n> else.\n> Maybe a better fix would be to #undef _ALL_SOURCE before including\n> inttypes.h ?\n\nIt seems inttypes.h is included by types.h which is included by\nstdio.h which is included by c.h. I inserted #undef into c.h but it\ndoes not help at all:-<\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 06 Nov 2001 11:56:47 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "> \"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> > Maybe a better fix would be to #undef _ALL_SOURCE before including\n> > inttypes.h ?\n> \n> Is it possible to avoid including inttypes.h altogether?\n\nIt seems not possible. inttypes.h is included by types.h which is\nincluded by stdio.h which is included by c.h.\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 06 Nov 2001 11:58:28 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "From: Jean-Michel POURE <jm.poure@freesurf.fr>\nSubject: Java's Unicode Notation \nDate: Thu, 08 Nov 2001 14:12:04 +0100\nMessage-ID: <4.2.0.58.20011108141018.00a59dc0@pop.freesurf.fr>\n\n> Dear Tatsuo,\n> \n> Could it be possible to use the Java Unicode Notation to define UTF-8 \n> strings in PostgreSQL 7.2.\n\nNo. It's too late. We are in the beta freeze stage.\n\n> Information can be found on http://czyborra.com/utf/\n> \n> Do you think it is hard to implement?\n> \n> Best regards,\n> Jean-Michel POURE\n> \n> ************************************************\n> Java's Unicode Notation\n> There are some less compact but more readable ASCII transformations the \n> most important of which is the Java Unicode Notation as allowed in Java \n> source code and processed by Java's native2ascii converter:\n> putwchar(c)\n> {\n> if (c >= 0x10000) {\n> printf (\"\\\\u%04x\\\\u%04x\" , 0xD7C0 + (c >> 10), 0xDC00 | c & 0x3FF);\n> }\n> else if (c >= 0x100) printf (\"\\\\u%04x\", c);\n> else putchar (c);\n> }\n> The advantage of the \\u20ac notation is that it is very easy to type it in \n> on any old ASCII keyboard and easy to look up the intended character if you \n> happen to have a copy of the Unicode book or the \n> {unidata2,names2,unihan}.txt files from the Unicode FTP site or CD-ROM or \n> know what U+20AC is the �.\n> What's not so nice about the \\u20ac notation is that the small letters are \n> quite unusual for Unicode characters, the backslashes have to be quoted for \n> many Unix tools, the four hexdigits without a terminator may appear merged \n> with the following word as in \\u00a333 for ��33, it is unclear when and how \n> you have to escape the backslash character itself, 6 bytes for one \n> character may be considered wasteful, and there is no way to clearly \n> present the characters beyond \\uffff without \\ud800\\udc00 surrogates, and \n> last but not least the plain hexnumbers may not be very helpful.\n> JAVA is one of the target and 
source encodings of yudit and its uniconv \n> converter.\n> \n",
"msg_date": "Sun, 11 Nov 2001 19:04:22 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Java's Unicode Notation "
}
] |
[
{
"msg_contents": "\n> Perhaps have configure test for the presence of <sys/inttypes.h>\n> and then let c.h do\n\nIt is directly in /usr/include/inttypes.h in AIX 4.3.2 :-(\n\nAndreas\n",
"msg_date": "Mon, 5 Nov 2001 10:47:57 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Beta going well "
},
{
"msg_contents": "> It is directly in /usr/include/inttypes.h in AIX 4.3.2 :-(\n\nIsn't it linked to /usr/include/sys/inttypes.h ?\n--\nTatsuo Ishii\n",
"msg_date": "Tue, 06 Nov 2001 11:23:09 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Beta going well "
}
] |
[
{
"msg_contents": "Hi,\n\nWhat's wrong with the patch mailingslist ? I can't read the history (page not \nfound). I have subscribed to the list, it's confirmed by mail, but i get no \nnew mail. I've posted a little patch, but i don't see it in de mailingslist.\nHave i done something wrong or ...\n\nFerdinand Smit\n",
"msg_date": "Mon, 5 Nov 2001 13:41:46 +0100",
"msg_from": "Ferdinand Smit <ferdinand@telegraafnet.nl>",
"msg_from_op": true,
"msg_subject": "Patch mailingslist"
}
] |
[
{
"msg_contents": "\nOkay ... with everything that has been going on, hardware/server wise,\nthis whole release cycle has turned into one big nightmare ...\n\nUnless someone has something they are sitting on, I'd like to wrap up a\n7.2b2 this afternoon, and do a proper release announcement for it like\ndidn't happen for 7.2b1 ...\n\nAnyone object?\n\n",
"msg_date": "Mon, 5 Nov 2001 08:00:10 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Proposal: 7.2b2 today"
},
{
"msg_contents": "Marc,\n\nI suggest to announce beta @ freshmeat.net and slashdot.org also.\n\n\tRegards,\n\n\t\tOleg\n\nOn Mon, 5 Nov 2001, Marc G. Fournier wrote:\n\n>\n> Okay ... with everything that has been going on, hardware/server wise,\n> this whole release cycle has turned into one big nightmare ...\n>\n> Unless someone has something they are sitting on, I'd like to wrap up a\n> 7.2b2 this afternoon, and do a proper release announcement for it like\n> didn't happen for 7.2b1 ...\n>\n> Anyone object?\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Mon, 5 Nov 2001 16:45:44 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: 7.2b2 today"
},
{
"msg_contents": "On Monday 05 November 2001 08:00 am, Marc G. Fournier wrote:\n> Okay ... with everything that has been going on, hardware/server wise,\n> this whole release cycle has turned into one big nightmare ...\n\nFirst of all, you have my sympathies. Moving servers around is never easy, \nand you have really handled it well, all things considered.\n\n> Unless someone has something they are sitting on, I'd like to wrap up a\n> 7.2b2 this afternoon, and do a proper release announcement for it like\n> didn't happen for 7.2b1 ...\n\nSounds good. Can you hold the wide release announcement until the mirrors \npopulate, though? Especially if an announcement is made to freshmeat.....\n\nAlthough that may be exactly what was meant by 'a proper release \nannouncement'........:-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 5 Nov 2001 09:44:43 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: 7.2b2 today"
},
{
"msg_contents": "\nI've generally tried to do a release packaging one day, with a quick\nannounce to -hackers to test it, and then a full announce everywhere about\na day after that ... but missed the 'full annonunce' step for beta1 :)\n\n\nOn Mon, 5 Nov 2001, Lamar Owen wrote:\n\n> On Monday 05 November 2001 08:00 am, Marc G. Fournier wrote:\n> > Okay ... with everything that has been going on, hardware/server wise,\n> > this whole release cycle has turned into one big nightmare ...\n>\n> First of all, you have my sympathies. Moving servers around is never easy,\n> and you have really handled it well, all things considered.\n>\n> > Unless someone has something they are sitting on, I'd like to wrap up a\n> > 7.2b2 this afternoon, and do a proper release announcement for it like\n> > didn't happen for 7.2b1 ...\n>\n> Sounds good. Can you hold the wide release announcement until the mirrors\n> populate, though? Especially if an announcement is made to freshmeat.....\n>\n> Although that may be exactly what was meant by 'a proper release\n> announcement'........:-)\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n>\n\n",
"msg_date": "Mon, 5 Nov 2001 09:47:06 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Proposal: 7.2b2 today"
},
{
"msg_contents": "On Monday 05 November 2001 09:47 am, Marc G. Fournier wrote:\n> I've generally tried to do a release packaging one day, with a quick\n> announce to -hackers to test it, and then a full announce everywhere about\n> a day after that ... but missed the 'full annonunce' step for beta1 :)\n\n> On Mon, 5 Nov 2001, Lamar Owen wrote:\n> > Although that may be exactly what was meant by 'a proper release\n> > announcement'........:-)\n\nJust making sure I remembered things properly.... :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n",
"msg_date": "Mon, 5 Nov 2001 09:55:25 -0500",
"msg_from": "Lamar Owen <lamar.owen@wgcr.org>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: 7.2b2 today"
},
{
"msg_contents": "> \n> Okay ... with everything that has been going on, hardware/server wise,\n> this whole release cycle has turned into one big nightmare ...\n> \n> Unless someone has something they are sitting on, I'd like to wrap up a\n> 7.2b2 this afternoon, and do a proper release announcement for it like\n> didn't happen for 7.2b1 ...\n\nI have been working with Tom on some pgindent issues and have made\nslight improvements to the script. Because we are early in beta and no\none has outstanding patches, I would like to run it again and commit the\nchanges. It should improve variables defined as structs and alignment\nof include/catalog/*.h files.\n\nI will commit shortly. Thanks.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 5 Nov 2001 12:08:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: 7.2b2 today"
},
{
"msg_contents": "> \n> Okay ... with everything that has been going on, hardware/server wise,\n> this whole release cycle has turned into one big nightmare ...\n> \n> Unless someone has something they are sitting on, I'd like to wrap up a\n> 7.2b2 this afternoon, and do a proper release announcement for it like\n> didn't happen for 7.2b1 ...\n> \n> Anyone object?\n\nI am all done. Thanks.\n\nAlso, I will start maintaining a list of open items for 7.2 like I have\ndone for previous releases. It will be at:\n\n\tftp://candle.pha.pa.us/pub/postgresql/open_items\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 5 Nov 2001 12:46:10 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: 7.2b2 today"
},
{
"msg_contents": "> \n> Okay ... with everything that has been going on, hardware/server wise,\n> this whole release cycle has turned into one big nightmare ...\n> \n> Unless someone has something they are sitting on, I'd like to wrap up a\n> 7.2b2 this afternoon, and do a proper release announcement for it like\n> didn't happen for 7.2b1 ...\n\nLet me add that the majority of the pgindent changes was from:\n\n struct {\n int x;\n } var;\n\nto:\n\n struct {\n int x;\n } var;\n\nand this:\n\n #endif /* demo */\n\nto this:\n\n #endif /* demo */\n\nPlus some minor cleanup for breakage from the previous run.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 5 Nov 2001 12:52:02 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: 7.2b2 today"
},
{
"msg_contents": "> I have been working with Tom on some pgindent issues and have made\n> slight improvements to the script. Because we are early in beta and no\n> one has outstanding patches, I would like to run it again and commit the\n> changes. It should improve variables defined as structs and alignment\n> of include/catalog/*.h files.\n\n> I will commit shortly. Thanks.\n\nConsidering the size of the diff you mailed me, I'd say \"hold off until\nsomeone else has looked at this\". This is obviously not a small change.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Nov 2001 14:20:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: 7.2b2 today "
},
{
"msg_contents": "> > I have been working with Tom on some pgindent issues and have made\n> > slight improvements to the script. Because we are early in beta and no\n> > one has outstanding patches, I would like to run it again and commit the\n> > changes. It should improve variables defined as structs and alignment\n> > of include/catalog/*.h files.\n> \n> > I will commit shortly. Thanks.\n> \n> Considering the size of the diff you mailed me, I'd say \"hold off until\n> someone else has looked at this\". This is obviously not a small change.\n\nSure, it is at:\n\n\tftp://candle.pha.pa.us/pub/postgresql/diff\n\n99% is space tighening.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 5 Nov 2001 17:49:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: 7.2b2 today"
},
{
"msg_contents": "Marc G. Fournier writes:\n\n> Unless someone has something they are sitting on, I'd like to wrap up a\n> 7.2b2 this afternoon, and do a proper release announcement for it like\n> didn't happen for 7.2b1 ...\n>\n> Anyone object?\n\nAgain, why are they called \"b\" and no longer \"beta\"?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 6 Nov 2001 23:53:55 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Proposal: 7.2b2 today"
}
]
[
{
"msg_contents": "\nJust raised default limit for number of anon users from 10 to 25, to ease\nthat a little bit ...\n\nJust set a seperate class for 'real users' so that developers can get in\nproperly ...\n\n",
"msg_date": "Mon, 5 Nov 2001 08:12:32 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Changes to core ftp server ..."
}
]
[
{
"msg_contents": "Hi all, \n\nThe following is a description of a patch I am proposing for 7.3. \nPlease read and comment.\n\nThanks\nJim\n\n\nThis proposal covers the ability to allow a DBA (and general users) to \nspecify where a database and it's individual objects will reside. I \npropose to add a default data location, index and temporary locations \nto the pg_shadow table to allow a DBA to specify locations for each \nuser when they create databases, tables and indexes or need temporary \ndisk storage (either for temporary tables or sort files). The \"CREATE \nDATABASE\" command will be changed to also take an INDEX location and \ntemporary location. All 3 locations will default to the values from \npg_shadow for the user that is creating the database. Both the \"CREATE \nTABLE\" and \"CREATE INDEX\" commands will be changed to add \"WITH \nLOCATION\" optional argument (location will default to values from \nPG_DATABASE which were set by the \"CREATE DATABASE\" command).\n\nThe following system tables will be changed as follows\nPG_SHADOW add dat_location, idx_location, tmp_location (all default to \nPG_DATA)\nPG_DATABASE add dat_location, idx_location, tmp_location (all default \nto same from PG_SHADOW)\nPG_CLASS add rellocation (default to dat_location for tables, \nidx_location for indexes from PG_DATABASE)\n\n\nAdd a GLOBAL table pg_locations to track valid locations\n\nAdd the following commands to manage locations\nCREATE LOCATION locname PATH 'file system directory';\nDROP LOCATION locname; (this will have to look into each db to make \nsure that any objects are not using it. Don't know how this will be \ndone yet!)\n\nI propose to change the names of the on disk directories from 999999 to \n99999_DATA, 99999_INDEX and 99999_TEMP (where 99999 is the OID from \nPG_DATABASE). 
A SYMLINK from 99999_INDEX and 99999_TEMP will be made \nback to 99999_DATA will be made so the WAL functions will continue to \nwork.\n\n\nAgain from my earlier attempt at this patch, I believe this capability \nwill not only improve performance (see my earlier emails. Where \ndepending on the type of disks the improvement was between 0% and 100% \nperformance gain running pg_bench) but also give DBA's the flexibility \nto spread the data files over multiple disks without having to \"hack\" \nthe system using symbolic links. \n\n\n",
"msg_date": "Mon, 5 Nov 2001 09:53:12 -0500",
"msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>",
"msg_from_op": true,
"msg_subject": "Storage Location Patch Proposal for V7.3"
},
{
"msg_contents": "\"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> I propose to add a default data location, index and temporary locations \n> to the pg_shadow table to allow a DBA to specify locations for each \n> user when they create databases, tables and indexes or need temporary \n> disk storage (either for temporary tables or sort files).\n\nHave you read any of the previous discussions about tablespaces?\nThis seems to be tablespaces with an off-the-cuff syntax. I'd\nsuggest taking a hard look at Oracle's tablespace facility and\nseeing how closely we want to duplicate that.\n\n> PG_SHADOW add dat_location, idx_location, tmp_location (all default to \n> PG_DATA)\n\nWhat does location have to do with users?\n\n> I propose to change the names of the on disk directories from 999999 to \n> 99999_DATA, 99999_INDEX and 99999_TEMP (where 99999 is the OID from \n> PG_DATABASE).\n\nNo, that doesn't scale to arbitrary locations; furthermore it requires\nan unseemly amount of knowledge in low-level file access code about\nexactly what kind of object each table is. The symlinks should just\nbe named after the OIDs of the locations' rows in pg_location.\n\nThe direction I've been envisioning for this is that each table has\na logical identification <pg_database OID>, <pg_class OID> as well\nas a physical identification <pg_location OID>, <relfilenode OID>.\nThe path to access the table can be constructed entirely from the\nphysical identification: $PGDATA/base/<pg_location OID>/<relfilenode OID>.\n\nOne problem to be addressed if multiple databases can share a single\nphysical location is how to prevent relfilenode collisions. Perhaps\nwe could avoid the issue by adding another layer of subdirectories:\n$PGDATA/base/<pg_location OID>/<pg_database OID>/<relfilenode OID>.\nThat is, each database would have a subdirectory within each location\nthat it's ever used. (This would make DROP DATABASE a lot easier,\namong other things.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Nov 2001 11:17:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3 "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> > I propose to add a default data location, index and temporary locations\n> > to the pg_shadow table to allow a DBA to specify locations for each\n> > user when they create databases, tables and indexes or need temporary\n> > disk storage (either for temporary tables or sort files).\n> \n> Have you read any of the previous discussions about tablespaces?\n> This seems to be tablespaces with an off-the-cuff syntax. I'd\n> suggest taking a hard look at Oracle's tablespace facility and\n> seeing how closely we want to duplicate that.\n\nSorry I missed the conversation about tablespaces. One of the reasons I think\nPostgres is so usable is because it does not require the use of tablespace\nfiles. If by tablespace, you mean to declare a directory on a device as a\ntablespace, then cool. If you want to create tablespace \"files\" ala Oracle, you\nare heading toward an administration nightmare. Don't get me wrong, the ability\nto use a file as a tablespace would be kind of cool, i.e. you can probably use\nraw devices, but please to not abandon the way postgres currently works.\n\nOn our Oracle server, we have run out of space on our tablespace files and not\nknown it was coming. I am the system architect, not the DBA, so I don't have\n(nor want) direct control over the oracle database operation. Our newbe DBA did\nnot make the table correctly, so they did not grow. Alas he was laid off, thus\nwe were left trying to figure out what was happening.\n\nPostgres is easier to configure and get right. IMHO that is one of its very\nimportant strengths. It is almost trivial to get a working SQL system up and\nrunning which performs well.\n",
"msg_date": "Tue, 06 Nov 2001 08:55:14 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Tom Lane wrote:\n>> This seems to be tablespaces with an off-the-cuff syntax. I'd\n>> suggest taking a hard look at Oracle's tablespace facility and\n>> seeing how closely we want to duplicate that.\n\n> Sorry I missed the conversation about tablespaces. One of the reasons I think\n> Postgres is so usable is because it does not require the use of tablespace\n> files. If by tablespace, you mean to declare a directory on a device as a\n> tablespace, then cool. If you want to create tablespace \"files\" ala Oracle, you\n> are heading toward an administration nightmare.\n\nNo, that's not one of the parts of Oracle's facility that I want to\nduplicate.\n\nI think our idea of a tablespace/location/whatchacallit should just be\na directory somewhere that table files can be created in. What seems\nworthwhile to steal from Oracle is the syntax that assigns particular\ntables to particular tablespaces. If we're compatible on syntax, that\nshould ease porting of existing applications --- and as far as I can see\nat the moment, there's no reason *not* to be compatible at that level.\nI don't want to borrow Oracle's ideas about space management semantics,\nhowever.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Nov 2001 23:49:56 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3 "
},
{
"msg_contents": "I just wanted to affirm that Tom's description sounds like av very good\nway to go.\n\nYou get the best of two worlds with the possibility to tune servers and\nyet still\nvery easy to manage. i.e. If you don't need it, don't mess with it and\neverything\nwill work just fine.\nI don't either see any reason not to use the Oracle syntax since it is\nso widely used\nand it works very well for those of us that also work on Oracle (but in\npostgresql\nwithout the extent and storage clauses).\n\nRegards\nStefan\n\nTom Lane wrote:\n\n> mlw <markw@mohawksoft.com> writes:\n> > Tom Lane wrote:\n> >> This seems to be tablespaces with an off-the-cuff syntax. I'd\n> >> suggest taking a hard look at Oracle's tablespace facility and\n> >> seeing how closely we want to duplicate that.\n>\n> > Sorry I missed the conversation about tablespaces. One of the reasons I think\n> > Postgres is so usable is because it does not require the use of tablespace\n> > files. If by tablespace, you mean to declare a directory on a device as a\n> > tablespace, then cool. If you want to create tablespace \"files\" ala Oracle, you\n> > are heading toward an administration nightmare.\n>\n> No, that's not one of the parts of Oracle's facility that I want to\n> duplicate.\n>\n> I think our idea of a tablespace/location/whatchacallit should just be\n> a directory somewhere that table files can be created in. What seems\n> worthwhile to steal from Oracle is the syntax that assigns particular\n> tables to particular tablespaces. 
If we're compatible on syntax, that\n> should ease porting of existing applications --- and as far as I can see\n> at the moment, there's no reason *not* to be compatible at that level.\n> I don't want to borrow Oracle's ideas about space management semantics,\n> however.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Thu, 08 Nov 2001 13:52:40 +0100",
"msg_from": "Stefan Rindeskar <sr@globecom.net>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
},
{
"msg_contents": "Stefan Rindeskar wrote:\n> \n> I just wanted to affirm that Tom's description sounds like av very good\n> way to go.\n> \n> You get the best of two worlds with the possibility to tune servers and\n> yet still\n> very easy to manage. i.e. If you don't need it, don't mess with it and\n> everything\n> will work just fine.\n> I don't either see any reason not to use the Oracle syntax since it is\n> so widely used\n> and it works very well for those of us that also work on Oracle (but in\n> postgresql\n> without the extent and storage clauses).\n> \n\nI absolutely agree with the concept of defining a location for data from within\nthe database. No argument.\n\nThe only two issues I can see are:\n\n(1) Do not require the use of files as table spaces ala Oracle. That is an\nadmin nightmare. (Again, it would be cool, however, to be able to use table\nspace files so that PostgreSQL could have raw access as long as it is not a\nrequirement.) I don't think Tom is thinking about table space files, so I'm not\nworried.\n\n(2) I have a concern about expected behavior vs existing syntax. If PostgreSQL\nuses \"create tablespace\" in such a way that an Oracle DBA will expect it to\nwork as Oracle does, it may cause a bit of confusion. We all know that\n\"confusion\" between an open source solution and a \"defacto\" solution is used as\nclub.\n",
"msg_date": "Thu, 08 Nov 2001 08:52:22 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
},
{
"msg_contents": "\nJim, I see now that you submitted a new version. Folks, do we have a\ndirection for this patch. Discussion of the patch is at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\n---------------------------------------------------------------------------\n\nJim Buttafuoco wrote:\n> Hi all, \n> \n> The following is a description of a patch I am proposing for 7.3. \n> Please read and comment.\n> \n> Thanks\n> Jim\n> \n> \n> This proposal covers the ability to allow a DBA (and general users) to \n> specify where a database and it's individual objects will reside. I \n> propose to add a default data location, index and temporary locations \n> to the pg_shadow table to allow a DBA to specify locations for each \n> user when they create databases, tables and indexes or need temporary \n> disk storage (either for temporary tables or sort files). The \"CREATE \n> DATABASE\" command will be changed to also take an INDEX location and \n> temporary location. All 3 locations will default to the values from \n> pg_shadow for the user that is creating the database. Both the \"CREATE \n> TABLE\" and \"CREATE INDEX\" commands will be changed to add \"WITH \n> LOCATION\" optional argument (location will default to values from \n> PG_DATABASE which were set by the \"CREATE DATABASE\" command).\n> \n> The following system tables will be changed as follows\n> PG_SHADOW add dat_location, idx_location, tmp_location (all default to \n> PG_DATA)\n> PG_DATABASE add dat_location, idx_location, tmp_location (all default \n> to same from PG_SHADOW)\n> PG_CLASS add rellocation (default to dat_location for tables, \n> idx_location for indexes from PG_DATABASE)\n> \n> \n> Add a GLOBAL table pg_locations to track valid locations\n> \n> Add the following commands to manage locations\n> CREATE LOCATION locname PATH 'file system directory';\n> DROP LOCATION locname; (this will have to look into each db to make \n> sure that any objects are not using it. 
Don't know how this will be \n> done yet!)\n> \n> I propose to change the names of the on disk directories from 999999 to \n> 99999_DATA, 99999_INDEX and 99999_TEMP (where 99999 is the OID from \n> PG_DATABASE). A SYMLINK from 99999_INDEX and 99999_TEMP will be made \n> back to 99999_DATA will be made so the WAL functions will continue to \n> work.\n> \n> \n> Again from my earlier attempt at this patch, I believe this capability \n> will not only improve performance (see my earlier emails. Where \n> depending on the type of disks the improvement was between 0% and 100% \n> performance gain running pg_bench) but also give DBA's the flexibility \n> to spread the data files over multiple disks without having to \"hack\" \n> the system using symbolic links. \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Feb 2002 15:01:19 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Jim, I see now that you submitted a new version. Folks, do we have a\n> direction for this patch.\n\nI didn't like it at the time, and still don't. We are not that far away\nfrom having proper tablespaces, and I think that kluges that provide\npart of the functionality will just get in the way when it comes time\nto do it right.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Feb 2002 19:20:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3 "
},
{
"msg_contents": "On Fri, 22 Feb 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Jim, I see now that you submitted a new version. Folks, do we have a\n> > direction for this patch.\n>\n> I didn't like it at the time, and still don't. We are not that far away\n> from having proper tablespaces, and I think that kluges that provide\n> part of the functionality will just get in the way when it comes time\n> to do it right.\n\nWhat kind of time frame is \"not that far away\"? For v7.3?\n\nIf not, and someone can clarify what I'm understanding this patch will do,\nits essentially going to setup a directory structure of:\n\ndata/base/<dboid>/<tbloid>.idx/indx\n\n?\n\nIf we aren't going to have tablespaces for v7.3, there we are talking 6->8\nmonths before we do, and the above sounds like a reasonable interim\nsolution for this ...\n\n",
"msg_date": "Fri, 22 Feb 2002 21:52:21 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3 "
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> On Fri, 22 Feb 2002, Tom Lane wrote:\n>> I didn't like it at the time, and still don't. We are not that far away\n>> from having proper tablespaces, and I think that kluges that provide\n>> part of the functionality will just get in the way when it comes time\n>> to do it right.\n\n> What kind of time frame is \"not that far away\"? For v7.3?\n\nMy guess is that any of the inner circle of hackers could make this\nhappen with about a week's work. Whether someone will find time before\n7.3 is unknown (particularly seeing that we haven't set a target date\nfor 7.3). Personally, schemas are a higher priority for me ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Feb 2002 11:53:24 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3 "
},
{
"msg_contents": "All,\n\nI still believe that postgresql needs this feature. I have many postgresql\nsystems that have over 500GB of data+indexes. Using symbolic links is a BIG\npain in the A??. Every time I run vacuum I have to go and fix the links\nagain. Also I have many disks that are running out of space. This patch\nwould allow me the ability to move my tables and indexes around. I\npersonally don't see the difference between my patch and what people are\ncalling \"Tablespaces\" . Oracle's definition is \"A group of files that contain\ndatabase objects\" , under my patch tablespaces and locations are the same\nthing except postgresql uses file system directories to contain the group of\nobjects. \n\nTo recap my patch (location = tablespace here)\n\nAllow the DBA to create locations with a CREATE LOCATION command or CREATE\nTABLESPACE command if you like tablespace instead of LOCATION.\n\nThen for DATABASES (and schemas when available) CREATE DATABASE WITH\nDATA_LOCATION = XXX and INDEX_LOCATION = YYY where XXX and YYY are the\nDEFAULT values for OBJECT creation if not LOCATION is given.\n\nCREATE TABLE and CREATE INDEX will create tables and indexes in the defaults\nfrom the CREATE DATABASE/SCHEMA commands above.\n\nCREATE TABLE WITH LOCATION=AAA and CREATE INDEX WITH LOCATION BBB would create\nthe table/index with the alternate location (only if the location was created\nwith a CREATE LOCATION command)\n\n\nThe create table command would also have to be change to support primary key/\nunique index syntax.\n\ncreate table SAMPLE\n(\n\tc1\ttext primary key location CCC,\n\tc2 text unique location DDD\n);\n\n\nI hope this explains my patch better. As I said before and I believe this\nto be true, This patch will enable the DBA to place tables/indexes on any\ndisk either for performance and/or space reasons. Also I believe this is\nanother check off item for people looking at postgresql when comparing with\nOracle/Sybase/DB2 ... 
\n\nThanks for your time\nJim\n\n\n\n\n> Jim, I see now that you submitted a new version. Folks, do we have a\n> direction for this patch. Discussion of the patch is at:\n> \n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n> \n> ---------------------------------------------------------------------------\n> \n> Jim Buttafuoco wrote:\n> > Hi all, \n> > \n> > The following is a description of a patch I am proposing for 7.3. \n> > Please read and comment.\n> > \n> > Thanks\n> > Jim\n> > \n> > \n> > This proposal covers the ability to allow a DBA (and general users) to \n> > specify where a database and it's individual objects will reside. I \n> > propose to add a default data location, index and temporary locations \n> > to the pg_shadow table to allow a DBA to specify locations for each \n> > user when they create databases, tables and indexes or need temporary \n> > disk storage (either for temporary tables or sort files). The \"CREATE \n> > DATABASE\" command will be changed to also take an INDEX location and \n> > temporary location. All 3 locations will default to the values from \n> > pg_shadow for the user that is creating the database. 
Both the \"CREATE \n> > TABLE\" and \"CREATE INDEX\" commands will be changed to add \"WITH \n> > LOCATION\" optional argument (location will default to values from \n> > PG_DATABASE which were set by the \"CREATE DATABASE\" command).\n> > \n> > The following system tables will be changed as follows\n> > PG_SHADOW add dat_location, idx_location, tmp_location (all default to \n> > PG_DATA)\n> > PG_DATABASE add dat_location, idx_location, tmp_location (all default \n> > to same from PG_SHADOW)\n> > PG_CLASS add rellocation (default to dat_location for tables, \n> > idx_location for indexes from PG_DATABASE)\n> > \n> > \n> > Add a GLOBAL table pg_locations to track valid locations\n> > \n> > Add the following commands to manage locations\n> > CREATE LOCATION locname PATH 'file system directory';\n> > DROP LOCATION locname; (this will have to look into each db to make \n> > sure that any objects are not using it. Don't know how this will be \n> > done yet!)\n> > \n> > I propose to change the names of the on disk directories from 999999 to \n> > 99999_DATA, 99999_INDEX and 99999_TEMP (where 99999 is the OID from \n> > PG_DATABASE). A SYMLINK from 99999_INDEX and 99999_TEMP will be made \n> > back to 99999_DATA will be made so the WAL functions will continue to \n> > work.\n> > \n> > \n> > Again from my earlier attempt at this patch, I believe this capability \n> > will not only improve performance (see my earlier emails. Where \n> > depending on the type of disks the improvement was between 0% and 100% \n> > performance gain running pg_bench) but also give DBA's the flexibility \n> > to spread the data files over multiple disks without having to \"hack\" \n> > the system using symbolic links. 
\n> > \n> > \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n\n\n",
"msg_date": "Sun, 3 Mar 2002 10:34:08 -0500",
"msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>",
"msg_from_op": true,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
},
{
"msg_contents": "\nI think Jim has some very good points here. What does his\nimplementation lack? Seems pretty valuable to me.\n\n---------------------------------------------------------------------------\n\nJim Buttafuoco wrote:\n> All,\n> \n> I still believe that postgresql needs this feature. I have many postgresql\n> systems that have over 500GB of data+indexes. Using symbolic links is a BIG\n> pain in the A??. Every time I run vacuum I have to go and fix the links\n> again. Also I have many disks that are running out of space. This patch\n> would allow me the ability to move my tables and indexes around. I\n> personally don't see the difference between my patch and what people are\n> calling \"Tablespaces\" . Oracle's definition is \"A group of files that contain\n> database objects\" , under my patch tablespaces and locations are the same\n> thing except postgresql uses file system directories to contain the group of\n> objects. \n> \n> To recap my patch (location = tablespace here)\n> \n> Allow the DBA to create locations with a CREATE LOCATION command or CREATE\n> TABLESPACE command if you like tablespace instead of LOCATION.\n> \n> Then for DATABASES (and schemas when available) CREATE DATABASE WITH\n> DATA_LOCATION = XXX and INDEX_LOCATION = YYY where XXX and YYY are the\n> DEFAULT values for OBJECT creation if not LOCATION is given.\n> \n> CREATE TABLE and CREATE INDEX will create tables and indexes in the defaults\n> from the CREATE DATABASE/SCHEMA commands above.\n> \n> CREATE TABLE WITH LOCATION=AAA and CREATE INDEX WITH LOCATION BBB would create\n> the table/index with the alternate location (only if the location was created\n> with a CREATE LOCATION command)\n> \n> \n> The create table command would also have to be change to support primary key/\n> unique index syntax.\n> \n> create table SAMPLE\n> (\n> \tc1\ttext primary key location CCC,\n> \tc2 text unique location DDD\n> );\n> \n> \n> I hope this explains my patch better. 
As I said before and I believe this\n> to be true, This patch will enable the DBA to place tables/indexes on any\n> disk either for performance and/or space reasons. Also I believe this is\n> another check off item for people looking at postgresql when comparing with\n> Oracle/Sybase/DB2 ... \n> \n> Thanks for your time\n> Jim\n> \n> \n> \n> \n> > Jim, I see now that you submitted a new version. Folks, do we have a\n> > direction for this patch. Discussion of the patch is at:\n> > \n> > \thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > Jim Buttafuoco wrote:\n> > > Hi all, \n> > > \n> > > The following is a description of a patch I am proposing for 7.3. \n> > > Please read and comment.\n> > > \n> > > Thanks\n> > > Jim\n> > > \n> > > \n> > > This proposal covers the ability to allow a DBA (and general users) to \n> > > specify where a database and it's individual objects will reside. I \n> > > propose to add a default data location, index and temporary locations \n> > > to the pg_shadow table to allow a DBA to specify locations for each \n> > > user when they create databases, tables and indexes or need temporary \n> > > disk storage (either for temporary tables or sort files). The \"CREATE \n> > > DATABASE\" command will be changed to also take an INDEX location and \n> > > temporary location. All 3 locations will default to the values from \n> > > pg_shadow for the user that is creating the database. 
Both the \"CREATE \n> > > TABLE\" and \"CREATE INDEX\" commands will be changed to add \"WITH \n> > > LOCATION\" optional argument (location will default to values from \n> > > PG_DATABASE which were set by the \"CREATE DATABASE\" command).\n> > > \n> > > The following system tables will be changed as follows\n> > > PG_SHADOW add dat_location, idx_location, tmp_location (all default to \n> > > PG_DATA)\n> > > PG_DATABASE add dat_location, idx_location, tmp_location (all default \n> > > to same from PG_SHADOW)\n> > > PG_CLASS add rellocation (default to dat_location for tables, \n> > > idx_location for indexes from PG_DATABASE)\n> > > \n> > > \n> > > Add a GLOBAL table pg_locations to track valid locations\n> > > \n> > > Add the following commands to manage locations\n> > > CREATE LOCATION locname PATH 'file system directory';\n> > > DROP LOCATION locname; (this will have to look into each db to make \n> > > sure that any objects are not using it. Don't know how this will be \n> > > done yet!)\n> > > \n> > > I propose to change the names of the on disk directories from 999999 to \n> > > 99999_DATA, 99999_INDEX and 99999_TEMP (where 99999 is the OID from \n> > > PG_DATABASE). A SYMLINK from 99999_INDEX and 99999_TEMP will be made \n> > > back to 99999_DATA will be made so the WAL functions will continue to \n> > > work.\n> > > \n> > > \n> > > Again from my earlier attempt at this patch, I believe this capability \n> > > will not only improve performance (see my earlier emails. Where \n> > > depending on the type of disks the improvement was between 0% and 100% \n> > > performance gain running pg_bench) but also give DBA's the flexibility \n> > > to spread the data files over multiple disks without having to \"hack\" \n> > > the system using symbolic links. 
\n> > > \n> > > \n> > > \n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > > \n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 01:31:42 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think Jim has some very good points here. What does his\n> implementation lack?\n\nForward compatibility to a future tablespace implementation.\nIf we do this, we'll be stuck with supporting this feature set,\nnot to mention this syntax; neither of which have garnered any\nsupport from the assembled hackers.\n\nI went back to look at TODO.detail/tablespaces, and find that it's\nbadly in need of editing. Much of the discussion there is\nback-and-forthing about the question of naming files by OID,\nwhich is now a done deal. But it is clear that people wanted to\nhave a notion of tablespaces as objects somewhat orthogonal to\ndatabases. I didn't see any support for hard-wiring tablespace\nassignments on the basis of \"tables here, indexes there\", either.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 05 Mar 2002 02:30:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3 "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think Jim has some very good points here. What does his\n> > implementation lack?\n> \n> Forward compatibility to a future tablespace implementation.\n> If we do this, we'll be stuck with supporting this feature set,\n> not to mention this syntax; neither of which have garnered any\n> support from the assembled hackers.\n> \n> I went back to look at TODO.detail/tablespaces, and find that it's\n> badly in need of editing. Much of the discussion there is\n> back-and-forthing about the question of naming files by OID,\n\nAgreed.\n\n> which is now a done deal. But it is clear that people wanted to\n> have a notion of tablespaces as objects somewhat orthogonal to\n> databases. I didn't see any support for hard-wiring tablespace\n> assignments on the basis of \"tables here, indexes there\", either.\n\nOK, I read through it. Wow, it was long. Exactly what is missing from\nthe patch that he can add? It is mostly having tablespaces independent\nof databases? There were so many proposals in there I am not sure\nwhere it all landed. The cleaned out version link to from the TODO list\nshould be slightly easier reading, but it clearly goes all over the\nplace. I can try and read through it and distill down the ideas if that\nwould help.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 5 Mar 2002 02:51:34 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
},
{
"msg_contents": "...\n> Forward compatibility to a future tablespace implementation.\n> If we do this, we'll be stuck with supporting this feature set,\n> not to mention this syntax; neither of which have garnered any\n> support from the assembled hackers.\n\nThe feature set (in some incarnation) is exactly something we should\nhave. \"Tablespace\" could mean almost anything, since (I recall that) we\nare not slavishly copying the Oracle features having a similar name. The\nsyntax (or something similar) seems acceptable to me. I haven't looked\nat the implementation itself.\n\nSo, I'll guess that the particular objection to this implementation is\nalong the lines of wanting to be able to manage tablespaces/locations as\na single entity? So that one could issue commands like (forgive the\nsyntax) \"move tablespace xxx to yyy;\" and be able to yank the entire\ncontents from one place to another in a single line?\n\nJim's patches don't explicitly tie the pieces residing in a single\nlocation together. Is that the objection? In all other respects (and\nperhaps in all respects period) it seems to be a good starting point at\nleast.\n\nI know that you have said that you want to look at \"tablespaces\" for\n7.3. If we get there with a feature set we all find acceptable, then\ngreat. If we don't, then Jim's subset of features would be great to\nhave.\n\nComments?\n\n - Thomas\n",
"msg_date": "Tue, 05 Mar 2002 06:02:47 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
}
] |
[
{
"msg_contents": "Can anyone tell me what exit code 11 means on a backend failure? I can't\nfind any definition of backend exit codes. Is there any documentation of\nthese?\n\n------- Forwarded Message\nSubject: Bug#101177: postgresql: Postgres died, won't restart -- another case\nFrom: Ken Harris <kbh7@cornell.edu>\nDate: Fri, 02 Nov 2001 15:40:58 -0500 (20:40 GMT)\nTo: Debian Bug Tracking System <101177@bugs.debian.org>\n\nPackage: postgresql\nVersion: 7.1.3-4\n\nI'm seeing something similar here. Postgres died while a program\nwas dumping data into it, and I can't restart it. I'm getting\n\n /usr/lib/postgresql/bin/postmaster: Startup proc 10178 exited with status \n11 - abort\n\nin /var/log/postgres.log (11 = segfault?).\n\nIt was asked if 7.1.3-4 cures this; I'm already using 7.1.3-4.\n\nIt was said Postgres creates a 16-20MB file on startup; I have 1.0GB free.\n(Is this startup file size dependent on the database size?)\n\n-- System Information\nDebian Release: testing/unstable\nArchitecture: i386\nKernel: Linux picea 2.4.12 #2 Thu Nov 1 13:01:34 EST 2001 i686\nLocale: LANG=C, LC_CTYPE=\n\n------- End of Forwarded Message\n\n------- Forwarded Message\nDate: Mon, 05 Nov 2001 10:44:46 -0500\nFrom: Ken Harris <kbh7@cornell.edu>\nTo: Oliver Elphick <olly@lfix.co.uk>\nSubject: Re: Bug#101177: postgresql: Postgres died, won't restart -- another ca\n\t se\n\n\n>No. Your problem does not sound like lack of disk space (I recently had\n>that and that gives an exit status of 512).\n>\n\nOk, good. (Or, darn, I don't get to ask my boss for a bigger disk. :-)\n\n>Is there anything in the logs? If not, set debug_level = 2 in\n>/etc/postgresql/postgresql.conf and try again.\n>\n\nThe line I quoted above (\"exited with status 11\") was the only thing \nthat showed up in the log. 
With debug_level=2, I see:\n\ninvoking IpcMemoryCreate(size=2211840)\nFindExec: found \"/usr/lib/postgresql/bin/postmaster\" using argv[0]\n/usr/lib/postgresql/bin/postmaster: reaping dead processes...\n/usr/lib/postgresql/bin/postmaster: Startup proc 14219 exited with \nstatus 11 - abort\n\n(Doesn't look terribly helpful, I'm afraid.)\n\nThis database isn't mission-critical, by any means, and I'm working on a \nprogram (JDBC) to create it from my raw data, so it gets re-created from \nscratch all the time. I'm curious to know why Postgres died, though, \nand I'll be glad to run any sort of diagnostics you can think of.\n\nThanks,\n\n- - Ken\n\n\n------- End of Forwarded Message\n\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"No man can serve two masters; for either he will \n hate the one, and love the other; or else he will hold\n to the one, and despise the other. Ye cannot serve \n God and mammon.\" Matthew 6:24 \n\n\n",
"msg_date": "Mon, 05 Nov 2001 16:18:58 +0000",
"msg_from": "\"Oliver Elphick\" <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: Bug#101177: postgresql: Postgres died, won't restart -- another "
},
{
"msg_contents": "\"Oliver Elphick\" <olly@lfix.co.uk> writes:\n> Can anyone tell me what exit code 11 means on a backend failure?\n\nLook in <signal.h>. I believe it's SIGSEGV on most Unixen, but you\nshould check your machine.\n\nIn any case, getting a backtrace from the coredump would be the next\nstep to take.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Nov 2001 17:27:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug#101177: postgresql: Postgres died, won't restart -- another "
}
] |
[
{
"msg_contents": "\n> > Maybe a better fix would be to #undef _ALL_SOURCE before including\n> > inttypes.h ?\n> \n> Is it possible to avoid including inttypes.h altogether?\n\nLooks like we get it from arpa/inet.h. I don't see any content why we \nwould need inttypes.h directly.\n\nAndreas\n",
"msg_date": "Mon, 5 Nov 2001 17:28:59 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Beta going well "
}
] |
[
{
"msg_contents": "Dear all,\n\nAre there PostgreSQL 7.2 beta available?\n\nBest regards,\nJean-Michel POURE\n",
"msg_date": "Mon, 05 Nov 2001 17:47:26 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "PostgreSQL 7.2 beta RPMs"
}
] |
[
{
"msg_contents": "Tom,\n\nYes, locations = tablespaces (I really don't care if we call them\nlocaitons or tablespaces, I was just using LOCATIONS because that's what\nwe have now...) is there a SQL standard for this???.\n\nAs for locations and user, Under Oracle a user is assigned a default\ntablespace and a temporary tablespace via the \"CREATE USER\" command. \nAlso \"CREATE DATABASE\" allows you to specify the SYSTEM tablespace where\nall objects will go unless a storage clause is added duration object\ncreation. \"CREATE TABLE\" and \"CREATE INDEX\" both take a storage clause.\n\n\n As for the actual data file location, I believe under each loc oid we\nwould have pg_port #/DB OID/pg_class OID might be the way to go. \n\nThe example below has 3 tablespaces/locations PGDATA/DB1/DB2\nPG_LOCATIONS (or PG_TABLESPACES) would have the following rows\nPGDATA | /usr/local/pgsql/data\nDB1 | /db1\nDB2 | /db2\n\n\n/usr/local/pgsql/data/5432/1 <<template1\n ^----------- <<default location/tablespace\n ^--------- <<Default PG Port\n\n/db1/data/5432\n ^-------------------------<< second location default PG PORT\n/db1/data/5432/65894834/99999999 \n ^------<< somedb/sometable\n/db1/data/5432/65894834/88888888\n ^------<< somedb/someindex\n \n/db2/data/5432 \n ^-------------------------<< DB2\n\n\n> \"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> > I propose to add a default data location, index and temporary\nlocations \n> > to the pg_shadow table to allow a DBA to specify locations for each \n> > user when they create databases, tables and indexes or need\ntemporary \n> > disk storage (either for temporary tables or sort files).\n> \n> Have you read any of the previous discussions about tablespaces?\n> This seems to be tablespaces with an off-the-cuff syntax. 
I'd\n> suggest taking a hard look at Oracle's tablespace facility and\n> seeing how closely we want to duplicate that.\n> \n> > PG_SHADOW add dat_location, idx_location, tmp_location (all default\nto \n> > PG_DATA)\n> \n> What does location have to do with users?\n> \n> > I propose to change the names of the on disk directories from 999999\nto \n> > 99999_DATA, 99999_INDEX and 99999_TEMP (where 99999 is the OID from \n> > PG_DATABASE).\n> \n> No, that doesn't scale to arbitrary locations; furthermore it requires\n> an unseemly amount of knowledge in low-level file access code about\n> exactly what kind of object each table is. The symlinks should just\n> be named after the OIDs of the locations' rows in pg_location.\n> \n> The direction I've been envisioning for this is that each table has\n> a logical identification <pg_database OID>, <pg_class OID> as well\n> as a physical identification <pg_location OID>, <relfilenode OID>.\n> The path to access the table can be constructed entirely from the\n> physical identification: $PGDATA/base/<pg_location OID>/<relfilenode\nOID>.\n> \n> One problem to be addressed if multiple databases can share a single\n> physical location is how to prevent relfilenode collisions. Perhaps\n> we could avoid the issue by adding another layer of subdirectories:\n> $PGDATA/base/<pg_location OID>/<pg_database OID>/<relfilenode OID>.\n> That is, each database would have a subdirectory within each location\n> that it's ever used. (This would make DROP DATABASE a lot easier,\n> among other things.)\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n",
"msg_date": "Mon, 5 Nov 2001 12:26:17 -0500",
"msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>",
"msg_from_op": true,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3 "
},
{
"msg_contents": "\"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> As for the actual data file location, I believe under each loc oid we\n> would have pg_port #/DB OID/pg_class OID might be the way to go. \n\nIntroducing pg_port into the paths would be a bad idea, since it\nwould prevent restarting a postmaster with a different port number.\nI think if a DBA is running multiple postmasters, it's up to him\nto avoid pointing more than one of them at the same \"location\"\ndirectory. (Maybe we could enforce that with lock files? Not\nsure it's worth the trouble though.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Nov 2001 17:53:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3 "
},
{
"msg_contents": "On Mon, Nov 05, 2001 at 12:26:17PM -0500, Jim Buttafuoco allegedly wrote:\n> The example below has 3 tablespaces/locations PGDATA/DB1/DB2\n> PG_LOCATIONS (or PG_TABLESPACES) would have the following rows\n> PGDATA | /usr/local/pgsql/data\n> DB1 | /db1\n> DB2 | /db2\n\n<SNIP>\n\n> /db1/data/5432\n> ^-------------------------<< second location default PG PORT\n> /db1/data/5432/65894834/99999999 \n> ^------<< somedb/sometable\n> /db1/data/5432/65894834/88888888\n> ^------<< somedb/someindex\n> \n> /db2/data/5432 \n> ^-------------------------<< DB2\n\nShould data/ even be in there? /db2/5432 seems to be the correct value.\nEither that or change the location to /db2/data. Implicitly creating an\nextra directory isn't something I would like to happen, especially if it\ndoesn't happen for PGDATA itself.\n\nMy $.02,\n\nMathijs\n--\nAnd the beast shall be made legion. Its numbers shall be increased a\nthousand thousand fold. The din of a million keyboards like unto a great\nstorm shall cover the earth, and the followers of Mammon shall tremble.\n",
"msg_date": "Fri, 23 Nov 2001 12:08:47 +0100",
"msg_from": "Mathijs Brands <mathijs@ilse.nl>",
"msg_from_op": false,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3"
}
] |
[
{
"msg_contents": "Barry Lind <barry@xythos.com> writes:\n> It happened to me this afternoon while running a 'vacuum analyze \n> verbose'. I have attached the stack trace below.\n\nThat trace is certainly not from a vacuum operation.\n\nI'd suggest rebuilding with --enable-debug; we won't be able to learn\nmuch without that. Until you do that, possibly it'd help to turn on\nquery logging so that we can learn what query is crashing.\n\nI find the presence of EvalPlanQual in the backtrace suggestive.\nI don't trust that code at all ;-) ... but without a lot more info\nwe're not going to be able to figure out anything.\n\nBTW, EvalPlanQual is only called if the query is an UPDATE or DELETE\nthat tries to update a row that's already been updated by a\nnot-yet-committed transaction. That probably explains why you don't\nsee the crash often --- if you deliberately set up the right\ncircumstances, you could perhaps reproduce it on-demand.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 05 Nov 2001 20:31:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Core dump on 7.1.3 on Linux 2.2.19 "
},
{
"msg_contents": "On a production server I am getting periodic core dumps from postgres. \nThe server can go for days or weeks fine without any problems, but does \ndump core every so often.\n\nIt happened to me this afternoon while running a 'vacuum analyze \nverbose'. I have attached the stack trace below. I looked at a core \nfrom the vacuum as well as another core file from a prior operation \n(which wasn't a vacuum) and they had the same stack. So I don't think \nthis is a vacuum problem.\n\nAny ideas? (I intend to rebuild to get some better info in the stack \ntrace, but it may be a while before I get around to that).\n\nthanks,\n--Barry\n\n[root@xythos1 26382]# gdb postgres core\nGNU gdb 5.0\nCopyright 2000 Free Software Foundation, Inc.\nGDB is free software, covered by the GNU General Public License, and you are\nwelcome to change it and/or distribute copies of it under certain \nconditions.\nType \"show copying\" to see the conditions.\nThere is absolutely no warranty for GDB. Type \"show warranty\" for details.\nThis GDB was configured as \"i386-redhat-linux\"...\n\nwarning: core file may not match specified executable file.\nCore was generated by `postgres: postgres files 127.0.0.1 SELECT \n '.\nProgram terminated with signal 11, Segmentation fault.\nReading symbols from /usr/lib/libz.so.1...done.\nLoaded symbols for /usr/lib/libz.so.1\nReading symbols from /lib/libcrypt.so.1...done.\nLoaded symbols for /lib/libcrypt.so.1\nReading symbols from /lib/libresolv.so.2...done.\nLoaded symbols for /lib/libresolv.so.2\nReading symbols from /lib/libnsl.so.1...done.\nLoaded symbols for /lib/libnsl.so.1\nReading symbols from /lib/libdl.so.2...done.\nLoaded symbols for /lib/libdl.so.2\nReading symbols from /lib/libm.so.6...done.\nLoaded symbols for /lib/libm.so.6\nReading symbols from /usr/lib/libreadline.so.4.1...done.\nLoaded symbols for /usr/lib/libreadline.so.4.1\nReading symbols from /lib/libtermcap.so.2...done.\nLoaded symbols for /lib/libtermcap.so.2\nReading 
symbols from /lib/libc.so.6...done.\nLoaded symbols for /lib/libc.so.6\nReading symbols from /lib/ld-linux.so.2...done.\nLoaded symbols for /lib/ld-linux.so.2\nReading symbols from /usr/local/pgsql/lib/plpgsql.so...done.\nLoaded symbols for /usr/local/pgsql/lib/plpgsql.so\n#0 0x80b9693 in ExecEvalVar ()\n(gdb) where\n#0 0x80b9693 in ExecEvalVar ()\n#1 0x80ba219 in ExecEvalExpr ()\n#2 0x80b9c6b in ExecEvalFuncArgs ()\n#3 0x80b9ce4 in ExecMakeFunctionResult ()\n#4 0x80b9e81 in ExecEvalOper ()\n#5 0x80ba289 in ExecEvalExpr ()\n#6 0x80ba39a in ExecQual ()\n#7 0x80bdca1 in IndexNext ()\n#8 0x80ba83f in ExecScan ()\n#9 0x80bde78 in ExecIndexScan ()\n#10 0x80b8fc1 in ExecProcNode ()\n#11 0x80bf6f4 in ExecNestLoop ()\n#12 0x80b8ffd in ExecProcNode ()\n#13 0x80b8c5e in EvalPlanQualNext ()\n#14 0x80b8c35 in EvalPlanQual ()\n#15 0x80b818a in ExecutePlan ()\n#16 0x80b7738 in ExecutorRun ()\n#17 0x80fc3af in ProcessQuery ()\n#18 0x80faebe in pg_exec_query_string ()\n#19 0x80fbea6 in PostgresMain ()\n#20 0x80e6fc8 in DoBackend ()\n#21 0x80e6bc7 in BackendStartup ()\n#22 0x80e5e3d in ServerLoop ()\n#23 0x80e5888 in PostmasterMain ()\n#24 0x80c7107 in main ()\n#25 0x400ecf31 in __libc_start_main (main=0x80c6fd4 <main>, argc=3,\n ubp_av=0xbffffa74, init=0x8065314 <_init>, fini=0x813e19c <_fini>,\n rtld_fini=0x4000e274 <_dl_fini>, stack_end=0xbffffa6c)\n at ../sysdeps/generic/libc-start.c:129\n\n\n",
"msg_date": "Mon, 05 Nov 2001 17:41:33 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Core dump on 7.1.3 on Linux 2.2.19"
},
{
"msg_contents": "On О©╫О©╫О©╫, 2001-11-06 at 04:41, Barry Lind wrote:\n> On a production server I am getting periodic core dumps from postgres. \n> The server can go for days or weeks fine without any problems, but does \n> dump core every so often.\n> \n> It happened to me this afternoon while running a 'vacuum analyze \n> verbose'. I have attached the stack trace below. I looked at a core \n> from the vacuum as well as another core file from a prior operation \n> (which wasn't a vacuum) and they had the same stack. So I don't think \n> this is a vacuum problem.\n> \n> Any ideas? (I intend to rebuild to get some better info in the stack \n> trace, but it may be a while before I get around to that).\n> \nI experienced problem with 'vacuum analyze' with postgres 7.1.2 on\nglibc-2.2.2. And it was bug in libc. Upgrading to glibc-2.2.3 solved my\nproblem.\n\nRegards,\nDmitry\n\n\n",
"msg_date": "06 Nov 2001 12:19:43 +0300",
"msg_from": "\"Dmitry G. Mastrukov\" =?koi8-r?Q?=E4=CD=C9=D4=D2=C9=CA_?=\n\t=?koi8-r?Q?=E7=C5=CE=CE=C1=C4=D8=C5=D7=C9=DE_?=\n\t=?koi8-r?Q?=ED=C1=D3=D4=D2=C0=CB=CF=D7?= <dmitry@taurussoft.org>",
"msg_from_op": false,
"msg_subject": "Re: Core dump on 7.1.3 on Linux 2.2.19"
}
] |
[
{
"msg_contents": "\n> > It is directly in /usr/include/inttypes.h in AIX 4.3.2 :-(\n> \n> Isn't it linked to /usr/include/sys/inttypes.h ?\n\nYes, I didn't see that, sorry.\n\nAndreas\n",
"msg_date": "Tue, 6 Nov 2001 10:01:37 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Beta going well "
}
] |
[
{
"msg_contents": "Hi,\n\n I've almost got the ALTER TABLE RENAME fixed so it doesn't break\ntriggers referring to the renamed column. The final problem is that\nafter the pg_trigger.tgargs is updated, that change is not visible \nin the current backend. I see that there are a couple of interesting\nfunctions to refresh the RelationCache:\n\n RelationClearRelation(Relation, bool);\n\n RelationFlushRelation(Relation);\n\nWhich one of these should I use to have the new pg_trigger data visible\nin the current backend? Or is there a better way than either of these?\n\nI should be able to clean up this work and send a patch tomorrow \nevening if I get this kink worked out.\n\nThanks,\n Brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Tue, 6 Nov 2001 04:38:45 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": true,
"msg_subject": "RelationFlushRelation() or RelationClearRelation()"
},
{
"msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> I've almost got the ALTER TABLE RENAME fixed so it doesn't break\n> triggers referring to the renamed column. The final problem is that\n> after the pg_trigger.tgargs is updated, that change is not visible \n> in the current backend.\n\nThis should happen automatically as a consequence of the relcache flush\nmessage. Doing a manual RelationClearRelation or whatever is NOT the\nanswer; if you find yourself doing that, it means that other backends\naren't hearing about the change either.\n\nThe usual way to force a relcache flush is to update the relation's\npg_class row. Now that I think about it, I'm not sure ALTER TABLE\nRENAME COLUMN would have any direct reason to do that, so it may be\nbroken already in this regard. Does the relcache entry's column\ndata get updated with the new name?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Nov 2001 14:54:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RelationFlushRelation() or RelationClearRelation() "
},
{
"msg_contents": "On 06 Nov 2001 at 14:54 (-0500), Tom Lane wrote:\n| Brent Verner <brent@rcfile.org> writes:\n| > I've almost got the ALTER TABLE RENAME fixed so it doesn't break\n| > triggers referring to the renamed column. The final problem is that\n| > after the pg_trigger.tgargs is updated, that change is not visible \n| > in the current backend.\n| \n| This should happen automatically as a consequence of the relcache flush\n| message. Doing a manual RelationClearRelation or whatever is NOT the\n| answer; if you find yourself doing that, it means that other backends\n| aren't hearing about the change either.\n\ngotcha. I don't know what you mean by 'relcache flush message,' but\nI'll figure that out soon ;-)\n\n| The usual way to force a relcache flush is to update the relation's\n| pg_class row. Now that I think about it, I'm not sure ALTER TABLE\n| RENAME COLUMN would have any direct reason to do that, so it may be\n| broken already in this regard. Does the relcache entry's column\n| data get updated with the new name?\n\nThe relation->triggerdesc still has the old tgargs after updating the\npg_trigger table, so the triggered RI_ function is called with the\nold arguments.\n\nIt is probably noteworthy that I am directly modifying the tgargs\nin the pg_trigger table, via simple_heap_update(). This modfication\nis made at the end of renameatt() in rename.c. Could making this\nchange prior to the column rename cause the 'relcache flush message' \nto do the right thing? [I'm going to try this as soon as I'm off work.]\n\nAlso, is directly updating the pg_trigger table advisable? I'll\nlook further at trigger.c to see if I overlooked any utility to do\nthis cleaner.\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Tue, 6 Nov 2001 18:07:26 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": true,
"msg_subject": "Re: RelationFlushRelation() or RelationClearRelation()"
},
{
"msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> It is probably noteworthy that I am directly modifying the tgargs\n> in the pg_trigger table, via simple_heap_update(). This modfication\n> is made at the end of renameatt() in rename.\n\nSeems reasonable.\n\nNow that I think about it, the problem is almost certainly that the\nrelcache+sinval mechanism isn't recognizing this update as requiring\nrelcache-entry rebuild. If you update a pg_class row, it definitely\ndoes recognize that event as forcing relcache rebuild, and I had thought\nthat updating a pg_attribute row associated with a relcache entry would\ncause one too. But maybe not. We should either extend the sinval code\nto make that happen, or tweak renameatt to force a relcache flush\nexplicitly.\n\nAlternatively, maybe you're expecting too much? The relcache rebuild\ndoesn't (and isn't supposed to) happen until the next transaction commit\nor CommandCounterIncrement. If you've structured the code in a way that\nneeds the relcache change to happen sooner, then I think we need to find\na way to avoid expecting that to happen.\n\n> Also, is directly updating the pg_trigger table advisable?\n\nsimple_heap_update seems pretty direct to me :-) ... what did you have\nin mind?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Nov 2001 18:50:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RelationFlushRelation() or RelationClearRelation() "
},
{
"msg_contents": "I said:\n> Now that I think about it, the problem is almost certainly that the\n> relcache+sinval mechanism isn't recognizing this update as requiring\n> relcache-entry rebuild. If you update a pg_class row, it definitely\n> does recognize that event as forcing relcache rebuild, and I had thought\n> that updating a pg_attribute row associated with a relcache entry would\n> cause one too. But maybe not.\n\nIt sure looks to me like the update of the pg_attribute row\nshould be sufficient to queue a relcache flush message (see\nRelationInvalidateHeapTuple and subsidiary routines in\nbackend/utils/cache/inval.c). We could argue about whether\nPrepareForTupleInvalidation needs to test for a wider variety of\nrelcache-invalidating updates, but nonetheless I don't see why\nrenameatt would be failing to trigger one. Are you sure it's\nnot working?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Nov 2001 19:08:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RelationFlushRelation() or RelationClearRelation() "
},
{
"msg_contents": "On 06 Nov 2001 at 19:08 (-0500), Tom Lane wrote:\n| I said:\n| > Now that I think about it, the problem is almost certainly that the\n| > relcache+sinval mechanism isn't recognizing this update as requiring\n| > relcache-entry rebuild. If you update a pg_class row, it definitely\n| > does recognize that event as forcing relcache rebuild, and I had thought\n| > that updating a pg_attribute row associated with a relcache entry would\n| > cause one too. But maybe not.\n| \n| It sure looks to me like the update of the pg_attribute row\n| should be sufficient to queue a relcache flush message (see\n| RelationInvalidateHeapTuple and subsidiary routines in\n| backend/utils/cache/inval.c). We could argue about whether\n| PrepareForTupleInvalidation needs to test for a wider variety of\n| relcache-invalidating updates, but nonetheless I don't see why\n| renameatt would be failing to trigger one. Are you sure it's\n| not working?\n\nPretty darned sure that I've accurately described my symptoms.\nDoing the tgargs update before or after the actual column update\ndid not affect the behavior -- the cached relation->triggerdesc\nstill contains incorrect tgargs. I'll clean up what I have and \npost a patch for review.\n\nTo reply to your earlier email asking what do I expect.\n\nI'd like to be able to say...\n\n brent# create table parent (id int UNIQUE);\n brent# create table child (id int4 references parent(id) on update cascade);\n brent# alter table parent RENAME id to hello;\n brent# insert into parent values (1);\n brent# insert into child values (1);\n\nAfter running the above and without (re)starting a new backend yields\nthe following error. 
After getting a new backend, the behavior is as \ndesired.\n\n brent=# insert into child values(1);\n ERROR: constraint <unnamed>: table parent does not have an attribute id\n\n brent=# select tgargs from pg_trigger;\n tgargs \n ----------------------------------------------------------------\n \n <unnamed>\\000child\\000parent\\000UNSPECIFIED\\000id\\000hello\\000\n <unnamed>\\000child\\000parent\\000UNSPECIFIED\\000id\\000hello\\000\n <unnamed>\\000child\\000parent\\000UNSPECIFIED\\000id\\000hello\\000\n\n\nYour comments on this matter have been much appreciated. I'll next\nlook (further) into the sinval mechanism for a way to forcibly \ninvalidate the cached relation->triggerdesc.\n\n Brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Tue, 6 Nov 2001 23:38:44 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": true,
"msg_subject": "Re: RelationFlushRelation() or RelationClearRelation()"
},
{
"msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> After running the above and without (re)starting a new backend yields\n> the following error. After getting a new backend, the behavior is as \n> desired.\n\n> brent=# insert into child values(1);\n> ERROR: constraint <unnamed>: table parent does not have an attribute id\n\nI wonder whether you're looking in the right place. The RI trigger code\ncaches query plans --- could that caching be the source of the problem?\n(See backend/utils/adt/ri_triggers.c)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Nov 2001 23:57:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RelationFlushRelation() or RelationClearRelation() "
},
{
"msg_contents": "On 06 Nov 2001 at 23:57 (-0500), Tom Lane wrote:\n| Brent Verner <brent@rcfile.org> writes:\n| > After running the above and without (re)starting a new backend yields\n| > the following error. After getting a new backend, the behavior is as \n| > desired.\n| \n| > brent=# insert into child values(1);\n| > ERROR: constraint <unnamed>: table parent does not have an attribute id\n| \n| I wonder whether you're looking in the right place. The RI trigger code\n| caches query plans --- could that caching be the source of the problem?\n| (See backend/utils/adt/ri_triggers.c)\n\nTom,\n\n This code is now working as desired. I believe the problem I was\nseeing was due to my incorrect (and STUPID) approach to modifying\nthe bytea pg_trigger->tgargs directly... I've since learned about \nheap_modifytuple(). Anyway, I'm cleaning this patch up, and will \nbe sending it to -patches shortly.\n\n Thanks for your assistance with this, and hopefully the next time I \ndecide to hack at PG, I'll choose something a bit more my speed :-P\n\ncheers.\n Brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Fri, 9 Nov 2001 23:32:39 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": true,
"msg_subject": "Re: RelationFlushRelation() or RelationClearRelation()"
},
{
"msg_contents": "Brent Verner <brent@rcfile.org> writes:\n> Thanks for your assistance with this, and hopefully the next time I \n> decide to hack at PG, I'll choose something a bit more my speed :-P\n\nSounds like \"your speed\" has advanced a couple notches. Hang in there\n... how do you think the rest of us learned? ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 09 Nov 2001 23:41:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RelationFlushRelation() or RelationClearRelation() "
},
{
    "msg_contents": "On 09 Nov 2001 at 23:41 (-0500), Tom Lane wrote:\n| Brent Verner <brent@rcfile.org> writes:\n| > Thanks for your assistance with this, and hopefully the next time I \n| > decide to hack at PG, I'll choose something a bit more my speed :-P\n| \n| Sounds like \"your speed\" has advanced a couple notches. Hang in there\n| ... how do you think the rest of us learned? ;-)\n\nI sure hope it didn't hurt anyone else's head as much as it hurts mine.\nMy initial 'success' was sheer luck. In testing, I noticed certain\nlength column names would do the wrong thing... Turns out I had to\nmalloc the struct varlena* so it was padded to account for the size\nof the attached vl_dat data, to keep vl_dat from stepping all over\nthe place... oh, /that's/ why I like perl ;-)\n\ncheers.\n brent\n\n-- \n\"Develop your talent, man, and leave the world something. Records are \nreally gifts from people. To think that an artist would love you enough\nto share his music with anyone is a beautiful thing.\" -- Duane Allman\n",
"msg_date": "Sat, 10 Nov 2001 06:07:50 -0500",
"msg_from": "Brent Verner <brent@rcfile.org>",
"msg_from_op": true,
"msg_subject": "Re: RelationFlushRelation() or RelationClearRelation()"
},
{
    "msg_contents": "> On 09 Nov 2001 at 23:41 (-0500), Tom Lane wrote:\n> | Brent Verner <brent@rcfile.org> writes:\n> | > Thanks for your assistance with this, and hopefully the next time I \n> | > decide to hack at PG, I'll choose something a bit more my speed :-P\n> | \n> | Sounds like \"your speed\" has advanced a couple notches. Hang in there\n> | ... how do you think the rest of us learned? ;-)\n> \n> I sure hope it didn't hurt anyone elses head as much as it hurts mine.\n\nWhen I see those long function names in subject lines, I hope Tom Lane\ntakes the topic because I used to have to struggle through answering\nthose types of questions. It makes everyone's head hurt, but it's good\nfor you. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 10 Nov 2001 11:11:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RelationFlushRelation() or RelationClearRelation()"
}
] |
[
{
    "msg_contents": "I've been trying to get some clustered PostgreSQL service at work, and\nhave basically narrowed it down to only pgReplicator.\n\nThen I remembered a friend of mine telling me about rserv. It seems to\nme that rserv is a much cleaner implementation, but how good is it really\nand how 'stable' is it?\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\nCuba iodine colonel terrorist FSF Treasury Marxist Iran attack\nammunition supercomputer Waco, Texas smuggle World Trade Center 747\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "06 Nov 2001 15:04:04 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "pgReplicator vs. rserv"
},
{
"msg_contents": "...\n> Then I remembered a friend of mine telling me about rserv. It seems to\n> me that rserv is a much cleaner implementation, but how good is it really\n> and how 'stable' is it?\n\neRserv from pgsql.com is being used in commercial settings requiring\nhigh reliability and high performance. contrib/rserv/ does not include\nall of the features of the commercial version, but formed the basis for\nit and could be suitable for many applications.\n\n - Thomas\n",
"msg_date": "Tue, 06 Nov 2001 16:35:54 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: pgReplicator vs. rserv"
},
{
    "msg_contents": ">>>>> \"Thomas\" == Thomas Lockhart <lockhart@fourpalms.org> writes:\n\n Thomas> ...\n >> Then I remembered a friend of mine telling me about rserv. It\n >> seems to me that rserv is a much cleaner implementation, but\n >> how good is it really and how 'stable' is it?\n\n Thomas> eRserv from pgsql.com is being used in commercial settings\n Thomas> requiring high reliability and high\n Thomas> performance. contrib/rserv/ does not include all of the\n Thomas> features of the commercial version, but formed the basis\n Thomas> for it and could be suitable for many applications.\n\nSounds nice, where can I find it? I saw a link to 'www.erserver.com', but\nthat domain doesn't exist...\n\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\nterrorist Ft. Meade Serbian critical subway Saddam Hussein Rule Psix\niodine Clinton bomb nuclear explosion SDI DES kibo\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "06 Nov 2001 18:19:12 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": true,
"msg_subject": "Re: pgReplicator vs. rserv"
}
] |
[
{
"msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql\nChanges by:\ttgl@postgresql.org\t01/11/06 13:02:48\n\nModified files:\n\tsrc/backend/postmaster: postmaster.c \n\nLog message:\n\tClean up formatting of child process exit-status reports so that they\n\tare correct, consistent, and complete ... motivated by gripe from\n\tOliver Elphick, but I see someone had already made an incomplete stab\n\tat this.\n\n",
"msg_date": "Tue, 6 Nov 2001 13:02:48 -0500 (EST)",
"msg_from": "tgl@postgresql.org",
"msg_from_op": true,
"msg_subject": "pgsql/src/backend/postmaster postmaster.c"
},
{
"msg_contents": "tgl@postgresql.org writes:\n\n> \tClean up formatting of child process exit-status reports so that they\n> \tare correct, consistent, and complete ... motivated by gripe from\n> \tOliver Elphick, but I see someone had already made an incomplete stab\n> \tat this.\n\nSorry, this sort of thing doesn't work with message internationalization.\nI suggest you revert this and fix the one remaining message in the style\nthe other ones are in.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 7 Nov 2001 02:12:45 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/postmaster postmaster.c"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Sorry, this sort of thing doesn't work with message internationalization.\n\nWhy not? Certainly the messages are in two parts, but I doubt there is\nany language with grammar so irregular that it can't be made to work.\n\n> I suggest you revert this and fix the one remaining message in the style\n> the other ones are in.\n\nIf it were only the one erroneous message, I wouldn't have troubled.\nBut there were four (soon to be five) places that all had the same\nproblem, ie failure to cover the \"can't happen\" case. Repeating that\nlogic five times, producing fifteen somewhat-redundant error messages\nto translate, didn't seem like a win. Especially not when I fully\nexpect there to be some #ifdefs in there soon to cover platforms that\ndon't have WIFEXITED and friends. The code as committed has one place\nto fix such problems, not five.\n\nI thought about alternative strategies like passing the noun phrase into\nthe formatExitStatus subroutine, but that didn't seem materially better.\nCan you give a concrete example of a language where this really doesn't\nwork, keeping in mind that the original isn't exactly the Queen's\nEnglish either?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 06 Nov 2001 23:41:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pgsql/src/backend/postmaster postmaster.c "
},
{
    "msg_contents": "Tom Lane writes:\n\n> > Sorry, this sort of thing doesn't work with message internationalization.\n>\n> Why not? Certainly the messages are in two parts, but I doubt there is\n> any language with grammar so irregular that it can't be made to work.\n\nSurely everything can be made to work, but I'll bet lunch that there are\nplenty of languages where it won't work the way it is now.\n\n> If it were only the one erroneous message, I wouldn't have troubled.\n> But there were four (soon to be five) places that all had the same\n> problem, ie failure to cover the \"can't happen\" case. Repeating that\n> logic five times, producing fifteen somewhat-redundant error messages\n> to translate, didn't seem like a win.\n\nI agree it's ugly, and I did consider several options when I first changed\nit to be this way, but I don't think it can work without a bit of\nbutchering up the message. (See below.)\n\n> Especially not when I fully expect there to be some #ifdefs in there\n> soon to cover platforms that don't have WIFEXITED and friends. The\n> code as committed has one place to fix such problems, not five.\n\nThat path is already covered. There is precedent in portable packages to\nuse the W* macros. Only in some cases you might have to define them, but\nthat is all.\n\n> I thought about alternative strategies like passing the noun phrase\n> into the formatExitStatus subroutine, but that didn't seem materially\n> better. Can you give a concrete example of a language where this\n> really doesn't work, keeping in mind that the original isn't exactly\n> the Queen's English either?\n\nI know for a fact that the translation of \"exited\" and \"terminated\" will\nvary depending on the noun phrase in Russian and related languages. Verbs\ndepending on nouns is a common pattern, and in most cases you can't paint\nover it with parentheses because the clarity of the message will suffer.\n\nHowever, while I don't have actual examples, there are other\npossibilities, such as the noun phrase depending on what the rest is,\nbecause what is passive in English might have to be translated differently\nso the subject becomes the object. Or the word order is different and the\nnoun phrase is in the middle of the sentence. Or what if subject and verb\nare pasted to become one word, or the whole sentence becomes one\nhieroglyph? Or you write right-to-left (ugh, does that even work?).\n\nThe bottom line is, constructing sentences dynamically, even if it's\n\"simple enough\" or known to work in all 3642 civilized languages, is a\ndead end, because human languages don't make any sense. It's best to not\neven try.\n\nNow back to reality. I think passing in the noun phrase as you suggested\nshould be okay:\n\nDEBUG: process %d (%s) exited with status %d\nDEBUG: process %d (%s) was terminated by signal %d\n\nwhere %s is the noun phrase.\n\nIt loses some elegance, but it should allow grammatically sound\ntranslations. (Okay, we assume that all languages allow for parenthetical\nnotes, but that is not a matter of grammar.)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 8 Nov 2001 01:26:31 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/postmaster postmaster.c "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Now back to reality. I think passing in the noun phrase as you suggested\n> should be okay:\n\nI'm happy to do it that way if you prefer, but I'm a tad baffled as to\nwhy it solves anything other than word-order issues. Seems like the\ninflection issues are still there.\n\n> It loses some elegance, but it should allow grammatically sound\n> translations. (Okay, we assume that all languages allow for parenthetical\n> notes, but that is not a matter of grammar.)\n\nWhat I'm intending is to pass in the noun phrase and the PID, allowing\nthe translatable messages in the subroutine to look like\n\n\t%s (pid %d) exited with status %d\n\nA variant would be to pass in the adjective for \"process\":\n\n\t%s process (pid %d) exited with status %d\n\nDoes that seem better, worse, indifferent? If the inflection issues\nreach to the root noun but not the adjectives, methinks that might\nwork better.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Nov 2001 20:21:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/postmaster postmaster.c "
},
{
    "msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Now back to reality. I think passing in the noun phrase as you suggested\n> > should be okay:\n>\n> I'm happy to do it that way if you prefer, but I'm a tad baffled as to\n> why it solves anything other than word-order issues. Seems like the\n> inflection issues are still there.\n\nIf you put the noun phrase in parentheses it won't affect the grammar of\nthe sentence outside.\n\n> > It loses some elegance, but it should allow grammatically sound\n> > translations. (Okay, we assume that all languages allow for parenthetical\n> > notes, but that is not a matter of grammar.)\n>\n> What I'm intending is to pass in the noun phrase and the PID, allowing\n> the translatable messages in the subroutine to look like\n>\n> \t%s (pid %d) exited with status %d\n\nThis is not effectively different from what we have now, it only inverts\nwhich part of the sentence gets pasted where.\n\n> A variant would be to pass in the adjective for \"process\":\n>\n> \t%s process (pid %d) exited with status %d\n>\n> Does that seem better, worse, indifferent? If the inflection issues\n> reach to the root noun but not the adjectives, methinks that might\n> work better.\n\nAssuming that there will be an adjective in the translation is already\nassuming too much.\n\nHow about this:\n\nelog(xxx, \"whatever process (pid %d) terminated abnormally (%s)\", formatExitStatus(exit_status));\n\nwhere formatExitStatus() returns either of\n\n\"exit status 77\"\n\"signal 11\"\n\n(Except for the first invocation in CleanupProc, including the word\n\"abnormally\" adds more clarity for the user than trying to format the\nnumeric details inline.)\n\nI see you already made some changes. Sorry that it took me a while to\nrespond, but I can make these changes if we can agree.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 11 Nov 2001 16:40:54 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/postmaster postmaster.c "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> How about this:\n\n> elog(xxx, \"whatever process (pid %d) terminated abnormally (%s)\", formatExitStatus(exit_status));\n\n> where formatExitStatus() returns either of\n\n> \"exit status 77\"\n> \"signal 11\"\n\nBut exit status 0 is not abnormal. I guess in the CleanupProc case\nyou could leave out the word \"abnormally\" and just say terminated (%s).\n\n> I see you already made some changes. Sorry that it took me a while to\n> respond, but I can make these changes if we can agree.\n\nI took your lack of comment as assent ... if you want to change it as\nabove, I won't object, but I think what's there now is workable as long\nas the translator understands that the two sets of messages go together.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Nov 2001 11:57:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [COMMITTERS] pgsql/src/backend/postmaster postmaster.c "
}
] |
[
{
"msg_contents": "Lovely undocumented feature.\n\nCREATE TEMP SEQUENCE junk;\n\nFound it looking through gram.y :) Seems to work perfectly.\n\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.",
"msg_date": "Tue, 6 Nov 2001 15:38:19 -0500",
"msg_from": "\"Rod Taylor\" <rbt@barchord.com>",
"msg_from_op": true,
"msg_subject": "CREATE TEMP SEQUENCE"
},
{
    "msg_contents": "> Lovely undocumented feature.\n> \n> CREATE TEMP SEQUENCE junk;\n> \n> Found it looking through gram.y :) Seems to work perfectly.\n> \n\nCheck the development docs. It is there. Added in 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Nov 2001 16:15:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: CREATE TEMP SEQUENCE"
}
] |
[
{
    "msg_contents": "Tom,\n\nI was just looking for a way to keep different postmasters out of each\nother's way. If you think the DBA can do it, great; or if the DBA changes\nthe port, have him/her fix the locations.\n\nJim\n\n> \"Jim Buttafuoco\" <jim@buttafuoco.net> writes:\n> > As for the actual data file location, I believe under each loc oid\nwe\n> > would have pg_port #/DB OID/pg_class OID might be the way to go. \n> \n> Introducing pg_port into the paths would be a bad idea, since it\n> would prevent restarting a postmaster with a different port number.\n> I think if a DBA is running multiple postmasters, it's up to him\n> to avoid pointing more than one of them at the same \"location\"\n> directory. (Maybe we could enforce that with lock files? Not\n> sure it's worth the trouble though.)\n> \n> \t\t\tregards, tom lane\n> \n> \n\n\n",
"msg_date": "Tue, 6 Nov 2001 17:58:30 -0500",
"msg_from": "\"Jim Buttafuoco\" <jim@buttafuoco.net>",
"msg_from_op": true,
"msg_subject": "Re: Storage Location Patch Proposal for V7.3 "
}
] |
[
{
    "msg_contents": "the table has about 30k records. \n simple select statement, by primary key, requires plenty of cpu\ntime when the primary key has three columns\n when the primary key has two columns several times less cpu is\nrequired (even though the contents of the table are the same).\n\nso:\n PRIMARY KEY(C_ID, C_D_ID, C_W_ID) -> PRIMARY KEY (C_ID, C_D_ID)\n select cpu: 600 -> select cpu 60\n\n\n----------- how can i work around it? -----------------\n\nand no, I can't blend the two columns into one, the spec does not\nallow that - so is there some tuning parameter?\n\n\nP.S. The table definition:\n\n\nCREATE TABLE CUSTOMER (\n\tC_ID SMALLINT NOT NULL, /*--*/\n\tC_D_ID SMALLINT NOT NULL, /*--*/\n\tC_W_ID SMALLINT NOT NULL, /*--*/\n\tC_FIRST VARCHAR(16) NOT NULL,\n\tC_MIDDLE VARCHAR(2) NOT NULL,\n\tC_LAST VARCHAR(16) NOT NULL,\n\tC_STREET_1 VARCHAR(20) NOT NULL,\n\tC_STREET_2 VARCHAR(20) NOT NULL,\n\tC_CITY VARCHAR(20) NOT NULL,\n\tC_STATE VARCHAR(2) NOT NULL,\n\tC_ZIP INTEGER NOT NULL, /*--*/\n\tC_PHONE NUMERIC(16) NOT NULL,\n\tC_SINCE timestamp,\n\tC_CREDIT VARCHAR(2) NOT NULL,\n\tC_CREDIT_LIM NUMERIC(12,2) NOT NULL,\n\tC_DISCOUNT NUMERIC(4,4) NOT NULL,\n\tC_BALANCE NUMERIC(12,2) NOT NULL,\n\tC_YTD_PAYMENT NUMERIC(12,2) NOT NULL,\n\tC_PAYMENT_CNT SMALLINT NOT NULL, /*--*/\n\tC_DELIVERY_CNT SMALLINT NOT NULL, /*--*/\n\tC_DATA VARCHAR(500) NOT NULL,\n\tPRIMARY KEY (C_ID, C_D_ID, C_W_ID)\n);\n",
"msg_date": "6 Nov 2001 15:29:31 -0800",
"msg_from": "czl@iname.com (charles)",
"msg_from_op": true,
"msg_subject": "performance problem with 3-column indexes"
},
{
"msg_contents": "\nCan you give an example query and explain output for both cases?\nHave you run vacuum analyze?\n\nSince I haven't seen the query, one thing that might bite you would\nbe if you aren't casting your constants to smallint, although I\ndon't know why that would change on the index definition.\n\nOn 6 Nov 2001, charles wrote:\n\n> the table has about 30k records.\n> simple select statement, by primary key, requires plenty of cpu\n> time when the primary key has three columns\n> when the primary key has two columns several times less cpu is\n> required (even though the contents of the table is the same.\n>\n> so:\n> PRIMARY KEY(C_ID, C_D_ID, C_W_ID) -> PRIMARY KEY (C_ID, C_D_ID)\n> select cpu: 600 -> select cpu 60\n>\n\n",
"msg_date": "Wed, 7 Nov 2001 09:26:00 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: performance problem with 3-column indexes"
},
{
"msg_contents": "czl@iname.com (charles) writes:\n> simple select statement, by primary key, requires plenty of cpu\n> time when the primary key has three columns\n> when the primary key has two columns several times less cpu is\n> required (even though the contents of the table is the same.\n\nWhat does EXPLAIN show in the two cases? What PG version is this?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Nov 2001 12:57:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: performance problem with 3-column indexes "
}
] |
[
{
    "msg_contents": "Hi all,\nI've just started to write an interface to some part (it's huge)\nof GSL as C user-written functions, in the form of a contrib\nmodule. It's only the first (very small :-)) release, \nand since it's a rather big work, I'd like to know if \nanyone else is interested in helping me.\n\nFollowing the FAQ (5.2) suggestion, I've attached \na .tgz file to be expanded into the contrib directory.\nIt compiles cleanly with my v7.1.3 on GNU/Linux (RH7.2).\n\nMy question: is this the right way to contribute modules?\nIf so, I can keep posting my updates to this list (I've written\na couple of other modules, a libCurl interface and a nettle\n(a crypto library) interface: I can post them to the list, too).\n\n\nAny suggestion is welcome.\n\nThanks a lot\n/gp\n \n-- \nDiscussion: How do you feel about Open Source firms making \nmillions through public offerings?\n\n\"I wish these companies were making the same millions without\ndistributing any non-free, user-subjugating software.\" --\n Richard Stallman \n\n\"We're so forward that sometimes we reach ourself.\" --\n g.p.\n\n\"It is amazing how much you can achieve when you don't have \nto do the real work yourself.\" -- \n Joe Celko\n\n Gian Paolo Ciceri Via B.Diotti 45 - 20153 Milano MI ITALY\n CTO @ Louise mobile : ++39 347 4106213 \n : ++39 348 3658272 \n eMail : gp.ciceri@acm.org, \n gp.ciceri@computer.org \n : gp.ciceri@louise.it \n webSite: http://www.louise.it\n ICQ # : 94620118",
"msg_date": "Wed, 07 Nov 2001 00:39:28 +0100",
"msg_from": "\"g.p.ciceri\" <gp.ciceri@acm.org>",
"msg_from_op": true,
"msg_subject": "GSL (GNU Scientific library, numerical routines) interface as a\n\tcontributed module: pg-GSL.0.0.0"
},
{
"msg_contents": "> Hi all,\n> I've just started to write an interface to some part (it's huge)\n> of GSL as C user written functions, in form of a contrib\n> module. It's only the first (very small :-)) release, \n> and since it's a rather big work, I'd like to know if \n> someone other is interested in helping me.\n> \n> Following the FAQ (5.2) suggestion, I've attached \n> a .tgz file to be expanded into the contrib directory.\n> It compiles cleanly with my v7.1.3 on GNU/Linux (RH7.2).\n> \n> My question: is this the right manner to contribute modules ???\n> So I can keep up posting to this list my updates (I've written\n> a couple of other modules, a libCurl interface and a nettle\n> (a crypto library) interface: I can post to the list them, too).\n\nThe hackers list is best for discussion. I would send the patches to\nthe patches list. These are in the perfect format for inclusion into\nPostgreSQL. The only question is whether they are of general interest\nenough to add to our /contrib tree, or whether these would be better on\na separate web site like gborg.postgresql.org.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Nov 2001 10:15:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: GSL (GNU Scientific library, numerical routines) interface"
}
] |
[
{
    "msg_contents": "OK, 7.2 is looking _very_ good. We have very few open items. They are:\n\n\tSource Code Changes\n\t-------------------\n\tCompile in syslog feature by default? (Peter, Tom)\n\tAIX compile (Tatsuo)\n\tLibpq++ compile on Solaris (Peter)\n\t\n\tDocumentation Changes\n\t---------------------\n\nThe always-updated list is at:\n\n\tftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nI also have created a post-7.2 list of items that are either patches\nthat need to be applied or discussed for 7.3. That is at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\nThis list is longer than usual. Seems we have quite a number of things\nin-progress that can be worked on as soon as 7.2 is complete. If there\nare things there that can be decided now, please dig in and send an\nemail to the hackers list.\n\nOnce we start 7.3, I will use that list to request patches to complete\nthese items. Because we are done with development on 7.2, people can start\nworking on patches now. If you send them to the lists, I will load them\nup on the page and apply them as soon as 7.3 starts. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 6 Nov 2001 22:45:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Status of 7.2"
},
{
"msg_contents": "Thought you might like to know that I should be able to upload regression\ntest reports for:\n\nIRIX 6.5\nFreeBSD 4.4 on Intel\nFreeBSD 4.4 on Alpha\nVMS on Alpha\n\nFor 7.2b2 when it's available. Is Postgres supported on all these\nplatforms?\n\nChris\n\n-----Original Message-----\nFrom: pgsql-hackers-owner@postgresql.org\n[mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\nSent: Wednesday, 7 November 2001 11:45 AM\nTo: PostgreSQL-development\nSubject: [HACKERS] Status of 7.2\n\n\nOK, 7.2 is looking _very_ good. We have very few open items. They are:\n\n\tSource Code Changes\n\t-------------------\n\tCompile in syslog feature by default? (Peter, Tom)\n\tAIX compile (Tatsuo)\n\tLibpq++ compile on Solaris (Peter)\n\n\tDocumentation Changes\n\t---------------------\n\nThe always-updated list is at:\n\n\tftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nI also have created a post-7.2 list of items that are either patches\nthat need to be applied or discussed for 7.3. That is at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n\nThis list is longer than usual. Seems we have quite a number of things\nin-progress that can be worked on as soon as 7.2 is complete. If there\nare things there than can be decided now, please dig in and send an\nemail to the hackers list.\n\nOnce we start 7.3, I will use that list to request patches to complete\nthese items. Because we are done development on 7.2, people can start\nworking on patches now. If you send them to the lists, I will load them\nup on the page and apply them as soon as 7.3 starts.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n",
"msg_date": "Wed, 7 Nov 2001 12:37:15 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Status of 7.2"
},
{
    "msg_contents": "\nI'll be announcing v7.2b2 tomorrow afternoon ... it's packaged and ready to\ngo, if you want to get a head start (ftp.postgresql.org), but am giving a\nbit of time for mirrors to catch up ...\n\n\nOn Wed, 7 Nov 2001, Christopher Kings-Lynne wrote:\n\n> Thought you might like to know that I should be able to upload regression\n> test reports for:\n>\n> IRIX 6.5\n> FreeBSD 4.4 on Intel\n> FreeBSD 4.4 on Alpha\n> VMS on Alpha\n>\n> For 7.2b2 when it's available. Is Postgres supported on all these\n> platforms?\n>\n> Chris\n>\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Wednesday, 7 November 2001 11:45 AM\n> To: PostgreSQL-development\n> Subject: [HACKERS] Status of 7.2\n>\n>\n> OK, 7.2 is looking _very_ good. We have very few open items. They are:\n>\n> \tSource Code Changes\n> \t-------------------\n> \tCompile in syslog feature by default? (Peter, Tom)\n> \tAIX compile (Tatsuo)\n> \tLibpq++ compile on Solaris (Peter)\n>\n> \tDocumentation Changes\n> \t---------------------\n>\n> The always-updated list is at:\n>\n> \tftp://candle.pha.pa.us/pub/postgresql/open_items.\n>\n> I also have created a post-7.2 list of items that are either patches\n> that need to be applied or discussed for 7.3. That is at:\n>\n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n>\n> This list is longer than usual. Seems we have quite a number of things\n> in-progress that can be worked on as soon as 7.2 is complete. If there\n> are things there than can be decided now, please dig in and send an\n> email to the hackers list.\n>\n> Once we start 7.3, I will use that list to request patches to complete\n> these items. Because we are done development on 7.2, people can start\n> working on patches now. 
If you send them to the lists, I will load them\n> up on the page and apply them as soon as 7.3 starts.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Wed, 7 Nov 2001 00:26:00 -0500 (EST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Status of 7.2"
},
{
"msg_contents": "Is there any list of changes between 7.1.3 and 7.2b2 available?\n\n-- \n Turbo __ _ Debian GNU Unix _IS_ user friendly - it's just \n ^^^^^ / /(_)_ __ _ ___ __ selective about who its friends are \n / / | | '_ \\| | | \\ \\/ / Debian Certified Linux Developer \n _ /// / /__| | | | | |_| |> < Turbo Fredriksson turbo@tripnet.se\n \\\\\\/ \\____/_|_| |_|\\__,_/_/\\_\\ Stockholm/Sweden\n\nsecurity counter-intelligence [Hello to all my fans in domestic\nsurveillance] Soviet Legion of Doom South Africa SEAL Team 6 subway\niodine $400 million in gold bullion Ft. Meade Delta Force killed\nattack Waco, Texas\n[See http://www.aclu.org/echelonwatch/index.html for more about this]\n",
"msg_date": "07 Nov 2001 08:26:58 +0100",
"msg_from": "Turbo Fredriksson <turbo@bayour.com>",
"msg_from_op": false,
"msg_subject": "Re: Status of 7.2"
},
{
"msg_contents": "On Tue, 6 Nov 2001, Bruce Momjian wrote:\n\n> I also have created a post-7.2 list of items that are either patches\n> that need to be applied or discussed for 7.3. That is at:\n>\n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches2\n>\n> This list is longer than usual. Seems we have quite a number of things\n> in-progress that can be worked on as soon as 7.2 is complete. If there\n> are things there than can be decided now, please dig in and send an\n> email to the hackers list.\n>\n> Once we start 7.3, I will use that list to request patches to complete\n> these items. Because we are done development on 7.2, people can start\n> working on patches now. If you send them to the lists, I will load them\n> up on the page and apply them as soon as 7.3 starts.\nSorry, I�m really unable to send patches but I have a feature request\nwhich was addressed in the thread \"Serious performance problem\" on this\nlist. It mainly concerns the performance increase if there would be\nan index scan method which doesn�t have to check the validity of data\nin the table. I�m just waiting for a statement from you guys if you\nthink it will be doable in 7.3 (while now started to optimize my\ndatabase as you suggested ;-).) I think this would increase acceptance\nof PostgreSQL for certain people here in Germany which have real influence\non decisions about database in medical diagnostics and care in Germany.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Wed, 7 Nov 2001 08:49:48 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Status of 7.2"
},
{
"msg_contents": "Bruce Momjian wrote:\n\n> OK, 7.2 is looking _very_ good. We have very few open items. They are:\n>\n> Source Code Changes\n> -------------------\n> Compile in syslog feature by default? (Peter, Tom)\n> AIX compile (Tatsuo)\n> Libpq++ compile on Solaris (Peter)\n>\n> Documentation Changes\n> ---------------------\n>\n> The always-updated list is at:\n>\n> ftp://candle.pha.pa.us/pub/postgresql/open_items.\n>\n> I also have created a post-7.2 list of items that are either patches\n> that need to be applied or discussed for 7.3. That is at:\n>\n> http://candle.pha.pa.us/cgi-bin/pgpatches2\n>\n> This list is longer than usual. Seems we have quite a number of things\n> in-progress that can be worked on as soon as 7.2 is complete. If there\n> are things there than can be decided now, please dig in and send an\n> email to the hackers list.\n\nI would suggest to schedule my patch (the last on the list) for 7.2 since it\nfinishes the work I began for 7.2.\nSince some patches (part of the work/redesign) are in but the last two are\nyet unapplied (IIRC Michael is really busy at the moment), I'd vote for not\nleaving this work half-done.\n\nChristof\n\n\n",
"msg_date": "Wed, 07 Nov 2001 13:07:52 +0100",
"msg_from": "Christof Petig <christof@petig-baender.de>",
"msg_from_op": false,
"msg_subject": "Re: Status of 7.2"
},
{
"msg_contents": "> Is there any list of changes between 7.1.3 and 7.2b2 available?\n> \n\nSure see /HISTORY in the source tarball.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Nov 2001 10:18:22 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Status of 7.2"
},
{
"msg_contents": "> > This list is longer than usual. Seems we have quite a number of things\n> > in-progress that can be worked on as soon as 7.2 is complete. If there\n> > are things there than can be decided now, please dig in and send an\n> > email to the hackers list.\n> \n> I would suggest to schedule my patch (the last on the list) for 7.2 since it\n> finishes the work I began for 7.2.\n> Since some patches (part of the work/redesign) are in but the last two are\n> yet unapplied (IIRC Michael is really busy at the moment), I'd vote for not\n> leaving this work half-done.\n\nOK, this is for ecpg. If you can get an OK from Michael, I will be glad\nto apply them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Nov 2001 10:25:52 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Status of 7.2"
},
{
"msg_contents": ">> I would suggest to schedule my patch (the last on the list) for 7.2 since it\n>> finishes the work I began for 7.2.\n>> Since some patches (part of the work/redesign) are in but the last two are\n>> yet unapplied (IIRC Michael is really busy at the moment), I'd vote for not\n>> leaving this work half-done.\n\n> OK, this is for ecpg. If you can get an OK from Michael, I will be glad\n> to apply them.\n\nMore to the point, I don't think it's core's business to overrule\nMichael's technical decisions about ecpg. If he thinks the patch\nis okay, but hasn't time to apply it, then we can do that for him.\nBut we won't apply it without his review and okay.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Nov 2001 12:15:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Status of 7.2 "
},
{
"msg_contents": "On Wed, 7 Nov 2001, Tille, Andreas wrote:\n\n> Sorry, I�m really unable to send patches but I have a feature request\n> which was addressed in the thread \"Serious performance problem\" on this\n> list. It mainly concerns the performance increase if there would be\n> an index scan method which doesn�t have to check the validity of data\n> in the table. I�m just waiting for a statement from you guys if you\n> think it will be doable in 7.3 (while now started to optimize my\n> database as you suggested ;-).) I think this would increase acceptance\n> of PostgreSQL for certain people here in Germany which have real influence\n> on decisions about database in medical diagnostics and care in Germany.\nIs it possible that hackers do any statement according this issue.\nI want to repeat the problem. It�s hard to argue for PostgreSQL (and\nI would really like to advocate for PostgreSQL) against MS SQL if we\ntalk about an imaginary possible dataloss if my colleague has not ever\nfaced dataloss and certainly know that other power users of MS SQL are\nusing it. It�s much more hard to argue if there are cases in which\nMS SQL outperforms PostgreSQL in the order of magnitude. It�s hard\nto convince somebody if I tell him that the reason is his bad database\ndesign. He really isn�t sooo bad and he claims that MS SQL has transparent\ntransaction *and* fast index usage. Don�t ask me how they do this.\nI repeat that my colleague is in the position to decide about software\nusage of several medicine related projects in Germany.\n\nI just want to know now if this is an issue for PostgreSQL hackers:\n\n [ ] yes\n [ ] no\n [ ] we are discussing about that\n\nIn case of \"no\" I would be happy if you could provide me with some\ntechnical reasons which could help me arguing.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Fri, 16 Nov 2001 08:55:08 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "> I just want to know now if this is an issue for PostgreSQL hackers:\n> [X] yes\n> [X] no\n> [X] we are discussing about that\n> In case of \"no\" I would be happy if you could provide me with some\n> technical reasons which could help me arguing.\n\nThe hacker community has a wide range of interests.\n\n From my POV, the overall performance of PostgreSQL is more than\ncompetitive with other database products, including M$SQL. There is not\nmuch point in arguing a specific query case, though we are happy to talk\nabout specific overall applications and to offer suggestions on how to\nbuild databases that are generally well designed and that will perform\nwell on more than one product.\n\nIf you have a colleague who firmly believes that M$SQL is the best\nsolution, it sounds like he is not listening to all of the facts. That\ncertainly can be frustrating, eh? Maybe after a few more years of\ncrashed machines and increasing costs he will be more open to\nalternatives ;)\n\n - Thomas\n",
"msg_date": "Fri, 16 Nov 2001 16:03:43 +0000",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "\n> I just want to know now if this is an issue for PostgreSQL hackers:\n>\n> [ ] yes\n> [ ] no\n> [ ] we are discussing about that\n>\n> In case of \"no\" I would be happy if you could provide me with some\n> technical reasons which could help me arguing.\n\nMy guess is that its likely to get discussed again when 7.3 development\nstarts if someone brings it up. I think right now alot of discussion\nis towards the 7.2betas and bugs and stuff that might possibly get put off\nthat was already talked about earlier in this cycle.\n\n",
"msg_date": "Fri, 16 Nov 2001 08:33:04 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "Tille, Andreas writes:\n\n> > Sorry, I�m really unable to send patches but I have a feature request\n> > which was addressed in the thread \"Serious performance problem\" on this\n> > list. It mainly concerns the performance increase if there would be\n> > an index scan method which doesn�t have to check the validity of data\n> > in the table.\n\n> I just want to know now if this is an issue for PostgreSQL hackers:\n>\n> [ ] yes\n> [ ] no\n> [ ] we are discussing about that\n\nWe are always willing to discuss changes that improve performance,\nreliability, standards compliance, etc. However, \"MS SQL does it, and MS\nSQL is fast\" is not sufficient proof that a feature would improve average\nperformance in PostgreSQL. This issue has been brought up with similarly\nunsatisfactory arguments in the past, so you should be able to find out\nabout the discussion in the archives. Some of the arguments against this\nchange were bigger indexes, slower write operations, non-existent proof\nthat it's really faster, putting the index on a different disk will mostly\nobsolete the issue. Consequently, this is currently not something that\nhas got a chance to be implemented anytime soon.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 16 Nov 2001 17:38:15 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "> We are always willing to discuss changes that improve performance,\n> reliability, standards compliance, etc. However, \"MS SQL does it, and MS\n> SQL is fast\" is not sufficient proof that a feature would improve average\n> performance in PostgreSQL. This issue has been brought up with similarly\n> unsatisfactory arguments in the past, so you should be able to find out\n> about the discussion in the archives. Some of the arguments against this\n> change were bigger indexes, slower write operations, non-existent proof\n> that it's really faster, putting the index on a different disk will mostly\n> obsolete the issue. Consequently, this is currently not something that\n> has got a chance to be implemented anytime soon.\n\nI personally would like to have index scans that look up heap rows\nrecord the heap expired status into the index entry via one bit of\nstorage. This will not _prevent_ checking the heap but it will prevent\nheap lookups for index entries that have been exipred for a long time. \nHowever, with the new vacuum, and perhaps autovacuum coming soon, may be\nlittle need for this optimization.\n\nThe underlying problem the user is seeing is how to _know_ an index\ntuple is valid without checking the heap, and I don't see how to do that\nunless we start storing the transaction id in the index tuple, and that\nrequires extra storage.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 16 Nov 2001 12:02:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "On Fri, 16 Nov 2001, Thomas Lockhart wrote:\n\n> The hacker community has a wide range of interests.\nFor sure, but there will be a raodmap with general consensus of the\nhackers.\n\n> From my POV, the overall performance of PostgreSQL is more than\n> competitive with other database products, including M$SQL.\nI never doubt you point of view, but it hardly counts as an\nargument for my current problem. There is a technical reason\nwhy MS SQL server is faster here and they claim to do it safely.\n(Well personally I do not give a cent for thigs that MS claims\nabout but this does not help here.)\n\n> There is not much point in arguing a specific query case,\nIt is no specific query case. It is the speed of an index scan which\ngoes like N if you do it with PostgreSQL and it goes like log N if\nyou do not have to look back into the table like MS SQL server does.\n\n> though we are happy to talk\n> about specific overall applications and to offer suggestions on how to\n> build databases that are generally well designed and that will perform\n> well on more than one product.\nI doubt that you could care about any database designer who does\npoor database design and just does a straigtforeward index scan.\nIf you think that PostgreSQL is only targeted to high professional\ndatabase designers which know how to avoid index scans I doubt that\nPostgreSQL will get the user base it would deserve.\nI could imagine several cases like my colleague who might think about\nporting their application and get into the trap as me that the first\nsimple question they try performs that badly. I really want to say\nthat we should address this issue in the documentation. If there\nexists such cases we should make it clear *why* PostgreSQL fails\nthis performance test (and perhaps include your text in your mail\nas a base of this documentation). 
If we ignore that we will not\nattrakt users.\n\n> If you have a colleague who firmly believes that M$SQL is the best\n> solution, it sounds like he is not listening to all of the facts.\nHe is a little bit MS centric but in principle knows the advantage\nof OpenSource. On the other hand he is led by pragmatism and just\nasks: Which software gives the solution quickly. And he found his\nanswer.\nOn the other hand we should also listen to things he presents as\n\"facts\" ...\n\n> That certainly can be frustrating, eh?\nYes.\n\n> Maybe after a few more years of\n> crashed machines and increasing costs he will be more open to\n> alternatives ;)\nThis does not help currently.\nI repeat: We should at least upgrade PostgreSQL documentation to address\nthose issues.\n\nKind regards\n\n Andreas.\n\nPS: I prefer not to be CCed if I do not explicite ask for this service.\n It seems to be common habit on PostgreSQL lists to CC users. Does\n this make any sense? On many other lists such bahaviour is banned.\n",
"msg_date": "Mon, 19 Nov 2001 12:44:53 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "On Fri, 16 Nov 2001, Peter Eisentraut wrote:\n\n> We are always willing to discuss changes that improve performance,\n> reliability, standards compliance, etc. However, \"MS SQL does it, and MS\n> SQL is fast\" is not sufficient proof that a feature would improve average\n> performance in PostgreSQL. This issue has been brought up with similarly\n> unsatisfactory arguments in the past, so you should be able to find out\n> about the discussion in the archives.\nSorry, I do not see any favour for PostgreSQL if we want people who\nconsider switching to PostgreSQL to search the archive for useful information.\nJust stating the issues and principles clearly could convince people.\nIf not PostgreSQL is faster removed from the list of available\nalternatives of database servers than a web browser is fired up.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Mon, 19 Nov 2001 13:03:17 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "On Fri, 16 Nov 2001, Bruce Momjian wrote:\n\n> I personally would like to have index scans that look up heap rows\n> record the heap expired status into the index entry via one bit of\n> storage. This will not _prevent_ checking the heap but it will prevent\n> heap lookups for index entries that have been exipred for a long time.\n> However, with the new vacuum, and perhaps autovacuum coming soon, may be\n> little need for this optimization.\n>\n> The underlying problem the user is seeing is how to _know_ an index\n> tuple is valid without checking the heap, and I don't see how to do that\n> unless we start storing the transaction id in the index tuple, and that\n> requires extra storage.\nFor my special case I think doubling main memory is about the same\nprice as a MS SQL server license. I can�t say which further problems\nmight occure.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Mon, 19 Nov 2001 13:06:09 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "\n\nTille, Andreas wrote:\n\n>On Fri, 16 Nov 2001, Bruce Momjian wrote:\n>\n>>I personally would like to have index scans that look up heap rows\n>>record the heap expired status into the index entry via one bit of\n>>storage. This will not _prevent_ checking the heap but it will prevent\n>>heap lookups for index entries that have been exipred for a long time.\n>>However, with the new vacuum, and perhaps autovacuum coming soon, may be\n>>little need for this optimization.\n>>\n>>The underlying problem the user is seeing is how to _know_ an index\n>>tuple is valid without checking the heap,\n>>\nI'd propose a memory-only (or heavily cached) structure of tuple death \ntransaction\nids for all transactions since the oldest live trx. And when that oldest \nfinishes then\nthe tombstone marks for all tuples deleted between that and the new \noldest are\nmoved to relevant indexes (or the index keys are deleted) by concurrent \nvacuum\nor similar process.\n\nWe could even try to use the index itself as that structure by favoring \nchanged index pages\nwhen making caching decisions. It is much safer to cache indexes than it \nis to cache data\npages as for indexes we only need to detect (by keeping info in WAL for \nexample) that it\nis broken and not what it contained as it can always be rebuilt after \ncomputer crash.\n\nThe problem with using an ndex for this is _which_ index to use when \nthere are many per table.\nPerhaps a good choice would be the PRIMARY KEY.\n\nOTOH, keeping this info in index and not in a dedicated structure makes \nthe amount of\ndata needing to be cached well bigger and thus the whole operation more \nexpensive.\n\n>> and I don't see how to do that\n>>unless we start storing the transaction id in the index tuple, and that\n>>requires extra storage.\n>>\n>For my special case I think doubling main memory is about the same\n>price as a MS SQL server license. 
I can't say which further problems\n>might occure.\n>\nThen you must have really huge amounts of memory already ;)\n\n------------------\nHannu\n\n\n",
"msg_date": "Mon, 19 Nov 2001 18:54:01 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> I'd propose a memory-only (or heavily cached) structure of tuple death \n> transaction\n> ids for all transactions since the oldest live trx.\n\nSeems like just a special-purpose reimplementation of disk pages sitting\nin shared buffers. If you've got the memory to keep track of tuples\nyou've killed recently, then you've probably got the memory to hold the\npages they're in, so a redundant separate caching structure is not\nobviously a win.\n\nThe possible win of marking index entries dead (once their tuple is\nknown dead for all transactions) is that it saves visiting disk pages\nthat have *not* been visited recently, and thus that aren't likely to\nbe hanging around in buffers.\n\nOTOH there are a lot of potential problems, most notably that\nis-the-tuple-dead-for-ALL-transactions is not the normal tuple time\nqual check, and so it'd represent extra overhead in indexscans.\nI'm also concerned about how to do it without introducing lots of\nugly interactions (maybe even deadlocks) between the index access\nmethods and the heap access code.\n\nIf concurrent vacuuming turns out to be cheap enough, just running\nvacuum frequently might be a better answer than trying to push the\nmaintenance work into the main line of execution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 19 Nov 2001 12:54:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2) "
},
{
"msg_contents": "\nOn Mon, 19 Nov 2001, Tille, Andreas wrote:\n\n> On Fri, 16 Nov 2001, Thomas Lockhart wrote:\n\n> > There is not much point in arguing a specific query case,\n> It is no specific query case. It is the speed of an index scan which\n> goes like N if you do it with PostgreSQL and it goes like log N if\n> you do not have to look back into the table like MS SQL server does.\n\nBut it is in some way. It's dependant on the number of rows returned\nby the query. For a small enough number of rows returned, having the\nadditional information in the index could very well make the query\nslower even if it avoids the reads from the heap. Keeping the information\nin some other fashion where it doesn't directly do that may alleviate\nthat, but it's not a straightforward one is better than the other in\nall cases. It's not like postgres never uses an index on a large\ntable, it's just that after a certain amount of expected returns it\nswitches over.\n\n\n\n",
"msg_date": "Mon, 19 Nov 2001 09:59:33 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
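Stephan's point — below some number of returned rows the heap visits don't dominate, and above it the planner switches to a sequential scan anyway — can be made concrete with a toy cost model. The constants and function names below are invented for illustration and are far cruder than the real planner's model:

```python
# Crossover between index scan and sequential scan in a toy cost model:
# an index scan pays a random heap page fetch per returned row, while a
# sequential scan pays one in-order page read per table page.

RANDOM_PAGE_COST = 4.0   # out-of-order heap page fetch
SEQ_PAGE_COST = 1.0      # in-order page read

def index_scan_cost(rows_returned):
    return rows_returned * RANDOM_PAGE_COST

def seq_scan_cost(table_pages):
    return table_pages * SEQ_PAGE_COST

def choose_plan(rows_returned, table_pages):
    """Pick the cheaper plan, as a planner would."""
    if index_scan_cost(rows_returned) < seq_scan_cost(table_pages):
        return "index scan"
    return "seq scan"
```

With these made-up constants, a 1000-page table favors the index only while fewer than 250 rows come back; past that point the sequential scan wins, which is the "switches over" behavior Stephan describes.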
{
"msg_contents": "> > There is not much point in arguing a specific query case,\n> It is no specific query case. It is the speed of an index scan which\n> goes like N if you do it with PostgreSQL and it goes like log N if\n> you do not have to look back into the table like MS SQL server does.\n\nHave you tried using CLUSTER to match the heap order with the index\norder. That should help with index scans looking up heap rows.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 19 Nov 2001 15:07:22 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Hannu Krosing <hannu@tm.ee> writes:\n>\n>>I'd propose a memory-only (or heavily cached) structure of tuple death \n>>transaction\n>>ids for all transactions since the oldest live trx.\n>>\n>\n>Seems like just a special-purpose reimplementation of disk pages sitting\n>in shared buffers. If you've got the memory to keep track of tuples\n>you've killed recently, then you've probably got the memory to hold the\n>pages they're in, so a redundant separate caching structure is not\n>obviously a win.\n>\nI suspect that for even the border case of a table containing just 1 \nCHAR(1) field the\nabove structure will be a lot smaller than the page cache for the same \ntuples.\n\n>The possible win of marking index entries dead (once their tuple is\n>known dead for all transactions) is that it saves visiting disk pages\n>that have *not* been visited recently, and thus that aren't likely to\n>be hanging around in buffers\n>\n>\n>\n>OTOH there are a lot of potential problems, most notably that\n>is-the-tuple-dead-for-ALL-transactions is not the normal tuple time\n>qual check, and so it'd represent extra overhead in indexscans.\n>I'm also concerned about how to do it without introducing lots of\n>ugly interactions (maybe even deadlocks) between the index access\n>methods and the heap access code.\n>\n>If concurrent vacuuming turns out to be cheap enough, just running\n>vacuum frequently might be a better answer than trying to push the\n>maintenance work into the main line of execution.\n>\nWhat I proposed would have been mostly the job of concurrent vacuum\n(marking/removing dead index tuples)\n\nPerhaps it would be an overall win for fast changing (vs. fast-growing) \ndatabases if\nwe kept the tuple metainfo (attnum < 0) on separate (cache) pages, as it \nsaves writes of\ntmax updates on both UPDATE and DELETE.\n\nIf we kept them in a separate table as well that could make the metainfo \n\"table\"\nessentially a kind of index. 
That table/index could of course be \nconcealed inside\nthe main table by using typed data pages.\n\n---------------\nHannu\n\n",
"msg_date": "Tue, 20 Nov 2001 02:11:09 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "On Mon, 19 Nov 2001, Bruce Momjian wrote:\n\n> > > There is not much point in arguing a specific query case,\n> > It is no specific query case. It is the speed of an index scan which\n> > goes like N if you do it with PostgreSQL and it goes like log N if\n> > you do not have to look back into the table like MS SQL server does.\n>\n> Have you tried using CLUSTER to match the heap order with the index\n> order. That should help with index scans looking up heap rows.\nYes, I�ve tried even that and it increase PostgreSQLs performance a little\nbit for this special query but it did not get nearly the speed of the\nsame query on the MS SQL server. Moreover there are tables with more than\none index and I guess it makes only sense to cluster one index per table.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Tue, 20 Nov 2001 11:35:33 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "> On Mon, 19 Nov 2001, Bruce Momjian wrote:\n> \n> > > > There is not much point in arguing a specific query case,\n> > > It is no specific query case. It is the speed of an index scan which\n> > > goes like N if you do it with PostgreSQL and it goes like log N if\n> > > you do not have to look back into the table like MS SQL server does.\n> >\n> > Have you tried using CLUSTER to match the heap order with the index\n> > order. That should help with index scans looking up heap rows.\n\n> Yes, I?ve tried even that and it increase PostgreSQLs performance a little\n> bit for this special query but it did not get nearly the speed of the\n> same query on the MS SQL server. Moreover there are tables with more than\n> one index and I guess it makes only sense to cluster one index per table.\n\nYes, CLUSTER only matches one index.\n\nSomething I just realized, that other probably figured out, is that\nwhile we have plans to backfill expired tuple status into the index\ntuples, it is not easy to backfill enough information to know a tuple is\nvalid.\n\nSetting aside the problem of different tuple visibilities for different\nbackends, one problem is that when we go to expire a tuple, we would\nhave to update all the index tuples that point to the heap tuple. That\nis an expensive operation because you have to use the keys in the heap\nto find the index.\n\nSo, while we do have plans to mark some index tuples so we _know_ they\nare expired, we don't know how to efficiently mark index tuples so we\n_know_ they are valid.\n\nThis is what I believe you want, where we can scan the index without\nchecking the heap at all.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 20 Nov 2001 10:14:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
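Bruce's cost argument — expiring one heap tuple would require a key search in every index on the table just to update the matching entries — can be sketched as follows. The classes and names are illustrative only, not PostgreSQL code:

```python
# Toy model of why backfilling "known valid" bits is expensive: marking a
# tuple expired requires one key search per index to locate its entry,
# since indexes are ordered by key, not by heap position.

import bisect

class ToyIndex:
    def __init__(self, keyfunc):
        self.keyfunc = keyfunc
        self.entries = []      # sorted (key, tid, alive) triples
        self.searches = 0      # key searches performed

    def insert(self, tid, row):
        bisect.insort(self.entries, (self.keyfunc(row), tid, True))

    def mark_expired(self, tid, row):
        self.searches += 1     # the per-index cost Bruce points at
        key = self.keyfunc(row)
        i = bisect.bisect_left(self.entries, (key,))
        while i < len(self.entries) and self.entries[i][0] == key:
            if self.entries[i][1] == tid:
                self.entries[i] = (key, tid, False)
                return
            i += 1

def expire_tuple(heap, indexes, tid):
    row = heap[tid]
    for idx in indexes:        # cost scales with the number of indexes
        idx.mark_expired(tid, row)
```

Each expired heap tuple costs one O(log N) descent per index — cheap for one index, but multiplied across every index on the table and every expired tuple, which is why this work is better deferred to vacuum.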
{
"msg_contents": "On Tue, 20 Nov 2001, Bruce Momjian wrote:\n\n> So, while we do have plans to mark some index tuples so we _know_ they\n> are expired, we don't know how to efficiently mark index tuples so we\n> _know_ they are valid.\n>\n> This is what I believe you want, where we can scan the index without\n> checking the heap at all.\nAn new index type (say READONLY INDEX or some reasonable name) which is\nvalid all the time between two vacuum processes would suffice for my\napplication. It would fit the needs of people who do a daily database\nupdate and vacuum after this.\n\nOf course it�s your descision if this makes sense and fits PostgreSQL\nphilosophy, but I think it would speed up some kind of applications.\n\nKind regards\n\n Andreas.\n",
"msg_date": "Tue, 20 Nov 2001 17:11:26 +0100 (CET)",
"msg_from": "\"Tille, Andreas\" <TilleA@rki.de>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "\n\nTille, Andreas wrote:\n\n>On Tue, 20 Nov 2001, Bruce Momjian wrote:\n>\n>>So, while we do have plans to mark some index tuples so we _know_ they\n>>are expired, we don't know how to efficiently mark index tuples so we\n>>_know_ they are valid.\n>>\n>>This is what I believe you want, where we can scan the index without\n>>checking the heap at all.\n>>\n>An new index type (say READONLY INDEX or some reasonable name) which is\n>valid all the time between two vacuum processes would suffice for my\n>application. It would fit the needs of people who do a daily database\n>update and vacuum after this.\n>\nOr perhaps MAINTAINED INDEX, meaning that it has always both tmin and tmax\nup-to-date.\nBtw 7.2 still has broken behaviour of xmax which by definition should \nnot have a\nnon-0 value for live tuples\n\npg72b2=# create table parent(pid int primary key);\nNOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n'parent_pkey' for table 'parent'\nCREATE\npg72b2=# create table child(cid int, pid int references parent);\nNOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY \ncheck(s)\nCREATE\npg72b2=# insert into parent values(1);\nINSERT 16809 1\npg72b2=# insert into child values(1,1);\nINSERT 16810 1\npg72b2=# update child set pid=2;\nERROR: <unnamed> referential integrity violation - key referenced from \nchild not found in parent\npg72b2=# select xmin,xmax,* from child;\n xmin | xmax | cid | pid\n------+------+-----+-----\n 171 | 172 | 1 | 1\n(1 row)\n\npg72b2=#\n\n>\n>\n>Of course it�s your descision if this makes sense and fits PostgreSQL\n>philosophy, but I think it would speed up some kind of applications.\n>\n>Kind regards\n>\n> Andreas.\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 3: if posting/reading through Usenet, please send an appropriate\n>subscribe-nomail command to majordomo@postgresql.org so that your\n>message can get through to the mailing list cleanly\n>\n\n\n",
"msg_date": "Wed, 21 Nov 2001 01:19:00 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "\nHuh, a non-zero XMAX is fine. You mark the XMAX when you _think_ you\nare updating it. It is only expired when the XMAX on the tuple is\ncommitted.\n\n> Or perhaps MAINTAINED INDEX, meaning that it has always both tmin and tmax\n> up-to-date.\n> Btw 7.2 still has broken behaviour of xmax which by definition should \n> not have a\n> non-0 value for live tuples\n> \n> pg72b2=# create table parent(pid int primary key);\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index \n> 'parent_pkey' for table 'parent'\n> CREATE\n> pg72b2=# create table child(cid int, pid int references parent);\n> NOTICE: CREATE TABLE will create implicit trigger(s) for FOREIGN KEY \n> check(s)\n> CREATE\n> pg72b2=# insert into parent values(1);\n> INSERT 16809 1\n> pg72b2=# insert into child values(1,1);\n> INSERT 16810 1\n> pg72b2=# update child set pid=2;\n> ERROR: <unnamed> referential integrity violation - key referenced from \n> child not found in parent\n> pg72b2=# select xmin,xmax,* from child;\n> xmin | xmax | cid | pid\n> ------+------+-----+-----\n> 171 | 172 | 1 | 1\n> (1 row)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 21 Nov 2001 15:59:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Huh, a non-zero XMAX is fine. You mark the XMAX when you _think_ you\n> are updating it. It is only expired when the XMAX on the tuple is\n> committed.\n\nBut\n\nhttp://www.postgresql.org/idocs/index.php?sql-syntax-columns.html\n\nclaims:\n\nxmax The identity (transaction ID) of the deleting transaction,\n or zero for an undeleted tuple. In practice, this is \n never nonzero for a visible tuple.\n\ncmax The command identifier within the deleting transaction,\n or zero. Again, this is never nonzero for a visible tuple.\n\n\nWhich is IMHO good and useful behaviour, for example for all kinds of\nmirroring\n\nI also think that this kas historically been the behaviour and that \nthis was broken sometime in not too distant past (i.e after postgres95\n;)\nby foreign keys and/or somesuch.\n\nTom Lane once told me about a way to determine the visibility of a tuple \nby other means than [x|c][min|max] but I can't find/remember it anymore\n;(\n\n-----------------\nHannu\n",
"msg_date": "Thu, 22 Nov 2001 10:59:50 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> But\n> http://www.postgresql.org/idocs/index.php?sql-syntax-columns.html\n\nThat documentation is in error (my fault). Current docs say\n\nxmax\n\n The identity (transaction ID) of the deleting transaction, or zero\n for an undeleted tuple. It is possible for this field to \n be nonzero in a visible tuple: that usually indicates that the\n deleting transaction hasn't committed yet, or that an \n attempted deletion was rolled back. \n\n> I also think that this kas historically been the behaviour \n\nNo, it wasn't.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Nov 2001 11:25:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Hannu Krosing <hannu@tm.ee> writes:\n>\n>>But\n>>http://www.postgresql.org/idocs/index.php?sql-syntax-columns.html\n>>\n>\n>That documentation is in error (my fault). Current docs say\n>\n>xmax\n>\n> The identity (transaction ID) of the deleting transaction, or zero\n> for an undeleted tuple. It is possible for this field to \n> be nonzero in a visible tuple: that usually indicates that the\n> deleting transaction hasn't committed yet,\n>\nThat seems reasonable\n\n> or that an attempted deletion was rolled back. \n>\nBut could we not make it so that rollback will also reset xmax and cmax \nto 0.\nIt should be quite cheap to do so as it's on the same page with the \ncommit bits ?\n\nThe meaning \"last transaction that attempted to delete this tuple\" seems \nsomewhat weird\n\n>>I also think that this kas historically been the behaviour \n>>\n>No, it wasn't.\n>\nAre you sure that it was a bug not in code but in docs ?\n\n---------------\nHannu\n\n\n",
"msg_date": "Fri, 23 Nov 2001 00:14:39 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
},
{
"msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> But could we not make it so that rollback will also reset xmax and cmax \n> to 0.\n\nWe never have done that and I don't see why we should start.\n(And no, I'm not sure that it'd be entirely safe; there are\nconcurrency/atomicity issues involved, because we do not\ninsist on getting exclusive lock to set the it's-dead-Jim\nflag bit.)\n\nWe could make the user readout of xmax/cmax be zeroes if the flag\nbits show they are invalid. But this really just begs the question\nof what use they are to users in the first place. I can't see any;\nand if we make them read as zeroes then they for sure won't have any.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 22 Nov 2001 20:26:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2) "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Hannu Krosing <hannu@tm.ee> writes:\n>\n>>But could we not make it so that rollback will also reset xmax and cmax \n>>to 0.\n>>\n>\n>We never have done that and I don't see why we should start.\n>(And no, I'm not sure that it'd be entirely safe; there are\n>concurrency/atomicity issues involved, because we do not\n>insist on getting exclusive lock to set the it's-dead-Jim\n>flag bit.)\n>\n>We could make the user readout of xmax/cmax be zeroes if the flag\n>bits show they are invalid.\n>\nIf there is a cheap way to get a list of pending transactions, then we \ncould make them\nread out as 0 if they are about to be deleted (ie xmax in \npending_transactions()) and\nelse show the value of the transaction that is about to delete them.\n\n>But this really just begs the question\n>of what use they are to users in the first place. I can't see any;\n>and if we make them read as zeroes then they for sure won't have any.\n>\nI can see some use for xmax user-visible only while being deleted.\n\nAt least this would be more useful than themeaning\nlast-trx-that-was-about-to-delete.\n\nAnother way for getting equivalent functionality would be to make the\npending_transactions() function available to users.\n\n---------------\nHannu\n\n\n",
"msg_date": "Fri, 23 Nov 2001 14:58:19 +0500",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Further open item (Was: Status of 7.2)"
}
] |
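[Editor's note] The thread above turns on a point worth making concrete: per Tom Lane's explanation, a nonzero xmax does not by itself make a tuple dead — the deleting transaction must also have committed, so a rolled-back delete (as in Hannu's foreign-key example, where xmax ended up 172) leaves the tuple visible. A deliberately simplified sketch of that rule (ignoring snapshots, hint bits, and cmin/cmax; the function name is illustrative, not PostgreSQL's):

```python
def tuple_visible(xmin_committed: bool, xmax: int, xmax_committed: bool) -> bool:
    """Simplified MVCC visibility check.

    A nonzero xmax alone does not hide a tuple: the deleting
    transaction must also have committed. An aborted or still
    in-progress delete leaves the tuple visible.
    """
    if not xmin_committed:
        return False        # inserting transaction never committed
    if xmax == 0:
        return True         # nobody has tried to delete it
    return not xmax_committed  # delete attempted, but not committed

# Hannu's case: xmax = 172 was set, but the UPDATE was rolled back
# by the referential-integrity error, so the tuple is still visible.
```

This is why the revised documentation quoted by Tom says a nonzero xmax in a visible tuple "usually indicates that the deleting transaction hasn't committed yet, or that an attempted deletion was rolled back."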
[
{
"msg_contents": "Hey folks,\n\nI don't see MD5-based password code in the JDBC CVS tree. Is anyone\nworking on this?\n\nI'll take a stab, if not.\n\nthanks,\n-jeremy\n_____________________________________________________________________\njeremy wohl ..: http://igmus.org\n",
"msg_date": "Tue, 6 Nov 2001 21:03:59 -0800",
"msg_from": "Jeremy Wohl <jeremyw-pgjdbc@igmus.org>",
"msg_from_op": true,
"msg_subject": "MD5-based passwords"
},
{
"msg_contents": "> Hey folks,\n> \n> I don't see MD5-based password code in the JDBC CVS tree. Is anyone\n> working on this?\n> \n> I'll take a stab, if not.\n\nThere is no one working on it. ODBC needs it too. It wasn't on the\nTODO list but I just added it.\n\nI can assist with any questions. See libpq for a sample implementation.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Nov 2001 00:27:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "On Wed, Nov 07, 2001 at 12:27:53AM -0500, Bruce Momjian wrote:\n> > Hey folks,\n> > \n> > I don't see MD5-based password code in the JDBC CVS tree. Is anyone\n> > working on this?\n> > \n> > I'll take a stab, if not.\n> \n> There is no one working on it. ODBC needs it too. It wasn't on the\n> TODO list but I just added it.\n> \n> I can assist with any questions. See libpq for a sample implementation.\n\nWhere are the MD5 passwords used by the driver? Sorry for my ignorance.\nJava has MD5 support in java.security.MessageDigest so it shouldn't\nbe too hard...\n\nTom.\n-- \nThomas O'Dowd. - Nooping - http://nooper.com\ntom@nooper.com - Testing - http://nooper.co.jp/labs\n",
"msg_date": "Wed, 7 Nov 2001 15:10:59 +0900",
"msg_from": "\"Thomas O'Dowd\" <tom@nooper.com>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "> On Wed, Nov 07, 2001 at 12:27:53AM -0500, Bruce Momjian wrote:\n> > > Hey folks,\n> > > \n> > > I don't see MD5-based password code in the JDBC CVS tree. Is anyone\n> > > working on this?\n> > > \n> > > I'll take a stab, if not.\n> > \n> > There is no one working on it. ODBC needs it too. It wasn't on the\n> > TODO list but I just added it.\n> > \n> > I can assist with any questions. See libpq for a sample implementation.\n> \n> Where are the MD5 passwords used by the driver? Sorry for my ignorance.\n> Java has MD5 support in java.security.MessageDigest so it shouldn't\n> be too hard...\n\nSee libpq/fe-auth.c for the libpq version of the MD5 communication with\nthe backend. md5.c has the actual md5 computations.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Nov 2001 01:15:19 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "On Wed, Nov 07, 2001 at 12:27:53AM -0500, Bruce Momjian wrote:\n> > Hey folks,\n> > \n> > I don't see MD5-based password code in the JDBC CVS tree. Is anyone\n> > working on this?\n> > \n> > I'll take a stab, if not.\n> \n> There is no one working on it. ODBC needs it too. It wasn't on the\n> TODO list but I just added it.\n> \n> I can assist with any questions. See libpq for a sample implementation.\n\nOK, how about this? Someone will have to help me with appropriate exception\nbehavior and where the bytesToHex util is placed.\n\nI'm not clear on the SendInteger(5 + .. code, seen elsewhere. Why isn't\nthis (4 + ...?\n\nIndex: Connection.java\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/interfaces/jdbc/org/postgresql/Connection.java,v\nretrieving revision 1.34\ndiff -r1.34 Connection.java\n6a7\n> import java.security.*;\n65a67\n> private static final int AUTH_REQ_MD5 = 5;\n183c185\n< \t\t\t\t\t// Get the password salt if there is one\n---\n> \t\t\t\t\t// Get the crypt password salt if there is one\n190c192,204\n< \t\t\t\t\t\tDriverManager.println(\"Salt=\" + salt);\n---\n> \t\t\t\t\t\tDriverManager.println(\"Crypt salt=\" + salt);\n> \t\t\t\t\t}\n> \n> \t\t\t\t\t// Or get the md5 password salt if there is one\n> \t\t\t\t\tif (areq == AUTH_REQ_MD5)\n> \t\t\t\t\t{\n> \t\t\t\t\t\tbyte[] rst = new byte[4];\n> \t\t\t\t\t\trst[0] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\trst[1] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\trst[2] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\trst[3] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\tsalt = new String(rst, 0, 4);\n> \t\t\t\t\t\tDriverManager.println(\"MD5 salt=\" + salt);\n197,198c211,212\n< \t\t\t\t\t\tbreak;\n< \n---\n> \t\t\t\t\t break;\n> \t\t\t\t\t\t\n223a238,266\n> \t\t\t\t\tcase AUTH_REQ_MD5:\n> \t\t\t\t\t try {\n> \t\t\t\t\t\t MessageDigest md = MessageDigest.getInstance(\"MD5\");\n> \t\t\t\t\t\t byte[] temp_digest, 
pass_digest;\n> \t\t\t\t\t\t byte[] hex_digest = new byte[35];\n> \n> \t\t\t\t\t\t DriverManager.println(\"postgresql: MD5\");\n> \n> \t\t\t\t\t\t md.update(PG_PASSWORD.getBytes());\n> \t\t\t\t\t\t md.update(PG_USER.getBytes());\n> \t\t\t\t\t\t temp_digest = md.digest();\n> \n> \t\t\t\t\t\t bytesToHex(temp_digest, hex_digest, 0);\n> \t\t\t\t\t\t md.update(hex_digest, 0, 32);\n> \t\t\t\t\t\t md.update(salt.getBytes());\n> \t\t\t\t\t\t pass_digest = md.digest();\n> \n> \t\t\t\t\t\t bytesToHex(pass_digest, hex_digest, 3);\n> \t\t\t\t\t\t hex_digest[0] = 'm'; hex_digest[1] = 'd'; hex_digest[2] = '5';\n> \n> \t\t\t\t\t\t pg_stream.SendInteger(5 + hex_digest.length, 4);\n> \t\t\t\t\t\t pg_stream.Send(hex_digest);\n> \t\t\t\t\t\t pg_stream.SendInteger(0, 1);\n> \t\t\t\t\t\t pg_stream.flush();\n> \t\t\t\t\t\t} catch (Exception e) {\n> \t\t\t\t\t\t ; // \"MessageDigest failure; \" + e\n> \t\t\t\t\t\t}\n> \t\t\t\t\t\tbreak;\n> \n310a354,368\n> \n> private static void bytesToHex(byte[] bytes, byte[] hex, int offset)\n> {\n> \t final char lookup[] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',\n> \t\t\t\t\t'a', 'b', 'c', 'd', 'e', 'f' };\n> \n> \t\tint i, c, j, pos = offset;\n> \n> \t\tfor (i = 0; i < 16; i++) {\n> \t\t c = bytes[i] & 0xFF; j = c >> 4;\n> \t\t hex[pos++] = (byte) lookup[j];\n> \t\t j = (c & 0xF);\n> \t\t hex[pos++] = (byte) lookup[j];\n> \t\t}\n> }\n\n-jeremy\n_____________________________________________________________________\njeremy wohl ..: http://igmus.org",
"msg_date": "Wed, 7 Nov 2001 10:28:59 -0800",
"msg_from": "Jeremy Wohl <jeremyw-pgjdbc@igmus.org>",
"msg_from_op": true,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "\nLooks good. Can I ask for a context diff, \"diff -c\"?\n\n---------------------------------------------------------------------------\n\n> On Wed, Nov 07, 2001 at 12:27:53AM -0500, Bruce Momjian wrote:\n> > > Hey folks,\n> > > \n> > > I don't see MD5-based password code in the JDBC CVS tree. Is anyone\n> > > working on this?\n> > > \n> > > I'll take a stab, if not.\n> > \n> > There is no one working on it. ODBC needs it too. It wasn't on the\n> > TODO list but I just added it.\n> > \n> > I can assist with any questions. See libpq for a sample implementation.\n> \n> OK, how about this? Someone will have to help me with appropriate exception\n> behavior and where the bytesToHex util is placed.\n> \n> I'm not clear on the SendInteger(5 + .. code, seen elsewhere. Why isn't\n> this (4 + ...?\n> \n> Index: Connection.java\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/jdbc/org/postgresql/Connection.java,v\n> retrieving revision 1.34\n> diff -r1.34 Connection.java\n> 6a7\n> > import java.security.*;\n> 65a67\n> > private static final int AUTH_REQ_MD5 = 5;\n> 183c185\n> < \t\t\t\t\t// Get the password salt if there is one\n> ---\n> > \t\t\t\t\t// Get the crypt password salt if there is one\n> 190c192,204\n> < \t\t\t\t\t\tDriverManager.println(\"Salt=\" + salt);\n> ---\n> > \t\t\t\t\t\tDriverManager.println(\"Crypt salt=\" + salt);\n> > \t\t\t\t\t}\n> > \n> > \t\t\t\t\t// Or get the md5 password salt if there is one\n> > \t\t\t\t\tif (areq == AUTH_REQ_MD5)\n> > \t\t\t\t\t{\n> > \t\t\t\t\t\tbyte[] rst = new byte[4];\n> > \t\t\t\t\t\trst[0] = (byte)pg_stream.ReceiveChar();\n> > \t\t\t\t\t\trst[1] = (byte)pg_stream.ReceiveChar();\n> > \t\t\t\t\t\trst[2] = (byte)pg_stream.ReceiveChar();\n> > \t\t\t\t\t\trst[3] = (byte)pg_stream.ReceiveChar();\n> > \t\t\t\t\t\tsalt = new String(rst, 0, 4);\n> > \t\t\t\t\t\tDriverManager.println(\"MD5 salt=\" + salt);\n> 197,198c211,212\n> < 
\t\t\t\t\t\tbreak;\n> < \n> ---\n> > \t\t\t\t\t break;\n> > \t\t\t\t\t\t\n> 223a238,266\n> > \t\t\t\t\tcase AUTH_REQ_MD5:\n> > \t\t\t\t\t try {\n> > \t\t\t\t\t\t MessageDigest md = MessageDigest.getInstance(\"MD5\");\n> > \t\t\t\t\t\t byte[] temp_digest, pass_digest;\n> > \t\t\t\t\t\t byte[] hex_digest = new byte[35];\n> > \n> > \t\t\t\t\t\t DriverManager.println(\"postgresql: MD5\");\n> > \n> > \t\t\t\t\t\t md.update(PG_PASSWORD.getBytes());\n> > \t\t\t\t\t\t md.update(PG_USER.getBytes());\n> > \t\t\t\t\t\t temp_digest = md.digest();\n> > \n> > \t\t\t\t\t\t bytesToHex(temp_digest, hex_digest, 0);\n> > \t\t\t\t\t\t md.update(hex_digest, 0, 32);\n> > \t\t\t\t\t\t md.update(salt.getBytes());\n> > \t\t\t\t\t\t pass_digest = md.digest();\n> > \n> > \t\t\t\t\t\t bytesToHex(pass_digest, hex_digest, 3);\n> > \t\t\t\t\t\t hex_digest[0] = 'm'; hex_digest[1] = 'd'; hex_digest[2] = '5';\n> > \n> > \t\t\t\t\t\t pg_stream.SendInteger(5 + hex_digest.length, 4);\n> > \t\t\t\t\t\t pg_stream.Send(hex_digest);\n> > \t\t\t\t\t\t pg_stream.SendInteger(0, 1);\n> > \t\t\t\t\t\t pg_stream.flush();\n> > \t\t\t\t\t\t} catch (Exception e) {\n> > \t\t\t\t\t\t ; // \"MessageDigest failure; \" + e\n> > \t\t\t\t\t\t}\n> > \t\t\t\t\t\tbreak;\n> > \n> 310a354,368\n> > \n> > private static void bytesToHex(byte[] bytes, byte[] hex, int offset)\n> > {\n> > \t final char lookup[] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',\n> > \t\t\t\t\t'a', 'b', 'c', 'd', 'e', 'f' };\n> > \n> > \t\tint i, c, j, pos = offset;\n> > \n> > \t\tfor (i = 0; i < 16; i++) {\n> > \t\t c = bytes[i] & 0xFF; j = c >> 4;\n> > \t\t hex[pos++] = (byte) lookup[j];\n> > \t\t j = (c & 0xF);\n> > \t\t hex[pos++] = (byte) lookup[j];\n> > \t\t}\n> > }\n> \n> -jeremy\n> _____________________________________________________________________\n> jeremy wohl ..: http://igmus.org\n\n[ Attachment, skipping... 
]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Nov 2001 14:14:54 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "> On Wed, Nov 07, 2001 at 12:27:53AM -0500, Bruce Momjian wrote:\n> > > Hey folks,\n> > > \n> > > I don't see MD5-based password code in the JDBC CVS tree. Is anyone\n> > > working on this?\n> > > \n> > > I'll take a stab, if not.\n> > \n> > There is no one working on it. ODBC needs it too. It wasn't on the\n> > TODO list but I just added it.\n> > \n> > I can assist with any questions. See libpq for a sample implementation.\n> \n> OK, how about this? Someone will have to help me with appropriate exception\n> behavior and where the bytesToHex util is placed.\n> \n> I'm not clear on the SendInteger(5 + .. code, seen elsewhere. Why isn't\n> this (4 + ...?\n\nI think the 5+ is correct. Looking at fe-auth.c, I see:\n\n ret = pqPacketSend(conn, crypt_pwd, strlen(crypt_pwd) + 1);\n\nand pqPacketSend() has:\n\n if (pqPutInt(4 + len, 4, conn))\n\nso I think it is the +1 and the +4 added together to make 5. If you\nwant to put 4+1+, that would be fine too and perhaps be clearer.\n\nOne more question. Have you tested this against a 7.2 backend to see if\nit actually does MD5 encryption correctly?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 7 Nov 2001 14:23:28 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
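[Editor's note] The scheme the Java patch in this thread implements, and that Bruce's 5+ explanation frames, is: hex(md5(password + user)), then hex(md5(that_hex + 4-byte salt)), prefixed with "md5" — 35 bytes total, matching the `byte[35]` buffer in the patch. A minimal Python sketch (the function name is mine, not part of any driver API):

```python
import hashlib

def pg_md5_response(user: str, password: str, salt: bytes) -> bytes:
    """Compute a PostgreSQL MD5-auth password response.

    Inner hash: md5(password + user), hex-encoded (32 chars).
    Outer hash: md5(inner_hex + 4-byte salt), hex-encoded,
    prefixed with the literal "md5" -> 35 bytes on the wire
    (plus a trailing NUL and the 4-byte length word, hence the
    5 + len framing discussed above).
    """
    inner = hashlib.md5(password.encode() + user.encode()).hexdigest()
    outer = hashlib.md5(inner.encode() + salt).hexdigest()
    return b"md5" + outer.encode()
```

Note that only the 4-byte salt varies per connection, which is what keeps the (fixed) stored hash from being replayable directly.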
{
"msg_contents": "On Wed, Nov 07, 2001 at 02:23:28PM -0500, Bruce Momjian wrote:\n> > On Wed, Nov 07, 2001 at 12:27:53AM -0500, Bruce Momjian wrote:\n> > I'm not clear on the SendInteger(5 + .. code, seen elsewhere. Why isn't\n> > this (4 + ...?\n> \n> I think the 5+ is correct. Looking at fe-auth.c, I see:\n> \n> ret = pqPacketSend(conn, crypt_pwd, strlen(crypt_pwd) + 1);\n> \n> and pqPacketSend() has:\n> \n> if (pqPutInt(4 + len, 4, conn))\n> \n> so I think it is the +1 and the +4 added together to make 5. If you\n> want to put 4+1+, that would be fine too and perhaps be clearer.\n\nRight. I read it right the first time, and proceeded to convince myself\nthe wrong way..\n\n> One more question. Have you tested this against a 7.2 backend to see if\n> it actually does MD5 encryption correctly?\n\nYes, that's what I'm using. Tested that the unpatched code fails, that the\npatched code succeeds and md5-allows removed from pg_hba.conf still works with\ncrypt-based passwords.\n\nA context diff is attached. My indenting is probably off.\n\np.s. Your mailer doesn't seem to put \"Jeremy wrote\" tags anywhere. Useful\n for following the conversation.\np.p.s. You don't need to Cc me. I'm on the list. :)\n\n-jeremy\n_____________________________________________________________________\njeremy wohl ..: http://igmus.org",
"msg_date": "Wed, 7 Nov 2001 11:43:59 -0800",
"msg_from": "Jeremy Wohl <jeremyw-pgjdbc@igmus.org>",
"msg_from_op": true,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "Jeremy,\n\nI think I would recommend moving most of this logic into a new class \nunder org.postgresql.util, called MD5.java?? (that is where the \nUnixCrypt class is located).\n\nI couldn't get what you sent to compile for me, I needed to add some casts:\n\nhex_digest[0] = 'm'; hex_digest[1] = 'd'; hex_digest[2] = '5';\n\nbecame:\n\nhex_digest[0] = (byte)'m'; hex_digest[1] = (byte)'d'; hex_digest[2] = \n(byte)'5';\n\nOtherwise with this above fix it compiled fine under both jdk1.1 and \njdk1.2. I didn't do any testing of the result though.\n\nthanks,\n--Barry\n\nPS. When sending diffs please use context diff format (i.e.-c). It \nmakes it much easier to review.\n\n\n\n\nJeremy Wohl wrote:\n\n> On Wed, Nov 07, 2001 at 12:27:53AM -0500, Bruce Momjian wrote:\n> \n>>>Hey folks,\n>>>\n>>>I don't see MD5-based password code in the JDBC CVS tree. Is anyone\n>>>working on this?\n>>>\n>>>I'll take a stab, if not.\n>>>\n>>There is no one working on it. ODBC needs it too. It wasn't on the\n>>TODO list but I just added it.\n>>\n>>I can assist with any questions. See libpq for a sample implementation.\n>>\n> \n> OK, how about this? Someone will have to help me with appropriate exception\n> behavior and where the bytesToHex util is placed.\n> \n> I'm not clear on the SendInteger(5 + .. code, seen elsewhere. 
Why isn't\n> this (4 + ...?\n> \n> Index: Connection.java\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql/src/interfaces/jdbc/org/postgresql/Connection.java,v\n> retrieving revision 1.34\n> diff -r1.34 Connection.java\n> 6a7\n> \n>>import java.security.*;\n>>\n> 65a67\n> \n>> private static final int AUTH_REQ_MD5 = 5;\n>>\n> 183c185\n> < \t\t\t\t\t// Get the password salt if there is one\n> ---\n> \n>>\t\t\t\t\t// Get the crypt password salt if there is one\n>>\n> 190c192,204\n> < \t\t\t\t\t\tDriverManager.println(\"Salt=\" + salt);\n> ---\n> \n>>\t\t\t\t\t\tDriverManager.println(\"Crypt salt=\" + salt);\n>>\t\t\t\t\t}\n>>\n>>\t\t\t\t\t// Or get the md5 password salt if there is one\n>>\t\t\t\t\tif (areq == AUTH_REQ_MD5)\n>>\t\t\t\t\t{\n>>\t\t\t\t\t\tbyte[] rst = new byte[4];\n>>\t\t\t\t\t\trst[0] = (byte)pg_stream.ReceiveChar();\n>>\t\t\t\t\t\trst[1] = (byte)pg_stream.ReceiveChar();\n>>\t\t\t\t\t\trst[2] = (byte)pg_stream.ReceiveChar();\n>>\t\t\t\t\t\trst[3] = (byte)pg_stream.ReceiveChar();\n>>\t\t\t\t\t\tsalt = new String(rst, 0, 4);\n>>\t\t\t\t\t\tDriverManager.println(\"MD5 salt=\" + salt);\n>>\n> 197,198c211,212\n> < \t\t\t\t\t\tbreak;\n> < \n> ---\n> \n>>\t\t\t\t\t break;\n>>\t\t\t\t\t\t\n>>\n> 223a238,266\n> \n>>\t\t\t\t\tcase AUTH_REQ_MD5:\n>>\t\t\t\t\t try {\n>>\t\t\t\t\t\t MessageDigest md = MessageDigest.getInstance(\"MD5\");\n>>\t\t\t\t\t\t byte[] temp_digest, pass_digest;\n>>\t\t\t\t\t\t byte[] hex_digest = new byte[35];\n>>\n>>\t\t\t\t\t\t DriverManager.println(\"postgresql: MD5\");\n>>\n>>\t\t\t\t\t\t md.update(PG_PASSWORD.getBytes());\n>>\t\t\t\t\t\t md.update(PG_USER.getBytes());\n>>\t\t\t\t\t\t temp_digest = md.digest();\n>>\n>>\t\t\t\t\t\t bytesToHex(temp_digest, hex_digest, 0);\n>>\t\t\t\t\t\t md.update(hex_digest, 0, 32);\n>>\t\t\t\t\t\t md.update(salt.getBytes());\n>>\t\t\t\t\t\t pass_digest = md.digest();\n>>\n>>\t\t\t\t\t\t bytesToHex(pass_digest, hex_digest, 3);\n>>\t\t\t\t\t\t 
hex_digest[0] = 'm'; hex_digest[1] = 'd'; hex_digest[2] = '5';\n>>\n>>\t\t\t\t\t\t pg_stream.SendInteger(5 + hex_digest.length, 4);\n>>\t\t\t\t\t\t pg_stream.Send(hex_digest);\n>>\t\t\t\t\t\t pg_stream.SendInteger(0, 1);\n>>\t\t\t\t\t\t pg_stream.flush();\n>>\t\t\t\t\t\t} catch (Exception e) {\n>>\t\t\t\t\t\t ; // \"MessageDigest failure; \" + e\n>>\t\t\t\t\t\t}\n>>\t\t\t\t\t\tbreak;\n>>\n>>\n> 310a354,368\n> \n>> private static void bytesToHex(byte[] bytes, byte[] hex, int offset)\n>> {\n>>\t final char lookup[] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',\n>>\t\t\t\t\t'a', 'b', 'c', 'd', 'e', 'f' };\n>>\n>>\t\tint i, c, j, pos = offset;\n>>\n>>\t\tfor (i = 0; i < 16; i++) {\n>>\t\t c = bytes[i] & 0xFF; j = c >> 4;\n>>\t\t hex[pos++] = (byte) lookup[j];\n>>\t\t j = (c & 0xF);\n>>\t\t hex[pos++] = (byte) lookup[j];\n>>\t\t}\n>> }\n>>\n> \n> -jeremy\n> _____________________________________________________________________\n> jeremy wohl ..: http://igmus.org\n> \n> \n> ------------------------------------------------------------------------\n> \n> package org.postgresql;\n> \n> import java.io.*;\n> import java.net.*;\n> import java.sql.*;\n> import java.util.*;\n> import java.security.*;\n> import org.postgresql.Field;\n> import org.postgresql.fastpath.*;\n> import org.postgresql.largeobject.*;\n> import org.postgresql.util.*;\n> import org.postgresql.core.*;\n> \n> /**\n> * $Id: Connection.java,v 1.34 2001/11/01 01:08:36 barry Exp $\n> *\n> * This abstract class is used by org.postgresql.Driver to open either the JDBC1 or\n> * JDBC2 versions of the Connection class.\n> *\n> */\n> public abstract class Connection\n> {\n> \t// This is the network stream associated with this connection\n> \tpublic PG_Stream pg_stream;\n> \n> \tprivate String PG_HOST;\n> \tprivate int PG_PORT;\n> \tprivate String PG_USER;\n> \tprivate String PG_PASSWORD;\n> \tprivate String PG_DATABASE;\n> \tprivate boolean PG_STATUS;\n> \tprivate String compatible;\n> \n> \t/**\n> \t *\tThe 
encoding to use for this connection.\n> \t */\n> \tprivate Encoding encoding = Encoding.defaultEncoding();\n> \n> \tprivate String dbVersionNumber;\n> \n> \tpublic boolean CONNECTION_OK = true;\n> \tpublic boolean CONNECTION_BAD = false;\n> \n> \tpublic boolean autoCommit = true;\n> \tpublic boolean readOnly = false;\n> \n> \tpublic Driver this_driver;\n> \tprivate String this_url;\n> \tprivate String cursor = null;\t// The positioned update cursor name\n> \n> \t// These are new for v6.3, they determine the current protocol versions\n> \t// supported by this version of the driver. They are defined in\n> \t// src/include/libpq/pqcomm.h\n> \tprotected static final int PG_PROTOCOL_LATEST_MAJOR = 2;\n> \tprotected static final int PG_PROTOCOL_LATEST_MINOR = 0;\n> \tprivate static final int SM_DATABASE\t= 64;\n> \tprivate static final int SM_USER\t= 32;\n> \tprivate static final int SM_OPTIONS = 64;\n> \tprivate static final int SM_UNUSED\t= 64;\n> \tprivate static final int SM_TTY = 64;\n> \n> \tprivate static final int AUTH_REQ_OK = 0;\n> \tprivate static final int AUTH_REQ_KRB4 = 1;\n> \tprivate static final int AUTH_REQ_KRB5 = 2;\n> \tprivate static final int AUTH_REQ_PASSWORD = 3;\n> \tprivate static final int AUTH_REQ_CRYPT = 4;\n> private static final int AUTH_REQ_MD5 = 5;\n> \n> \t// New for 6.3, salt value for crypt authorisation\n> \tprivate String salt;\n> \n> \t// These are used to cache oids, PGTypes and SQLTypes\n> \tprivate static Hashtable sqlTypeCache = new Hashtable(); // oid -> SQLType\n> \tprivate static Hashtable pgTypeCache = new Hashtable(); // oid -> PGType\n> \tprivate static Hashtable typeOidCache = new Hashtable(); //PGType -> oid\n> \n> \t// Now handle notices as warnings, so things like \"show\" now work\n> \tpublic SQLWarning firstWarning = null;\n> \n> \t/**\n> \t * Cache of the current isolation level\n> \t */\n> \tprivate int isolationLevel = java.sql.Connection.TRANSACTION_READ_COMMITTED;\n> \n> \t// The PID an cancellation key we get 
from the backend process\n> \tpublic int pid;\n> \tpublic int ckey;\n> \n> \t/**\n> \t * This is called by Class.forName() from within org.postgresql.Driver\n> \t */\n> \tpublic Connection()\n> \t{}\n> \n> \t/**\n> \t * This method actually opens the connection. It is called by Driver.\n> \t *\n> \t * @param host the hostname of the database back end\n> \t * @param port the port number of the postmaster process\n> \t * @param info a Properties[] thing of the user and password\n> \t * @param database the database to connect to\n> \t * @param u the URL of the connection\n> \t * @param d the Driver instantation of the connection\n> \t * @return a valid connection profile\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tprotected void openConnection(String host, int port, Properties info, String database, String url, Driver d) throws SQLException\n> \t{\n> \t\t// Throw an exception if the user or password properties are missing\n> \t\t// This occasionally occurs when the client uses the properties version\n> \t\t// of getConnection(), and is a common question on the email lists\n> \t\tif (info.getProperty(\"user\") == null)\n> \t\t\tthrow new PSQLException(\"postgresql.con.user\");\n> \n> \t\tthis_driver = d;\n> \t\tthis_url = url;\n> \t\tPG_DATABASE = database;\n> \t\tPG_USER = info.getProperty(\"user\");\n> PG_PASSWORD = info.getProperty(\"password\",\"\");\n> \t\tPG_PORT = port;\n> \t\tPG_HOST = host;\n> \t\tPG_STATUS = CONNECTION_BAD;\n> \t\tif (info.getProperty(\"compatible\") == null)\n> \t\t{\n> \t\t\tcompatible = d.getMajorVersion() + \".\" + d.getMinorVersion();\n> \t\t}\n> \t\telse\n> \t\t{\n> \t\t\tcompatible = info.getProperty(\"compatible\");\n> \t\t}\n> \n> \t\t// Now make the initial connection\n> \t\ttry\n> \t\t{\n> \t\t\tpg_stream = new PG_Stream(host, port);\n> \t\t}\n> \t\tcatch (ConnectException cex)\n> \t\t{\n> \t\t\t// Added by Peter Mount <peter@retep.org.uk>\n> \t\t\t// ConnectException is thrown when the 
connection cannot be made.\n> \t\t\t// we trap this an return a more meaningful message for the end user\n> \t\t\tthrow new PSQLException (\"postgresql.con.refused\");\n> \t\t}\n> \t\tcatch (IOException e)\n> \t\t{\n> \t\t\tthrow new PSQLException (\"postgresql.con.failed\", e);\n> \t\t}\n> \n> \t\t// Now we need to construct and send a startup packet\n> \t\ttry\n> \t\t{\n> \t\t\t// Ver 6.3 code\n> \t\t\tpg_stream.SendInteger(4 + 4 + SM_DATABASE + SM_USER + SM_OPTIONS + SM_UNUSED + SM_TTY, 4);\n> \t\t\tpg_stream.SendInteger(PG_PROTOCOL_LATEST_MAJOR, 2);\n> \t\t\tpg_stream.SendInteger(PG_PROTOCOL_LATEST_MINOR, 2);\n> \t\t\tpg_stream.Send(database.getBytes(), SM_DATABASE);\n> \n> \t\t\t// This last send includes the unused fields\n> \t\t\tpg_stream.Send(PG_USER.getBytes(), SM_USER + SM_OPTIONS + SM_UNUSED + SM_TTY);\n> \n> \t\t\t// now flush the startup packets to the backend\n> \t\t\tpg_stream.flush();\n> \n> \t\t\t// Now get the response from the backend, either an error message\n> \t\t\t// or an authentication request\n> \t\t\tint areq = -1; // must have a value here\n> \t\t\tdo\n> \t\t\t{\n> \t\t\t\tint beresp = pg_stream.ReceiveChar();\n> \t\t\t\tswitch (beresp)\n> \t\t\t\t{\n> \t\t\t\tcase 'E':\n> \t\t\t\t\t// An error occured, so pass the error message to the\n> \t\t\t\t\t// user.\n> \t\t\t\t\t//\n> \t\t\t\t\t// The most common one to be thrown here is:\n> \t\t\t\t\t// \"User authentication failed\"\n> \t\t\t\t\t//\n> \t\t\t\t\tthrow new SQLException(pg_stream.ReceiveString(encoding));\n> \n> \t\t\t\tcase 'R':\n> \t\t\t\t\t// Get the type of request\n> \t\t\t\t\tareq = pg_stream.ReceiveIntegerR(4);\n> \n> \t\t\t\t\t// Get the crypt password salt if there is one\n> \t\t\t\t\tif (areq == AUTH_REQ_CRYPT)\n> \t\t\t\t\t{\n> \t\t\t\t\t\tbyte[] rst = new byte[2];\n> \t\t\t\t\t\trst[0] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\trst[1] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\tsalt = new String(rst, 0, 2);\n> \t\t\t\t\t\tDriverManager.println(\"Crypt salt=\" 
+ salt);\n> \t\t\t\t\t}\n> \n> \t\t\t\t\t// Or get the md5 password salt if there is one\n> \t\t\t\t\tif (areq == AUTH_REQ_MD5)\n> \t\t\t\t\t{\n> \t\t\t\t\t\tbyte[] rst = new byte[4];\n> \t\t\t\t\t\trst[0] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\trst[1] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\trst[2] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\trst[3] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\tsalt = new String(rst, 0, 4);\n> \t\t\t\t\t\tDriverManager.println(\"MD5 salt=\" + salt);\n> \t\t\t\t\t}\n> \n> \t\t\t\t\t// now send the auth packet\n> \t\t\t\t\tswitch (areq)\n> \t\t\t\t\t{\n> \t\t\t\t\tcase AUTH_REQ_OK:\n> \t\t\t\t\t break;\n> \t\t\t\t\t\t\n> \t\t\t\t\tcase AUTH_REQ_KRB4:\n> \t\t\t\t\t\tDriverManager.println(\"postgresql: KRB4\");\n> \t\t\t\t\t\tthrow new PSQLException(\"postgresql.con.kerb4\");\n> \n> \t\t\t\t\tcase AUTH_REQ_KRB5:\n> \t\t\t\t\t\tDriverManager.println(\"postgresql: KRB5\");\n> \t\t\t\t\t\tthrow new PSQLException(\"postgresql.con.kerb5\");\n> \n> \t\t\t\t\tcase AUTH_REQ_PASSWORD:\n> \t\t\t\t\t\tDriverManager.println(\"postgresql: PASSWORD\");\n> \t\t\t\t\t\tpg_stream.SendInteger(5 + PG_PASSWORD.length(), 4);\n> \t\t\t\t\t\tpg_stream.Send(PG_PASSWORD.getBytes());\n> \t\t\t\t\t\tpg_stream.SendInteger(0, 1);\n> \t\t\t\t\t\tpg_stream.flush();\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tcase AUTH_REQ_CRYPT:\n> \t\t\t\t\t\tDriverManager.println(\"postgresql: CRYPT\");\n> \t\t\t\t\t\tString crypted = UnixCrypt.crypt(salt, PG_PASSWORD);\n> \t\t\t\t\t\tpg_stream.SendInteger(5 + crypted.length(), 4);\n> \t\t\t\t\t\tpg_stream.Send(crypted.getBytes());\n> \t\t\t\t\t\tpg_stream.SendInteger(0, 1);\n> \t\t\t\t\t\tpg_stream.flush();\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tcase AUTH_REQ_MD5:\n> \t\t\t\t\t try {\n> \t\t\t\t\t\t MessageDigest md = MessageDigest.getInstance(\"MD5\");\n> \t\t\t\t\t\t byte[] temp_digest, pass_digest;\n> \t\t\t\t\t\t byte[] hex_digest = new byte[35];\n> \n> \t\t\t\t\t\t DriverManager.println(\"postgresql: 
MD5\");\n> \n> \t\t\t\t\t\t md.update(PG_PASSWORD.getBytes());\n> \t\t\t\t\t\t md.update(PG_USER.getBytes());\n> \t\t\t\t\t\t temp_digest = md.digest();\n> \n> \t\t\t\t\t\t bytesToHex(temp_digest, hex_digest, 0);\n> \t\t\t\t\t\t md.update(hex_digest, 0, 32);\n> \t\t\t\t\t\t md.update(salt.getBytes());\n> \t\t\t\t\t\t pass_digest = md.digest();\n> \n> \t\t\t\t\t\t bytesToHex(pass_digest, hex_digest, 3);\n> \t\t\t\t\t\t hex_digest[0] = 'm'; hex_digest[1] = 'd'; hex_digest[2] = '5';\n> \n> \t\t\t\t\t\t pg_stream.SendInteger(5 + hex_digest.length, 4);\n> \t\t\t\t\t\t pg_stream.Send(hex_digest);\n> \t\t\t\t\t\t pg_stream.SendInteger(0, 1);\n> \t\t\t\t\t\t pg_stream.flush();\n> \t\t\t\t\t\t} catch (Exception e) {\n> \t\t\t\t\t\t ; // \"MessageDigest failure; \" + e\n> \t\t\t\t\t\t}\n> \t\t\t\t\t\tbreak;\n> \n> \t\t\t\t\tdefault:\n> \t\t\t\t\t\tthrow new PSQLException(\"postgresql.con.auth\", new Integer(areq));\n> \t\t\t\t\t}\n> \t\t\t\t\tbreak;\n> \n> \t\t\t\tdefault:\n> \t\t\t\t\tthrow new PSQLException(\"postgresql.con.authfail\");\n> \t\t\t\t}\n> \t\t\t}\n> \t\t\twhile (areq != AUTH_REQ_OK);\n> \n> \t\t}\n> \t\tcatch (IOException e)\n> \t\t{\n> \t\t\tthrow new PSQLException(\"postgresql.con.failed\", e);\n> \t\t}\n> \n> \n> \t\t// As of protocol version 2.0, we should now receive the cancellation key and the pid\n> \t\tint beresp = pg_stream.ReceiveChar();\n> \t\tswitch (beresp)\n> \t\t{\n> \t\tcase 'K':\n> \t\t\tpid = pg_stream.ReceiveInteger(4);\n> \t\t\tckey = pg_stream.ReceiveInteger(4);\n> \t\t\tbreak;\n> \t\tcase 'E':\n> \t\tcase 'N':\n> \t\t\tthrow new SQLException(pg_stream.ReceiveString(encoding));\n> \t\tdefault:\n> \t\t\tthrow new PSQLException(\"postgresql.con.setup\");\n> \t\t}\n> \n> \t\t// Expect ReadyForQuery packet\n> \t\tberesp = pg_stream.ReceiveChar();\n> \t\tswitch (beresp)\n> \t\t{\n> \t\tcase 'Z':\n> \t\t\tbreak;\n> \t\tcase 'E':\n> \t\tcase 'N':\n> \t\t\tthrow new SQLException(pg_stream.ReceiveString(encoding));\n> \t\tdefault:\n> \t\t\tthrow 
new PSQLException(\"postgresql.con.setup\");\n> \t\t}\n> \n> \t\tfirstWarning = null;\n> \n> \t\t// \"pg_encoding_to_char(1)\" will return 'EUC_JP' for a backend compiled with multibyte,\n> \t\t// otherwise it's hardcoded to 'SQL_ASCII'.\n> \t\t// If the backend doesn't know about multibyte we can't assume anything about the encoding\n> \t\t// used, so we denote this with 'UNKNOWN'.\n> \t\t//Note: begining with 7.2 we should be using pg_client_encoding() which\n> \t\t//is new in 7.2. However it isn't easy to conditionally call this new\n> \t\t//function, since we don't yet have the information as to what server\n> \t\t//version we are talking to. Thus we will continue to call\n> \t\t//getdatabaseencoding() until we drop support for 7.1 and older versions\n> \t\t//or until someone comes up with a conditional way to run one or\n> \t\t//the other function depending on server version that doesn't require\n> \t\t//two round trips to the server per connection\n> \n> \t\tfinal String encodingQuery =\n> \t\t\t\"case when pg_encoding_to_char(1) = 'SQL_ASCII' then 'UNKNOWN' else getdatabaseencoding() end\";\n> \n> \t\t// Set datestyle and fetch db encoding in a single call, to avoid making\n> \t\t// more than one round trip to the backend during connection startup.\n> \n> \t\tjava.sql.ResultSet resultSet =\n> \t\t\tExecSQL(\"set datestyle to 'ISO'; select version(), \" + encodingQuery + \";\");\n> \n> \t\tif (! 
resultSet.next())\n> \t\t{\n> \t\t\tthrow new PSQLException(\"postgresql.con.failed\", \"failed getting backend encoding\");\n> \t\t}\n> \t\tString version = resultSet.getString(1);\n> \t\tdbVersionNumber = extractVersionNumber(version);\n> \n> \t\tString dbEncoding = resultSet.getString(2);\n> \t\tencoding = Encoding.getEncoding(dbEncoding, info.getProperty(\"charSet\"));\n> \n> \t\t// Initialise object handling\n> \t\tinitObjectTypes();\n> \n> \t\t// Mark the connection as ok, and cleanup\n> \t\tfirstWarning = null;\n> \t\tPG_STATUS = CONNECTION_OK;\n> \t}\n> \n> private static void bytesToHex(byte[] bytes, byte[] hex, int offset)\n> {\n> \t final char lookup[] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',\n> \t\t\t\t\t'a', 'b', 'c', 'd', 'e', 'f' };\n> \n> \t\tint i, c, j, pos = offset;\n> \n> \t\tfor (i = 0; i < 16; i++) {\n> \t\t c = bytes[i] & 0xFF; j = c >> 4;\n> \t\t hex[pos++] = (byte) lookup[j];\n> \t\t j = (c & 0xF);\n> \t\t hex[pos++] = (byte) lookup[j];\n> \t\t}\n> }\n> \n> \t// These methods used to be in the main Connection implementation. 
As they\n> \t// are common to all implementations (JDBC1 or 2), they are placed here.\n> \t// This should make it easy to maintain the two specifications.\n> \n> \t/**\n> \t * This adds a warning to the warning chain.\n> \t * @param msg message to add\n> \t */\n> \tpublic void addWarning(String msg)\n> \t{\n> \t\tDriverManager.println(msg);\n> \n> \t\t// Add the warning to the chain\n> \t\tif (firstWarning != null)\n> \t\t\tfirstWarning.setNextWarning(new SQLWarning(msg));\n> \t\telse\n> \t\t\tfirstWarning = new SQLWarning(msg);\n> \n> \t\t// Now check for some specific messages\n> \n> \t\t// This is obsolete in 6.5, but I've left it in here so if we need to use this\n> \t\t// technique again, we'll know where to place it.\n> \t\t//\n> \t\t// This is generated by the SQL \"show datestyle\"\n> \t\t//if(msg.startsWith(\"NOTICE:\") && msg.indexOf(\"DateStyle\")>0) {\n> \t\t//// 13 is the length off \"DateStyle is \"\n> \t\t//msg = msg.substring(msg.indexOf(\"DateStyle is \")+13);\n> \t\t//\n> \t\t//for(int i=0;i<dateStyles.length;i+=2)\n> \t\t//if(msg.startsWith(dateStyles[i]))\n> \t\t//currentDateStyle=i+1; // this is the index of the format\n> \t\t//}\n> \t}\n> \n> \t/**\n> \t * Send a query to the backend. Returns one of the ResultSet\n> \t * objects.\n> \t *\n> \t * <B>Note:</B> there does not seem to be any method currently\n> \t * in existance to return the update count.\n> \t *\n> \t * @param sql the SQL statement to be executed\n> \t * @return a ResultSet holding the results\n> \t * @exception SQLException if a database error occurs\n> \t */\n> \tpublic java.sql.ResultSet ExecSQL(String sql) throws SQLException\n> \t{\n> \t\treturn ExecSQL(sql, null);\n> \t}\n> \n> \t/**\n> \t * Send a query to the backend. 
Returns one of the ResultSet\n> \t * objects.\n> \t *\n> \t * <B>Note:</B> there does not seem to be any method currently\n> \t * in existance to return the update count.\n> \t *\n> \t * @param sql the SQL statement to be executed\n> \t * @param stat The Statement associated with this query (may be null)\n> \t * @return a ResultSet holding the results\n> \t * @exception SQLException if a database error occurs\n> \t */\n> \tpublic java.sql.ResultSet ExecSQL(String sql, java.sql.Statement stat) throws SQLException\n> \t{\n> \t\treturn new QueryExecutor(sql, stat, pg_stream, this).execute();\n> \t}\n> \n> \t/**\n> \t * In SQL, a result table can be retrieved through a cursor that\n> \t * is named. The current row of a result can be updated or deleted\n> \t * using a positioned update/delete statement that references the\n> \t * cursor name.\n> \t *\n> \t * We support one cursor per connection.\n> \t *\n> \t * setCursorName sets the cursor name.\n> \t *\n> \t * @param cursor the cursor name\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic void setCursorName(String cursor) throws SQLException\n> \t{\n> \t\tthis.cursor = cursor;\n> \t}\n> \n> \t/**\n> \t * getCursorName gets the cursor name.\n> \t *\n> \t * @return the current cursor name\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic String getCursorName() throws SQLException\n> \t{\n> \t\treturn cursor;\n> \t}\n> \n> \t/**\n> \t * We are required to bring back certain information by\n> \t * the DatabaseMetaData class.\tThese functions do that.\n> \t *\n> \t * Method getURL() brings back the URL (good job we saved it)\n> \t *\n> \t * @return the url\n> \t * @exception SQLException just in case...\n> \t */\n> \tpublic String getURL() throws SQLException\n> \t{\n> \t\treturn this_url;\n> \t}\n> \n> \t/**\n> \t * Method getUserName() brings back the User Name (again, we\n> \t * saved it)\n> \t *\n> \t * @return the user name\n> \t * @exception 
SQLException just in case...\n> \t */\n> \tpublic String getUserName() throws SQLException\n> \t{\n> \t\treturn PG_USER;\n> \t}\n> \n> \t/**\n> \t * Get the character encoding to use for this connection.\n> \t */\n> \tpublic Encoding getEncoding() throws SQLException\n> \t{\n> \t\treturn encoding;\n> \t}\n> \n> \t/**\n> \t * This returns the Fastpath API for the current connection.\n> \t *\n> \t * <p><b>NOTE:</b> This is not part of JDBC, but allows access to\n> \t * functions on the org.postgresql backend itself.\n> \t *\n> \t * <p>It is primarily used by the LargeObject API\n> \t *\n> \t * <p>The best way to use this is as follows:\n> \t *\n> \t * <p><pre>\n> \t * import org.postgresql.fastpath.*;\n> \t * ...\n> \t * Fastpath fp = ((org.postgresql.Connection)myconn).getFastpathAPI();\n> \t * </pre>\n> \t *\n> \t * <p>where myconn is an open Connection to org.postgresql.\n> \t *\n> \t * @return Fastpath object allowing access to functions on the org.postgresql\n> \t * backend.\n> \t * @exception SQLException by Fastpath when initialising for first time\n> \t */\n> \tpublic Fastpath getFastpathAPI() throws SQLException\n> \t{\n> \t\tif (fastpath == null)\n> \t\t\tfastpath = new Fastpath(this, pg_stream);\n> \t\treturn fastpath;\n> \t}\n> \n> \t// This holds a reference to the Fastpath API if already open\n> \tprivate Fastpath fastpath = null;\n> \n> \t/**\n> \t * This returns the LargeObject API for the current connection.\n> \t *\n> \t * <p><b>NOTE:</b> This is not part of JDBC, but allows access to\n> \t * functions on the org.postgresql backend itself.\n> \t *\n> \t * <p>The best way to use this is as follows:\n> \t *\n> \t * <p><pre>\n> \t * import org.postgresql.largeobject.*;\n> \t * ...\n> \t * LargeObjectManager lo = ((org.postgresql.Connection)myconn).getLargeObjectAPI();\n> \t * </pre>\n> \t *\n> \t * <p>where myconn is an open Connection to org.postgresql.\n> \t *\n> \t * @return LargeObject object that implements the API\n> \t * @exception SQLException 
by LargeObject when initialising for first time\n> \t */\n> \tpublic LargeObjectManager getLargeObjectAPI() throws SQLException\n> \t{\n> \t\tif (largeobject == null)\n> \t\t\tlargeobject = new LargeObjectManager(this);\n> \t\treturn largeobject;\n> \t}\n> \n> \t// This holds a reference to the LargeObject API if already open\n> \tprivate LargeObjectManager largeobject = null;\n> \n> \t/**\n> \t * This method is used internally to return an object based around\n> \t * org.postgresql's more unique data types.\n> \t *\n> \t * <p>It uses an internal Hashtable to get the handling class. If the\n> \t * type is not supported, then an instance of org.postgresql.util.PGobject\n> \t * is returned.\n> \t *\n> \t * You can use the getValue() or setValue() methods to handle the returned\n> \t * object. Custom objects can have their own methods.\n> \t *\n> \t * In 6.4, this is extended to use the org.postgresql.util.Serialize class to\n> \t * allow the Serialization of Java Objects into the database without using\n> \t * Blobs. Refer to that class for details on how this new feature works.\n> \t *\n> \t * @return PGobject for this type, and set to value\n> \t * @exception SQLException if value is not correct for this type\n> \t * @see org.postgresql.util.Serialize\n> \t */\n> \tpublic Object getObject(String type, String value) throws SQLException\n> \t{\n> \t\ttry\n> \t\t{\n> \t\t\tObject o = objectTypes.get(type);\n> \n> \t\t\t// If o is null, then the type is unknown, so check to see if type\n> \t\t\t// is an actual table name. 
If it is, see if a Class is known that\n> \t\t\t// can handle it\n> \t\t\tif (o == null)\n> \t\t\t{\n> \t\t\t\tSerialize ser = new Serialize(this, type);\n> \t\t\t\tobjectTypes.put(type, ser);\n> \t\t\t\treturn ser.fetch(Integer.parseInt(value));\n> \t\t\t}\n> \n> \t\t\t// If o is not null, and it is a String, then it's a class name that\n> \t\t\t// extends PGobject.\n> \t\t\t//\n> \t\t\t// This is used to implement the org.postgresql unique types (like lseg,\n> \t\t\t// point, etc).\n> \t\t\tif (o instanceof String)\n> \t\t\t{\n> \t\t\t\t// 6.3 style extending PG_Object\n> \t\t\t\tPGobject obj = null;\n> \t\t\t\tobj = (PGobject)(Class.forName((String)o).newInstance());\n> \t\t\t\tobj.setType(type);\n> \t\t\t\tobj.setValue(value);\n> \t\t\t\treturn (Object)obj;\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n> \t\t\t\t// If it's an object, it should be an instance of our Serialize class\n> \t\t\t\t// If so, then call its fetch method.\n> \t\t\t\tif (o instanceof Serialize)\n> \t\t\t\t\treturn ((Serialize)o).fetch(Integer.parseInt(value));\n> \t\t\t}\n> \t\t}\n> \t\tcatch (SQLException sx)\n> \t\t{\n> \t\t\t// rethrow the exception. Done because we capture any others next\n> \t\t\tsx.fillInStackTrace();\n> \t\t\tthrow sx;\n> \t\t}\n> \t\tcatch (Exception ex)\n> \t\t{\n> \t\t\tthrow new PSQLException(\"postgresql.con.creobj\", type, ex);\n> \t\t}\n> \n> \t\t// should never be reached\n> \t\treturn null;\n> \t}\n> \n> \t/**\n> \t * This stores an object into the database.\n> \t * @param o Object to store\n> \t * @return OID of the new record\n> \t * @exception SQLException if value is not correct for this type\n> \t * @see org.postgresql.util.Serialize\n> \t */\n> \tpublic int putObject(Object o) throws SQLException\n> \t{\n> \t\ttry\n> \t\t{\n> \t\t\tString type = o.getClass().getName();\n> \t\t\tObject x = objectTypes.get(type);\n> \n> \t\t\t// If x is null, then the type is unknown, so check to see if type\n> \t\t\t// is an actual table name. 
If it does, see if a Class is known that\n> \t\t\t// can handle it\n> \t\t\tif (x == null)\n> \t\t\t{\n> \t\t\t\tSerialize ser = new Serialize(this, type);\n> \t\t\t\tobjectTypes.put(type, ser);\n> \t\t\t\treturn ser.store(o);\n> \t\t\t}\n> \n> \t\t\t// If it's an object, it should be an instance of our Serialize class\n> \t\t\t// If so, then call it's fetch method.\n> \t\t\tif (x instanceof Serialize)\n> \t\t\t\treturn ((Serialize)x).store(o);\n> \n> \t\t\t// Thow an exception because the type is unknown\n> \t\t\tthrow new PSQLException(\"postgresql.con.strobj\");\n> \n> \t\t}\n> \t\tcatch (SQLException sx)\n> \t\t{\n> \t\t\t// rethrow the exception. Done because we capture any others next\n> \t\t\tsx.fillInStackTrace();\n> \t\t\tthrow sx;\n> \t\t}\n> \t\tcatch (Exception ex)\n> \t\t{\n> \t\t\tthrow new PSQLException(\"postgresql.con.strobjex\", ex);\n> \t\t}\n> \t}\n> \n> \t/**\n> \t * This allows client code to add a handler for one of org.postgresql's\n> \t * more unique data types.\n> \t *\n> \t * <p><b>NOTE:</b> This is not part of JDBC, but an extension.\n> \t *\n> \t * <p>The best way to use this is as follows:\n> \t *\n> \t * <p><pre>\n> \t * ...\n> \t * ((org.postgresql.Connection)myconn).addDataType(\"mytype\",\"my.class.name\");\n> \t * ...\n> \t * </pre>\n> \t *\n> \t * <p>where myconn is an open Connection to org.postgresql.\n> \t *\n> \t * <p>The handling class must extend org.postgresql.util.PGobject\n> \t *\n> \t * @see org.postgresql.util.PGobject\n> \t */\n> \tpublic void addDataType(String type, String name)\n> \t{\n> \t\tobjectTypes.put(type, name);\n> \t}\n> \n> \t// This holds the available types\n> \tprivate Hashtable objectTypes = new Hashtable();\n> \n> \t// This array contains the types that are supported as standard.\n> \t//\n> \t// The first entry is the types name on the database, the second\n> \t// the full class name of the handling class.\n> \t//\n> \tprivate static final String defaultObjectTypes[][] = {\n> \t\t\t\t{\"box\", 
\"org.postgresql.geometric.PGbox\"},\n> \t\t\t\t{\"circle\", \"org.postgresql.geometric.PGcircle\"},\n> \t\t\t\t{\"line\", \"org.postgresql.geometric.PGline\"},\n> \t\t\t\t{\"lseg\", \"org.postgresql.geometric.PGlseg\"},\n> \t\t\t\t{\"path\", \"org.postgresql.geometric.PGpath\"},\n> \t\t\t\t{\"point\", \"org.postgresql.geometric.PGpoint\"},\n> \t\t\t\t{\"polygon\", \"org.postgresql.geometric.PGpolygon\"},\n> \t\t\t\t{\"money\", \"org.postgresql.util.PGmoney\"}\n> \t\t\t};\n> \n> \t// This initialises the objectTypes hashtable\n> \tprivate void initObjectTypes()\n> \t{\n> \t\tfor (int i = 0;i < defaultObjectTypes.length;i++)\n> \t\t\tobjectTypes.put(defaultObjectTypes[i][0], defaultObjectTypes[i][1]);\n> \t}\n> \n> \t// These are required by other common classes\n> \tpublic abstract java.sql.Statement createStatement() throws SQLException;\n> \n> \t/**\n> \t * This returns a resultset. It must be overridden, so that the correct\n> \t * version (from jdbc1 or jdbc2) are returned.\n> \t */\n> \tpublic abstract java.sql.ResultSet getResultSet(org.postgresql.Connection conn, java.sql.Statement stat, Field[] fields, Vector tuples, String status, int updateCount, int insertOID, boolean binaryCursor) throws SQLException;\n> \n> \t/**\n> \t * In some cases, it is desirable to immediately release a Connection's\n> \t * database and JDBC resources instead of waiting for them to be\n> \t * automatically released (cant think why off the top of my head)\n> \t *\n> \t * <B>Note:</B> A Connection is automatically closed when it is\n> \t * garbage collected. 
Certain fatal errors also result in a closed\n> \t * connection.\n> \t *\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic void close() throws SQLException\n> \t{\n> \t\tif (pg_stream != null)\n> \t\t{\n> \t\t\ttry\n> \t\t\t{\n> \t\t\t\tpg_stream.SendChar('X');\n> \t\t\t\tpg_stream.flush();\n> \t\t\t\tpg_stream.close();\n> \t\t\t}\n> \t\t\tcatch (IOException e)\n> \t\t\t{}\n> \t\t\tpg_stream = null;\n> \t\t}\n> \t}\n> \n> \t/**\n> \t * A driver may convert the JDBC sql grammar into its system's\n> \t * native SQL grammar prior to sending it; nativeSQL returns the\n> \t * native form of the statement that the driver would have sent.\n> \t *\n> \t * @param sql a SQL statement that may contain one or more '?'\n> \t *\tparameter placeholders\n> \t * @return the native form of this statement\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic String nativeSQL(String sql) throws SQLException\n> \t{\n> \t\treturn sql;\n> \t}\n> \n> \t/**\n> \t * The first warning reported by calls on this Connection is\n> \t * returned.\n> \t *\n> \t * <B>Note:</B> Subsequent warnings will be chained to this\n> \t * SQLWarning\n> \t *\n> \t * @return the first SQLWarning or null\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic SQLWarning getWarnings() throws SQLException\n> \t{\n> \t\treturn firstWarning;\n> \t}\n> \n> \t/**\n> \t * After this call, getWarnings returns null until a new warning\n> \t * is reported for this connection.\n> \t *\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic void clearWarnings() throws SQLException\n> \t{\n> \t\tfirstWarning = null;\n> \t}\n> \n> \n> \t/**\n> \t * You can put a connection in read-only mode as a hint to enable\n> \t * database optimizations\n> \t *\n> \t * <B>Note:</B> setReadOnly cannot be called while in the middle\n> \t * of a transaction\n> \t *\n> \t * @param readOnly - true enables 
read-only mode; false disables it\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic void setReadOnly(boolean readOnly) throws SQLException\n> \t{\n> \t\tthis.readOnly = readOnly;\n> \t}\n> \n> \t/**\n> \t * Tests to see if the connection is in Read Only Mode. Note that\n> \t * we cannot really put the database in read only mode, but we pretend\n> \t * we can by returning the value of the readOnly flag\n> \t *\n> \t * @return true if the connection is read only\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic boolean isReadOnly() throws SQLException\n> \t{\n> \t\treturn readOnly;\n> \t}\n> \n> \t/**\n> \t * If a connection is in auto-commit mode, than all its SQL\n> \t * statements will be executed and committed as individual\n> \t * transactions. Otherwise, its SQL statements are grouped\n> \t * into transactions that are terminated by either commit()\n> \t * or rollback(). By default, new connections are in auto-\n> \t * commit mode. The commit occurs when the statement completes\n> \t * or the next execute occurs, whichever comes first. In the\n> \t * case of statements returning a ResultSet, the statement\n> \t * completes when the last row of the ResultSet has been retrieved\n> \t * or the ResultSet has been closed. 
In advanced cases, a single\n> \t * statement may return multiple results as well as output parameter\n> \t * values.\tHere the commit occurs when all results and output param\n> \t * values have been retrieved.\n> \t *\n> \t * @param autoCommit - true enables auto-commit; false disables it\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic void setAutoCommit(boolean autoCommit) throws SQLException\n> \t{\n> \t\tif (this.autoCommit == autoCommit)\n> \t\t\treturn ;\n> \t\tif (autoCommit)\n> \t\t\tExecSQL(\"end\");\n> \t\telse\n> \t\t{\n> \t\t\tif (haveMinimumServerVersion(\"7.1\"))\n> \t\t\t{\n> \t\t\t\tExecSQL(\"begin;\" + getIsolationLevelSQL());\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n> \t\t\t\tExecSQL(\"begin\");\n> \t\t\t\tExecSQL(getIsolationLevelSQL());\n> \t\t\t}\n> \t\t}\n> \t\tthis.autoCommit = autoCommit;\n> \t}\n> \n> \t/**\n> \t * gets the current auto-commit state\n> \t *\n> \t * @return Current state of the auto-commit mode\n> \t * @exception SQLException (why?)\n> \t * @see setAutoCommit\n> \t */\n> \tpublic boolean getAutoCommit() throws SQLException\n> \t{\n> \t\treturn this.autoCommit;\n> \t}\n> \n> \t/**\n> \t * The method commit() makes all changes made since the previous\n> \t * commit/rollback permanent and releases any database locks currently\n> \t * held by the Connection.\tThis method should only be used when\n> \t * auto-commit has been disabled. 
(If autoCommit == true, then we\n> \t * just return anyhow)\n> \t *\n> \t * @exception SQLException if a database access error occurs\n> \t * @see setAutoCommit\n> \t */\n> \tpublic void commit() throws SQLException\n> \t{\n> \t\tif (autoCommit)\n> \t\t\treturn ;\n> \t\tif (haveMinimumServerVersion(\"7.1\"))\n> \t\t{\n> \t\t\tExecSQL(\"commit;begin;\" + getIsolationLevelSQL());\n> \t\t}\n> \t\telse\n> \t\t{\n> \t\t\tExecSQL(\"commit\");\n> \t\t\tExecSQL(\"begin\");\n> \t\t\tExecSQL(getIsolationLevelSQL());\n> \t\t}\n> \t}\n> \n> \t/**\n> \t * The method rollback() drops all changes made since the previous\n> \t * commit/rollback and releases any database locks currently held by\n> \t * the Connection.\n> \t *\n> \t * @exception SQLException if a database access error occurs\n> \t * @see commit\n> \t */\n> \tpublic void rollback() throws SQLException\n> \t{\n> \t\tif (autoCommit)\n> \t\t\treturn ;\n> \t\tif (haveMinimumServerVersion(\"7.1\"))\n> \t\t{\n> \t\t\tExecSQL(\"rollback; begin;\" + getIsolationLevelSQL());\n> \t\t}\n> \t\telse\n> \t\t{\n> \t\t\tExecSQL(\"rollback\");\n> \t\t\tExecSQL(\"begin\");\n> \t\t\tExecSQL(getIsolationLevelSQL());\n> \t\t}\n> \t}\n> \n> \t/**\n> \t * Get this Connection's current transaction isolation mode.\n> \t *\n> \t * @return the current TRANSACTION_* mode value\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic int getTransactionIsolation() throws SQLException\n> \t{\n> \t\tclearWarnings();\n> \t\tExecSQL(\"show xactisolevel\");\n> \n> \t\tSQLWarning warning = getWarnings();\n> \t\tif (warning != null)\n> \t\t{\n> \t\t\tString message = warning.getMessage();\n> \t\t\tclearWarnings();\n> \t\t\tif (message.indexOf(\"READ COMMITTED\") != -1)\n> \t\t\t\treturn java.sql.Connection.TRANSACTION_READ_COMMITTED;\n> \t\t\telse if (message.indexOf(\"READ UNCOMMITTED\") != -1)\n> \t\t\t\treturn java.sql.Connection.TRANSACTION_READ_UNCOMMITTED;\n> \t\t\telse if (message.indexOf(\"REPEATABLE READ\") != 
-1)\n> \t\t\t\treturn java.sql.Connection.TRANSACTION_REPEATABLE_READ;\n> \t\t\telse if (message.indexOf(\"SERIALIZABLE\") != -1)\n> \t\t\t\treturn java.sql.Connection.TRANSACTION_SERIALIZABLE;\n> \t\t}\n> \t\treturn java.sql.Connection.TRANSACTION_READ_COMMITTED;\n> \t}\n> \n> \t/**\n> \t * You can call this method to try to change the transaction\n> \t * isolation level using one of the TRANSACTION_* values.\n> \t *\n> \t * <B>Note:</B> setTransactionIsolation cannot be called while\n> \t * in the middle of a transaction\n> \t *\n> \t * @param level one of the TRANSACTION_* isolation values with\n> \t *\tthe exception of TRANSACTION_NONE; some databases may\n> \t *\tnot support other values\n> \t * @exception SQLException if a database access error occurs\n> \t * @see java.sql.DatabaseMetaData#supportsTransactionIsolationLevel\n> \t */\n> \tpublic void setTransactionIsolation(int level) throws SQLException\n> \t{\n> \t\t//In 7.1 and later versions of the server it is possible using\n> \t\t//the \"set session\" command to set this once for all future txns\n> \t\t//however in 7.0 and prior versions it is necessary to set it in\n> \t\t//each transaction, thus adding complexity below.\n> \t\t//When we decide to drop support for servers older than 7.1\n> \t\t//this can be simplified\n> \t\tisolationLevel = level;\n> \t\tString isolationLevelSQL;\n> \n> \t\tif (!haveMinimumServerVersion(\"7.1\"))\n> \t\t{\n> \t\t\tisolationLevelSQL = getIsolationLevelSQL();\n> \t\t}\n> \t\telse\n> \t\t{\n> \t\t\tisolationLevelSQL = \"SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL \";\n> \t\t\tswitch (isolationLevel)\n> \t\t\t{\n> \t\t\tcase java.sql.Connection.TRANSACTION_READ_COMMITTED:\n> \t\t\t\tisolationLevelSQL += \"READ COMMITTED\";\n> \t\t\t\tbreak;\n> \t\t\tcase java.sql.Connection.TRANSACTION_SERIALIZABLE:\n> \t\t\t\tisolationLevelSQL += \"SERIALIZABLE\";\n> \t\t\t\tbreak;\n> \t\t\tdefault:\n> \t\t\t\tthrow new PSQLException(\"postgresql.con.isolevel\",\n> 
\t\t\t\t\t\t\t\t\t\tnew Integer(isolationLevel));\n> \t\t\t}\n> \t\t}\n> \t\tExecSQL(isolationLevelSQL);\n> \t}\n> \n> \t/**\n> \t * Helper method used by setTransactionIsolation(), commit(), rollback()\n> \t * and setAutoCommit(). This returns the SQL string needed to\n> \t * set the isolation level for a transaction. In 7.1 and later it\n> \t * is possible to set a default isolation level that applies to all\n> \t * future transactions, this method is only necesary for 7.0 and older\n> \t * servers, and should be removed when support for these older\n> \t * servers are dropped\n> \t */\n> \tprotected String getIsolationLevelSQL() throws SQLException\n> \t{\n> \t\t//7.1 and higher servers have a default specified so\n> \t\t//no additional SQL is required to set the isolation level\n> \t\tif (haveMinimumServerVersion(\"7.1\"))\n> \t\t{\n> \t\t\treturn \"\";\n> \t\t}\n> \t\tStringBuffer sb = new StringBuffer(\"SET TRANSACTION ISOLATION LEVEL\");\n> \n> \t\tswitch (isolationLevel)\n> \t\t{\n> \t\tcase java.sql.Connection.TRANSACTION_READ_COMMITTED:\n> \t\t\tsb.append(\" READ COMMITTED\");\n> \t\t\tbreak;\n> \n> \t\tcase java.sql.Connection.TRANSACTION_SERIALIZABLE:\n> \t\t\tsb.append(\" SERIALIZABLE\");\n> \t\t\tbreak;\n> \n> \t\tdefault:\n> \t\t\tthrow new PSQLException(\"postgresql.con.isolevel\", new Integer(isolationLevel));\n> \t\t}\n> \t\treturn sb.toString();\n> \t}\n> \n> \t/**\n> \t * A sub-space of this Connection's database may be selected by\n> \t * setting a catalog name.\tIf the driver does not support catalogs,\n> \t * it will silently ignore this request\n> \t *\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic void setCatalog(String catalog) throws SQLException\n> \t{\n> \t\t//no-op\n> \t}\n> \n> \t/**\n> \t * Return the connections current catalog name, or null if no\n> \t * catalog name is set, or we dont support catalogs.\n> \t *\n> \t * @return the current catalog name or null\n> \t * @exception SQLException 
if a database access error occurs\n> \t */\n> \tpublic String getCatalog() throws SQLException\n> \t{\n> \t\treturn PG_DATABASE;\n> \t}\n> \n> \t/**\n> \t * Overrides finalize(). If called, it closes the connection.\n> \t *\n> \t * This was done at the request of Rachel Greenham\n> \t * <rachel@enlarion.demon.co.uk> who hit a problem where multiple\n> \t * clients didn't close the connection, and once a fortnight enough\n> \t * clients were open to kill the org.postgres server.\n> \t */\n> \tpublic void finalize() throws Throwable\n> \t{\n> \t\tclose();\n> \t}\n> \n> \tprivate static String extractVersionNumber(String fullVersionString)\n> \t{\n> \t\tStringTokenizer versionParts = new StringTokenizer(fullVersionString);\n> \t\tversionParts.nextToken(); /* \"PostgreSQL\" */\n> \t\treturn versionParts.nextToken(); /* \"X.Y.Z\" */\n> \t}\n> \n> \t/**\n> \t * Get server version number\n> \t */\n> \tpublic String getDBVersionNumber()\n> \t{\n> \t\treturn dbVersionNumber;\n> \t}\n> \n> \tpublic boolean haveMinimumServerVersion(String ver) throws SQLException\n> \t{\n> \t\treturn (getDBVersionNumber().compareTo(ver) >= 0);\n> \t}\n> \n> \t/**\n> \t * This method returns true if the compatible level set in the connection\n> \t * (which can be passed into the connection or specified in the URL)\n> \t * is at least the value passed to this method. This is used to toggle\n> \t * between different functionality as it changes across different releases\n> \t * of the jdbc driver code. The values here are versions of the jdbc client\n> \t * and not server versions. 
For example in 7.1 get/setBytes worked on\n> \t * LargeObject values, in 7.2 these methods were changed to work on bytea\n> \t * values.\tThis change in functionality could be disabled by setting the\n> \t * \"compatible\" level to be 7.1, in which case the driver will revert to\n> \t * the 7.1 functionality.\n> \t */\n> \tpublic boolean haveMinimumCompatibleVersion(String ver) throws SQLException\n> \t{\n> \t\treturn (compatible.compareTo(ver) >= 0);\n> \t}\n> \n> \n> \t/**\n> \t * This returns the java.sql.Types type for a PG type oid\n> \t *\n> \t * @param oid PostgreSQL type oid\n> \t * @return the java.sql.Types type\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic int getSQLType(int oid) throws SQLException\n> \t{\n> \t\tInteger sqlType = (Integer)typeOidCache.get(new Integer(oid));\n> \n> \t\t// it's not in the cache, so perform a query, and add the result to the cache\n> \t\tif (sqlType == null)\n> \t\t{\n> \t\t\tResultSet result = (org.postgresql.ResultSet)ExecSQL(\"select typname from pg_type where oid = \" + oid);\n> \t\t\tif (result.getColumnCount() != 1 || result.getTupleCount() != 1)\n> \t\t\t\tthrow new PSQLException(\"postgresql.unexpected\");\n> \t\t\tresult.next();\n> \t\t\tString pgType = result.getString(1);\n> \t\t\tInteger iOid = new Integer(oid);\n> \t\t\tsqlType = new Integer(getSQLType(result.getString(1)));\n> \t\t\tsqlTypeCache.put(iOid, sqlType);\n> \t\t\tpgTypeCache.put(iOid, pgType);\n> \t\t\tresult.close();\n> \t\t}\n> \n> \t\treturn sqlType.intValue();\n> \t}\n> \n> \t/**\n> \t * This returns the java.sql.Types type for a PG type\n> \t *\n> \t * @param pgTypeName PostgreSQL type name\n> \t * @return the java.sql.Types type\n> \t */\n> \tpublic abstract int getSQLType(String pgTypeName);\n> \n> \t/**\n> \t * This returns the oid for a given PG data type\n> \t * @param typeName PostgreSQL type name\n> \t * @return PostgreSQL oid value for a field of this type\n> \t */\n> \tpublic int getOID(String 
typeName) throws SQLException\n> \t{\n> \t\tint oid = -1;\n> \t\tif (typeName != null)\n> \t\t{\n> \t\t\tInteger oidValue = (Integer) typeOidCache.get(typeName);\n> \t\t\tif (oidValue != null)\n> \t\t\t{\n> \t\t\t\toid = oidValue.intValue();\n> \t\t\t}\n> \t\t\telse\n> \t\t\t{\n> \t\t\t\t// it's not in the cache, so perform a query, and add the result to the cache\n> \t\t\t\tResultSet result = (org.postgresql.ResultSet)ExecSQL(\"select oid from pg_type where typname='\"\n> \t\t\t\t\t\t\t\t + typeName + \"'\");\n> \t\t\t\tif (result.getColumnCount() != 1 || result.getTupleCount() != 1)\n> \t\t\t\t\tthrow new PSQLException(\"postgresql.unexpected\");\n> \t\t\t\tresult.next();\n> \t\t\t\toid = Integer.parseInt(result.getString(1));\n> \t\t\t\ttypeOidCache.put(typeName, new Integer(oid));\n> \t\t\t\tresult.close();\n> \t\t\t}\n> \t\t}\n> \t\treturn oid;\n> \t}\n> \n> \t/**\n> \t * We also need to get the PG type name as returned by the back end.\n> \t *\n> \t * @return the String representation of the type of this field\n> \t * @exception SQLException if a database access error occurs\n> \t */\n> \tpublic String getPGType(int oid) throws SQLException\n> \t{\n> \t\tString pgType = (String) pgTypeCache.get(new Integer(oid));\n> \t\tif (pgType == null)\n> \t\t{\n> \t\t\tgetSQLType(oid);\n> \t\t\tpgType = (String) pgTypeCache.get(new Integer(oid));\n> \t\t}\n> \t\treturn pgType;\n> \t}\n> \n> }\n> \n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n\n",
"msg_date": "Wed, 07 Nov 2001 20:26:49 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "> On Wed, Nov 07, 2001 at 12:27:53AM -0500, Bruce Momjian wrote:\n> > > Hey folks,\n> > > \n> > > I don't see MD5-based password code in the JDBC CVS tree. Is anyone\n> > > working on this?\n> > > \n> > > I'll take a stab, if not.\n> > \n> > There is no one working on it. ODBC needs it too. It wasn't on the\n> > TODO list but I just added it.\n> > \n> > I can assist with any questions. See libpq for a sample implementation.\n> \n> OK, how about this? Someone will have to help me with appropriate exception\n> behavior and where the bytesToHex util is placed.\n> \n> I'm not clear on the SendInteger(5 + .. code, seen elsewhere. Why isn't\n> this (4 + ...?\n\nOK, now that we have this, what do people want to do with it, 7.2 or\n7.3? It is a feature addition to JDBC to allow MD5 encryption.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Nov 2001 12:37:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nI'd vote to pop it into 7.2 if we can.\n\nOn 08-Nov-2001 Bruce Momjian wrote:\n> OK, now that we have this, what do people want to do with it, 7.2 or\n> 7.3? It is a feature addition to JDBC to allow MD5 encryption.\n\n\nVirtually, \nNed Wolpert <ned.wolpert@knowledgenet.com>\n\nD08C2F45: 28E7 56CB 58AC C622 5A51 3C42 8B2B 2739 D08C 2F45 \n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.6 (GNU/Linux)\nComment: For info see http://www.gnupg.org\n\niD8DBQE76sfsiysnOdCML0URAn0PAJ0cI7Zx7x5j/xZ/kIuym7RiZD5eqgCdE6F4\n8jqZ4Kvux0v5s5jKsYpMuPo=\n=Y32n\n-----END PGP SIGNATURE-----\n",
"msg_date": "Thu, 08 Nov 2001 10:59:08 -0700 (MST)",
"msg_from": "Ned Wolpert <ned.wolpert@knowledgenet.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MD5-based passwords"
},
{
"msg_contents": "On Wed, Nov 07, 2001 at 08:26:49PM -0800, Barry Lind wrote:\n> I think I would recommend moving most of this logic into a new class \n> under org.postgresql.util, called MD5.java?? (that is where the \n> UnixCrypt class is located).\n> \n> [...]\n\nOK. Attached is an abstracted version, with your cast fix.\n\nAppropriate exception handling is still open. As is indenting.\n\n> PS. When sending diffs please use context diff format (i.e.-c). It \n> makes it much easier to review.\n\nHow about unified? :)\n\n-jeremy\n_____________________________________________________________________\njeremy wohl ..: http://igmus.org",
"msg_date": "Thu, 8 Nov 2001 10:31:31 -0800",
"msg_from": "Jeremy Wohl <jeremyw-pgjdbc@igmus.org>",
"msg_from_op": true,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "We have quite a few code changes which I am unsure about at this point.\nThere are quite a few contributions which are coming in right now. I am\nunclear as to how to handle the 7.2/7.3 thing\n\nI don't want to lose momentum, but I also don't want to put out buggy\ncode?\n\nA number of the contributions are implementations of the driver which\ndidn't exist before. Since none of the core code is being modified, I am\ntending towards adding them as they come in?\n\nThe contributions that I am aware of include\n\nMD5 passwords\nExported/Imported keys\nFixes to getTables\n\nSuggestions?\n\n\n\nDave\n\n-----Original Message-----\nFrom: pgsql-jdbc-owner@postgresql.org\n[mailto:pgsql-jdbc-owner@postgresql.org] On Behalf Of Bruce Momjian\nSent: November 8, 2001 12:38 PM\nTo: Jeremy Wohl\nCc: pgsql-jdbc@postgresql.org; PostgreSQL-development\nSubject: Re: [JDBC] MD5-based passwords\n\n\n> On Wed, Nov 07, 2001 at 12:27:53AM -0500, Bruce Momjian wrote:\n> > > Hey folks,\n> > > \n> > > I don't see MD5-based password code in the JDBC CVS tree. Is\nanyone\n> > > working on this?\n> > > \n> > > I'll take a stab, if not.\n> > \n> > There is no one working on it. ODBC needs it too. It wasn't on the\n> > TODO list but I just added it.\n> > \n> > I can assist with any questions. See libpq for a sample\nimplementation.\n> \n> OK, how about this? Someone will have to help me with appropriate\nexception\n> behavior and where the bytesToHex util is placed.\n> \n> I'm not clear on the SendInteger(5 + .. code, seen elsewhere. Why\nisn't\n> this (4 + ...?\n\nOK, now that we have this, what do people want to do with it, 7.2 or\n7.3? It is a feature addition to JDBC to allow MD5 encryption.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania\n19026\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n",
"msg_date": "Thu, 8 Nov 2001 14:06:21 -0500",
"msg_from": "\"Dave Cramer\" <Dave@micro-automation.net>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "> We have quite a few code changes which I am unsure about at this point.\n> There are quite a few contributions which are coming in right now. I am\n> unclear as to how to handle the 7.2/7.3 thing\n\nIt is always a tough decision.\n\n> I don't want to lose momentum, but I also don't want to put out buggy\n> code?\n\nYep, that is the tradeoff. At this point we usually only add bug fixes\nor features that many people really need.\n\n> \n> A number of the contributions are implementations of the driver which\n> didn't exist before. Since none of the core code is being modified, I am\n> tending towards adding them as they com in?\n> \n> The contributions that I am aware of include\n> \n> MD5 passwords\n> Exported/Imported keys\n> Fixes to getTables\n\nThe decision is up to you guys now. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 8 Nov 2001 14:30:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "Well, if we're talking about 7.2 versus 7.3, I'd rather see them in the\n7.2 release. If, however, we're talking about 7.2 version 7.2.x, then \nwe may want to wait until 7.2.x.\n\n(For things like the Pooling and jdbc2 compliance stuff I'm working on, I\nsee that as 7.3 either way. But these things are more basic.)\n\n--- Dave Cramer <Dave@micro-automation.net> wrote:\n> We have quite a few code changes which I am unsure about at this point.\n> There are quite a few contributions which are coming in right now. I am\n> unclear as to how to handle the 7.2/7.3 thing\n> \n> I don't want to lose momentum, but I also don't want to put out buggy\n> code?\n> \n> A number of the contributions are implementations of the driver which\n> didn't exist before. Since none of the core code is being modified, I am\n> tending towards adding them as they com in?\n> \n> The contributions that I am aware of include\n> \n> MD5 passwords\n> Exported/Imported keys\n> Fixes to getTables\n> \n> Suggestions?\n\n\n=====\nVirtually, | \"Must you shout too?\" \nNed Wolpert | -Dante\nwolpert@yahoo.com | \n_________________/ \"Who watches the watchmen?\"\n4e75 -Juvenal, 120 AD\n\n-- Place your commercial here -- fnord\n\n__________________________________________________\nDo You Yahoo!?\nFind a job, post your resume.\nhttp://careers.yahoo.com\n",
"msg_date": "Thu, 8 Nov 2001 11:50:21 -0800 (PST)",
"msg_from": "Ned Wolpert <wolpert@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "Jeremy,\n\nThe revised patch looks fine to me. I will apply it tomorrow (unless \nsomeone comes along objecting to the inclusion of this patch).\n\nthanks,\n--Barry\n\n\nJeremy Wohl wrote:\n\n> On Wed, Nov 07, 2001 at 08:26:49PM -0800, Barry Lind wrote:\n> \n>>I think I would recommend moving most of this logic into a new class \n>>under org.postgresql.util, called MD5.java?? (that is where the \n>>UnixCrypt class is located).\n>>\n>>[...]\n>>\n> \n> OK. Attached is an abstracted version, with your cast fix.\n> \n> Appropriate exception handling is still open. As is indenting.\n> \n> \n>>PS. When sending diffs please use context diff format (i.e.-c). It \n>>makes it much easier to review.\n>>\n> \n> How about unified? :)\n> \n> -jeremy\n> _____________________________________________________________________\n> jeremy wohl ..: http://igmus.org\n> \n> \n> ------------------------------------------------------------------------\n> \n> --- Connection.java.old\tWed Nov 7 11:19:11 2001\n> +++ Connection.java\tThu Nov 8 10:20:13 2001\n> @@ -63,6 +63,7 @@\n> \tprivate static final int AUTH_REQ_KRB5 = 2;\n> \tprivate static final int AUTH_REQ_PASSWORD = 3;\n> \tprivate static final int AUTH_REQ_CRYPT = 4;\n> + private static final int AUTH_REQ_MD5 = 5;\n> \n> \t// New for 6.3, salt value for crypt authorisation\n> \tprivate String salt;\n> @@ -180,22 +181,34 @@\n> \t\t\t\t\t// Get the type of request\n> \t\t\t\t\tareq = pg_stream.ReceiveIntegerR(4);\n> \n> -\t\t\t\t\t// Get the password salt if there is one\n> +\t\t\t\t\t// Get the crypt password salt if there is one\n> \t\t\t\t\tif (areq == AUTH_REQ_CRYPT)\n> \t\t\t\t\t{\n> \t\t\t\t\t\tbyte[] rst = new byte[2];\n> \t\t\t\t\t\trst[0] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\trst[1] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\tsalt = new String(rst, 0, 2);\n> -\t\t\t\t\t\tDriverManager.println(\"Salt=\" + salt);\n> +\t\t\t\t\t\tDriverManager.println(\"Crypt salt=\" + salt);\n> +\t\t\t\t\t}\n> +\n> 
+\t\t\t\t\t// Or get the md5 password salt if there is one\n> +\t\t\t\t\tif (areq == AUTH_REQ_MD5)\n> +\t\t\t\t\t{\n> +\t\t\t\t\t\tbyte[] rst = new byte[4];\n> +\t\t\t\t\t\trst[0] = (byte)pg_stream.ReceiveChar();\n> +\t\t\t\t\t\trst[1] = (byte)pg_stream.ReceiveChar();\n> +\t\t\t\t\t\trst[2] = (byte)pg_stream.ReceiveChar();\n> +\t\t\t\t\t\trst[3] = (byte)pg_stream.ReceiveChar();\n> +\t\t\t\t\t\tsalt = new String(rst, 0, 4);\n> +\t\t\t\t\t\tDriverManager.println(\"MD5 salt=\" + salt);\n> \t\t\t\t\t}\n> \n> \t\t\t\t\t// now send the auth packet\n> \t\t\t\t\tswitch (areq)\n> \t\t\t\t\t{\n> \t\t\t\t\tcase AUTH_REQ_OK:\n> -\t\t\t\t\t\tbreak;\n> -\n> +\t\t\t\t\t break;\n> +\t\t\t\t\t\t\n> \t\t\t\t\tcase AUTH_REQ_KRB4:\n> \t\t\t\t\t\tDriverManager.println(\"postgresql: KRB4\");\n> \t\t\t\t\t\tthrow new PSQLException(\"postgresql.con.kerb4\");\n> @@ -217,6 +230,15 @@\n> \t\t\t\t\t\tString crypted = UnixCrypt.crypt(salt, PG_PASSWORD);\n> \t\t\t\t\t\tpg_stream.SendInteger(5 + crypted.length(), 4);\n> \t\t\t\t\t\tpg_stream.Send(crypted.getBytes());\n> +\t\t\t\t\t\tpg_stream.SendInteger(0, 1);\n> +\t\t\t\t\t\tpg_stream.flush();\n> +\t\t\t\t\t\tbreak;\n> +\n> +\t\t\t\t\tcase AUTH_REQ_MD5:\n> +\t\t\t\t\t DriverManager.println(\"postgresql: MD5\");\n> +\t\t\t\t\t\tbyte[] digest = MD5Digest.encode(PG_USER, PG_PASSWORD, salt);\n> +\t\t\t\t\t\tpg_stream.SendInteger(5 + digest.length, 4);\n> +\t\t\t\t\t\tpg_stream.Send(digest);\n> \t\t\t\t\t\tpg_stream.SendInteger(0, 1);\n> \t\t\t\t\t\tpg_stream.flush();\n> \t\t\t\t\t\tbreak;\n> \n> \n> ------------------------------------------------------------------------\n> \n> package org.postgresql.util;\n> \n> /**\n> * MD5-based utility function to obfuscate passwords before network transmission\n> *\n> * @author Jeremy Wohl\n> *\n> */\n> \n> import java.security.*;\n> \n> public class MD5Digest\n> {\n> private MD5Digest() {}\n> \n> \n> /**\n> * Encodes user/password/salt information in the following way:\n> * MD5(MD5(password + user) + 
salt)\n> *\n> * @param user The connecting user.\n> * @param password The connecting user's password.\n> * @param salt A four-character string sent by the server.\n> *\n> * @return A 35-byte array, comprising the string \"md5\", followed by an MD5 digest.\n> */\n> public static byte[] encode(String user, String password, String salt)\n> {\n> \tMessageDigest md;\n> \tbyte[] temp_digest, pass_digest;\n> \tbyte[] hex_digest = new byte[35];\n> \t \n> \n> \ttry {\n> \t md = MessageDigest.getInstance(\"MD5\");\n> \n> \t md.update(password.getBytes());\n> \t md.update(user.getBytes());\n> \t temp_digest = md.digest();\n> \n> \t bytesToHex(temp_digest, hex_digest, 0);\n> \t md.update(hex_digest, 0, 32);\n> \t md.update(salt.getBytes());\n> \t pass_digest = md.digest();\n> \n> \t bytesToHex(pass_digest, hex_digest, 3);\n> \t hex_digest[0] = (byte) 'm'; hex_digest[1] = (byte) 'd'; hex_digest[2] = (byte) '5';\n> \t} catch (Exception e) {\n> \t ; // \"MessageDigest failure; \" + e\n> \t}\n> \n> \treturn hex_digest;\n> }\n> \n> \n> /**\n> * Turn 16-byte stream into a human-readable 32-byte hex string\n> */\n> private static void bytesToHex(byte[] bytes, byte[] hex, int offset)\n> {\n> \tfinal char lookup[] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',\n> \t\t\t\t'a', 'b', 'c', 'd', 'e', 'f' };\n> \t\n> \tint i, c, j, pos = offset;\n> \t\n> \tfor (i = 0; i < 16; i++) {\n> \t c = bytes[i] & 0xFF; j = c >> 4;\n> \t hex[pos++] = (byte) lookup[j];\n> \t j = (c & 0xF);\n> \t hex[pos++] = (byte) lookup[j];\n> \t}\n> }\n> }\n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n",
"msg_date": "Thu, 08 Nov 2001 18:15:56 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "I think we need to evaluate each patch on a case by case basis. With \nthat said, there certainly are guidelines we probably should follow:\n 1) new functionality = next release\n 2) bugfix (simple or low risk fix) = this release\n 3) bugfix (complex or high risk fix) = next release (unless it is \nreally important and gets a lot of testing and extensive code review \nthen possibly this release)\n\nIf we are going to start to see a lot of activity for jdbc (something I \nam certainly hoping for), we might decide to have a jdbc only release \nbetween 7.2 and 7.3 as the next release. We could create a branch in \nCVS for this work if necessary then release the result on the \njdbc.postgresql.org web site. I don't think this work should go into \nthe 7.2.x branch.\n\nthanks,\n--Barry\n\n\nBruce Momjian wrote:\n\n>>We have quite a few code changes which I am unsure about at this point.\n>>There are quite a few contributions which are coming in right now. I am\n>>unclear as to how to handle the 7.2/7.3 thing\n>>\n> \n> It is always a tough decision.\n> \n> \n>>I don't want to lose momentum, but I also don't want to put out buggy\n>>code?\n>>\n> \n> Yep, that is the tradeoff. At this point we usually only add bug fixes\n> or features that many people really need.\n> \n> \n>>A number of the contributions are implementations of the driver which\n>>didn't exist before. Since none of the core code is being modified, I am\n>>tending towards adding them as they com in?\n>>\n>>The contributions that I am aware of include\n>>\n>>MD5 passwords\n>>Exported/Imported keys\n>>Fixes to getTables\n>>\n> \n> The decision is up to you guys now. :-)\n> \n> \n\n\n",
"msg_date": "Thu, 08 Nov 2001 19:27:18 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] MD5-based passwords"
},
{
"msg_contents": "Dave Cramer writes:\n\n> A number of the contributions are implementations of the driver which\n> didn't exist before. Since none of the core code is being modified, I am\n> tending towards adding them as they com in?\n\nOnce you add them they become core code and you're liable for them.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 9 Nov 2001 19:07:30 +0100 (CET)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
},
{
"msg_contents": "Patch applied.\n\nthanks,\n--Barry\n\n\nJeremy Wohl wrote:\n\n> On Wed, Nov 07, 2001 at 08:26:49PM -0800, Barry Lind wrote:\n> \n>>I think I would recommend moving most of this logic into a new class \n>>under org.postgresql.util, called MD5.java?? (that is where the \n>>UnixCrypt class is located).\n>>\n>>[...]\n>>\n> \n> OK. Attached is an abstracted version, with your cast fix.\n> \n> Appropriate exception handling is still open. As is indenting.\n> \n> \n>>PS. When sending diffs please use context diff format (i.e.-c). It \n>>makes it much easier to review.\n>>\n> \n> How about unified? :)\n> \n> -jeremy\n> _____________________________________________________________________\n> jeremy wohl ..: http://igmus.org\n> \n> \n> ------------------------------------------------------------------------\n> \n> --- Connection.java.old\tWed Nov 7 11:19:11 2001\n> +++ Connection.java\tThu Nov 8 10:20:13 2001\n> @@ -63,6 +63,7 @@\n> \tprivate static final int AUTH_REQ_KRB5 = 2;\n> \tprivate static final int AUTH_REQ_PASSWORD = 3;\n> \tprivate static final int AUTH_REQ_CRYPT = 4;\n> + private static final int AUTH_REQ_MD5 = 5;\n> \n> \t// New for 6.3, salt value for crypt authorisation\n> \tprivate String salt;\n> @@ -180,22 +181,34 @@\n> \t\t\t\t\t// Get the type of request\n> \t\t\t\t\tareq = pg_stream.ReceiveIntegerR(4);\n> \n> -\t\t\t\t\t// Get the password salt if there is one\n> +\t\t\t\t\t// Get the crypt password salt if there is one\n> \t\t\t\t\tif (areq == AUTH_REQ_CRYPT)\n> \t\t\t\t\t{\n> \t\t\t\t\t\tbyte[] rst = new byte[2];\n> \t\t\t\t\t\trst[0] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\trst[1] = (byte)pg_stream.ReceiveChar();\n> \t\t\t\t\t\tsalt = new String(rst, 0, 2);\n> -\t\t\t\t\t\tDriverManager.println(\"Salt=\" + salt);\n> +\t\t\t\t\t\tDriverManager.println(\"Crypt salt=\" + salt);\n> +\t\t\t\t\t}\n> +\n> +\t\t\t\t\t// Or get the md5 password salt if there is one\n> +\t\t\t\t\tif (areq == AUTH_REQ_MD5)\n> +\t\t\t\t\t{\n> 
+\t\t\t\t\t\tbyte[] rst = new byte[4];\n> +\t\t\t\t\t\trst[0] = (byte)pg_stream.ReceiveChar();\n> +\t\t\t\t\t\trst[1] = (byte)pg_stream.ReceiveChar();\n> +\t\t\t\t\t\trst[2] = (byte)pg_stream.ReceiveChar();\n> +\t\t\t\t\t\trst[3] = (byte)pg_stream.ReceiveChar();\n> +\t\t\t\t\t\tsalt = new String(rst, 0, 4);\n> +\t\t\t\t\t\tDriverManager.println(\"MD5 salt=\" + salt);\n> \t\t\t\t\t}\n> \n> \t\t\t\t\t// now send the auth packet\n> \t\t\t\t\tswitch (areq)\n> \t\t\t\t\t{\n> \t\t\t\t\tcase AUTH_REQ_OK:\n> -\t\t\t\t\t\tbreak;\n> -\n> +\t\t\t\t\t break;\n> +\t\t\t\t\t\t\n> \t\t\t\t\tcase AUTH_REQ_KRB4:\n> \t\t\t\t\t\tDriverManager.println(\"postgresql: KRB4\");\n> \t\t\t\t\t\tthrow new PSQLException(\"postgresql.con.kerb4\");\n> @@ -217,6 +230,15 @@\n> \t\t\t\t\t\tString crypted = UnixCrypt.crypt(salt, PG_PASSWORD);\n> \t\t\t\t\t\tpg_stream.SendInteger(5 + crypted.length(), 4);\n> \t\t\t\t\t\tpg_stream.Send(crypted.getBytes());\n> +\t\t\t\t\t\tpg_stream.SendInteger(0, 1);\n> +\t\t\t\t\t\tpg_stream.flush();\n> +\t\t\t\t\t\tbreak;\n> +\n> +\t\t\t\t\tcase AUTH_REQ_MD5:\n> +\t\t\t\t\t DriverManager.println(\"postgresql: MD5\");\n> +\t\t\t\t\t\tbyte[] digest = MD5Digest.encode(PG_USER, PG_PASSWORD, salt);\n> +\t\t\t\t\t\tpg_stream.SendInteger(5 + digest.length, 4);\n> +\t\t\t\t\t\tpg_stream.Send(digest);\n> \t\t\t\t\t\tpg_stream.SendInteger(0, 1);\n> \t\t\t\t\t\tpg_stream.flush();\n> \t\t\t\t\t\tbreak;\n> \n> \n> ------------------------------------------------------------------------\n> \n> package org.postgresql.util;\n> \n> /**\n> * MD5-based utility function to obfuscate passwords before network transmission\n> *\n> * @author Jeremy Wohl\n> *\n> */\n> \n> import java.security.*;\n> \n> public class MD5Digest\n> {\n> private MD5Digest() {}\n> \n> \n> /**\n> * Encodes user/password/salt information in the following way:\n> * MD5(MD5(password + user) + salt)\n> *\n> * @param user The connecting user.\n> * @param password The connecting user's password.\n> * @param salt A 
four-character string sent by the server.\n> *\n> * @return A 35-byte array, comprising the string \"md5\", followed by an MD5 digest.\n> */\n> public static byte[] encode(String user, String password, String salt)\n> {\n> \tMessageDigest md;\n> \tbyte[] temp_digest, pass_digest;\n> \tbyte[] hex_digest = new byte[35];\n> \t \n> \n> \ttry {\n> \t md = MessageDigest.getInstance(\"MD5\");\n> \n> \t md.update(password.getBytes());\n> \t md.update(user.getBytes());\n> \t temp_digest = md.digest();\n> \n> \t bytesToHex(temp_digest, hex_digest, 0);\n> \t md.update(hex_digest, 0, 32);\n> \t md.update(salt.getBytes());\n> \t pass_digest = md.digest();\n> \n> \t bytesToHex(pass_digest, hex_digest, 3);\n> \t hex_digest[0] = (byte) 'm'; hex_digest[1] = (byte) 'd'; hex_digest[2] = (byte) '5';\n> \t} catch (Exception e) {\n> \t ; // \"MessageDigest failure; \" + e\n> \t}\n> \n> \treturn hex_digest;\n> }\n> \n> \n> /**\n> * Turn 16-byte stream into a human-readable 32-byte hex string\n> */\n> private static void bytesToHex(byte[] bytes, byte[] hex, int offset)\n> {\n> \tfinal char lookup[] = { '0', '1', '2', '3', '4', '5', '6', '7', '8', '9',\n> \t\t\t\t'a', 'b', 'c', 'd', 'e', 'f' };\n> \t\n> \tint i, c, j, pos = offset;\n> \t\n> \tfor (i = 0; i < 16; i++) {\n> \t c = bytes[i] & 0xFF; j = c >> 4;\n> \t hex[pos++] = (byte) lookup[j];\n> \t j = (c & 0xF);\n> \t hex[pos++] = (byte) lookup[j];\n> \t}\n> }\n> }\n> \n> \n> ------------------------------------------------------------------------\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n",
"msg_date": "Mon, 12 Nov 2001 11:12:47 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: MD5-based passwords"
}
] |
[
{
"msg_contents": "PostgreSQL - 7.1.3 (installed on Linux 2.4.2-2)\nPSQLODBC.DLL - 07.01.0007\nVisual C++ - 6.0\n\nI sent a previous mail with regard to using the '\\' (backslash) character in\nan SQL SELECT statement.\nThe outcome was that postgres does not escape the '\\' itself - I need to do\nit myself before submitting the SQL - fair enough, I now do this.\n\ni.e\ninstead of\n mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\me';\nI now do\n mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\\\me';\n\nBUT, if I use the LIKE predicate I have to escape the escape.\n\ni.e\n mydb=# SELECT * FROM users WHERE id LIKE 'WORKGROUP\\\\\\\\me';\n\n\nNow this must be treated as a bug.\nAs you can see it is not an error with the PSQLODBC driver as I ran the SQL\nfrom the command line with the same results.\nI am presuming that the backend parsing logic around the LIKE predicate is\nignoring the '\\'.\n\nIs anyone working on this ?. Can anyone send me a fix, as without this I'm\nscrewed.\n\nThanks for any help\n\nAndy.\nahm@exel.co.uk\n\n\n\n\n\n",
"msg_date": "Wed, 7 Nov 2001 12:56:46 -0000",
"msg_from": "\"Andy Hallam\" <ahm@exel.co.uk>",
"msg_from_op": true,
"msg_subject": "LIKE predicate and '\\' character"
},
{
"msg_contents": "> I sent a previous mail with regard to using the '\\' (backslash) character\nin\n> an SQL SELECT statement.\n> The outcome was that postgres does not escape the '\\' itself - I need to\ndo\n> it myself before submitting the SQL - fair enough, I now do this.\n\nI have just written a java app that uses a single '\\' in an SQL SELECT\nstatement and unlike my C application that uses the PSQLODBC driver this\n*DOES* return data. To me this says that the problem of having to escape the\n'\\' myself (as I have to do in my C++ ODBC application) has already been\naddressed in the Java driver, and so I do not need to escape it myself in my\nJava application.\nIf this problem has been addressed in the Java driver then surely (for\nconformity) it should also be addressed in the ODBC driver ?.\n\n\nHere is my Java code :\n...\n try {\n con = DriverManager.getConnection(url, \"postgres\",\n\"postgres\");\n }\n catch (Exception e) {\n MyOutput(e.getMessage());\n System.exit(1);\n }\n\n try {\n String strPart;\n\n strPart = \"A\\\\B\";\nMyOutput(\"strPart: <\" + strPart + \">\");\n\n strSQL = \"SELECT partdesc FROM partmaster WHERE partnum = ?\";\nMyOutput(\"SELECT SQL: <\" + strSQL + \">\");\n\n PreparedStatement pstmt = con.prepareStatement(strSQL);\n pstmt.setString(1, strPart);\n\n result = pstmt.executeQuery();\n\n while (result.next()) {\n data = result.getString(1);\nMyOutput(\"DATA FETCHED: Partdesc = <\" + result.getString(1) + \">\");\n }\n }\n catch (Exception e) {\n MyOutput(e.getMessage());\n System.exit(1);\n }\n\nHere is my program output:\n\nstrPart: <A\\B>\nSELECT SQL: <SELECT partdesc FROM partmaster WHERE partnum = ?>\nDATA FETCHED: Partdesc = <AB SLASH TEST>\n\n\nJava does have the same problem with the LIKE predicate however, as to\nreturn any data I need to change my code to :\n\n...\nstrPart = \"A\\\\\\\\B\";\n...\nstrSQL = \"SELECT partdesc FROM partmaster WHERE partnum LIKE ?\";\n\nComments please.\n\nAndy\nahm@exel.co.uk\n\n\"Andy 
Hallam\" <ahm@exel.co.uk> wrote in message\nnews:9sb3ek$r0k$1@news.tht.net...\n> PostgreSQL - 7.1.3 (installed on Linux 2.4.2-2)\n> PSQLODBC.DLL - 07.01.0007\n> Visual C++ - 6.0\n>\n> I sent a previous mail with regard to using the '\\' (backslash) character\nin\n> an SQL SELECT statement.\n> The outcome was that postgres does not escape the '\\' itself - I need to\ndo\n> it myself before submitting the SQL - fair enough, I now do this.\n>\n> i.e\n> instead of\n> mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\me';\n> I now do\n> mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\\\me';\n>\n> BUT, if I use the LIKE predicate I have to escape the escape.\n>\n> i.e\n> mydb=# SELECT * FROM users WHERE id LIKE 'WORKGROUP\\\\\\\\me';\n>\n>\n> Now this must be treated as a bug.\n> As you can see it is not an error with the PSQLODBC driver as I ran the\nSQL\n> from the command line with the same results.\n> I am presuming that the backend parsing logic around the LIKE prodicate is\n> ignoring the '\\'.\n>\n> Is anyone working on this ?. Can anyone send me a fix, as without this I'm\n> screwed.\n>\n> Thanks for any help\n>\n> Andy.\n> ahm@exel.co.uk\n>\n>\n>\n\n\n\n\n",
"msg_date": "Wed, 7 Nov 2001 12:58:00 -0000",
"msg_from": "\"Andy Hallam\" <ahm@exel.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: LIKE predicate and '\\' character"
},
{
"msg_contents": "On Wed, Nov 07, 2001 at 12:56:46PM -0000, Andy Hallam wrote:\n\n The PostgreSQL parser \"eats\" one '\\' anywhere in a query. If you\n want to pass '\\' to some function (operator) you must use '\\\\'. \n\ntest=# select '\\\\';\n?column?\n----------\n\\\n(1 row)\n\ntest=# select '\\\\\\\\';\n?column?\n----------\n\\\\\n(1 row)\n\ntest=# select 'hello\\\\pg' like 'hello\\\\pg';\n?column?\n----------\nf\n(1 row)\n\ntest=# select 'hello\\\\pg' like 'hello\\\\\\\\pg';\n?column?\n----------\n t\n (1 row)\n \n Karel\n\n\n> PostgreSQL - 7.1.3 (installed on Linux 2.4.2-2)\n> PSQLODBC.DLL - 07.01.0007\n> Visual C++ - 6.0\n> \n> I sent a previous mail with regard to using the '\\' (backslash) character in\n> an SQL SELECT statement.\n> The outcome was that postgres does not escape the '\\' itself - I need to do\n> it myself before submitting the SQL - fair enough, I now do this.\n> \n> i.e\n> instead of\n> mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\me';\n> I now do\n> mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\\\me';\n> \n> BUT, if I use the LIKE predicate I have to escape the escape.\n> \n> i.e\n> mydb=# SELECT * FROM users WHERE id LIKE 'WORKGROUP\\\\\\\\me';\n> \n> \n> Now this must be treated as a bug.\n> As you can see it is not an error with the PSQLODBC driver as I ran the SQL\n> from the command line with the same results.\n> I am presuming that the backend parsing logic around the LIKE prodicate is\n> ignoring the '\\'.\n> \n> Is anyone working on this ?. 
Can anyone send me a fix, as without this I'm\n> screwed.\n> \n> Thanks for any help\n> \n> Andy.\n> ahm@exel.co.uk\n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Wed, 7 Nov 2001 15:11:11 +0100",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: LIKE predicate and '\\' character"
},
{
"msg_contents": "\"Andy Hallam\" <ahm@exel.co.uk> writes:\n> BUT, if I use the LIKE predicate I have to escape the escape.\n> Now this must be treated as a bug.\n\nIt's not a bug, it's the defined behavior of LIKE. See\nhttp://www.ca.postgresql.org/users-lounge/docs/7.1/postgres/functions-matching.html#FUNCTIONS-LIKE\n\nYou might find it more convenient to select a different escape\ncharacter for LIKE.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Nov 2001 12:21:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: LIKE predicate and '\\' character "
},
{
"msg_contents": "On Wed, 7 Nov 2001, Andy Hallam wrote:\n\n> PostgreSQL - 7.1.3 (installed on Linux 2.4.2-2)\n> PSQLODBC.DLL - 07.01.0007\n> Visual C++ - 6.0\n>\n> I sent a previous mail with regard to using the '\\' (backslash) character in\n> an SQL SELECT statement.\n> The outcome was that postgres does not escape the '\\' itself - I need to do\n> it myself before submitting the SQL - fair enough, I now do this.\n>\n> i.e\n> instead of\n> mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\me';\n> I now do\n> mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\\\me';\n>\n> BUT, if I use the LIKE predicate I have to escape the escape.\n>\n> i.e\n> mydb=# SELECT * FROM users WHERE id LIKE 'WORKGROUP\\\\\\\\me';\n>\n>\n> Now this must be treated as a bug.\n\nPostgres *also* treats \\ as the default LIKE escape character.\nUse LIKE '<string>' ESCAPE '' (or some other character if\nyou want to use the like escaping for %, etc).\n\n\n",
"msg_date": "Wed, 7 Nov 2001 09:23:27 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: LIKE predicate and '\\' character"
},
{
"msg_contents": "Did you actually read my mail posting???\n\n> I sent a previous mail with regard to using the '\\' (backslash) character\nin\n> an SQL SELECT statement.\n> The outcome was that postgres does not escape the '\\' itself - I need to\ndo\n> it myself before submitting the SQL - fair enough, I now do this.\n\nI already know what you have just tried to explain to me.\n\nMy question is not about having to escape a '\\' character but the fact that\nthe behaviour is not consistent when using the LIKE predicate. I.e - you\nhave to escape the escape when using LIKE.\n\nAndy.\n\n\n\n\n\n\n\"Karel Zak\" <zakkr@zf.jcu.cz> wrote in message\nnews:20011107151110.C6354@zf.jcu.cz...\n> On Wed, Nov 07, 2001 at 12:56:46PM -0000, Andy Hallam wrote:\n>\n> The PostgreSQL parser \"eat\" one '\\' on arbitrary place in query. You\n> you want put '\\' to some function (operator) you must use '\\\\'.\n>\n> test=# select '\\\\';\n> ?column?\n> ----------\n> \\\n> (1 row)\n>\n> test=# select '\\\\\\\\';\n> ?column?\n> ----------\n> \\\\\n> (1 row)\n>\n> test=# select 'hello\\\\pg' like 'hello\\\\pg';\n> ?column?\n> ----------\n> f\n> (1 row)\n>\n> test=# select 'hello\\\\pg' like 'hello\\\\\\\\pg';\n> ?column?\n> ----------\n> t\n> (1 row)\n>\n> Karel\n>\n>\n> > PostgreSQL - 7.1.3 (installed on Linux 2.4.2-2)\n> > PSQLODBC.DLL - 07.01.0007\n> > Visual C++ - 6.0\n> >\n> > I sent a previous mail with regard to using the '\\' (backslash)\ncharacter in\n> > an SQL SELECT statement.\n> > The outcome was that postgres does not escape the '\\' itself - I need to\ndo\n> > it myself before submitting the SQL - fair enough, I now do this.\n> >\n> > i.e\n> > instead of\n> > mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\me';\n> > I now do\n> > mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\\\me';\n> >\n> > BUT, if I use the LIKE predicate I have to escape the escape.\n> >\n> > i.e\n> > mydb=# SELECT * FROM users WHERE id LIKE 'WORKGROUP\\\\\\\\me';\n> >\n> >\n> > Now this must be 
treated as a bug.\n> > As you can see it is not an error with the PSQLODBC driver as I ran the\nSQL\n> > from the command line with the same results.\n> > I am presuming that the backend parsing logic around the LIKE prodicate\nis\n> > ignoring the '\\'.\n> >\n> > Is anyone working on this ?. Can anyone send me a fix, as without this\nI'm\n> > screwed.\n> >\n> > Thanks for any help\n> >\n> > Andy.\n> > ahm@exel.co.uk\n> >\n> >\n> >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n> --\n> Karel Zak <zakkr@zf.jcu.cz>\n> http://home.zf.jcu.cz/~zakkr/\n>\n> C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n",
"msg_date": "Thu, 8 Nov 2001 14:10:09 -0000",
"msg_from": "\"Andy Hallam\" <ahm@exel.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: LIKE predicate and '\\' character"
},
{
"msg_contents": "I have since found out that Postgres treats \\ as the default LIKE escape\ncharacter. (Courtesy Stephan Szabo)\n\nI will learn to live with this 'feature'.\n\nThanks.\n\nAndy\n\n\"Andy Hallam\" <ahm@exel.co.uk> wrote in message\nnews:9se3qn$1dju$1@news.tht.net...\n> Did you actually read my mail posting???\n>\n> > I sent a previous mail with regard to using the '\\' (backslash)\ncharacter\n> in\n> > an SQL SELECT statement.\n> > The outcome was that postgres does not escape the '\\' itself - I need to\n> do\n> > it myself before submitting the SQL - fair enough, I now do this.\n>\n> I already know what you have just tried to explain to me.\n>\n> My question is not about having to escape a '\\' character but the fact\nthat\n> the behaviour is not consistent when using the LIKE predicate. I.e - you\n> have to escape the escape when using LIKE.\n>\n> Andy.\n>\n>\n>\n>\n>\n>\n> \"Karel Zak\" <zakkr@zf.jcu.cz> wrote in message\n> news:20011107151110.C6354@zf.jcu.cz...\n> > On Wed, Nov 07, 2001 at 12:56:46PM -0000, Andy Hallam wrote:\n> >\n> > The PostgreSQL parser \"eat\" one '\\' on arbitrary place in query. 
You\n> > you want put '\\' to some function (operator) you must use '\\\\'.\n> >\n> > test=# select '\\\\';\n> > ?column?\n> > ----------\n> > \\\n> > (1 row)\n> >\n> > test=# select '\\\\\\\\';\n> > ?column?\n> > ----------\n> > \\\\\n> > (1 row)\n> >\n> > test=# select 'hello\\\\pg' like 'hello\\\\pg';\n> > ?column?\n> > ----------\n> > f\n> > (1 row)\n> >\n> > test=# select 'hello\\\\pg' like 'hello\\\\\\\\pg';\n> > ?column?\n> > ----------\n> > t\n> > (1 row)\n> >\n> > Karel\n> >\n> >\n> > > PostgreSQL - 7.1.3 (installed on Linux 2.4.2-2)\n> > > PSQLODBC.DLL - 07.01.0007\n> > > Visual C++ - 6.0\n> > >\n> > > I sent a previous mail with regard to using the '\\' (backslash)\n> character in\n> > > an SQL SELECT statement.\n> > > The outcome was that postgres does not escape the '\\' itself - I need\nto\n> do\n> > > it myself before submitting the SQL - fair enough, I now do this.\n> > >\n> > > i.e\n> > > instead of\n> > > mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\me';\n> > > I now do\n> > > mydb=# SELECT * FROM users WHERE id = 'WORKGROUP\\\\me';\n> > >\n> > > BUT, if I use the LIKE predicate I have to escape the escape.\n> > >\n> > > i.e\n> > > mydb=# SELECT * FROM users WHERE id LIKE 'WORKGROUP\\\\\\\\me';\n> > >\n> > >\n> > > Now this must be treated as a bug.\n> > > As you can see it is not an error with the PSQLODBC driver as I ran\nthe\n> SQL\n> > > from the command line with the same results.\n> > > I am presuming that the backend parsing logic around the LIKE\nprodicate\n> is\n> > > ignoring the '\\'.\n> > >\n> > > Is anyone working on this ?. 
Can anyone send me a fix, as without this\n> I'm\n> > > screwed.\n> > >\n> > > Thanks for any help\n> > >\n> > > Andy.\n> > > ahm@exel.co.uk\n> > >\n> > >\n> > >\n> > >\n> > >\n> > >\n> > > ---------------------------(end of\nbroadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n> >\n> > --\n> > Karel Zak <zakkr@zf.jcu.cz>\n> > http://home.zf.jcu.cz/~zakkr/\n> >\n> > C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n>\n\n\n",
"msg_date": "Thu, 8 Nov 2001 14:11:56 -0000",
"msg_from": "\"Andy Hallam\" <ahm@exel.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: LIKE predicate and '\\' character"
}
]
[
{
"msg_contents": "Hi-\n\nI'm on version 7.1, and I'm getting this error when attempting to select\nfrom a view:\n\nRIGHT JOIN is only supported with mergejoinable join conditions\n\nI don't understand what this error is telling me...\n\nThe script I have used to create the view is below pasted in below.\nEssentially, I have a main table which I want to see every row from. I also\nhave two separate lookup tables that I want to get a description field from\n*if* there is a matching code in a corresponding nullable field in the main\ntable. I tried pasting this into MSAccess, and it works fine there. (I know\nthis doesn't necessarily mean it is valid SQL <grin>.)\n\nMy questions are:\n1)Have I done something wrong here, or am I hitting a limitation of\nPostgreSQL?\n2)In either case, how could I re-write this to make it work with PostgreSQL?\n\nThanks!\n\n-Nick\n\ncreate view demo as\n select\n case_data.case_id,\n case_disposition_code.case_disp_global_desc,\n local_case_type.global_case_type_desc\n from\n local_case_type\n right join\n (\n case_disposition_code\n right join\n case_data\n on\n case_disposition_code.case_disp_local_code =\n case_data.case_disp_local_code\n )\n on\n (\n local_case_type.court_id =\n case_data.court_id\n )\n and\n (\n local_case_type.local_case_subtype_code =\n case_data.local_case_type_code\n )\n and\n (\n local_case_type.local_case_subtype_code =\n case_data.local_case_subtype_code\n );\n\n\n--------------------------------------------------------------------------\nNick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax 1.765.962.9788\nRay Ontko & Co. Software Consulting Services http://www.ontko.com/\n\n",
"msg_date": "Wed, 7 Nov 2001 11:16:38 -0500",
"msg_from": "\"Nick Fankhauser\" <nickf@ontko.com>",
"msg_from_op": true,
"msg_subject": "RIGHT JOIN is only supported with mergejoinable join conditions"
},
{
"msg_contents": "Nick,\n\n> RIGHT JOIN is only supported with mergejoinable join conditions\n\nWoof! Talk about destruction testing. You have ... let's see ... a\nthree-column right join on two right-joined tables. If you have\nuncovered a bug, I wouldn't be surprised. \n\nHowever, are you sure you want RIGHT OUTER JOINS and not LEFT? Try\nre-organizing the query as LEFT JOINS, and see if it works. \n\ncreate view demo as\n      select\n        case_data.case_id,\n        case_disposition_code.case_disp_global_desc,\n        local_case_type.global_case_type_desc\n      from\n        ( case_data\n          left join\n case_disposition_code\n          on\n          case_data.case_disp_local_code =\n          case_disposition_code.case_disp_local_code \n        )\n LEFT JOIN local_case_type ON\n        ((\n          local_case_type.court_id =\n          case_data.court_id\n        )\n        and\n        (\n          local_case_type.local_case_subtype_code =\n          case_data.local_case_type_code\n        )\n        and\n        (\n          local_case_type.local_case_subtype_code =\n          case_data.local_case_subtype_code\n        ));\n\nIf that doesn't work, try making the case_data and case_disposition_code\njoin into a subselect.\n\n-Josh \n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n",
"msg_date": "Wed, 07 Nov 2001 08:49:10 -0800",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: RIGHT JOIN is only supported with mergejoinable join"
},
{
"msg_contents": "\"Nick Fankhauser\" <nickf@ontko.com> writes:\n> I'm on version 7.1, and I'm getting this error when attempting to select\n> from a view:\n> RIGHT JOIN is only supported with mergejoinable join conditions\n\nWhat are the datatypes of the columns you're using?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Nov 2001 12:38:43 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RIGHT JOIN is only supported with mergejoinable join conditions "
},
{
"msg_contents": "They are all varchar.\n-Nick\n\n> > RIGHT JOIN is only supported with mergejoinable join conditions\n> \n> What are the datatypes of the columns you're using?\n> \n> \t\t\tregards, tom lane\n> \n",
"msg_date": "Wed, 7 Nov 2001 13:29:44 -0500",
"msg_from": "\"Nick Fankhauser\" <nickf@ontko.com>",
"msg_from_op": true,
"msg_subject": "Re: RIGHT JOIN is only supported with mergejoinable join conditions "
},
{
"msg_contents": "\"Nick Fankhauser\" <nickf@ontko.com> writes:\n> and\n> (\n> local_case_type.local_case_subtype_code =\n> case_data.local_case_type_code\n> )\n\nDid you actually mean to match local_case_subtype_code against\nlocal_case_type_code, or is that a typo?\n\nI believe you have uncovered a planner bug, but the bug may be triggered\nby the partial overlap of this join condition with the next one.\nAssuming that it's a typo, you may find that you avoid the problem by\nfixing the typo.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Nov 2001 14:06:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RIGHT JOIN is only supported with mergejoinable join conditions "
},
{
"msg_contents": "Tom, Josh:\n\nThanks for the ideas! Tom's idea was the easiest to test, so I tried it\nfirst, and it worked! As you surmised, there was a typo, so I removed the\nextra \"sub\".\n\nI agree that this still may be a bug. These tables have been migrated\nforward from an older postgresql version & hence have no primary or foreign\nkey constraints that might tip off the planner about my typo - as far as the\ndatabase knows, these are just two varchar fields in separate tables. Your\nthought about the overlap causing the problem seems likely since this seems\nto be a valid query, even with the typo.\n\nAt any rate, my immediate problem is solved & I'm a happy camper!\n\nThanks.\n\n-Nick\n\n--------------------------------------------------------------------------\nNick Fankhauser nickf@ontko.com Phone 1.765.935.4283 Fax 1.765.962.9788\nRay Ontko & Co. Software Consulting Services http://www.ontko.com/\n\n> -----Original Message-----\n> From: pgsql-sql-owner@postgresql.org\n> [mailto:pgsql-sql-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Wednesday, November 07, 2001 2:06 PM\n> To: nickf@ontko.com\n> Cc: PGSQL-SQL\n> Subject: Re: [SQL] RIGHT JOIN is only supported with mergejoinable join\n> conditions\n>\n>\n> \"Nick Fankhauser\" <nickf@ontko.com> writes:\n> > and\n> > (\n> > local_case_type.local_case_subtype_code =\n> > case_data.local_case_type_code\n> > )\n>\n> Did you actually mean to match local_case_subtype_code against\n> local_case_type_code, or is that a typo?\n>\n> I believe you have uncovered a planner bug, but the bug may be triggered\n> by the partial overlap of this join condition with the next one.\n> Assuming that it's a typo, you may find that you avoid the problem by\n> fixing the typo.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to 
majordomo@postgresql.org)\n>\n\n",
"msg_date": "Wed, 7 Nov 2001 17:22:13 -0500",
"msg_from": "\"Nick Fankhauser\" <nickf@ontko.com>",
"msg_from_op": true,
"msg_subject": "Re: RIGHT JOIN is only supported with mergejoinable join conditions "
},
{
"msg_contents": "\"Nick Fankhauser\" <nickf@ontko.com> writes:\n> I agree that this still may be a bug.\n\nIt definitely is a bug --- we fixed a similar problem around 7.1.1 or\nso, but this test case appears to expose a different variant of the\nmistake. (The planner is generating a plan that the executor can't\nhandle; it's supposed to know not to do that.)\n\nI think I know where to fix it, but am not confident enough in\nmy powers of analysis today to want to actually commit anything.\n(I've had a bad head-cold all week and am still unable to do anything\nthat requires more than a few minutes of sustained thought :-()\nWill get back on it as soon as I feel better...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Nov 2001 17:48:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RIGHT JOIN is only supported with mergejoinable join conditions "
},
{
"msg_contents": "\"Nick Fankhauser\" <nickf@ontko.com> writes:\n> I'm on version 7.1, and I'm getting this error when attempting to select\n> from a view:\n> RIGHT JOIN is only supported with mergejoinable join conditions\n\nI have committed a fix for this problem --- of the three routines that\ncan generate mergejoin plans, only two were checking to ensure they'd\ngenerated a valid join plan in RIGHT/FULL join cases. I seem to recall\nhaving deliberately decided that sort_inner_and_outer didn't need to\ncheck, but your example proves that it does.\n\nThere is still a related problem with FULL JOIN, which is that *all*\nthe possible join plans may get rejected:\n\nregression=# create table aa (v1 varchar, v2 varchar);\nCREATE\nregression=# create table bb (v1 varchar, v2 varchar, v3 varchar);\nCREATE\nregression=# select * from aa a full join bb b on\nregression-# a.v2 = b.v3 and a.v1 = b.v2 and a.v1 = b.v1 and a.v2 = b.v1;\nERROR: Unable to devise a query plan for the given query\nregression=#\n\nThis is not exactly fatal, since you can work around it by pushing\ndown the redundant join condition to one of the input relations:\n\nregression=# select * from aa a full join bb b on\nregression-# a.v2 = b.v3 and a.v1 = b.v2 and a.v1 = b.v1\nregression-# where a.v2 = a.v1;\n[ okay ]\n\nBut it's pretty annoying anyway. I'm trying to figure out how we could\nimplement the query as given...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 11 Nov 2001 14:37:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] RIGHT JOIN is only supported with mergejoinable join\n\tconditions"
}
]
[
{
"msg_contents": "i need to know if i can install postgres database on windows 2000.\nand how can i do such thing\n\nthank you\n\n-- \njfk\n\n\n",
"msg_date": "Wed, 7 Nov 2001 17:06:57 -0000",
"msg_from": "\"julian felipe castrillon\" <jfk@puj.edu.co>",
"msg_from_op": true,
"msg_subject": ""
}
]
[
{
"msg_contents": "Dear all,\n\nCould it be possible to use the Java Unicode Notation to define UTF-8 \nstrings in PostgreSQL 7.2.\nInformation can be found on http://czyborra.com/utf/\n\nBest regards,\nJean-Michel pOURE\n\n************************************************\n\nJava's Unicode Notation\nThere are some less compact but more readable ASCII transformations the \nmost important of which is the Java Unicode Notation as allowed in Java \nsource code and processed by Java's native2ascii converter:\n\nputwchar(c)\n{\n if (c >= 0x10000) {\n printf (\"\\\\u%04x\\\\u%04x\" , 0xD7C0 + (c >> 10), 0xDC00 | c & 0x3FF);\n }\n else if (c >= 0x100) printf (\"\\\\u%04x\", c);\n else putchar (c);\n}\n\nThe advantage of the \\u20ac notation is that it is very easy to type it in \non any old ASCII keyboard and easy to look up the intended character if you \nhappen to have a copy of the Unicode book or the \n{unidata2,names2,unihan}.txt files from the Unicode FTP site or CD-ROM or \nknow what U+20AC is the €.\n\nWhat's not so nice about the \\u20ac notation is that the small letters are \nquite unusual for Unicode characters, the backslashes have to be quoted for \nmany Unix tools, the four hexdigits without a terminator may appear merged \nwith the following word as in \\u00a333 for £33, it is unclear when and how \nyou have to escape the backslash character itself, 6 bytes for one \ncharacter may be considered wasteful, and there is no way to clearly \npresent the characters beyond \\uffff without \\ud800\\udc00 surrogates, and \nlast but not least the plain hexnumbers may not be very helpful.\n\nJAVA is one of the target and source encodings of yudit and its uniconv \nconverter.\n\n\n",
"msg_date": "Wed, 07 Nov 2001 21:45:42 +0100",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": true,
"msg_subject": "Java's Unicode Notation"
},
{
"msg_contents": "Hi Folks;\n Anyone have a code hack to 7.1 to make postgreSQL break out of the\n'sameuser' jail if a user as the 'postgres' superuser flag? Or maybe to set\nconfig file lines based also on 'superuser' (like 'crypt superuser' or\nsomething like that). Otherwise I think I might make one.\n--\nMike\n\n",
"msg_date": "Wed, 7 Nov 2001 18:58:36 -0400",
"msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>",
"msg_from_op": false,
"msg_subject": "'postgres' flag"
},
{
"msg_contents": "\"Mike Rogers\" <temp6453@hotmail.com> writes:\n> Anyone have a code hack to 7.1 to make postgreSQL break out of the\n> 'sameuser' jail if a user as the 'postgres' superuser flag?\n\nThe difficulty with that idea is that the connection-matching code has\nno idea whether a given userid is superuser or not (indeed, that info\nis not available to the postmaster at all).\n\n> Or maybe to set\n> config file lines based also on 'superuser' (like 'crypt superuser' or\n> something like that). Otherwise I think I might make one.\n\nDid you read the thread a day or two back in pgsql-admin? Consider\nsomething like\n\n\tlocal\tsameuser\tpassword\n\tlocal\tall\t\tpassword crossauth\n\nwhere crossauth contains the usernames you want to allow to connect\nto databases other than their own.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 08 Nov 2001 09:10:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 'postgres' flag "
},
{
"msg_contents": "There appears to be some delay on the list. I just received that message\nthis morning (how helpful)- I will be trying to implement it now and see how\nfar I can get. It looks like it'll work. Does it work with 'crypt' or only\n'password' (i presently use crypted passwords, but I can change that if\nit'll make all the difference)?\n\n Now the even bigger question- why isn't this documented?\n--\nMike\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Mike Rogers\" <temp6453@hotmail.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Thursday, November 08, 2001 10:10 AM\nSubject: Re: [HACKERS] 'postgres' flag\n\n\n> \"Mike Rogers\" <temp6453@hotmail.com> writes:\n> > Anyone have a code hack to 7.1 to make postgreSQL break out of the\n> > 'sameuser' jail if a user as the 'postgres' superuser flag?\n>\n> The difficulty with that idea is that the connection-matching code has\n> no idea whether a given userid is superuser or not (indeed, that info\n> is not available to the postmaster at all).\n>\n> > Or maybe to set\n> > config file lines based also on 'superuser' (like 'crypt superuser' or\n> > something like that). Otherwise I think I might make one.\n>\n> Did you read the thread a day or two back in pgsql-admin? Consider\n> something like\n>\n> local sameuser password\n> local all password crossauth\n>\n> where crossauth contains the usernames you want to allow to connect\n> to databases other than their own.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n",
"msg_date": "Thu, 8 Nov 2001 14:11:46 -0400",
"msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 'postgres' flag"
},
{
"msg_contents": "Thank you so much- I have been trying to do exactly that for months (my\npostgres and admin users could never see the individual users because we\nwere using sameuser, unless they were logged in as certain users so that\nident could work- and even then, it's not hard to come from\nroot@anothermachine or admin@anothermachine). Thanks so much. This should\nreally be documented. It's not in the sample pg_hba.conf nor the web docs.\n--\nMike\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Mike Rogers\" <temp6453@hotmail.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Thursday, November 08, 2001 10:10 AM\nSubject: Re: [HACKERS] 'postgres' flag\n\n\n> \"Mike Rogers\" <temp6453@hotmail.com> writes:\n> > Anyone have a code hack to 7.1 to make postgreSQL break out of the\n> > 'sameuser' jail if a user as the 'postgres' superuser flag?\n>\n> The difficulty with that idea is that the connection-matching code has\n> no idea whether a given userid is superuser or not (indeed, that info\n> is not available to the postmaster at all).\n>\n> > Or maybe to set\n> > config file lines based also on 'superuser' (like 'crypt superuser' or\n> > something like that). Otherwise I think I might make one.\n>\n> Did you read the thread a day or two back in pgsql-admin? Consider\n> something like\n>\n> local sameuser password\n> local all password crossauth\n>\n> where crossauth contains the usernames you want to allow to connect\n> to databases other than their own.\n>\n> regards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n",
"msg_date": "Thu, 8 Nov 2001 14:21:49 -0400",
"msg_from": "\"Mike Rogers\" <temp6453@hotmail.com>",
"msg_from_op": false,
"msg_subject": "Re: 'postgres' flag"
},
{
"msg_contents": "Hi,\n\nI'm answering to the original mail, as it has the description itself.\n\n* Jean-Michel POURE <jm.poure@freesurf.fr> [011107 22:04]:\n> Dear all,\n> \n> Could it be possible to use the Java Unicode Notation to define UTF-8 \n> strings in PostgreSQL 7.2.\n> Information can be found on http://czyborra.com/utf/\n> \n> Best regards,\n> Jean-Michel pOURE\n> \n> ************************************************\n> \n> Java's Unicode Notation\n> There are some less compact but more readable ASCII transformations\n> the most important of which is the Java Unicode Notation as allowed\n> in Java source code and processed by Java's native2ascii converter:\n> \n> putwchar(c)\n> {\n> if (c >= 0x10000) {\n> printf (\"\\\\u%04x\\\\u%04x\" , 0xD7C0 + (c >> 10), 0xDC00 | c & 0x3FF);\n> }\n> else if (c >= 0x100) printf (\"\\\\u%04x\", c);\n> else putchar (c);\n> }\n> \n> The advantage of the \\u20ac notation is that it is very easy to type\n> it in on any old ASCII keyboard and easy to look up the intended\n> character if you happen to have a copy of the Unicode book or the\n> {unidata2,names2,unihan}.txt files from the Unicode FTP site or\n> CD-ROM or know what U+20AC is the ᅵ.\n ^^^\nWas that the codepoint for the windows proprietary charset for the\nEuro, disguised in a mail advertising itself as \"iso-8859-1\", which\ndoesn't have the euro sign ? 
;)\n\n[No wonder Unicode is really needed in Europe !]\n\n> What's not so nice about the \\u20ac notation is that the small\n> letters are quite unusual for Unicode characters, the backslashes\n> have to be quoted for many Unix tools, the four hexdigits without a\n> terminator may appear merged with the following word as in \\u00a333\n> for £33, it is unclear when and how you have to escape the backslash\n> character itself, 6 bytes for one character may be considered\n> wasteful, and there is no way to clearly present the characters\n> beyond \\uffff without \\ud800\\udc00 surrogates, and last but not\n> least the plain hexnumbers may not be very helpful.\n> \n> JAVA is one of the target and source encodings of yudit and its\n> uniconv converter.\n\nI have to disagree about this feature... well, not about the idea, but\nthe implementation.\n\nFirst, the use of surrogates to describe > 0x010000 codepoints.\nSurrogates are NOT Unicode codepoints. They only exist in UTF-16\nencoding, which is the encoding used by Java and Windows. However,\nPostgreSQL, as most Unix tools, uses UTF-8 as encoding.\n\nEncoding codepoints over 0xffff with two surrogates in UTF-8 is\nillegal... So, you should forget about this, as this is an unnatural\nextra step.\n\nI've seen somewhere the notation \\v010000 (using \\v for 6-char\ncodepoints). But I don't like it too much either.\n\nI agree with your idea of being able to express unicode codepoints\ndirectly with escape characters. I personally like Perl's solution :\n\n\\x{20ac}\n\\x{010123}\n\\x{7e}\n\nUsing the braces, it makes it unambiguous to deal with codepoint\nlength (I've often myself put one \"0\" too much or not enough in\nunicode code point descriptions).\n\nI don't mind \\u{...} instead of \\x{...}. But a lot of PostgreSQL users\nwould be familiar with \\x{} notation :) [Me being the first one]\n\nI think that this is something for psql however. Where is \"\\n\"\ntranslated, for example ? Anyway, for 7.3... 
:)\n\nPatrice.\n\n-- \nPatrice Hédé\nemail: patrice hede à islande org\nwww : http://www.islande.org/\n",
"msg_date": "Mon, 12 Nov 2001 19:03:14 +0100",
"msg_from": "Patrice =?iso-8859-15?Q?H=E9d=E9?= <phede-ml@islande.org>",
"msg_from_op": false,
"msg_subject": "Re: Java's Unicode Notation"
}
]
[
{
"msg_contents": "\nHi -\n\nIs there a good reason why the aclcontains() UDF in utils/adt/acl.c is\ndefined as it is, instead of calling over to aclcheck() in\ncatalog/aclchk.c? With that, aclcontains('{\"group foo=r\"}',\"user bar=r\")\nwould return true if bar is in foo.\n\n- FChE\n",
"msg_date": "07 Nov 2001 15:45:43 -0500",
"msg_from": "fche@redhat.com (Frank Ch. Eigler)",
"msg_from_op": true,
"msg_subject": "ACL-related adt functions: aclcontains vs aclcheck"
},
{
"msg_contents": "fche@redhat.com (Frank Ch. Eigler) writes:\n> Is there a good reason why the aclcontains() UDF in utils/adt/acl.c is\n> defined as it is, instead of calling over to aclcheck() in\n> catalog/aclchk.c?\n\nBackwards compatibility?\n\n> With that, aclcontains('{\"group foo=r\"}',\"user bar=r\")\n> would return true if bar is in foo.\n\nI suspect what you are really after is a function that tests \"is\nprivilege x available to user y given this ACL?\" That would be a\ngood thing to have, but I'd say make a new function for it; don't\narbitrarily redefine old functions, no matter how useless you might\nthink they are as-is.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 07 Nov 2001 16:26:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ACL-related adt functions: aclcontains vs aclcheck "
}
]