[
{
"msg_contents": "Hi all,\n\nI did an initial patch for ALTER TABLE / SET NULL that should just say 'not\nimplemented' when someone tries it, but I get this:\n\ntemplate1=# alter table test alter column a set null;\nERROR: parser: parse error at or near \"null\"\ntemplate1=# alter table test alter column a set null_p;\nERROR: parser: parse error at or near \"null_p\"\ntemplate1=# alter table test alter column a set not null;\nERROR: parser: parse error at or near \"not\"\n\nWhat have I missed?\n\nAll regression tests pass...\n\nAttached is context diff\n\nI'm pretty sure that I haven't done preproc.y correctly either...\n\nChris\n\nps. DON'T COMMIT THIS PATCH!!!",
"msg_date": "Wed, 20 Mar 2002 13:10:04 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Help with SET NULL/SET NOT NULL"
},
{
"msg_contents": "seems like other systems keep very similar syntax to the CREATE TABLE \ncommand. i.e.\n \nALTER TABLE blah ALTER COLUMN col datatype (precision.scale) NULL\nALTER TABLE blah ALTER COLUMN col datatype (precision.scale) NOT NULL\n\nDwayne\n\n\n",
"msg_date": "Fri, 22 Mar 2002 11:21:02 -0500",
"msg_from": "\"Dwayne Miller\" <dmiller@espgroup.net>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL/SET NOT NULL"
}
]
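The parse errors in the thread above are consistent with how the backend lexer works: `NULL_P` is a token name internal to the grammar, produced when the scanner looks the word "null" up in a sorted keyword table, so typing `null_p` in SQL can never match, and a new `gram.y` rule fails with a parse error unless the keyword routing and rule placement line up. A simplified, hypothetical sketch of that lookup (illustrative names and values, not PostgreSQL's actual tables):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Simplified sketch: the scanner, not the SQL text, turns "null" into the
 * NULL_P token via a sorted keyword table.  The literal text "null_p" is
 * therefore just an ordinary identifier.  Illustrative only. */
typedef struct
{
    const char *name;
    int         token;
} ScanKeyword;

enum { IDENT = 0, NOT = 1, NULL_P = 2, SET = 3 };

/* must stay sorted by name for bsearch */
static const ScanKeyword keywords[] = {
    {"not", NOT},
    {"null", NULL_P},
    {"set", SET},
};

static int
cmp_kw(const void *key, const void *elem)
{
    return strcmp((const char *) key, ((const ScanKeyword *) elem)->name);
}

/* Return the keyword token for a lowercased word, or IDENT if none. */
int
keyword_token(const char *word)
{
    const ScanKeyword *kw = bsearch(word, keywords,
                                    sizeof(keywords) / sizeof(keywords[0]),
                                    sizeof(ScanKeyword), cmp_kw);

    return kw ? kw->token : IDENT;
}
```

Here `keyword_token("null")` yields `NULL_P` while `keyword_token("null_p")` falls through to `IDENT`, which is why spelling the internal token name in SQL cannot help.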
[
{
"msg_contents": "Hello!\n\nDoes anybody know a reason parse_datestyle_internal always returns TRUE?\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n",
"msg_date": "Wed, 20 Mar 2002 15:06:15 +0600 (NOVT)",
"msg_from": "Yury Bokhoncovich <byg@center-f1.ru>",
"msg_from_op": true,
"msg_subject": "parse_datestyle_internal always return TRUE"
},
{
"msg_contents": "Yury Bokhoncovich <byg@center-f1.ru> writes:\n> Does anybody know a reason parse_datestyle_internal always returns TRUE?\n\nAncient history. Before GUC there were lots more routines and a lot\nof control structure in variable.c; the return value of the parse/set\nfunctions was used for something or other. The stuff remaining in\nvariable.c isn't yet merged into the GUC mechanism ... but it should be.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 20 Mar 2002 09:51:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: parse_datestyle_internal always return TRUE "
},
{
"msg_contents": "> Does anybody know a reason parse_datestyle_internal always returns TRUE?\n\nIt does not. However, if the code has not errored out on elog() calls\nbeforehand, the routine does return TRUE.\n\n - Thomas\n",
"msg_date": "Wed, 20 Mar 2002 06:54:54 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: parse_datestyle_internal always return TRUE"
}
]
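Both answers above come down to the backend's error convention: `elog(ERROR, ...)` longjmps out to an error handler and never returns, so the only `return` statement such a parse function can reach is `return TRUE`. A toy model of that control flow (simplified stand-ins, not the real elog or variable.c):

```c
#include <assert.h>
#include <setjmp.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy model: toy_elog_error longjmps back to the caller's handler and
 * never returns, so "return true" below is the only reachable return --
 * which is why the boolean return value is effectively vestigial. */
static jmp_buf error_context;

static void
toy_elog_error(const char *msg)
{
    fprintf(stderr, "ERROR: %s\n", msg);
    longjmp(error_context, 1);      /* control never comes back here */
}

static bool
toy_parse_datestyle(const char *value)
{
    if (strcmp(value, "iso") != 0 && strcmp(value, "sql") != 0)
        toy_elog_error("unrecognized datestyle");
    /* ... a real implementation would set DateStyle here ... */
    return true;                    /* the only reachable return */
}
```

A caller wraps calls in `setjmp(error_context)`; on bad input control resumes at the `setjmp` with a nonzero value, and the function's return value is never seen.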
[
{
"msg_contents": "Hello!\n\nIs it valid to change a constant in src/include/miscadmin.h?\n\n===========================\n@@ -150,10 +150,10 @@\n\n #define MAXTZLEN 10 /* max TZ name len, not counting tr. null */\n\n-#define USE_POSTGRES_DATES 0\n #define USE_ISO_DATES 1\n #define USE_SQL_DATES 2\n #define USE_GERMAN_DATES 3\n+#define USE_POSTGRES_DATES 4\n\n extern int DateStyle;\n extern bool EuroDates;\n===========================\n\nThis can make easy parsing of date style in parse_datestyle_internal\nfunction (src/backend/commands/variable.c) in this way:\n\ndatestyle=0;\nif () datestyle=USE_xxx\n...\nif (!datestyle) elog(ERROR\n\n-- \nWBR, Yury Bokhoncovich, Senior System Administrator, NOC of F1 Group.\nPhone: +7 (3832) 106228, ext.140, E-mail: byg@center-f1.ru.\nUnix is like a wigwam -- no Gates, no Windows, and an Apache inside.\n\n\n",
"msg_date": "Wed, 20 Mar 2002 19:17:40 +0600 (NOVT)",
"msg_from": "Yury Bokhoncovich <byg@center-f1.ru>",
"msg_from_op": true,
"msg_subject": "Changing constant in src/include/miscadmin.h"
},
{
"msg_contents": "> Is it valid to change a constant in src/include/miscadmin.h?\n> -#define USE_POSTGRES_DATES 0\n> +#define USE_POSTGRES_DATES 4\n\nYes, the code should still work and afaik these values are not embedded\nanywhere other than in the compiled code so you will stay\nself-consistant.\n\n> This can make easy parsing of date style in parse_datestyle_internal\n> function (src/backend/commands/variable.c) in this way:\n> datestyle=0;\n> if () datestyle=USE_xxx\n> ...\n> if (!datestyle) elog(ERROR\n\nAt the moment, one is allowed to call parse_datestyle_internal() only\nsetting the \"european\" vs \"noneuropean\" flag for month and day\ninterpretation. So the code should not have the check mentioned above.\n\nAlso, I would suggest using an explicit comparison rather than an\nimplicit comparison against zero. Something like\n\n#define DATESTYLE_NOT_SPECIFIED 0\ndatestyle = DATESTYLE_NOT_SPECIFIED\n...\nif (datestyle == DATESTYLE_NOT_SPECIFIED) elog()...\n\nwhere the #define is in the same place as the USE_xxx definitions. That\nway you aren't relying on someone remembering that they *shouldn't* use\nzero as one of the possible valid values. And that way the\nDATESTYLE_NOT_SPECIFIED does not actually have to be zero.\n\n - Thomas\n",
"msg_date": "Wed, 20 Mar 2002 07:10:32 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Changing constant in src/include/miscadmin.h"
}
]
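The explicit-sentinel scheme suggested in the thread above can be sketched as follows. The `USE_xxx` values mirror the proposed renumbering; `parse_style` is an illustrative stand-in for the logic in `parse_datestyle_internal`, not the real variable.c code:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Renumber so no valid style is 0, and compare against a named sentinel
 * rather than testing for zero.  Values mirror the proposed patch. */
#define DATESTYLE_NOT_SPECIFIED 0
#define USE_ISO_DATES       1
#define USE_SQL_DATES       2
#define USE_GERMAN_DATES    3
#define USE_POSTGRES_DATES  4

int
parse_style(const char *value)
{
    int         datestyle = DATESTYLE_NOT_SPECIFIED;

    if (strcmp(value, "iso") == 0)
        datestyle = USE_ISO_DATES;
    else if (strcmp(value, "sql") == 0)
        datestyle = USE_SQL_DATES;
    else if (strcmp(value, "german") == 0)
        datestyle = USE_GERMAN_DATES;
    else if (strcmp(value, "postgres") == 0)
        datestyle = USE_POSTGRES_DATES;

    /* explicit comparison: nobody must remember that 0 means "unset" */
    if (datestyle == DATESTYLE_NOT_SPECIFIED)
        fprintf(stderr, "ERROR: unrecognized datestyle \"%s\"\n", value);
    return datestyle;
}
```

Because the sentinel is a named constant, it could later be changed to any value that no `USE_xxx` style uses, without touching the comparison sites.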
[
{
"msg_contents": "If I do this as any user:\n\n\tSELECT update_pg_pwd();\n\nit crashes all backends and causes a server-wide restart. Is this\nacceptable behavior? I am sure there are other cases too. Isn't it a\nproblem that we let ordinary users crash the server and cause a restart?\n\n---------------------------------------------------------------------------\n\nLOG: server process (pid 23337) was terminated by signal 11\nLOG: terminating any other active server processes\nLOG: all server processes terminated; reinitializing shared memory and semaphores\nFATAL: The database system is starting up\nLOG: database system was interrupted at 2002-03-21 03:42:08 CET\nLOG: checkpoint record is at 0/43C048\nLOG: redo record is at 0/43C048; undo record is at 0/0; shutdown TRUE\nLOG: next transaction id: 99; next oid: 24747\nLOG: database system was not properly shut down; automatic recovery in progress\nLOG: redo starts at 0/43C088\nLOG: ReadRecord: record with zero length at 0/4421B4\nLOG: redo done at 0/442190\nLOG: database system is ready\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 20 Mar 2002 21:46:11 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Function call crashes server"
},
{
"msg_contents": "Does the same thing here.\n\nSounds like a serious problem to me too.\n\nGreg\n\nOn Wed, 2002-03-20 at 20:46, Bruce Momjian wrote:\n> If I do this as any user:\n> \n> \tSELECT update_pg_pwd();\n> \n> it crashes all backends and causes a server-wide restart. Is this\n> acceptable behavior? I am sure there are other cases too. Isn't it a\n> problem that we let ordinary users crash the server and cause a restart?\n> \n> ---------------------------------------------------------------------------\n> \n> LOG: server process (pid 23337) was terminated by signal 11\n> LOG: terminating any other active server processes\n> LOG: all server processes terminated; reinitializing shared memory and semaphores\n> FATAL: The database system is starting up\n> LOG: database system was interrupted at 2002-03-21 03:42:08 CET\n> LOG: checkpoint record is at 0/43C048\n> LOG: redo record is at 0/43C048; undo record is at 0/0; shutdown TRUE\n> LOG: next transaction id: 99; next oid: 24747\n> LOG: database system was not properly shut down; automatic recovery in progress\n> LOG: redo starts at 0/43C088\n> LOG: ReadRecord: record with zero length at 0/4421B4\n> LOG: redo done at 0/442190\n> LOG: database system is ready\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org",
"msg_date": "20 Mar 2002 22:37:33 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Function call crashes server"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> If I do this as any user:\n> \tSELECT update_pg_pwd();\n> it crashes all backends and causes a server-wide restart. Is this\n> acceptable behavior?\n\nThere are a number of things we might blame this on, all having to do\nwith the overuse of type OID zero to mean too many different things.\nBut my attention is currently focused on this tidbit in ExecTypeFromTL:\n\n TupleDescInitEntry(typeInfo,\n resdom->resno,\n resdom->resname,\n /* fix for SELECT NULL ... */\n (restype ? restype : UNKNOWNOID),\n resdom->restypmod,\n 0,\n false);\n\nHad ExecTypeFromTL rejected restype = 0 rather than substituting\nUNKNOWNOID (a pretty durn random response, IMHO), we'd not see this\ncrash.\n\nThe \"fix for SELECT NULL\" appears to have been committed by you\non 7 Dec 1996. Care to explain it?\n\n(AFAICT, \"SELECT NULL\" does not produce a zero at this point now,\nthough perhaps it did in 1996. Or was there some other case you\nwere defending against back then?)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 00:08:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Function call crashes server "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > If I do this as any user:\n> > \tSELECT update_pg_pwd();\n> > it crashes all backends and causes a server-wide restart. Is this\n> > acceptable behavior?\n> \n> There are a number of things we might blame this on, all having to do\n> with the overuse of type OID zero to mean too many different things.\n> But my attention is currently focused on this tidbit in ExecTypeFromTL:\n> \n> TupleDescInitEntry(typeInfo,\n> resdom->resno,\n> resdom->resname,\n> /* fix for SELECT NULL ... */\n> (restype ? restype : UNKNOWNOID),\n> resdom->restypmod,\n> 0,\n> false);\n> \n> Had ExecTypeFromTL rejected restype = 0 rather than substituting\n> UNKNOWNOID (a pretty durn random response, IMHO), we'd not see this\n> crash.\n> \n> The \"fix for SELECT NULL\" appears to have been committed by you\n> on 7 Dec 1996. Care to explain it?\n> \n> (AFAICT, \"SELECT NULL\" does not produce a zero at this point now,\n> though perhaps it did in 1996. Or was there some other case you\n> were defending against back then?)\n\nThat was 6 months into the Internet-based project. We were just\npatching things to prevent crashes. My guess is that I was trying to\nfix the much more common case of \"SELECT NULL\" and had no idea how it\nwould affect functions that return no value. Feel free to wack it\naround.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Mar 2002 00:13:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Function call crashes server"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Feel free to wack it around.\n\nRemoving the special-case logic in ExecTypeFromTL yields\n\nregression=# SELECT update_pg_pwd();\nERROR: getTypeOutputInfo: Cache lookup of type 0 failed\n\nwhich is not exactly pretty, but it beats a core dump. \"SELECT NULL\"\nstill works.\n\nI'm satisfied with this until we get around to breaking up the uses of\n\"type OID 0\" into several pseudo-types with crisper meanings, per\nprevious discussions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 01:24:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Function call crashes server "
},
{
"msg_contents": "> regression=# SELECT update_pg_pwd();\n> ERROR: getTypeOutputInfo: Cache lookup of type 0 failed\n>\n> which is not exactly pretty, but it beats a core dump. \"SELECT NULL\"\n> still works.\n\nMaybe the regression test database should have tests for all the built-in\nfunctions?\n\nChris\n\n",
"msg_date": "Thu, 21 Mar 2002 14:39:21 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Function call crashes server "
}
]
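The fix discussed in the thread above, reduced to its essentials: substituting a placeholder type for OID 0 lets a bogus tuple descriptor reach the output routines (where it crashed), while rejecting OID 0 up front fails early with a clean error. This is an illustrative sketch; the OID values and function names are stand-ins, not the real ExecTypeFromTL:

```c
#include <assert.h>
#include <stdio.h>

/* Toy comparison of the two behaviors; illustrative values only. */
typedef unsigned int Oid;

#define InvalidOid  ((Oid) 0)
#define UNKNOWNOID  ((Oid) 705)    /* illustrative placeholder value */

static int error_raised = 0;

static void
toy_elog_error(const char *msg)
{
    error_raised = 1;
    fprintf(stderr, "ERROR: %s\n", msg);
}

/* Old behavior: paper over OID 0, deferring the failure to output time. */
Oid
resolve_restype_old(Oid restype)
{
    return restype ? restype : UNKNOWNOID;  /* "fix for SELECT NULL" */
}

/* Safer behavior: report the broken target list as soon as it is seen. */
Oid
resolve_restype_new(Oid restype)
{
    if (restype == InvalidOid)
    {
        toy_elog_error("cache lookup of type 0 failed");
        return InvalidOid;
    }
    return restype;
}
```

The old variant never reports anything, which is exactly why the failure only surfaced later as a signal 11 in the output path; the new variant trades that crash for an immediate, recoverable error.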
[
{
"msg_contents": "Hi all,\n\nThis message didn't seem to go through - am I being blocked by the list\nserver? Let's see if it does this time...\n\n> -----Original Message-----\n> From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au]\n> Sent: Wednesday, 20 March 2002 1:10 PM\n> To: Hackers\n> Subject: Help with SET NULL/SET NOT NULL\n>\n>\n> Hi all,\n>\n> I did an initial patch for ALTER TABLE / SET NULL that should\n> just say 'not implemented' when someone tries it, but I get this:\n>\n> template1=# alter table test alter column a set null;\n> ERROR: parser: parse error at or near \"null\"\n> template1=# alter table test alter column a set null_p;\n> ERROR: parser: parse error at or near \"null_p\"\n> template1=# alter table test alter column a set not null;\n> ERROR: parser: parse error at or near \"not\"\n>\n> What have I missed?\n>\n> All regression tests pass...\n>\n> Attached is context diff\n>\n> I'm pretty sure that I haven't done preproc.y correctly either...\n>\n> Chris\n>\n> ps. DON'T COMMIT THIS PATCH!!!",
"msg_date": "Thu, 21 Mar 2002 13:51:52 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "FW: Help with SET NULL/SET NOT NULL"
}
]
[
{
"msg_contents": "Sorry about including the regression test changes in that last patch - just\nignore them.\n\nSince sending in that last patch, I've fixed preproc.y to use 5 instead of 6\nas the number of params to concatenate...\n\nChris\n\n",
"msg_date": "Thu, 21 Mar 2002 14:19:51 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "oops"
}
]
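Context for why an off-by-one count matters here: ecpg's preproc.y assembles statement text with a variadic `cat_str(count, ...)` helper, where the count and the argument list must agree by hand; passing 6 with only five strings reads garbage off the argument list. A simplified space-joining re-implementation for illustration (not ecpg's actual code):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>

/* Join `count` strings with single spaces into a freshly allocated
 * buffer.  The caller-supplied count is trusted, which is precisely the
 * fragility the "5 instead of 6" fix above is about. */
char *
cat_str(int count, ...)
{
    va_list     ap;
    size_t      len = 1;
    char       *result;
    int         i;

    /* first pass: total length (each word plus a separator) */
    va_start(ap, count);
    for (i = 0; i < count; i++)
        len += strlen(va_arg(ap, char *)) + 1;
    va_end(ap);

    result = malloc(len);
    result[0] = '\0';

    /* second pass: join with single spaces */
    va_start(ap, count);
    for (i = 0; i < count; i++)
    {
        if (i > 0)
            strcat(result, " ");
        strcat(result, va_arg(ap, char *));
    }
    va_end(ap);
    return result;
}
```

For example, `cat_str(5, "alter", "table", "t", "alter", "a")` builds `"alter table t alter a"`; with a count of 6 the same call would invoke undefined behavior via `va_arg`.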
[
{
"msg_contents": "I am adding users and groups to pg_hba.conf. The coding is done but I\nam stuck on a reload issue.\n\nAs you may know, 7.2 tokenizes pg_hba.conf once, and reads those tokens\nto test every connection request. I have added code to dump the\ngroup/user mappings into global/pg_group and the postmaster can read\nthat file and substitute group names for users lists during\ntokenization.\n\nI have also added code to dump a new pg_group every time a group/user is\nmodified. (Users have to be done because of user renaming.)\n\nThe problem is when to retokenize pg_hba.conf after a new pg_group is\nmade. Seems I can either force administrators to 'pg_ctl reload' to\nupdate for group changes, or automatically retokenize pg_hba.conf every\ntime I update pg_group. (We don't have any way of handling user renames\nin pg_hba.conf because we enter those as strings, but pg_group will\nhandle them.)\n\nDoes anyone see another option? I can write code so only pg_global is\nretokenized, but right now the user tokens are pulled out for the\nmatching group and inlined into the token stream. If I have a separate\ntoken tree for pg_group, each connection will have to spin through the\ntokens looking for matching group names. I suppose it isn't a big deal,\nbut I want to make sure we want to prevent auto-reloading of pg_hba.conf\non user/group changes, and just reload pg_group.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Mar 2002 03:36:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "\nI think I have figured out a way to do this efficiently. Instead of\nmaking pg_group with groupname/username on each line, I will do\ngroupname/username,username, ... so I can spin through the group token\nfile much quicker; that way, I can read just retokenize pg_group and\nspin through it for each connection. I think that is the way to go.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> I am adding users and groups to pg_hba.conf. The coding is done but I\n> am stuck on a reload issue.\n> \n> As you may know, 7.2 tokenizes pg_hba.conf once, and reads those tokens\n> to test every connection request. I have added code to dump the\n> group/user mappings into global/pg_group and the postmaster can read\n> that file and substitute group names for users lists during\n> tokenization.\n> \n> I have also added code to dump a new pg_group every time a group/user is\n> modified. (Users have to be done because of user renaming.)\n> \n> The problem is when to retokenize pg_hba.conf after a new pg_group is\n> made. Seems I can either force administrators to 'pg_ctl reload' to\n> update for group changes, or automatically retokenize pg_hba.conf every\n> time I update pg_group. (We don't have any way of handling user renames\n> in pg_hba.conf because we enter those as strings, but pg_group will\n> handle them.)\n> \n> Does anyone see another option? I can write code so only pg_global is\n> retokenized, but right now the user tokens are pulled out for the\n> matching group and inlined into the token stream. If I have a separate\n> token tree for pg_group, each connection will have to spin through the\n> tokens looking for matching group names. 
I suppose it isn't a big deal,\n> but I want to make sure we want to prevent auto-reloading of pg_hba.conf\n> on user/group changes, and just reload pg_group.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Mar 2002 03:42:25 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> The problem is when to retokenize pg_hba.conf after a new pg_group is\n> made. Seems I can either force administrators to 'pg_ctl reload' to\n> update for group changes, or automatically retokenize pg_hba.conf every\n> time I update pg_group.\n\nWhy exactly are you looking to reinvent the wheel, rather than doing\nit the same way we currently handle pg_shadow updates? Send the\npostmaster a signal when you modify the flat file, and it can reread\nthe file on receipt of the signal. See SendPostmasterSignal().\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 11:31:58 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > The problem is when to retokenize pg_hba.conf after a new pg_group is\n> > made. Seems I can either force administrators to 'pg_ctl reload' to\n> > update for group changes, or automatically retokenize pg_hba.conf every\n> > time I update pg_group.\n> \n> Why exactly are you looking to reinvent the wheel, rather than doing\n> it the same way we currently handle pg_shadow updates? Send the\n> postmaster a signal when you modify the flat file, and it can reread\n> the file on receipt of the signal. See SendPostmasterSignal().\n\nI am handling it like pg_shadow. The problem is that because I expand\npg_group inside the pg_hba tokens, I have to retokenize pg_hba.conf too\nafter pg_group changes. I assumed we didn't want pg_hba.conf\nretokenized on a password change and only on a pg_ctl reload.\n\nMy new code has a separate pg_group token list which is not expanded\ninto the pg_hba.conf token list and is traversed for every connection.\n\nIs this the right way to go?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Mar 2002 11:38:05 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "On Thu, Mar 21, 2002 at 11:38:05AM -0500, Bruce Momjian wrote:\n> \n> I am handling it like pg_shadow. The problem is that because I expand\n> pg_group inside the pg_hba tokens, I have to retokenize pg_hba.conf too\n> after pg_group changes. I assumed we didn't want pg_hba.conf\n> retokenized on a password change and only on a pg_ctl reload.\n> \n> My new code has a separate pg_group token list which is not expanded\n> into the pg_hba.conf token list and is traversed for every connection.\n\nHmm, your trading performance on every connection for less work on the\nrare event of a password change? What's wrong with reparsing pg_hba.conf\nat password/group change? Streamline the common case, don't optimize for\nthe rare condition.\n\nRoss\n\n",
"msg_date": "Thu, 21 Mar 2002 10:49:16 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Ross J. Reedstrom wrote:\n> On Thu, Mar 21, 2002 at 11:38:05AM -0500, Bruce Momjian wrote:\n> > \n> > I am handling it like pg_shadow. The problem is that because I expand\n> > pg_group inside the pg_hba tokens, I have to retokenize pg_hba.conf too\n> > after pg_group changes. I assumed we didn't want pg_hba.conf\n> > retokenized on a password change and only on a pg_ctl reload.\n> > \n> > My new code has a separate pg_group token list which is not expanded\n> > into the pg_hba.conf token list and is traversed for every connection.\n> \n> Hmm, your trading performance on every connection for less work on the\n> rare event of a password change? What's wrong with reparsing pg_hba.conf\n> at password/group change? Streamline the common case, don't optimize for\n> the rare condition.\n\nYes, that was the issue. We tell people pg_hba.conf only gets reloaded\nwhen they tell the postmaster to do it. We can't have it happening at\nrandom times, e.g. password change. My new coding will need to only\nspin through a list of group names, not the list of users in each group.\nThat's why the new format for global/pg_group should make things ok for\ndoing this at connection time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Mar 2002 11:52:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, that was the issue. We tell people pg_hba.conf only gets reloaded\n> when they tell the postmaster to do it. We can't have it happening at\n> random times, e.g. password change.\n\nI agree on that: the signal should cause the postmaster to reload\npg_pwd/pg_group info *only*. So you cannot integrate the data from\nthese files into the same datastructure as you use for pg_hba.conf;\nthey have to be separate datastructures.\n\nI think what you are really asking is whether to expand groups by\nsubstitution of user names during read of the file, vs doing it\non-the-fly when accepting a connection. On that I agree with Ross:\nbetter to move work out of the connection logic and into the file\nreread logic as much as possible.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 12:02:13 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, that was the issue. We tell people pg_hba.conf only gets reloaded\n> > when they tell the postmaster to do it. We can't have it happening at\n> > random times, e.g. password change.\n> \n> I agree on that: the signal should cause the postmaster to reload\n> pg_pwd/pg_group info *only*. So you cannot integrate the data from\n> these files into the same datastructure as you use for pg_hba.conf;\n> they have to be separate datastructures.\n> \n> I think what you are really asking is whether to expand groups by\n> substitution of user names during read of the file, vs doing it\n> on-the-fly when accepting a connection. On that I agree with Ross:\n> better to move work out of the connection logic and into the file\n> reread logic as much as possible.\n\nYes, I am doing that. pg_group will be tokenized into username tokens,\nand on connection, the mention of a group token in pg_hba.conf will\ncause a spin through the pg_group tokens to find a matching groupname,\nthen it will look for the requested username. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Mar 2002 12:06:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I am adding users and groups to pg_hba.conf.\n\nYou know what would be cool?\n\nGRANT CONNECT ON mydb TO GROUP myfriends;\n\nand it rewrites pg_hba.conf accordingly.\n\nJust a thought...\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 21 Mar 2002 21:29:37 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I am adding users and groups to pg_hba.conf.\n> \n> You know what would be cool?\n> \n> GRANT CONNECT ON mydb TO GROUP myfriends;\n> \n> and it rewrites pg_hba.conf accordingly.\n> \n> Just a thought...\n\nWe are actually not that far away. If you create a group for each\ndatabase, you can grant access to just that group and add/delete users\nfrom that group at will. My new pg_group code will do that.\n\nNow, as far as rewriting pg_hba.conf, that goes into an area where we\nare not sure if the master connection information is in the file or in\nthe database. We also get into a chicken and egg case where we have to\nhave the database loaded to connect to it. I am interested to hear\nwhere people think we should go with this.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 21 Mar 2002 21:37:00 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Now, as far as rewriting pg_hba.conf, that goes into an area where we\n> are not sure if the master connection information is in the file or in\n> the database. We also get into a chicken and egg case where we have to\n> have the database loaded to connect to it. I am interested to hear\n> where people think we should go with this.\n> \n\nI would like to offer this opinion:\n\npostmaster should connect to the database directory as the user who started it.\n",
"msg_date": "Thu, 21 Mar 2002 21:40:06 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
}
]
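The lookup Bruce settles on in the thread above — scan group names first, and walk a comma-separated member list only on a name match — can be sketched like this. The flat-file layout assumed here (one "groupname&lt;TAB&gt;user1,user2,..." line per group) is an illustration of the idea, not the actual pg_group format:

```c
#include <assert.h>
#include <string.h>

/* Return 1 if `user` appears in the comma-separated list `users`. */
static int
user_in_list(const char *users, const char *user)
{
    size_t      ulen = strlen(user);
    const char *p = users;

    while (*p)
    {
        const char *comma = strchr(p, ',');
        size_t      len = comma ? (size_t) (comma - p) : strlen(p);

        if (len == ulen && strncmp(p, user, len) == 0)
            return 1;
        if (comma == NULL)
            break;
        p = comma + 1;
    }
    return 0;
}

/* lines: NULL-terminated array of "group\tuser1,user2" strings */
int
group_contains(const char *const *lines, const char *group, const char *user)
{
    size_t      glen = strlen(group);

    for (; *lines; lines++)
    {
        const char *tab = strchr(*lines, '\t');

        /* compare only the group-name prefix; skip the members otherwise */
        if (tab && (size_t) (tab - *lines) == glen &&
            strncmp(*lines, group, glen) == 0)
            return user_in_list(tab + 1, user);
    }
    return 0;
}
```

Keeping all members of a group on one line is what makes the per-connection cost proportional to the number of groups, not the number of users, until a group name actually matches.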
[
{
"msg_contents": "I will be on vacation from tomorrow up to April 6th. So don't expect any\nanswer prior early April from me.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n",
"msg_date": "Thu, 21 Mar 2002 10:38:43 +0100",
"msg_from": "Michael Meskes <meskes@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Vacation"
}
]
[
{
"msg_contents": "I sent a message about this yesterday, but it does not appear to have\ngot to the list.\n\nAccording to the attached message, mips builds for Linux should not use\n-mips2 in the compilation or linking. It appears that this can be\nprevented by removing the mips special case from src/template/linux.\n\n-----Forwarded Message-----\n\nFrom: rmurray@debian.org\nTo: Oliver Elphick <olly@lfix.co.uk>\nSubject: Re: [Fwd: Mail delivery failed: returning message to sender]\nDate: 20 Mar 2002 07:48:38 -0800\n\nOn Wed, Mar 20, 2002 at 09:52:52AM +0000, Oliver Elphick wrote:\n> > Upstream automatically passes -mips2 to gcc on mips platforms. In the case\n> > of Linux, this should not be done, as the main reason to use it (ll/sc) is\n> > handled by glibc and emulated by the kernel. It also makes all of postgresql\n> > unusable on DECstation mipsel machines,...\n> \n> The build failure appears to be that\n> debian/tmp/usr/lib/postgresql/bin/postgres has somehow been deleted\n> during the build. (The build log shows it being installed correctly.) \n> Are you saying that this is related to use of -mips2, or is that a\n> totally separate problem?\n\nNo, the kernel doesn't always have the greatest error messages...\n\ntest.c:\nmain(void)\n{\n return 1;\n}\n\nrmurray@resume:~$ gcc -o t test.c ; ls -l t ; ./t\n-rwxr-xr-x 1 rmurray rmurray 7628 Mar 20 16:43 t\nrmurray@resume:~$ gcc -mips2 -o t test.c ; ls -l t ; ./t\n-rwxr-xr-x 1 rmurray rmurray 7628 Mar 20 16:44 t\nbash: ./t: No such file or directory\n\nThe reason this worked in the past is due to a bug in binutils -- it wasn't\nsetting the mips2 bit in the elf header of the binary. That bug has now\nbeen fixed, and the kernel refuses to run anything with a mips# bit set\nin the header, as it is used to indicate an irix binary, according to the\ncomments in elf.h.\n\n-- \nRyan Murray, Debian Developer (rmurray@cyberhqz.com, rmurray@debian.org)\nThe opinions expressed here are my own.\n\n",
"msg_date": "21 Mar 2002 10:27:52 +0000",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Linux/mips compile should not use -mips2"
},
{
"msg_contents": "Oliver Elphick writes:\n\n> > > Upstream automatically passes -mips2 to gcc on mips platforms. In the case\n> > > of Linux, this should not be done, as the main reason to use it (ll/sc) is\n> > > handled by glibc and emulated by the kernel. It also makes all of postgresql\n> > > unusable on DECstation mipsel machines,...\n\nIt's gone. I never liked it there anyway.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 21 Mar 2002 10:26:49 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Linux/mips compile should not use -mips2"
}
]
[
{
"msg_contents": "\n> Removing the special-case logic in ExecTypeFromTL yields\n> \n> regression=# SELECT update_pg_pwd();\n> ERROR: getTypeOutputInfo: Cache lookup of type 0 failed\n\nWouldn't it be nice to make this a feature that allows\nstored procedures (void update_pg_pwd ()) ? Correctly register\nthis function to not return anything ? This is what the 0 is actually\nsupposed to mean here, no ? Such a proc would need a fmgr, that generates \nan empty resultset.\n\nAndreas\n",
"msg_date": "Thu, 21 Mar 2002 14:34:41 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: Function call crashes server "
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n>> regression=# SELECT update_pg_pwd();\n>> ERROR: getTypeOutputInfo: Cache lookup of type 0 failed\n\n> Wouldn't it be nice to make this a feature that allows\n> stored procedures (void update_pg_pwd ()) ? Correctly register\n> this function to not return anything ? This is what the 0 is actually\n> supposed to mean here, no ?\n\nNo, in this case the procedure is a trigger procedure and is not\nsupposed to be called directly at all. But we don't have a\ndistinguishable signature for triggers as yet. One of the changes\nI'd like to make eventually is that trigger procs take and return\nsome special pseudo-type, so that the type system can catch this\nsort of mistake explicitly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 11:19:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Function call crashes server "
}
] |
[
{
"msg_contents": "I was trying to write a gist index extension, and, after some debugging, \nit looks like I found a bug somewhere in the gist.c code ...\nI can't be quite sure, because I am not familiar with the postgres \ncode... but, here is what I see happening (this is 7.1, but I compared \nthe sources to 7.2, and did not see this fixed - although, I did not \ninspect it too carefully)...\n\nFirst of all, gistPageAddItem () calls gistdentryinit() with a pointer \nto what's stored in the tuple, so, 'by-value' types do not work (because \ngistcentryinit () would be passed the value itself, when called from \ngistinsert(), and then, in gistPageAddItem (), it is passed a pointer, \ncoming from gistdentryinit () - so, it just doesn't know really how to \ntreat the argument)...\n\nSecondly, gist_tuple_replacekey() seems to have incorrect logic figuring \nout if there is enough space in the tuple (it checks for '<', instead of \n'<=') - this causes a new tuple to always get created (this one seems \nto be fixed in 7.2)\n\nThirdly, gist_tuple_replace_key () sends a pointer to entry.pred (which \nis already a pointer to the actual value) to index_formtuple (), which \nlooks at the tuple, sees that the type is 'pass-by-value', and puts that \npointer directly into the tuple, so that the resulting tuple now \ncontains a pointer to a pointer to the actual value...\n\nNow, if more than one split is required, this sequence is repeated again \nand again and again, so that, by the time the tuple gets actually \nwritten, it contains something like a pointer to a pointer to a pointer \nto a pointer to the actual data :-(\n\nOnce again, I've seen some comments in the 7.2 branch about gists and \npass-by-value types, but a brief look at the differences in the source \ndid not convince me that it was indeed fixed...\n\nDoes anyone know otherwise?\n\nThanks a lot!\n\nDima\n\n",
"msg_date": "Thu, 21 Mar 2002 16:33:31 -0500",
"msg_from": "Dmitry Tkach <dmitry@openratings.com>",
"msg_from_op": true,
"msg_subject": "A bug in gistPageAddItem()/gist_tuple_replacekey() ???"
},
{
"msg_contents": "\n[ Cc to hackers.]\n\nI haven't seen any comment on this. If no one replies, would you send\nover a patch of fixes? Thanks.\n\n---------------------------------------------------------------------------\n\nDmitry Tkach wrote:\n> I was trying to write a gist index extension, and, after some debugging, \n> it looks like I found a bug somewhere in the gist.c code ...\n> I can't be quite sure, because I am not familiar with the postgres \n> code... but, here is what I see happenning (this is 7.1, but I compared \n> the sources to 7.2, and did not see this fixed - although, I did not \n> inspect it too carefully)...\n> \n> First of all, gistPageAddItem () calls gistdentryinit() with a pointer \n> to what's stored in the tuple, so, 'by-value' types do not work (because \n> gistcentryinit () would be passed the value itself, when called from \n> gistinsert(), and then, in gistPageAddItem (), it is passed a pointer, \n> coming from gistdentryinit () - so, it just doesn't know really how to \n> treat the argument)...\n> \n> Secondly, gist_tuple_replacekey() seems to have incorrect logic figuring \n> out if there is enough space in the tuple (it checks for '<', instead of \n> '<=') - this causes a new tuple to get always created (this one, seems \n> to be fixed in 7.2)\n> \n> Thirdly, gist_tuple_replace_key () sends a pointer to entry.pred (which \n> is already a pointer to the actual value) to index_formtuple (), that \n> looks at the tuple, sees that the type is 'pass-by-value', and puts that \n> pointer directly into the tuple, so that, the resulting tuple now \n> contains a pointer to a pointer to the actual value...\n> \n> Now, if more then one split is required, this sequence is repeated again \n> and again and again, so that, by the time the tuple gets actually \n> written, it contains something like a pointer to a pointer to a pointer \n> to a pointer to the actual data :-(\n> \n> Once again, I've seen some comments in the 7.2 branch about gists and 
\n> pass-by-value types, but brief looking at the differences in the source \n> did not make me conveinced that it was indeed fixed...\n> \n> Anyone knows otherwise?\n> \n> Thanks a lot!\n> \n> Dima\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 17:58:31 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] A bug in gistPageAddItem()/gist_tuple_replacekey() ???"
}
] |
[
{
"msg_contents": "I've been able to compile previous versions of PostgreSQL on my SGI\nmachines, but am having trouble this time. I have an SGI O2 with IRIX\n6.5.15f, gmake 3.79.1, and MIPSPro C 7.3.1.3 compiler. I can get\nthrough the 'configure' step fine. I've copied the Makefile.irix5 up\nto the src directory. I've also added '-O2' flags to the CFLAGS and\nLDFLAGS in template/irix5.\n\nWhen I run 'gmake all', the compilation errors on:\n\n\ncc-1521 cc: WARNING File = /usr/include/setjmp.h, Line = 26\n A nonstandard preprocessing directive is used.\n\n #ident \"$Revision: 1.36 $\"\n ^\n\ncc-1521 cc: WARNING File = /usr/include/sys/ipc.h, Line = 17\n A nonstandard preprocessing directive is used.\n\n #ident \"$Revision: 3.30 $\"\n ^\n\ncc-1070 cc: ERROR File = xact.c, Line = 587\n The indicated type is incomplete.\n\n struct timeval delay;\n ^\n\n1 error detected in the compilation of \"xact.c\".\ngmake[4]: *** [xact.o] Error 2\ngmake[4]: Leaving directory\n`/usr/src/postgresql-7.2/src/backend/access/transam'\ngmake[3]: *** [transam-recursive] Error 2\ngmake[3]: Leaving directory\n`/usr/src/postgresql-7.2/src/backend/access'\ngmake[2]: *** [access-recursive] Error 2\ngmake[2]: Leaving directory `/usr/src/postgresql-7.2/src/backend'\ngmake[1]: *** [all] Error 2\ngmake[1]: Leaving directory `/usr/src/postgresql-7.2/src'\ngmake: *** [all] Error 2\n\n\n\nCan anyone offer a suggestion on what I'm doing wrong?\n\nThanks.\n-Tony\n",
"msg_date": "21 Mar 2002 16:15:27 -0800",
"msg_from": "reina@nsi.edu (Tony Reina)",
"msg_from_op": true,
"msg_subject": "Problem compiling PostgreSQL 7.2 on IRIX 6.5.15f"
},
{
"msg_contents": ">\n> cc-1070 cc: ERROR File = xact.c, Line = 587\n> The indicated type is incomplete.\n>\n> struct timeval delay;\n\nstruct timeval must be defined on your \"include path\"/sys/time.h, what have\nyou got?\nregards\n\n",
"msg_date": "Fri, 22 Mar 2002 09:19:35 +0100",
"msg_from": "\"Luis Alberto Amigo Navarro\" <lamigo@atc.unican.es>",
"msg_from_op": false,
"msg_subject": "Re: Problem compiling PostgreSQL 7.2 on IRIX 6.5.15f"
},
{
"msg_contents": "lamigo@atc.unican.es (\"Luis Alberto Amigo Navarro\") wrote in message news:<005901c1d17a$4d7b0e10$cab990c1@atc.unican.es>...\n> >\n> > cc-1070 cc: ERROR File = xact.c, Line = 587\n> > The indicated type is incomplete.\n> >\n> > struct timeval delay;\n> \n> struct timeval must be defined on your \"include path\"/sys/time.h, what have\n> you got?\n> regards\n\nOk. timeval is defined in /sys/time.h.\n\n\n#if _XOPEN4UX || defined(_BSD_TYPES) || defined(_BSD_COMPAT)\n/*\n * Structure returned by gettimeofday(2) system call,\n * and used in other calls.\n * Note this is also defined in sys/resource.h\n */\n#ifndef _TIMEVAL_T\n#define _TIMEVAL_T\nstruct timeval {\n#if _MIPS_SZLONG == 64\n\t__int32_t :32;\n#endif\n\ttime_t\ttv_sec;\t\t/* seconds */\n\tlong\ttv_usec;\t/* and microseconds */\n};\n\n\nIt looks like it won't be used unless XOPEN4UX, BSD_TYPES, or\nBSD_COMPAT is defined.\n\nIs there a way to force the build to define one of these flags. Which\none would be best, what's the syntax and what file should it be added\ninto in the Postgres source?\n\nThanks.\n-Tony\n",
"msg_date": "22 Mar 2002 14:32:42 -0800",
"msg_from": "reina@nsi.edu (Tony Reina)",
"msg_from_op": true,
"msg_subject": "Re: Problem compiling PostgreSQL 7.2 on IRIX 6.5.15f"
}
] |
[
{
"msg_contents": "Jeff Davis asked on -general why NOTIFY doesn't take an optional\nargument, specifying a message that is passed to the listening backend.\nThis feature is supported by Oracle and other databases and I think it's\nquite useful, so I've started to implement it. Most of the modifications\nhave been pretty straight-forward, except for 2 issues:\n\n(1) Processing notifies. Currently, the only data that is passed from\nthe notifying backend to the listening one is the PID of the notifier,\nwhich is stored in the \"notification\" column of pg_listener. In order to\npass messages from notifier to listener, I could add another column to\npg_listener, but IMHO that's a bad idea: there is really no reason for\nthis kind of data to be in pg_listener in the first place. pg_listener\nshould simply list the PIDs of listening backends, as well as the\nconditions upon which they are listening -- any data that is related to\nspecific notifications should be put elsewhere.\n\n(2) Multiple notifications on the same condition name in a short time\nspan are delivered as a single notification. This isn't currently a\nproblem because the NOTIFY itself doesn't carry any data (other than\nbackend PID), it just informs the listener that an event has occurred.\nIf we allow NOTIFY to send a message to the listener, this is not good\n-- the listener should be notified for each and every notification,\nsince the contents of the message could be important.\n\nSolution: Create a new system catalog, pg_notify. This should contain 4\ncolumns:\n\n\trelname: the name of the NOTIFY condition that has been sent\n\tmessage: the optional message sent by the NOTIFY\n\tsender: the PID of the backend that sent the NOTIFY\n\treceiver: the PID of the listening backend\n\nAFAICT, this should resolve the two issues mentioned above. 
The actual\nnotification of a listening backend is still done at transaction commit,\nby sending a SIGUSR2: however, all this does is to ask the backend to\nscan through pg_notify, looking for tuples containing its PID in\n\"receiver\". Therefore, even if Unix doesn't send multiple signals for\nmultiple notifications, a single signal should be enough to ensure a\nscan of pg_notify, where any additional notifications will be found.\n\nIf we continued to add columns to pg_listener, there would be a limit of\n1 tuple per listening backend: thus, we would still run into problems\nwith multiple notifications being ignored.\n\nCan anyone see a better way to do this? Are there any problems with the\nimplementation I've outlined?\n\nAny feedback would be appreciated.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "21 Mar 2002 21:12:57 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "notification: pg_notify ?"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Solution: Create a new system catalog, pg_notify.\n\nIt's not apparent to me why that helps much.\n\nThere is a very significant performance problem with LISTEN/NOTIFY\nvia pg_listener: in any application that generates notifications at\na significant rate, pg_listener will accumulate dead tuples at that\nsame rate, and we will soon find ourselves wasting lots of time\nscanning through dead tuples. Frequent VACUUMs might help, but the\nwhole thing is really quite silly: why are we using a storage mechanism\nthat's designed entirely for *stable* storage of data to pass inherently\n*transient* signals? If the system crashes, we have absolutely zero\ninterest in the former contents of pg_listener (and indeed need to go\nto some trouble to get rid of them).\n\nSo if someone wants to undertake a revision of the listen/notify code,\nI think the first thing to do ought to be to throw away pg_listener\nentirely and develop some lower-overhead, shared-memory-based\ncommunication mechanism. You could do worse than to use the shared\ncache inval code as a model --- or perhaps even incorporate LISTEN\nsignaling into that mechanism. (Actually that seems like a good plan,\nso as not to use shared memory inefficiently by dedicating two separate\nmemory pools to parallel purposes.)\n\nIf you follow the SI model then NOTIFY messages would essentially be\nbroadcast to all backends, and whether any given backend pays attention\nto one is its own problem; no one else cares.\n\nA deficiency of the SI implementation (and probably anything else that\nrelies solely on shared memory) is that it can suffer from buffer\noverrun, since there's a fixed-size message pool. For the purposes\nof cache inval, we cope with buffer overrun by just invalidating\neverything in sight. It might be a workable tradeoff to cope with\nbuffer overrun for LISTEN/NOTIFY by reporting notifies on all conditions\ncurrently listened for. 
Assuming that overrun is infrequent, the net\nperformance gain from being able to use shared memory is probably worth\nthe occasional episode of wasted work.\n\n\nBTW, I would like to see a spec for this \"notify with parameter\" feature\nbefore it's implemented, not after. Exactly what semantics do you have\nin mind?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 22:41:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ? "
},
{
"msg_contents": "On Thu, 2002-03-21 at 22:41, Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > Solution: Create a new system catalog, pg_notify.\n> \n> It's not apparent to me why that helps much.\n\nWell, it solves the functional problem at hand -- this feature can now\nbe implemented. However, I agree with you that there are still problems\nwith NOTIFY and pg_listener, as you have outlined.\n\n> So if someone wants to undertake a revision of the listen/notify code,\n> I think the first thing to do ought to be to throw away pg_listener\n> entirely and develop some lower-overhead, shared-memory-based\n> communication mechanism. You could do worse than to use the shared\n> cache inval code as a model --- or perhaps even incorporate LISTEN\n> signaling into that mechanism. (Actually that seems like a good plan,\n> so as not to use shared memory inefficiently by dedicating two separate\n> memory pools to parallel purposes.)\n\nThat's very interesting. I need to read the code you're referring to\nbefore I can comment further, but I'll definitely look into this. That's\na good idea.\n\n> If you follow the SI model then NOTIFY messages would essentially be\n> broadcast to all backends,\n\nMy apologies, but what's the SI model?\n\n> A deficiency of the SI implementation (and probably anything else that\n> relies solely on shared memory) is that it can suffer from buffer\n> overrun, since there's a fixed-size message pool. For the purposes\n> of cache inval, we cope with buffer overrun by just invalidating\n> everything in sight. It might be a workable tradeoff to cope with\n> buffer overrun for LISTEN/NOTIFY by reporting notifies on all conditions\n> currently listened for.\n\nThis assumes that the NOTIFY condition we're waiting for is fairly\nroutine (e.g. \"table x is updated, refresh the cache\"). If a NOTIFY\nactually represents the occurrence of a non-trivial condition, this could\nbe a problem (e.g. 
\"the site crashed, page the sys-admin\", and the\nbuffer happens to overflow at 2 AM :-) ). However, it's questionable\nwhether that is an appropriate usage of NOTIFY.\n\n> BTW, I would like to see a spec for this \"notify with parameter\" feature\n> before it's implemented, not after.\n\nWhat information would you like to know?\n\n> Exactly what semantics do you have in mind?\n\nThe current syntax I'm using is:\n\n\tNOTIFY condition_name [ [WITH MESSAGE] 'my message' ];\n\nBut I'm open to suggestions for improvement.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "21 Mar 2002 23:20:32 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n>> BTW, I would like to see a spec for this \"notify with parameter\" feature\n>> before it's implemented, not after.\n\n> The current syntax I'm using is:\n> \tNOTIFY condition_name [ [WITH MESSAGE] 'my message' ];\n\nHm. How are you going to transmit that to the client side without\nchanging the FE/BE protocol? (While we will no doubt find reasons\nto change the protocol in the future, I'm not eager to force a protocol\nupdate right now; at least not without more reason than just NOTIFY\nparameters.) If we want to avoid a protocol break then it seems\nlike the value transmitted to the client has to be a single string.\n\nI guess we could say that what's transmitted is a single string in\nthe form\n\tcondition_name.additional_text\n(or pick some other delimiter instead of dot, but doesn't seem like\nit matters much). Pretty grotty though.\n\nAnother thought that comes to mind is that we could reinterpret the\nparameter of LISTEN as a pattern to match against the strings generated\nby NOTIFY --- then there's no need to draw a hard-and-fast distinction\nbetween condition name and parameter text; it's all in the eye of the\nbeholder. However it's tough to see how to do this without breaking\nbackwards compatibility at the syntax level --- you'd really want LISTEN\nto be accepting a string literal, rather than a name, to make this\nhappen.\n\nThat brings up the more general point that you'd want at least\nthe \"message\" part of NOTIFY to be computable as an SQL expression,\nnot just a literal. It might be entertaining to try to reimplement\nNOTIFY as something that's internally like a SELECT, just with a\nfunny data destination. I find this attractive because if it were\na SELECT then it could have (at least on the inside) a WHERE clause,\nwhich'd make it possible to handle NOTIFYs in conditional rules in\na less broken fashion than we do now.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 21 Mar 2002 23:40:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ? "
},
{
"msg_contents": "> > Exactly what semantics do you have in mind?\n>\n> The current syntax I'm using is:\n>\n> \tNOTIFY condition_name [ [WITH MESSAGE] 'my message' ];\n>\n> But I'm open to suggestions for improvement.\n\nHave you considered visiting the oracle site and finding their documentation\nfor their NOTIFY statement and making sure you're using compatible syntax?\nThey might have extra stuff as well.\n\nChris\n\n",
"msg_date": "Fri, 22 Mar 2002 12:41:19 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "On Thu, 2002-03-21 at 23:41, Christopher Kings-Lynne wrote:\n> > > Exactly what semantics do you have in mind?\n> >\n> > The current syntax I'm using is:\n> >\n> > \tNOTIFY condition_name [ [WITH MESSAGE] 'my message' ];\n> >\n> > But I'm open to suggestions for improvement.\n> \n> Have you considered visiting the oracle site and finding their documentation\n> for their NOTIFY statement and making sure you're using compatible syntax?\n\nOracle's implementation uses a completely different syntax to begin\nwith: it's called DBMS_ALERT.\n\n> They might have extra stuff as well.\n\nFrom a brief scan of their docs, it doesn't look like it. In fact, their\nimplementation seems to be worse than PostgreSQL's in at least one\nrespect: \"A waiting application is blocked in the database and cannot do\nany other work.\"\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "22 Mar 2002 02:06:36 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "> On Thu, 2002-03-21 at 23:41, Christopher Kings-Lynne wrote:\n> > > > Exactly what semantics do you have in mind?\n> > >\n> > > The current syntax I'm using is:\n> > >\n> > > \tNOTIFY condition_name [ [WITH MESSAGE] 'my message' ];\n> > >\n> > > But I'm open to suggestions for improvement.\n> >\n> > Have you considered visiting the oracle site and finding their\n> documentation\n> > for their NOTIFY statement and making sure you're using\n> compatible syntax?\n>\n> Oracle's implementation uses a completely different syntax to begin\n> with: it's called DBMS_ALERT.\n\nOK - not Oracle then. Didn't you say some other db did it - what about\ntheir syntax?\n\nChris\n\n",
"msg_date": "Fri, 22 Mar 2002 15:10:38 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "On Fri, 2002-03-22 at 06:40, Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> >> BTW, I would like to see a spec for this \"notify with parameter\" feature\n> >> before it's implemented, not after.\n> \n> > The current syntax I'm using is:\n> > \tNOTIFY condition_name [ [WITH MESSAGE] 'my message' ];\n> \n> Hm. How are you going to transmit that to the client side without\n> changing the FE/BE protocol? (While we will no doubt find reasons\n> to change the protocol in the future, I'm not eager to force a protocol\n> update right now; at least not without more reason than just NOTIFY\n> parameters.) If we want to avoid a protocol break then it seems\n> like the value transmitted to the client has to be a single string.\n> \n> I guess we could say that what's transmitted is a single string in\n> the form\n> \tcondition_name.additional_text\n> (or pick some other delimiter instead of dot, but doesn't seem like\n> it matters much). Pretty grotty though.\n> \n> Another thought that comes to mind is that we could reinterpret the\n> parameter of LISTEN as a pattern to match against the strings generated\n> by NOTIFY --- then there's no need to draw a hard-and-fast distinction\n> between condition name and parameter text; it's all in the eye of the\n> beholder.\n\nThat's what I suggested a few weeks ago in a well hidden message at the\nend of a reply to a somewhat related question ;)\n\n> However it's tough to see how to do this without breaking\n> backwards compatibility at the syntax level --- you'd really want LISTEN\n> to be accepting a string literal, rather than a name, to make this\n> happen.\n\nCan't we accept both - name for simple things and string for regexes?\n\n> That brings up the more general point that you'd want at least\n> the \"message\" part of NOTIFY to be computable as an SQL expression,\n> not just a literal.\n\nI think this should be any expression that returns text.\n\nI even wouldn't mind if I had to use explicit 
insert:\n\ninsert into pg_notify \nselect\n relname || '.' || cast(myobjectid as text),\n listenerpid\nfrom pg_listener\nwhere 'inv' ~ relname \n\nJust the delivery has to be automatic.\n\n> It might be entertaining to try to reimplement\n> NOTIFY as something that's internally like a SELECT, just with a\n> funny data destination.\n\nI thought that NOTIFY is implemented as an INSERT internally, no ?\n\n> I find this attractive because if it were\n> a SELECT then it could have (at least on the inside) a WHERE clause,\n> which'd make it possible to handle NOTIFYs in conditional rules in\n> a less broken fashion than we do now.\n\n--------------\nHannu\n\n\n",
"msg_date": "22 Mar 2002 15:11:25 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "On Thu, 2002-03-21 at 22:41, Tom Lane wrote:\n> It might be a workable tradeoff to cope with\n> buffer overrun for LISTEN/NOTIFY by reporting notifies on all conditions\n> currently listened for. Assuming that overrun is infrequent, the net\n> performance gain from being able to use shared memory is probably worth\n> the occasional episode of wasted work.\n\nI've thought about this some more, and I don't think that solution will\nbe sufficient.\n\nSpurious notifications seems like a pretty serious drawback, and I don't\nthink they solve anything. As I mentioned earlier, if the event a notify\nsignifies is non-trivial, this could have serious repercussions.\n\nBut more importantly, what happens when the buffer overruns and we\nnotify all backends? If a listening backend is in the middle of a\ntransaction when it is notified, it just sets a flag and goes back to\nprocessing (i.e. it doesn't clear the buffer).\n\nIf a listening backend is idle when it is notified, it checks the\nbuffer: but since this is normal behavior, any idle & notified backend\nwill have already checked the buffer! I don't see how the \"notify\neveryone\" scheme solves anything -- if a backend _could_ respond\nquickly, it also would already done so and we wouldn't have an overrun\nbuffer in the first place.\n\nIf we notify all backends and then clear the notification buffer,\nbackends in the midst of a transaction will check the buffer when they\nfinish their transaction but find it empty. Since this has the potential\nto destroy legitimate notifications, this is clearly not an option.\n\nUltimately, we're just coming up with kludges to work around a\nfundamental flaw (we're using a static buffer for a dynamically sized\nresource). (Am I the only one who keeps running into shared memory\nlimitations? :-)\n\nI can see two viable solutions:\n\n(1) Use the shared-memory-based buffer scheme you suggested. 
When a\nbackend executes a NOTIFY, it stores it until transaction commit (as in\ncurrent sources). When the transaction commits, it checks to see if\nthere would be a buffer overflow if it added the NOTIFY to the buffer --\nif so, it complains loudly to the log, and sleeps. When it awakens, it\nrepeats (try to add to buffer; else, sleep).\n\n(2) The pg_notify scheme I suggested. It only marginally improves the\nsituation, but it does preserve the behavior we have now.\n\nI think #1 isn't as bad as it might at first seem. The notification\nbuffer only overflows in a rare (and arguably broken) situation: when\nthe listening backend is in a (very) long-lived transaction, so that the\nnotification buffer is never checked and eventually fills up. If we\nstrongly suggest to application developers that they avoid this\nsituation in the first place (by not starting long-running transactions\nin listening backends), and we also make the size of the buffer\nconfigurable, this situation is tolerable.\n\nComments? Can anyone see a better solution? Is #1 reasonable behavior?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "22 Mar 2002 20:03:19 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> (1) Use the shared-memory-based buffer scheme you suggested. When a\n> backend executes a NOTIFY, it stores it until transaction commit (as in\n> current sources). When the transaction commits, it checks to see if\n> there would be a buffer overflow if it added the NOTIFY to the buffer --\n> if so, it complains loudly to the log, and sleeps. When it awakens, it\n> repeats (try to add to buffer; else, sleep).\n\nThis is NOT an improvement over the current arrangement. It implies\nthat a notification might be postponed indefinitely, thereby allowing\nlisteners to keep using stale data indefinitely.\n\nLISTEN/NOTIFY is basically designed for invalidate-your-cache\narrangements (which is what led into this discussion originally, no?).\nIn *any* caching arrangement, it is far better to have the occasional\nspurious data drop than to fail to drop stale data when you need to.\nAccordingly, a forced cache clear is an appropriate response to\noverrun of the communications buffer.\n\nI can certainly imagine applications where the messages are too\nimportant to trust to a not-fully-reliable transmission medium;\nbut I don't think that LISTEN/NOTIFY should be loaded down with\nthat sort of requirement. You can easily build 100% reliable\n(and correspondingly slow and expensive) communications mechanisms\nusing standard SQL operations. I think the design center for\nLISTEN/NOTIFY should be exactly the case of maintaining client-side\ncaches --- at least that's what I used it for when I had occasion\nto use it, several years ago when I first got involved with Postgres.\nAnd for that application, a cheap mechanism that never loses a\nnotification, but might occasionally over-notify, is just what you\nwant.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Mar 2002 00:13:21 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ? "
},
{
"msg_contents": "What if we used a combination of the two approaches? That is, when an\noverflow occurs, overflow into a table? That way, nothing is lost and\nspurious random events don't have to occur, and things are faster\nwhen overflows are not occurring. When the system gets too far behind,\nit simply overflows into the existing table until the system can\ncatch up. This way, we don't have to waste resources notifying listeners\nthat would otherwise not need to be notified.\n\nGreg\n\n\n\n\nOn Fri, 2002-03-22 at 23:13, Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > (1) Use the shared-memory-based buffer scheme you suggested. When a\n> > backend executes a NOTIFY, it stores it until transaction commit (as in\n> > current sources). When the transaction commits, it checks to see if\n> > there would be a buffer overflow if it added the NOTIFY to the buffer --\n> > if so, it complains loudly to the log, and sleeps. When it awakens, it\n> > repeats (try to add to buffer; else, sleep).\n> \n> This is NOT an improvement over the current arrangement. It implies\n> that a notification might be postponed indefinitely, thereby allowing\n> listeners to keep using stale data indefinitely.\n> \n> LISTEN/NOTIFY is basically designed for invalidate-your-cache\n> arrangements (which is what led into this discussion originally, no?).\n> In *any* caching arrangement, it is far better to have the occasional\n> spurious data drop than to fail to drop stale data when you need to.\n> Accordingly, a forced cache clear is an appropriate response to\n> overrun of the communications buffer.\n> \n> I can certainly imagine applications where the messages are too\n> important to trust to a not-fully-reliable transmission medium;\n> but I don't think that LISTEN/NOTIFY should be loaded down with\n> that sort of requirement. 
You can easily build 100% reliable\n> (and correspondingly slow and expensive) communications mechanisms\n> using standard SQL operations. I think the design center for\n> LISTEN/NOTIFY should be exactly the case of maintaining client-side\n> caches --- at least that's what I used it for when I had occasion\n> to use it, several years ago when I first got involved with Postgres.\n> And for that application, a cheap mechanism that never loses a\n> notification, but might occasionally over-notify, is just what you\n> want.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org",
"msg_date": "23 Mar 2002 10:51:36 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "Greg Copeland <greg@CopelandConsulting.Net> writes:\n> What if we used a combination of the two approaches? That is, when an\n> overflow occurs, overflow into a table?\n\nI think this is a really bad idea.\n\nThe major problem with it is that the overflow path would be complex,\ninfrequently exercised, and therefore almost inevitably buggy. (Look\nat all the problems we had for so long with SI overflow response. I'd\nstill not like to have to swear there are none left.)\n\nAlso, I do not think you could get away with merging listen/notify with\nthe system cache inval mechanism if you wanted to have table overflow for\nlisten/notify. SI is too low level --- to point out just one problem,\na new backend's access to the SI message queue has to be initialized\nlong before we are ready to do any table access. So you'd be requiring\ndedicated shared memory space just for listen/notify. That's a hard\nsell in my book.\n\n> That way, nothing is lost and spurious random events don't have to\n> occur.\n\nI think this argument is spurious. Almost any client-side caching\narrangement is going to have cases where it's best to issue a \"flush\neverything\" kind of event rather than expend the effort to keep track\nof exactly what has to be invalidated by particular kinds of changes.\nAs long as such changes are infrequent, you have better performance\nand better reliability by not trying to do the extra bookkeeping for\nexact invalidation. Why shouldn't the signal transport mechanism \nbe able to do the same thing?\n\nAlso, the notion that the NOTIFY mechanism can't be lossy misses the\nfact that you've got a perfectly good non-lossy mechanism at hand\nalready: user tables. The traditional way of using NOTIFY has been\nto stick the important data into tables and use NOTIFY simply to\ncue listeners to look in those tables. I don't foresee this changing;\nit'll simply be possible to give somewhat finer-grain notification of\nwhat/where to look. 
I don't think that forcing NOTIFY to have the\nsame kinds of semantics as SQL tables do is the right design approach.\nIMHO the only reason NOTIFY exists at all is to provide a simpler,\nhigher-performance communication pathway than you can get with tables.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Mar 2002 12:46:33 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ? "
},
{
"msg_contents": "On Sat, 2002-03-23 at 12:46, Tom Lane wrote:\n> Also, the notion that the NOTIFY mechanism can't be lossy misses the\n> fact that you've got a perfectly good non-lossy mechanism at hand\n> already: user tables. The traditional way of using NOTIFY has been\n> to stick the important data into tables and use NOTIFY simply to\n> cue listeners to look in those tables. I don't foresee this changing;\n> it'll simply be possible to give somewhat finer-grain notification of\n> what/where to look. I don't think that forcing NOTIFY to have the\n> same kinds of semantics as SQL tables do is the right design approach.\n> IMHO the only reason NOTIFY exists at all is to provide a simpler,\n> higher-performance communication pathway than you can get with tables.\n\nOkay, I agree (of course, it would be nice to have a more reliable\nNOTIFY mechanism, but I can't see of a way to implement a\nhigh-performance, reliable mechanism without at least one serious\ndrawback). And as you rightly point out, there are other methods for\npeople who need more reliability.\n\nSo the new behavior of NOTIFY should be: when the notifying backend\ncommits its transaction, the notification is stored in a shared memory\nbuffer of fixed size, and the listening backend is sent a SIGUSR2. If\nthe shared memory buffer is full, it is completely emptied. In the\nlistening backend's SIGUSR2 signal handler, a flag is set and the\nbackend goes back to its current transaction. When it becomes idle, it\nchecks the shared buffer: if it can't find any matching elements in the\nbuffer, it knows an overrun has occurred. When informing the front-end,\na notification that results from an overrun is signified by a\nnotification with a NULL message and with the PID of the notifying\nbackend sent to some constant (say, -1). 
This informs the front-end that\nan overrun has occurred, so it can take appropriate action.\n\nIs this behavior acceptable to everyone?\n\nI can see 1 potential problem: there is a race condition in the \"detect\nan overrun\" logic. If an overrun occurs and the buffer is flushed but\nthen another notification for one of the listening backends arrives, a\nbackend will only inform the front-end about the most recent\nnotification: there will be no indication that an overrun occurred, or\nthat there were other legitimate notifications in the buffer before the\noverrun. It would be nice to be able to tell clients 100% \"an overrun\njust occurred, be careful\", but apparently that's not even possible.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "23 Mar 2002 17:46:30 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> I can see 1 potential problem: there is a race condition in the \"detect\n> an overrun\" logic.\n\nOnly if you do it that way :-(. Take another look at the SI messaging\nlogic: it will *not* lose overrun notifications.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Mar 2002 18:03:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ? "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n \n> There is a very significant performance problem with LISTEN/NOTIFY\n> via pg_listener: in any application that generates notifications at\n> a significant rate, pg_listener will accumulate dead tuples at that\n> same rate, and we will soon find ourselves wasting lots of time\n> scanning through dead tuples. Frequent VACUUMs might help, but the\n\n\nThat's unfortunate, may be if backend could reuse tuple on updates could help?\n\n\n> whole thing is really quite silly: why are we using a storage mechanism\n> that's designed entirely for *stable* storage of data to pass inherently\n> *transient* signals? If the system crashes, we have absolutely zero\n\n\n\nBecause there is no other easy way to guarantee message delivery?\n\n\n> interest in the former contents of pg_listener (and indeed need to go\n> to some trouble to get rid of them).\n\n\nThere is no free beer :)\n\nRegards,\nMikhail Terekhov\n\n\n\n",
"msg_date": "Wed, 03 Apr 2002 10:03:01 -0500",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "On Wed, 3 Apr 2002, Mikhail Terekhov wrote:\n\n> \n> \n> Tom Lane wrote:\n> \n> \n> > There is a very significant performance problem with LISTEN/NOTIFY\n> > via pg_listener: in any application that generates notifications at\n> > a significant rate, pg_listener will accumulate dead tuples at that\n> > same rate, and we will soon find ourselves wasting lots of time\n> > scanning through dead tuples. Frequent VACUUMs might help, but the\n> \n> \n> That's unfortunate, may be if backend could reuse tuple on updates could help?\n\nThere is already a TODO item to address this. But row reuse is the wrong\nsolution to the problem. See below.\n\n> \n> \n> > whole thing is really quite silly: why are we using a storage mechanism\n> > that's designed entirely for *stable* storage of data to pass inherently\n> > *transient* signals? If the system crashes, we have absolutely zero\n> \n> \n> \n> Because there is no other easy way to guarantee message delivery?\n\nShared memory is much easier and, to all intents and purposes, as reliable\nfor this kind of usage. It is much faster and is the-right-way-to-do-it. \n\nI don't believe that the question 'what happens if there is a buffer\noverrun?' is a valid criticism of this approach. In the case of the\nbackend cache invalidation system, the backends just blow away their cache\nto be on the safe side. A buffer overrun (rare as it would be,\nconsidering the different usage patterns of the shared memory for\nnotification) would result in an elog(ERROR) from within the backend which\nhas attempted to execute the notification. After all, running out of\nmemory is an error in this case.\n\nGavin\n\n",
"msg_date": "Thu, 4 Apr 2002 01:15:47 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n>> Because there is no other easy way to guarantee message delivery?\n\n> Shared memory is much easier and, to all intents and purposes, as reliable\n> for this kind of usage. It is much faster and is the-right-way-to-do-it. \n\nRight. Since we do not attempt to preserve NOTIFY messages over a\nsystem crash, there's no good reason to keep the messages in a table.\nExcept for the problem that shared memory is of limited size.\nBut if we are willing to define the semantics in a way that allows\nbuffer overflow recovery, that can be dealt with.\n\n> A buffer overrun (rare as it would be,\n> considering the different usage patterns of the shared memory for\n> notification) would result in an elog(ERROR) from within the backend which\n> has attempted to execute the notification.\n\nHmm. That's a different way of attacking the overflow problem. I don't\nmuch care for it, but I can see that some applications might prefer this\nbehavior to cache-style overrun response (ie, issue forced NOTIFYs on\nall conditions). Maybe support both ways?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 10:37:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ? "
},
{
"msg_contents": "Gavin Sherry wrote:\n\n> On Wed, 3 Apr 2002, Mikhail Terekhov wrote:\n>>\n>>Tom Lane wrote:\n>>\n>>>There is a very significant performance problem with LISTEN/NOTIFY\n>>>via pg_listener: in any application that generates notifications at\n>>>a significant rate, pg_listener will accumulate dead tuples at that\n>>>same rate, and we will soon find ourselves wasting lots of time\n>>>scanning through dead tuples. Frequent VACUUMs might help, but the\n>>>\n>>That's unfortunate, may be if backend could reuse tuple on updates could help?\n> \n> There is already a TODO item to address this. But row reuse is the wrong\n> solution to the problem. See below.\n> \n\nIt is not a solution to the whole LISTEN/NOTIFY problem, but it is a\nsolution to the dead tuples accumulation.\n\n> \n>>\n>>>whole thing is really quite silly: why are we using a storage mechanism\n>>>that's designed entirely for *stable* storage of data to pass inherently\n>>>*transient* signals? If the system crashes, we have absolutely zero\n>>>\n>>Because there is no other easy way to guarantee message delivery?\n>>\n> \n> Shared memory is much easier and, to all intents and purposes, as reliable\n> for this kind of usage. It is much faster and is the-right-way-to-do-it. \n> \n\nThat highly depends on WHAT-you-want-to-do :)\nIf the new shared memory implementation will guarantee message delivery\nat the same degree as current implementation then it is the-right-way-to-do-it.\nIf not then let's not broke existing functionality! Let's implement it as an\nadditional functionality, say FASTNOTIFY or RIGHTNOTIFY ;)\n >\n\n> I don't believe that the question 'what happens if there is a buffer\n> overrun?' is a valid criticism of this approach. In the case of the\n> backend cache invalidation system, the backends just blow away their cache\n\n\nForgive my ignorance, you mean sending backend?\n\n\n> to be on the safe side. A buffer overrun (rare as it would be,\n\nRegards,\nMikhail\n\n",
"msg_date": "Wed, 03 Apr 2002 14:17:43 -0500",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n> LISTEN/NOTIFY is basically designed for invalidate-your-cache\n> arrangements (which is what led into this discussion originally, no?).\n\n\nWhy do you think so? Even if you are right and original design was\njust for invalidate-your-cache arrangements, current implementation\nhas much more functionality and can be used as a reliable message\ntransmission mechanism (we use it that way). There is no reason to\nbroke this reliability.\n\n\n> In *any* caching arrangement, it is far better to have the occasional\n> spurious data drop than to fail to drop stale data when you need to.\n> Accordingly, a forced cache clear is an appropriate response to\n> overrun of the communications buffer.\n> \n\nThere are not only caching arrangements out there!\nThis resembles me the difference between poll(2) and select(2).\nThey are both useful in different cases.\n\n\n> I can certainly imagine applications where the messages are too\n> important to trust to a not-fully-reliable transmission medium;\n\n\nThat is exactly what we are using LISTEN/NOTIFY for. We don't need\nseparate message passing system, we don't need waste system resources\npolling database and application is simpler and easier to maintain.\n\n\n> but I don't think that LISTEN/NOTIFY should be loaded down with\n> that sort of requirement. You can easily build 100% reliable\n\n\nThis functionality is already in Postgres. \n\nMay be it is not perfect but why remove it?\n\n\n> (and correspondingly slow and expensive) communications mechanisms\n> using standard SQL operations. 
I think the design center for\n\n\nCould you please elaborate on how to do that without polling?\n\n> LISTEN/NOTIFY should be exactly the case of maintaining client-side\n> caches --- at least that's what I used it for when I had occasion\n> to use it, several years ago when I first got involved with Postgres.\n> And for that application, a cheap mechanism that never loses a\n> notification, but might occasionally over-notify, is just what you\n> want.\n> \nAgain, client side cache is not the only one application of LISTEN/NOTIFY.\n\nIf we need a cheap mechanism for maintaining client side cache let's\nimplement one. Why remove existing functionality!\n\n",
"msg_date": "Wed, 03 Apr 2002 14:40:15 -0500",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "Mikhail Terekhov <terekhov@emc.com> writes:\n> Why do you think so? Even if you are right and original design was\n> just for invalidate-your-cache arrangements, current implementation\n> has much more functionality and can be used as a reliable message\n> transmission mechanism (we use it that way).\n\nIt is *not* reliable, at least not in the sense of \"the message is\nguaranteed to be delivered even if there's a system crash\". Which is\nthe normal meaning of \"reliable\" in SQL environments. If you want that\nlevel of reliability, you need to pass your messages by storing them\nin a regular table.\n\nLISTEN/NOTIFY can optimize your message passing by avoiding unnecessary\npolling of the table in the normal no-crash case. But they are not a\nsubstitute for having a table, and I don't see a reason to bog them down\nwith an intermediate level of reliability that isn't buying anything.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 15:11:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ? "
},
{
"msg_contents": "Tom Lane wrote:\n\n> It is *not* reliable, at least not in the sense of \"the message is\n> guaranteed to be delivered even if there's a system crash\". Which is\n> the normal meaning of \"reliable\" in SQL environments. If you want that\n\n\nThat is exactly what I mean by \"reliable\".\n\n\nPlease correct me if I'm wrong but the buffer overrun problem in the new\nLISTEN/NOTOFY mechanism means that it is perfectly possible that sending\nbackend may drop all or some of the pending NOTIFY messages in case of such\nan overrun. If this is the case then this new mechanism would be step\nbackward in terms of functionality relative to the current implementation.\n\nThere will be no guaranty even in a no-crash case.\n\n\n> level of reliability, you need to pass your messages by storing them\n> in a regular table.\n> \n\nThat is exactly what I do in my application. I store messages in a regular\ntable and then send a notify to other clients. But I'd like to have a\nguaranty that without system crash all my notifies will be delivered.\nI use this method when I need to send some additional information except\nthe notice's name. Another case is similar to your cache invalidation\nexample. The big difference is that I need to maintain a kind of cache for\nthe large number of big tables and I need to know promptly when these\ntables change. I can't afford to update this cache frequently enough in\ncase of polling. And when there is no NOTIFY delivery guaranty the only\nsolution is polling. Occasional delivery of NOTIFY messages may only improve\nin some sense the polling strategy. One can not rely on them.\n\n> LISTEN/NOTIFY can optimize your message passing by avoiding unnecessary\n> polling of the table in the normal no-crash case. But they are not a\n\n\nGuaranteed delivery in the normal no-crash case avoids polling\ncompletely in case of cache invalidation scenario. DB crash recovery is a\nvery complex task for an application. 
Sometimes recovery is not possible\nat all. But for cache invalidation a DB crash is nothing more than cache\nreinitialisation (you will get this crash notification without a LISTEN/NOTIFY\nmessage ;) Even stronger: you can't receive a crash notification with the\nLISTEN/NOTIFY mechanism).\n\nAnd again, this no-crash case guarantee is already here! We don't need to\ndo anything!\n\n\n> substitute for having a table, and I don't see a reason to bog them down\n\n\nSure, they are not a substitute, and I'm not the one who proposed to extend \n\nthe LISTEN/NOTIFY mechanism with additional information ;) This whole thread\nwas started to extend the LISTEN/NOTIFY mechanism to support optional messages.\nIf we agree that LISTEN/NOTIFY is not a substitute for having a table for\nsuch messages, then what is the purpose of reimplementing this feature with\na loss of functionality?\n\n > with an intermediate level of reliability that isn't buying anything.\n >\n\nIf you mean reliability in the no-crash case then it gives a lot - it eliminates\nthe need for polling completely. And once again, we already have this level of\nreliability.\n\nWhat exactly will PG gain with this new LISTEN/NOTIFY mechanism? If the benefit\nis so great, let's implement it as an additional feature, not as a\nreplacement of the existing one with a loss of functionality.\n\n\nRegards\nMikhail Terekhov\n\n\n",
"msg_date": "Tue, 09 Apr 2002 13:09:29 -0400",
"msg_from": "Mikhail Terekhov <terekhov@emc.com>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ?"
},
{
"msg_contents": "Mikhail Terekhov <terekhov@emc.com> writes:\n> Please correct me if I'm wrong but the buffer overrun problem in the new\n> LISTEN/NOTOFY mechanism means that it is perfectly possible that sending\n> backend may drop all or some of the pending NOTIFY messages in case of such\n> an overrun.\n\nYou would be guaranteed to get *some* notify. You wouldn't be\nguaranteed to receive the auxiliary info that's proposed to be added to\nthe basic message type; also you might get notify reports for conditions\nthat hadn't actually been signaled.\n\n> If this is the case then this new mechanism would be step\n> backward in terms of functionality relative to the current implementation.\n\nThe current mechanism is hardly perfect; it drops multiple occurrences\nof the same NOTIFY. Yes, the behavior would be different, but that\ndoesn't immediately translate to \"a step backwards\".\n\n> That is exactly what I do in my application. I store messages in a regular\n> table and then send a notify to other clients. But I'd like to have a\n> guaranty that without system crash all my notifies will be delivered.\n\nPlease re-read the proposal. It will not break your application.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 09 Apr 2002 15:42:55 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ? "
},
{
"msg_contents": "On Tue, 9 Apr 2002, Tom Lane wrote:\n\n> Mikhail Terekhov <terekhov@emc.com> writes:\n> > Please correct me if I'm wrong but the buffer overrun problem in the new\n> > LISTEN/NOTOFY mechanism means that it is perfectly possible that sending\n> > backend may drop all or some of the pending NOTIFY messages in case of such\n> > an overrun.\n> \n> You would be guaranteed to get *some* notify. You wouldn't be\n> guaranteed to receive the auxiliary info that's proposed to be added to\n> the basic message type; also you might get notify reports for conditions\n> that hadn't actually been signaled.\n\nI poked around the notify code and had a think about the ideas which have\nbeen put forward. I think the buffer overrun issue can be addressed by\nallowing users to define the importance of the notify they are making. Eg:\n\nNOTIFY HARSH <condition>\n\nIf there is to be a buffer overrun, all conditions are notified and the\nbuffer is, eventually, reset.\n\nNOTIFY SAFE <condition>\n\n(Yes, bad keywords). This on the other hand would check if there is to be\na buffer overrun and (after a SendPostmasterSignal(PMSIGNAL_WAKEN_CHILDREN) \nfails to reduce the buffer) it would invalidate the transaction with an\nelog(ERROR). This can be done since AtCommit_Notify() is run before\nRecordTransactionCommit().\n\nThis does not deal with recovery from a crash. The only way it could is by\nplugging the listen and notify signals into the xlog. This seems very\nmessy though.\n\nGavin\n\n",
"msg_date": "Wed, 10 Apr 2002 11:17:13 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: notification: pg_notify ? "
}
] |
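The overflow semantics the thread above converges on (a fixed-size shared buffer that, once full, is flushed so every listener must assume all of its conditions fired) can be modeled in a few lines. This is a minimal Python sketch of the design being debated, not PostgreSQL's actual implementation; the class, its capacity, and the drain semantics are illustrative assumptions.

```python
# Illustrative model of the proposed NOTIFY buffer (hypothetical names,
# not PostgreSQL source): on overflow the buffer is emptied and every
# listener is forced to treat all of its listened conditions as fired,
# trading occasional over-notification for never missing an event.

class NotifyBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = []          # pending (channel, payload) pairs
        self.overflowed = False  # set when the buffer had to be flushed

    def notify(self, channel, payload=None):
        if len(self.slots) >= self.capacity:
            # Overflow: cheaper to over-notify than to block the notifier
            # or postpone delivery indefinitely.
            self.slots.clear()
            self.overflowed = True
        self.slots.append((channel, payload))

    def drain(self, listened_channels):
        """Return notifications relevant to one listener. After an
        overflow, report every listened channel (with no payload) so
        stale client-side caches are unconditionally dropped."""
        if self.overflowed:
            return [(ch, None) for ch in sorted(listened_channels)]
        return [(ch, p) for ch, p in self.slots
                if ch in listened_channels]
```

A listener that receives the overflow indication reacts like the forced cache clear described in the thread: it refetches from the authoritative tables, which is also where any must-not-lose messages live.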
[
{
"msg_contents": "Neil,\n\nFollowing is an email I sent the other day detailing how this works.\n\nThe entry point to the underlying invalidation system is the heap\nmanipulation functions: heap_delete(), heap_update(). (I've just had a\nquick look at heap_insert() and cannot find where the cache modification\ntakes place)\n\nThese call RelationInvalidateHeapTuple() ->\n\tPrepareForTupleInvalidation() -> \n\t\tRegisterCatcacheInvalidation()/RegisterRelcacheInvalidation.\n\nThese feed linked lists which get processed at the end of the transaction\nas is detailed below. Clearly, this is a much better way of running the\nLISTEN/NOTIFY than storing them in the system.\n\nGavin\n\n---------- Forwarded message ----------\nDate: Wed, 20 Mar 2002 02:17:09 +1100 (EST)\nFrom: Gavin Sherry <swm@linuxworld.com.au>\nTo: Greg Copeland <greg@CopelandConsulting.Net>\nCc: mlw <markw@mohawksoft.com>, Jeff Davis <list-pgsql-hackers@dynworks.com>,\n PostgreSQL-development <pgsql-hackers@postgresql.org>\nSubject: Re: [HACKERS] Again, sorry, caching, (Tom What do you think: function\n\nOn 19 Mar 2002, Greg Copeland wrote:\n\n> On Tue, 2002-03-19 at 07:46, mlw wrote:\n> [snip]\n> \n> > Right now, the function manager can only return one value, or one set of values\n> > for a column. It should be possible, but require a lot of research, to enable\n> > the function manager to return a set of rows. If we could get that working, it\n> > could be fairly trivial to implement a cache as a contrib project. It would\n> > work something like this:\n> > \n> > select querycache(\"select * from mytable where foo='bar') ;\n> \n> Interesting concept...but how would you know when the cache has become\n> dirty? 
That would give you a set of rows...but I don't understand what\n> would let you know your result set is invalid?\n> \n> Perhaps: select querycache( foobar_event, \"select * from my table where\n> foo='bar'\" ) ; would automatically create a listen for you??\n\n\nPersonally, I think this method of providing query caching is very\nmessy. Why not just implement this alongside the system relation\ncache? This may be slightly more time-consuming but it will perform\nbetter and will be able to take advantage of Postgres's current MVCC.\n\nThere would be three times when the cache would be interacted with:\n\n1) addition of result sets to the cache\n\nExecRetrieve() would need to be modified to handle a\nprepare-for-cache-update kind of feature. This would involve adding the\ntuple table slot data into a linked list.\n\nAt the end of processing/transaction, and if the query was successful, the\nprepare-for-cache-update list could be processed by AtCommit_Cache() \n(called from CommitTransaction()) and the shared cache updated.\n\n2) attempt to get result set from cache\n\nBefore planning in postgres.c, test if the query will produce an already\ncached result set. If so, send the data off from cache.\n\n3) modification of underlying heap\n\nLike (1), produce a list inside the executor (ExecAppend(), ExecDelete(),\nExecReplace() -> RelationInvalidateHeapTuple() ->\nPrepareForTupleInvalidation()) which gets processed by\nAtEOXactInvalidationMessages(). This results in the affected entries being\npurged.\n\n---\n\nI'm not sure that cached results are a direction Postgres needs to move in. But\nif it does, I think this is a better way to do it (given that I may have\noverlooked something) than modifying the function manager (argh!).\n\nGavin\n\n\n\n\n",
"msg_date": "Fri, 22 Mar 2002 15:42:48 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Again, sorry, caching, (Tom What do you think: function"
}
] |
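Both threads treat "invalidate your cache" as the design center for LISTEN/NOTIFY: cache results client-side, drop the entries that depend on a table when a notification names it, and drop everything on a buffer overrun. The following Python sketch illustrates that client-side pattern; the class and its naive per-table dependency tracking are assumptions for illustration, not part of any proposed backend code.

```python
# Client-side query cache keyed by query text, invalidated by
# notifications. Hypothetical structure; a real client would drive
# on_notify() from LISTEN/NOTIFY (or the proposed shared-memory signals).

class QueryCache:
    def __init__(self):
        self.results = {}  # query text -> cached result rows
        self.depends = {}  # query text -> set of table names it reads

    def put(self, query, tables, rows):
        self.results[query] = rows
        self.depends[query] = set(tables)

    def get(self, query):
        # None means "not cached, go ask the server".
        return self.results.get(query)

    def on_notify(self, table=None):
        """table=None models a buffer overrun: flush everything,
        mirroring the forced-cache-clear response from the thread."""
        if table is None:
            self.results.clear()
            self.depends.clear()
            return
        stale = [q for q, deps in self.depends.items() if table in deps]
        for q in stale:
            del self.results[q]
            del self.depends[q]
```

Over-notification here costs only a refetch; failing to notify would leave stale rows in the cache, which is why the overflow path flushes unconditionally.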
[
{
"msg_contents": "pgman wrote:\n> Peter Eisentraut wrote:\n> > Bruce Momjian writes:\n> > \n> > > I am adding users and groups to pg_hba.conf.\n> > \n> > You know what would be cool?\n> > \n> > GRANT CONNECT ON mydb TO GROUP myfriends;\n> > \n> > and it rewrites pg_hba.conf accordingly.\n> > \n> > Just a thought...\n> \n> We are actually not that far away. If you create a group for each\n> database, you can grant access to just that group and add/delete users\n> from that group at will. My new pg_group code will do that.\n> \n> Now, as far as rewriting pg_hba.conf, that goes into an area where we\n> are not sure if the master connection information is in the file or in\n> the database. We also get into a chicken and egg case where we have to\n> have the database loaded to connect to it. I am interested to hear\n> where people think we should go with this.\n\nI have another idea. What if we had a default group for each database,\nlike pg_connect_{dbname}, and you can add/remove users from that group\nto grant/remove connection privileges? Sort of like a default +dbname\nin pg_hba.conf.\n\nIt sort of merges the group feature with pg_hba.conf connections.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 00:30:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I have another idea. What if we had a default group for each database,\n> like pg_connect_{dbname}, and you can add/remove users from that group\n> to grant/remove connection privileges?\n\nThat strikes me as a very ugly abuse of the privilege system. If you want\nto grant a privilege, use GRANT, not the name of a group.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 22 Mar 2002 01:27:52 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I have another idea. What if we had a default group for each database,\n> > like pg_connect_{dbname}, and you can add/remove users from that group\n> > to grant/remove connection privileges?\n> \n> That strikes me as a very ugly abuse of the privilege system. If you want\n> to grant a privilege, use GRANT, not the name of a group.\n\nWe could use GRANT and internally do it with per-database system groups.\nIt would fit into our system cleanly, and could be dumped/reloaded\ncleanly too. Unfortunately, that would give us two places to specify\nthe connecting users, pg_hba.conf and GRANT CONNECT. Is that a problem?\n\nIt would be tricky to grant access to only one db or all db's using\nGRANT. Not sure how that would be specified. This is where we start to\nget overlap and confusion because it doesn't behave just like\npg_hba.conf but also doesn't have the same flexibility of pg_hba.conf. \nI am still looking for ideas.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 01:32:28 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Unfortunately, that would give us two places to specify\n> the connecting users, pg_hba.conf and GRANT CONNECT. Is that a problem?\n\nYes. What if they conflict?\n\nI don't think GRANT CONNECT fits into our setup at all. I also doubt\nthat it will be needed very much once we have schemas.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Mar 2002 01:51:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Unfortunately, that would give us two places to specify\n> > the connecting users, pg_hba.conf and GRANT CONNECT. Is that a problem?\n> \n> Yes. What if they conflict?\n> \n> I don't think GRANT CONNECT fits into our setup at all. I also doubt\n> that it will be needed very much once we have schemas.\n\nWith groups, we are at least giving admins a way to do this that they\ndidn't have before. That may be enough.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 01:54:01 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Tom Lane writes:\n\n> I don't think GRANT CONNECT fits into our setup at all. I also doubt\n> that it will be needed very much once we have schemas.\n\nPeople have many times asked for a way to alter the connection settings\nfrom within the database. For instance, you add users in the database,\nbut then you need to go elsewhere to give that user any access. Consider\nGRANT CONNECT a built-in editor for pg_hba.conf. You don't have to\nactually store the information in two separate places.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 22 Mar 2002 11:41:29 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > I don't think GRANT CONNECT fits into our setup at all. I also doubt\n> > that it will be needed very much once we have schemas.\n> \n> People have many times asked for a way to alter the connection settings\n> from within the database. For instance, you add users in the database,\n> but then you need to go elsewhere to give that user any access. Consider\n> GRANT CONNECT a built-in editor for pg_hba.conf. You don't have to\n> actually store the information in two separate places.\n\nI don't know. Automatically modifying a manually maintained config file\nisn't too common a feature. One problem would be if you where modifying\nthe file in your editor and the backend rewrote the file.\n\nI think groups will give use the ability to add/remove connection from\nwithin the database. You just need to mention the group name in the\nconfig file. My original idea was to automatically identify some group\nname for each database but maybe that is too smart.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 12:19:15 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think groups will give use the ability to add/remove connection from\n> within the database. You just need to mention the group name in the\n> config file.\n\nGood point.\n\nThere's also the fact that people will probably start using one big\ndatabase divided into per-user schemas as soon as schema facilities\nare available. So getting fancy with pg_hba controls at this point\nmay well prove to be like building a better buggy whip; good in its\nown context but rendered irrelevant by events.\n\nI agree with Bruce that group-level access controls in pg_hba seem\nlike a sufficient answer at this point. If it turns out not, we\ncan always improve further in future releases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Mar 2002 12:24:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I don't know. Automatically modifying a manually maintained config file\n> isn't too common a feature. One problem would be if you where modifying\n> the file in your editor and the backend rewrote the file.\n\nThat's not different from you modifying the file in your editor and\nsomeone else doing the same thing at the same time. Yes, the concurrency\nissues are not trivial, but they can be solved.\n\n> I think groups will give use the ability to add/remove connection from\n> within the database. You just need to mention the group name in the\n> config file. My original idea was to automatically identify some group\n> name for each database but maybe that is too smart.\n\nYes, that is perfectly fine. I just want an additional interface that\nallows you to \"mention the group name in the config file\" while connected\nto the database.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 22 Mar 2002 12:34:38 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I don't know. Automatically modifying a manually maintained config file\n> > isn't too common a feature. One problem would be if you where modifying\n> > the file in your editor and the backend rewrote the file.\n> \n> That's not different from you modifying the file in your editor and\n> someone else doing the same thing at the same time. Yes, the concurrency\n> issues are not trivial, but they can be solved.\n\nWell, hopefully there is only one administrator at a time modifying\npg_hba.conf. Random user/group mods by any superuser seems like a much\nmore frequent occurance. Another thing is that people duing\ndatabase-level user/group changes may not even know they are modifying\npg_hba.conf.\n\n> > I think groups will give use the ability to add/remove connection from\n> > within the database. You just need to mention the group name in the\n> > config file. My original idea was to automatically identify some group\n> > name for each database but maybe that is too smart.\n> \n> Yes, that is perfectly fine. I just want an additional interface that\n> allows you to \"mention the group name in the config file\" while connected\n> to the database.\n\nI understand. I think the only way to do this cleanly is to have a\nper-database system group that can be created and modified inside the\ndatabase. We can even have an 'all' group to match pg_hba.conf's\ndatabase column 'all'. It is actually trivial to do this in the code\nwith my patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 12:38:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > I don't think GRANT CONNECT fits into our setup at all. I also doubt\n> > that it will be needed very much once we have schemas.\n> \n> People have many times asked for a way to alter the connection settings\n> from within the database. For instance, you add users in the database,\n> but then you need to go elsewhere to give that user any access. Consider\n> GRANT CONNECT a built-in editor for pg_hba.conf. You don't have to\n> actually store the information in two separate places.\n\nOK, Peter, I have implemented a 'samegroup' keyword in pg_hba.conf that\nworks just like sameuser, except it checks for user membership in a\ngroup that is the same name as the database. Two lines of code (plus\ndocs), lots of flexibility.\n\nSo, if people want to control everything from psql, then can just put\nsamegroup in the database column and create groups for each database. \nIf we want to extend this, we can add a GRANT CONNECT command that\noptionally creates the group and add/removes users from that group.\n\nThis is part of my pg_hba.conf overhaul patch that I am still working\non.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Mar 2002 11:16:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Problem with reloading groups in pg_hba.conf"
}
] |
[
{
"msg_contents": "\n> > Do we want the above syntax, or this syntax:\n> >\n> > ALTER TABLE blah ALTER COLUMN col SET NOT NULL;\n> > ALTER TABLE blah ALTER COLUMN col SET NULL;\n> \n> My only objection to the second command is that it's plain wrong. You\n> don't set anything to NULL, so don't make the command look like it.\n\nImho it would be nice if the command would look exactly like a create \ntable. It is simply convenient to use cut and paste :-) And I haven't \nseen a keyword yet, that would make it more descriptive, certainly not SET.\n\nALTER TABLE blah ALTER [COLUMN] col [int4] [NOT NULL] [DEFAULT 32];\nALTER TABLE blah ALTER [COLUMN] col [int8] [NULL] [DEFAULT 32];\nmaybe even [DEFAULT NULL] to drop the default :-)\n\nAndreas\n\n",
"msg_date": "Fri, 22 Mar 2002 15:42:16 +0100",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "Zeugswetter Andreas SB SD wrote:\n> \n> Imho it would be nice if the command would look exactly like a create\n> table. It is simply convenient to use cut and paste :-) And I haven't\n> seen a keyword yet, that would make it more descriptive, certainly not SET.\n> \n> ALTER TABLE blah ALTER [COLUMN] col [int4] [NOT NULL] [DEFAULT 32];\n> ALTER TABLE blah ALTER [COLUMN] col [int8] [NULL] [DEFAULT 32];\n> maybe even [DEFAULT NULL] to drop the default :-)\n> \n\nI like this one. I would not make COLUMN optional though.\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Fri, 22 Mar 2002 11:54:19 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at> writes:\n> ALTER TABLE blah ALTER [COLUMN] col [int4] [NOT NULL] [DEFAULT 32];\n> ALTER TABLE blah ALTER [COLUMN] col [int8] [NULL] [DEFAULT 32];\n\nThis cannot work unless you are prepared to turn a lot more keywords\ninto reserved words. In the CREATE syntax, the data type is not\noptional. In the above, there will be parse conflicts because the\nsystem won't be able to decide whether a type name is present or not.\n\nYou could possibly make it work if you were willing to include the word\nTYPE when trying to respecify column type:\n\nALTER TABLE blah ALTER [COLUMN] col [TYPE int4] [NOT NULL] [DEFAULT 32];\n\nAlso I agree with Fernando that trying to make the word COLUMN optional\nis likely to lead to conflicts.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Mar 2002 13:12:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "On Fri, Mar 22, 2002 at 01:12:09PM -0500, Tom Lane wrote:\n> \n> Also I agree with Fernando that trying to make the word COLUMN optional\n> is likely to lead to conflicts.\n\nAccording to the docs, COLUMN is _already_ optional at that point.\nAre the changes past that point going to cause different problems? Boy,\nparsers make my brain hurt.\n\nBTW, is NULLABLE so ugly that no one wanted to comment on it? It _is_\nan sql92 reserved keyword, and it's actual english grammar.\n\nRoss\n\n\n",
"msg_date": "Fri, 22 Mar 2002 13:00:37 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "\"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> BTW, is NULLABLE so ugly that no one wanted to comment on it?\n\nI kinda liked it, actually, if we were going to use the SET syntax.\nBut people seem to be focused in on this \"let's make it look like\nCREATE\" notion. I'm willing to wait and see how far that can be made\nto work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Mar 2002 14:07:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "On Fri, 2002-03-22 at 14:00, Ross J. Reedstrom wrote:\n> BTW, is NULLABLE so ugly that no one wanted to comment on it? It _is_\n> an sql92 reserved keyword, and it's actual english grammar.\n\nFWIW, I liked it the best of all the solutions that have been proposed\nso far.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "22 Mar 2002 14:12:02 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "Tom Lane wrote:\n> \"Ross J. Reedstrom\" <reedstrm@rice.edu> writes:\n> > BTW, is NULLABLE so ugly that no one wanted to comment on it?\n> \n> I kinda liked it, actually, if we were going to use the SET syntax.\n> But people seem to be focused in on this \"let's make it look like\n> CREATE\" notion. I'm willing to wait and see how far that can be made\n> to work.\n\nOK, how about:\n\n\tSET CONSTRAINT NOT NULL\n\nor\n\n\tDROP CONSTRAINT NOT NULL\n\nor simply:\n\n\tSET/DROP NOT NULL\n\nI think the problem with trying to get it look like CREATE TABLE is that\nthe plain NULL parameter to CREATE TABLE is meaningless and probably\nshould never be used. I remember at one point pg_dump output NULL in\nthe schema output and it confused many people. NOT NULL is the\nconstraint, and I think any solution to remove NOT NULL has to include\nthe NOT NULL keyword. I think this is also why SET NULL looks so bad. \n\"CREATE TABLE test (x int NULL)\" doesn't look great either. :-) What\nis that NULL doing there?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 14:20:09 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "...\n> \"CREATE TABLE test (x int NULL)\" doesn't look great either. :-) What\n> is that NULL doing there?\n\nWell, because NOT NULL *was* in the standard, and because one should be\nable to explicitly negate *that*. The alternative was\n\n CREATE TABLE test (x int NOT NOT NULL)\n\n:O\n\n - Thomas\n",
"msg_date": "Fri, 22 Mar 2002 12:53:41 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> ...\n> > \"CREATE TABLE test (x int NULL)\" doesn't look great either. :-) What\n> > is that NULL doing there?\n> \n> Well, because NOT NULL *was* in the standard, and because one should be\n> able to explicitly negate *that*. The alternative was\n> \n> CREATE TABLE test (x int NOT NOT NULL)\n> \n> :O\n\nYea, what I meant is that NULL doesn't look too clear in CREATE TABLE\neither.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 16:24:49 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "> You could possibly make it work if you were willing to include the word\n> TYPE when trying to respecify column type:\n>\n> ALTER TABLE blah ALTER [COLUMN] col [TYPE int4] [NOT NULL] [DEFAULT 32];\n>\n> Also I agree with Fernando that trying to make the word COLUMN optional\n> is likely to lead to conflicts.\n\nBut all the other ALTER TABLE/Alter Column commands have it optional...\n\nI have throught of at least two problems with changing nullability. The\nfirst is primary keys. I have to prevent people setting a column involved\nin a PK to null, right?\n\nThe second is DOMAINs - what if they change a NOT NULL domain in a colun\nto NULL? Shoudl I just outright prevent people from altering domain-based\ncolumns nullability>\n\nChris\n\n",
"msg_date": "Sat, 23 Mar 2002 17:36:29 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n> I have throught of at least two problems with changing nullability. The\n> first is primary keys. I have to prevent people setting a column involved\n> in a PK to null, right?\n\nProbably so.\n\n> The second is DOMAINs - what if they change a NOT NULL domain in a colun\n> to NULL? Shoudl I just outright prevent people from altering domain-based\n> columns nullability>\n\nI don't think you need worry about this. The prototype DOMAIN\nimplementation is broken anyway --- it should not be transposing\ndomain constraints into column constraints, but should keep 'em\nseparate. The column-level attnotnull setting should be independent\nof whether the domain enforces not-nullness or not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 23 Mar 2002 12:17:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "> OK, how about:\n>\n> \tSET CONSTRAINT NOT NULL\n>\n> or\n>\n> \tDROP CONSTRAINT NOT NULL\n>\n> or simply:\n>\n> \tSET/DROP NOT NULL\n>\n> I think the problem with trying to get it look like CREATE TABLE is that\n> the plain NULL parameter to CREATE TABLE is meaningless and probably\n> should never be used. I remember at one point pg_dump output NULL in\n> the schema output and it confused many people. NOT NULL is the\n> constraint, and I think any solution to remove NOT NULL has to include\n> the NOT NULL keyword. I think this is also why SET NULL looks so bad.\n> \"CREATE TABLE test (x int NULL)\" doesn't look great either. :-) What\n> is that NULL doing there?\n\nOK, I've decided to go with:\n\nALTER TABLE blah ALTER [COLUMN] col SET NOT NULL;\n\nand\n\nALTER TABLE blah ALTER [COLUMN] col DROP NOT NULL;\n\nThis is synchronous with the SET/DROP default stuff and is extensible in the\nfuture to fit in with column type changing.\n\nOf course, it can always be changed in the parser without affecting my code.\n\nChris\n\n",
"msg_date": "Mon, 25 Mar 2002 11:24:35 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "> ALTER TABLE blah ALTER [COLUMN] col SET NOT NULL;\n>\n> and\n>\n> ALTER TABLE blah ALTER [COLUMN] col DROP NOT NULL;\n>\n> This is synchronous with the SET/DROP default stuff and is\n> extensible in the\n> future to fit in with column type changing.\n>\n> Of course, it can always be changed in the parser without\n> affecting my code.\n\nAlso, in the future, once (if) the 'SET TYPE' column type changing function\nhas been implemented, we can create a meta-command to do it all in one\nstatement (for reliability and consistency for users). It could look like\nthis:\n\nALTER TABLE blah ALTER [COLUMN] col [SET TYPE type] [{SET | DROP} NOT NULL]\n[{SET | DROP} DEFAULT [default]]\n\nAnd a command like this should be able to just re-use already written code.\nHowever, some interdependency checks might be more efficient if their done\nbefore any changes are actually made! ie. Changing type to boolean and then\nsetting default to 'blah' in one statement, etc.\n\nChris\n\n",
"msg_date": "Mon, 25 Mar 2002 15:38:35 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL"
},
{
"msg_contents": "> Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n> > I have throught of at least two problems with changing nullability. The\n> > first is primary keys. I have to prevent people setting a\n> column involved\n> > in a PK to null, right?\n>\n> Probably so.\n\nWhat about temporary tables - is there any reason they shouldn't be able to\nmodify a temporary table?\n\nWhat about indices? Will twiddling the nullability break indices on a table\nin any way?\n\nAnd foreign keys - foreign keys only have to reference UNIQUE, right? The\nnullability isn't an issue?\n\nLastly - in a multicolumn primary key, does EVERY column in the key need to\nbe NOT NULL?\n\nChris\n\n",
"msg_date": "Tue, 26 Mar 2002 12:33:33 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> What about temporary tables - is there any reason they shouldn't be able to\n> modify a temporary table?\n\nI don't see one.\n\n> What about indices? Will twiddling the nullability break indices on a table\n> in any way?\n\nNo, not as long as you aren't changing existing data in the table.\n\n> And foreign keys - foreign keys only have to reference UNIQUE, right? The\n> nullability isn't an issue?\n\nNot sure about that --- Stephan or Jan will know.\n\n> Lastly - in a multicolumn primary key, does EVERY column in the key need to\n> be NOT NULL?\n\nYes, I believe so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Mar 2002 23:38:44 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL "
},
{
"msg_contents": "\nOn Tue, 26 Mar 2002, Christopher Kings-Lynne wrote:\n\n> > Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n> > > I have throught of at least two problems with changing nullability. The\n> > > first is primary keys. I have to prevent people setting a\n> > column involved\n> > > in a PK to null, right?\n> >\n> > Probably so.\n>\n> And foreign keys - foreign keys only have to reference UNIQUE, right? The\n> nullability isn't an issue?\n\nThat should be fine.\n\n> Lastly - in a multicolumn primary key, does EVERY column in the key need to\n> be NOT NULL?\n\nWell, it looks like the primary key will not be satisfied if any of the\nvalues are NULL.\n\nIn my SQL 92 draft, 11.7 Syntax Rules 3a says:\n If the <unique specification> specifies PRIMARY KEY, then let\n SC be the <search condition>:\n\n UNIQUE ( SELECT UCL FROM TN )\n AND\n ( UCL ) IS NOT NULL\n\n\n\n",
"msg_date": "Mon, 25 Mar 2002 21:59:06 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: SET NULL / SET NOT NULL "
}
] |
[
{
"msg_contents": "At Tom Lane's suggestion, I am adding a field to pg_control to hold the\ncompile-time configuration of timestamp and time storage.\n\nI notice that the compile-time locale settings are registered in that\nsame structure. And that they depend on NAMEDATALEN, which is *not* in\nthat structure. istm that it should be, and I'll go ahead and add it\nbarring objections. Comments?\n\n - Thomas\n",
"msg_date": "Fri, 22 Mar 2002 07:24:47 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "pg_control contents"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> I notice that the compile-time locale settings are registered in that\n> same structure. And that they depend on NAMEDATALEN,\n\nThey do? That would be fairly broken if so; sizeof(ControlFileData)\nhas to be independent of configurable settings, else you'll not get as\nfar as inspecting any of its contents (because the CRC check will fail\nif computed over the wrong number of bytes). But it looks to me like\nLOCALE_NAME_BUFLEN is hardwired at 128.\n\n> which is *not* in\n> that structure. istm that it should be, and I'll go ahead and add it\n> barring objections. Comments?\n\nPutting NAMEDATALEN into the struct does seem like a good idea, and\nperhaps FUNC_MAX_ARGS as well, since the system catalogs will be\nunreadable if these numbers are wrong. I think it's just an oversight\nthat we didn't put these values in pg_control to start with.\n\nDon't forget to bump PG_CONTROL_VERSION.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Mar 2002 10:42:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_control contents "
},
{
"msg_contents": "> > I notice that the compile-time locale settings are registered in that\n> > same structure. And that they depend on NAMEDATALEN,\n> They do? That would be fairly broken if so; sizeof(ControlFileData)\n> has to be independent of configurable settings, else you'll not get as\n> far as inspecting any of its contents (because the CRC check will fail\n> if computed over the wrong number of bytes). But it looks to me like\n> LOCALE_NAME_BUFLEN is hardwired at 128.\n\nAh. I should have looked before sending the mail; I was working on this\nseveral days ago...\n\n> Putting NAMEDATALEN into the struct does seem like a good idea, and\n> perhaps FUNC_MAX_ARGS as well, since the system catalogs will be\n> unreadable if these numbers are wrong. I think it's just an oversight\n> that we didn't put these values in pg_control to start with.\n\nOK, I'll add NAMEDATALEN, FUNC_MAX_ARGS, and LOCALE_NAME_BUFLEN. Any\nmore?\n\n> Don't forget to bump PG_CONTROL_VERSION.\n\nI'd like to change this to the yyyymmddN format used in the catalog\nversion number (it is currently an integer set to ~71). It should make\nit much easier to guess at code vintages from problem reports (if\nnothing else), right?\n\n - Thomas\n",
"msg_date": "Fri, 22 Mar 2002 08:23:31 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_control contents"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> OK, I'll add NAMEDATALEN, FUNC_MAX_ARGS, and LOCALE_NAME_BUFLEN. Any\n> more?\n\nNo, you're missing my point: there is zero value in adding\nLOCALE_NAME_BUFLEN as an explicit field in ControlFileData.\nThe entire physical layout of ControlFileData has to be implicit in\nPG_CONTROL_VERSION, because that is the only field we can reasonably\ncheck before computing the CRC --- and if we don't have the correct\nsizeof(ControlFileData), the CRC check will surely fail. Therefore,\nany change in LOCALE_NAME_BUFLEN would have to be signaled by bumping\nPG_CONTROL_VERSION, *not* by any change in some other field inside\nControlFileData. Look at the code that reads and validates pg_control\nin xlog.c.\n\nIf there are other configurable parameters that can affect the format of\nsystem catalogs, then by all means let's add 'em. Nothing comes to mind\nhowever.\n\n>> Don't forget to bump PG_CONTROL_VERSION.\n\n> I'd like to change this to the yyyymmddN format used in the catalog\n> version number (it is currently an integer set to ~71). It should make\n> it much easier to guess at code vintages from problem reports (if\n> nothing else), right?\n\nActually, I deliberately did not use yyyymmdd for PG_CONTROL_VERSION,\nbecause I wanted it to be absolutely not confusable with\nCATALOG_VERSION_NO. I took the then major version number as being\nprobably sufficient --- I do not foresee us revising pg_control's layout\nvery often, certainly less than once per release.\n\nIf you want to change it to yyyyN (eg, 20021 for this change) I won't\nobject. But let's not use a convention that makes it look just like\nCATALOG_VERSION_NO. I think that'd be a recipe for confusion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Mar 2002 12:17:53 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_control contents "
},
{
"msg_contents": "> > OK, I'll add NAMEDATALEN, FUNC_MAX_ARGS, and LOCALE_NAME_BUFLEN. Any\n> > more?\n> No, you're missing my point: there is zero value in adding\n> LOCALE_NAME_BUFLEN as an explicit field in ControlFileData.\n> The entire physical layout of ControlFileData has to be implicit in\n> PG_CONTROL_VERSION, because that is the only field we can reasonably\n> check before computing the CRC --- and if we don't have the correct\n> sizeof(ControlFileData), the CRC check will surely fail. Therefore,\n> any change in LOCALE_NAME_BUFLEN would have to be signaled by bumping\n> PG_CONTROL_VERSION, *not* by any change in some other field inside\n> ControlFileData. Look at the code that reads and validates pg_control\n> in xlog.c.\n\nGot all that the first time. You are saying what *should* happen wrt\nbumping version numbers when resizing those string fields, but that\nisn't the point at all. *If* that doesn't happen, we should gracefully\ngive as much information as possible about the *nature* of the problem.\nJust because the CRC fails doesn't mean that there are some clues inside\nthe file as to why, or what it would take to fix the problem.\n\nIf LOCALE_NAME_BUFLEN changes size between writing and reading the\ncontrol file, the CRC *could* still be calculated correctly. Currently\nthat is not the case, but that doesn't mean that we have an ideal\nimplementation at the moment.\n\n> If there are other configurable parameters that can affect the format of\n> system catalogs, then by all means let's add 'em. Nothing comes to mind\n> however.\n> >> Don't forget to bump PG_CONTROL_VERSION.\n> > I'd like to change this to the yyyymmddN format used in the catalog\n> > version number (it is currently an integer set to ~71). 
It should make\n> > it much easier to guess at code vintages from problem reports (if\n> > nothing else), right?\n> Actually, I deliberately did not use yyyymmdd for PG_CONTROL_VERSION,\n> because I wanted it to be absolutely not confusable with\n> CATALOG_VERSION_NO. I took the then major version number as being\n> probably sufficient --- I do not foresee us revising pg_control's layout\n> very often, certainly less than once per release.\n> If you want to change it to yyyyN (eg, 20021 for this change) I won't\n> object. But let's not use a convention that makes it look just like\n> CATALOG_VERSION_NO. I think that'd be a recipe for confusion.\n\nI don't agree that detailed information in the same style as other\ninformation is a guarantee of future trouble; in some circles consistant\napproaches are used to avoid other possible trouble. I'm not much\ninterested in fighting Yet Another Issue for this case. Will revert to\nthe current scheme of incremental integer number but would be willing to\ndiscuss this at some future date if it comes up again.\n\n - Thomas\n",
"msg_date": "Mon, 25 Mar 2002 10:10:08 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: pg_control contents"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> If LOCALE_NAME_BUFLEN changes size between writing and reading the\n> control file, the CRC *could* still be calculated correctly.\n\nThere might be some value to storing sizeof(ControlFileData) explicitly,\nso that that CRC calculation could be made. But I still see none in\nstoring LOCALE_NAME_BUFLEN explicitly. There is *no* difference between\n\"I changed LOCALE_NAME_BUFLEN at random\" and \"I added or reordered\nfields in the struct at random\". In either case the file has to be\ntreated as completely useless, because we don't really know what's in\nthere at what offset. And making either sort of change without bumping\nPG_CONTROL_VERSION is simply a mistake that we cannot afford to make.\nThere are plenty of places in PG where ill-considered hacking will have\nundesirable consequences; pg_control is just one more.\n\nThe value of storing sizeof(ControlFileData) explicitly would not be\nthat we could hope to extract data safely, but only that we could\ndistinguish \"corrupt data\" from \"good data in an incompatible format\"\nwith marginally more reliability than now. But both of these are and\nmust be failure cases, so it's really not that interesting to\ndistinguish between them.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Mar 2002 13:35:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_control contents "
}
] |
[
{
"msg_contents": "I am seeing conflicting usage of PG_BINARY_R and \"r\" in AllocateFile()\ncalls. Is there a logic of when to use one or the other, or is this\njust badly maintained code?\n\nThe significant difference is:\n\t\n\t#ifdef __CYGWIN__\n\t#define PG_BINARY\tO_BINARY\n\t#define PG_BINARY_R \"rb\"\n\t#define PG_BINARY_W \"wb\"\n\t#else\n\t#define PG_BINARY\t0\n\t#define PG_BINARY_R \"r\"\n\t#define PG_BINARY_W \"w\"\n\t#endif\n\nFor example, in 7.2 I see pg_hba.conf opened with \"r\" and pg_ident.conf\nopened with PG_BINARY_R.\n\nMy assumption is that text files should use \"r\" and binary files use\nPG_BINARY_R.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 13:43:32 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Use of PG_BINARY_R and \"r\""
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> My assumption is that text files should use \"r\" and binary files use\n> PG_BINARY_R.\n\nI believe that's correct. It seems kinda inconsistent though.\n\n> For example, in 7.2 I see pg_hba.conf opened with \"r\" and pg_ident.conf\n> opened with PG_BINARY_R.\n\nThe latter is clearly wrong, since pg_ident.conf is not binary.\n\nIs there any interest in defining\n\t#define PG_TEXT_R \"r\"\n\t#define PG_TEXT_W \"w\"\nso that AllocateFile is always called with one of this set of macros?\nOr is that just silly?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 22 Mar 2002 14:04:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Use of PG_BINARY_R and \"r\" "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > My assumption is that text files should use \"r\" and binary files use\n> > PG_BINARY_R.\n> \n> I believe that's correct. It seems kinda inconsistent though.\n> \n> > For example, in 7.2 I see pg_hba.conf opened with \"r\" and pg_ident.conf\n> > opened with PG_BINARY_R.\n> \n> The latter is clearly wrong, since pg_ident.conf is not binary.\n> \n> Is there any interest in defining\n> \t#define PG_TEXT_R \"r\"\n> \t#define PG_TEXT_W \"w\"\n> so that AllocateFile is always called with one of this set of macros?\n> Or is that just silly?\n\nI kind of like that. The problem I think is that we use \"r\" in some\nplaces so people assume it is just like ordinary open() args, which it\nis unless it is a binary file, where you have to use the macro. That\nseems kind of confusing.\n\nHowever, we don't do this very often so just cleaning up what we have\nmay be enough.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 14:12:40 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Use of PG_BINARY_R and \"r\""
}
] |
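[Editorial note] Tom's PG_TEXT_R/PG_TEXT_W suggestion above can be sketched as follows. This is illustrative only — the helper name is invented and the real macros live in the backend headers — but it shows why the distinction matters at all:

```c
#include <assert.h>
#include <string.h>

/* On platforms that distinguish text and binary streams (e.g. Cygwin),
 * binary files must be opened with the "b" modifier or CR/LF translation
 * will corrupt the data.  Elsewhere "rb" and "r" behave identically
 * (guaranteed since C89). */
#ifdef __CYGWIN__
#define PG_BINARY_R "rb"
#define PG_BINARY_W "wb"
#else
#define PG_BINARY_R "r"
#define PG_BINARY_W "w"
#endif

/* The macros proposed in the thread, so every AllocateFile() caller
 * states its intent explicitly instead of passing a bare "r". */
#define PG_TEXT_R "r"
#define PG_TEXT_W "w"

/* Hypothetical helper: pick an fopen() mode from an explicit flag. */
static const char *read_mode(int is_binary)
{
    return is_binary ? PG_BINARY_R : PG_TEXT_R;
}
```

With this convention, a call site like the pg_ident.conf one reads `AllocateFile(path, PG_TEXT_R)` and the binary/text choice is visible in the code rather than implied.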
[
{
"msg_contents": "I want to change the code that maintains the pg_shadow password cache\ninto the format used by pg_hba.conf so they can all use the same code. \nThis will allow me to cleanly use the pg_shadow code for pg_group cache\ntoo because the pg_shadow cache has binary searching for lookups. I\nwill merge the binary search stuff into the new code. New format would\nbe:\n\n\t\"user\"\t\"password\" \"valid until\"\n\nObjections?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 22 Mar 2002 18:34:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Having pg_shadow cache use pg_hba.conf cache code"
}
] |
[
{
"msg_contents": "Hello,\n\nI would be happy if it is possible to apply this patch to include/utils/datetime.h\nto target NetWare.\n\nOn NetWare TIMEZONE_GLOBAL is _timezone.\n\nbest wishes.\n\nUlrich Neumann\nNovell Worldwide Developer Support\n\nbegin 666 nw.diff\nM,C`Y+#(Q,6,R,#D-\"CP@(VEF9&5F(%]?0UE'5TE.7U\\-\"CP@(V1E9FEN92!4\nM24U%6D].15]'3$]\"04P@7W1I;65Z;VYE#0H\\(\"-E;'-E#0HM+2T-\"CX@(VEF\nM(\"%D969I;F5D*%]?0UE'5TE.7U\\I(\"8F(\"%D969I;F5D*$Y?4$Q!5%].3$TI\nM#0HR,3)A,C$Q+#(Q,@T*/B`C96QS90T*/B`C9&5F:6YE(%1)345:3TY%7T=,\n03T)!3\"!?=&EM97IO;F4-\"@``\n`\nend\n",
"msg_date": "24 Mar 2002 06:55:00 +0200",
"msg_from": "Ulrich Neumann <u_neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "patch for include/utils/datetime.h to target NetWare"
},
{
"msg_contents": "> I would be happy if it is possible to apply this patch to include/utils/datetime.h\n> to target NetWare.\n> On NetWare TIMEZONE_GLOBAL is _timezone.\n\nWithout a context diff, it is not entirely clear to me what *exactly*\nthe patch is doing since I'm working with an already-patched datetime.h. \n\nHowever, if you need TIMEZONE_GLOBAL defined as \"_timezone\" for netware\n(as it is already defined for cygwin), then couldn't the patch simply\nchange\n\n #ifdef __CYGWIN__\n\nto\n\n #if defined(__CYGWIN__) || defined(N_PLAT_NLM)\n\nleaving everything else the same? Let me know and I'll go ahead and\npatch it (unless of course, others on netware see a problem)...\n\n - Thomas\n",
"msg_date": "Sat, 23 Mar 2002 23:13:07 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: patch for include/utils/datetime.h to target NetWare"
},
{
"msg_contents": "Hi Thomas,\n\nthanks for your response.\nYour assumption is correct. All I need is what you described:\n#ifdef __CYGWIN__\nto\n#if defined(__CYGWIN__) || defined(N_PLAT_NLM)\n\nSorry that the patchfile didn't include the context. I will make sure that this\ndoesn't happen any more.\n\nUlrich Neumann\nNovell Worldwide Developer Support.\n\n\n>>> Thomas Lockhart<lockhart@fourpalms.org> 24.03.2002 17:53:50 >>>\n> I would be happy if it is possible to apply this patch to include/utils/datetime.h\n> to target NetWare.\n> On NetWare TIMEZONE_GLOBAL is _timezone.\n\nWithout a context diff, it is not entirely clear to me what *exactly*\nthe patch is doing since I'm working with an already-patched datetime.h. \n\nHowever, if you need TIMEZONE_GLOBAL defined as \"_timezone\" for netware\n(as it is already defined for cygwin), then couldn't the patch simply\nchange\n\n #ifdef __CYGWIN__\n\nto\n\n #if defined(__CYGWIN__) || defined(N_PLAT_NLM)\n\nleaving everything else the same? Let me know and I'll go ahead and\npatch it (unless of course, others on netware see a problem)...\n\n - Thomas\n\n---------------------------(end of broadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html \n\n\n",
"msg_date": "Mon, 25 Mar 2002 06:27:00 +0200",
"msg_from": "Ulrich Neumann <u_neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "Antw: Re: patch for include/utils/datetime.h to target NetWare"
},
{
"msg_contents": "> Your assumption is correct. All I need is what you described:\n> #ifdef __CYGWIN__\n> to\n> #if defined(__CYGWIN__) || defined(N_PLAT_NLM)\n\nGreat. I'll include that in a set of patches I'm working on.\n\n> Sorry that the patchfile didn't include the context. I will make sure that this\n> doesn't happen any more.\n\nNo problem. Thanks for the patch!\n\n - Thomas\n",
"msg_date": "Mon, 25 Mar 2002 07:54:08 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Antw: Re: patch for include/utils/datetime.h to target NetWare"
}
] |
[
{
"msg_contents": "I've got patches to enable storage of date/time values as integers\nrather than as floating point numbers, as discussed earlier. 64-bit\nintegers are (afaik) not available on every platform we want to support,\nand there *may* be a performance difference depending on the processor\nand compiler involved (I haven't run focused timing tests yet), so we\nwill want the original floating point implementation to continue to be\navailable.\n\nFor development, I've just defined HAVE_INT64_TIMESTAMP (probably should\nbecome HAVE_INT64_DATETIMES or something like that?) in my\nMakefile.custom, but of course this should become some combination of\ncommand-line option and perhaps a consistancy check in the configuration\nprocess. afaict configure.in and pg_config.h.in are the interesting\nfiles for this.\n\nI've got questions:\n\n1) For the ./configure command line option, does\n--enable-integer-datetimes seem OK, or could someone suggest a better\nchoice? --enable-integer-datetimes is a longer string than any of the\nother options, so a less wordy possibility may be better.\n\n2) Actually using integer date/time storage depends on whether 64-bit\nintegers are really supported. So although they may be enabled, it may\nnot be supported. Should ./configure error out at that point, or should\nthings gracefully (silently?) configure to use the existing floating\npoint implementation? See (3).\n\n3) ./configure checks for several different styles of 64-bit integers to\nsee what is actually supported. But it does not control whether an\noverall \"yes we have some kind of 64-bit integer\" flag gets defined. So\ndetecting an inconsistancy in enabling 64-bit integer storage on\nmachines without 64-bit integers seems to be out of the scope of\n./configure. Correct?\n\nI could keep asking variations on these same questions, so feel free to\nadd answers to questions which weren't asked ;)\n\n - Thomas\n",
"msg_date": "Sun, 24 Mar 2002 09:58:39 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Configuring for 64-bit integer date/time storage?"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> 1) For the ./configure command line option, does\n> --enable-integer-datetimes seem OK, or could someone suggest a better\n> choice? --enable-integer-datetimes is a longer string than any of the\n> other options, so a less wordy possibility may be better.\n\nI'd suggest --enable-bigint-datetimes, or possibly\n--enable-int8-datetimes, to make it clearer that 64-bit-int support\nis needed.\n\nA more interesting question is which way should be the default; might it\nmake sense to default to bigint datetimes on machines where it's\npossible to do so? Without performance info it's probably too soon\nto decide, though.\n\n> 2) Actually using integer date/time storage depends on whether 64-bit\n> integers are really supported. So although they may be enabled, it may\n> not be supported. Should ./configure error out at that point, or should\n> things gracefully (silently?) configure to use the existing floating\n> point implementation? See (3).\n\nNo strong feeling here, but I suspect Peter will say that it should\nerror out.\n\n> 3) ./configure checks for several different styles of 64-bit integers to\n> see what is actually supported. But it does not control whether an\n> overall \"yes we have some kind of 64-bit integer\" flag gets defined. So\n> detecting an inconsistancy in enabling 64-bit integer storage on\n> machines without 64-bit integers seems to be out of the scope of\n> ./configure. Correct?\n\nNo. configure must define either HAVE_LONG_INT_64 or\nHAVE_LONG_LONG_INT_64 to get the int8 support to work. If neither gets\ndefined, you should conclude that bigint datetimes won't work.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 24 Mar 2002 13:09:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Configuring for 64-bit integer date/time storage? "
},
{
"msg_contents": "> I'd suggest --enable-bigint-datetimes, or possibly\n> --enable-int8-datetimes, to make it clearer that 64-bit-int support\n> is needed.\n\nHmm. The *feature* it is enabling is \"consistant precision through the\nrange of allowed values\". I'd rather move in the direction of\nqualitative description, than toward the direction of explicit\nunderlying implementation.\n\n> A more interesting question is which way should be the default; might it\n> make sense to default to bigint datetimes on machines where it's\n> possible to do so? Without performance info it's probably too soon\n> to decide, though.\n\nRight. The default could be changed sometime later.\n\n...\n> No strong feeling here, but I suspect Peter will say that it should\n> error out.\n\nThere is code in c.h which links the HAVE_xxx definitions to compiler\ntypes, etc. But is *also* defines INT64_IS_BUSTED as a catchall for\nhaving not found any int64 types at all. And it isn't until this point\nthat anyone knows for sure that there is not an int64 type. At least in\nthe current code, at least afaict. And at that point, we don't allow\nthings to \"error out\" in the sense of allowing \"#error\" or something\nsimilar.\n\n> No. configure must define either HAVE_LONG_INT_64 or\n> HAVE_LONG_LONG_INT_64 to get the int8 support to work. If neither gets\n> defined, you should conclude that bigint datetimes won't work.\n\nSure. The problem is in doing that in two different places for two\ndifferent reasons, where it *should* happen in one place (or at least at\nsimilar stages of the process). The possibilities you have raised don't\nhave that happening so istm that we are still missing something. As you\npoint out, Peter will probably have a suggestion ;)\n\n - Thomas\n",
"msg_date": "Sun, 24 Mar 2002 10:45:33 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: Configuring for 64-bit integer date/time storage?"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> I've got patches to enable storage of date/time values as integers\n> rather than as floating point numbers, as discussed earlier.\n\nI'd like to know first what the overall plan for this feature is. If it\nis to make the date/time values \"better\" all around, i.e., you get exact\narithmetic and comparable range and precision, and the same or better\nspeed, then I'd vote for making it the default and offering the old\nimplementation as a (silent?) fallback. If, on the other hand, it is a\nspace vs. time vs. whatever tradeoff then we'd really need to see the\nnumbers first to decide where to go with it.\n\nThe other day I offered the \"rules of the game\" for configure options, one\nof which was that an option should not replace one behavior by another,\nso that binary packagers can make neutral decisions about which options to\nbuild with. The last thing we'd want to happen is that every operating\nsystem distribution comes with a different timestamp implementation. That\nwould be quite a chaos.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 24 Mar 2002 21:22:12 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Configuring for 64-bit integer date/time storage?"
},
{
"msg_contents": "> > I've got patches to enable storage of date/time values as integers\n> > rather than as floating point numbers, as discussed earlier.\n> I'd like to know first what the overall plan for this feature is.\n\nMy feeling is that the int64 implementation will be \"better\". But I\n*don't* know how many platforms support that data type, and I'm not yet\nsure if there will be a measurable performance difference on (some?)\nplatforms.\n\nI'd expect that we could form a consensus on the best default over the\nnext couple of months. In either case, the option should be selectable,\notherwise some of us would have trouble testing the feature set, right?\n\nDid you catch the questions on dealing with HAVE_LONG_INT_64,\nHAVE_LONG_LONG_INT_64, and INT64_IS_BUSTED? I'd like to be able to\nenable/disable integer date/time storage in configure, so some notion of\n\"do I have some kind of 64 bit integer?\" seems to be desirable in\nconfigure itself.\n\n - Thomas\n",
"msg_date": "Sun, 24 Mar 2002 19:07:23 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": true,
"msg_subject": "Re: Configuring for 64-bit integer date/time storage?"
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> Did you catch the questions on dealing with HAVE_LONG_INT_64,\n> HAVE_LONG_LONG_INT_64, and INT64_IS_BUSTED? I'd like to be able to\n> enable/disable integer date/time storage in configure, so some notion of\n> \"do I have some kind of 64 bit integer?\" seems to be desirable in\n> configure itself.\n\nIs this what you want?\n\nif test \"$enable_integer_datetimes\" = yes; then\n if test \"$HAVE_LONG_LONG_INT64\" != yes && test \"$HAVE_LONG_INT64\" != yes; then\n AC_MSG_ERROR([integer datetimes not available due to lack of 64-bit integer type])\n fi\nfi\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 26 Mar 2002 12:59:07 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Configuring for 64-bit integer date/time storage?"
}
] |
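[Editorial note] The "consistent precision through the range of allowed values" point Thomas makes can be demonstrated numerically. The sketch below assumes float8 timestamps store seconds since an epoch and int64 timestamps store whole microseconds (both helper names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* With float8 seconds, the spacing between adjacent doubles grows with
 * distance from the epoch; a few centuries out it exceeds a microsecond,
 * so small increments are silently absorbed.  With int64 microseconds,
 * adding one microsecond is always an exact, observable change. */

static double float_ts_add_usec(double seconds, double usec)
{
    return seconds + usec * 1e-6;
}

static int64_t int_ts_add_usec(int64_t usecs, int64_t usec)
{
    return usecs + usec;
}
```

This is the qualitative tradeoff behind the option name discussion: the int64 flavor buys uniform precision, at the cost of requiring a working 64-bit integer type from configure.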
[
{
"msg_contents": "\nfor whom? you? robot?\n\nOn Sun, 24 Mar 2002, Oleg Bartunov wrote:\n\n> Marc,\n>\n> something is strange. No messages from any pg mailing lists !\n> Please, check subscribers list.\n>\n> \tOleg\n>\n> On Tue, 19 Mar 2002, Marc G. Fournier wrote:\n>\n> >\n> > do you know if any of the other lists are missing? or is it just -hackers\n> > that got lost?\n> >\n> > On Tue, 19 Mar 2002, Oleg Bartunov wrote:\n> >\n> > > Thanks.\n> > > I'm getting messages now.\n> > >\n> > > \tOleg\n> > > On Tue, 19 Mar 2002, Marc G. Fournier wrote:\n> > >\n> > > >\n> > > > Must have gotten unsubscribed from the list at some point ... just\n> > > > re-added it now ...\n> > > >\n> > > > On Tue, 19 Mar 2002, Oleg Bartunov wrote:\n> > > >\n> > > > > Marc,\n> > > > >\n> > > > > I see no postings to hackers come to fts.postgresql.org for more than a\n> > > > > month. Seems there is a problem, because I also didn't get *any* messages\n> > > > > from psql mailing lists. I was subscribed to lists since 1995 and\n> > > > > want to stay in there. 
Could you please check the problem.\n> > > > >\n> > > > > I was patient because I thought developers get timeout after\n> > > > > 7.2 release.\n> > > > >\n> > > > > \tRegards,\n> > > > > \t\tOleg\n> > > > >\n> > > > > _____________________________________________________________\n> > > > > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > > > > Sternberg Astronomical Institute, Moscow University (Russia)\n> > > > > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > > > > phone: +007(095)939-16-83, +007(095)939-23-83\n> > > > >\n> > > > >\n> > > > >\n> > > >\n> > >\n> > > \tRegards,\n> > > \t\tOleg\n> > > _____________________________________________________________\n> > > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > > Sternberg Astronomical Institute, Moscow University (Russia)\n> > > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > > phone: +007(095)939-16-83, +007(095)939-23-83\n> > >\n> > >\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n>\n\n",
"msg_date": "Sun, 24 Mar 2002 16:33:21 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Re: Problems with mailing list"
}
] |
[
{
"msg_contents": "I was browsing through SQL92 and I noticed this, when discussing the\nCREATE VIEW syntax:\n\n\"5) Any <table name> that is specified in the <query expression> shall\nbe different from the <table name> of any <temporary table\ndeclaration>.\"\n\n(<query expression> is the defintion of the view. This basically says\nthat you're not allowed to create views on temp tables.)\n\nCurrently, PostgreSQL allows this -- when the session ends and the temp\ntable is dropped, an subsequent queries on the view fail. Is this the\noptimal behavior?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "24 Mar 2002 16:59:32 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": true,
"msg_subject": "views on temp tables"
},
{
"msg_contents": "Neil Conway wrote:\n> I was browsing through SQL92 and I noticed this, when discussing the\n> CREATE VIEW syntax:\n> \n> \"5) Any <table name> that is specified in the <query expression> shall\n> be different from the <table name> of any <temporary table\n> declaration>.\"\n> \n> (<query expression> is the defintion of the view. This basically says\n> that you're not allowed to create views on temp tables.)\n> \n> Currently, PostgreSQL allows this -- when the session ends and the temp\n> table is dropped, an subsequent queries on the view fail. Is this the\n> optimal behavior?\n\nClearly not optimal. TODO has:\n\n\t* Allow temporary views\n\nMy idea would be to make any view temporary that relies on a temp table\n--- throw a NOTICE to the user when they create it so they know it is\ntemporary. We could allow TEMP on CREATE VIEW but there seems little\nreason for that, though we could allow TEMP views on real tables, so I\nguess we would need that option too.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 24 Mar 2002 17:35:15 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: views on temp tables"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> Currently, PostgreSQL allows this -- when the session ends and the temp\n> table is dropped, an subsequent queries on the view fail. Is this the\n> optimal behavior?\n\nWell, I think it's better than refusing views on temp tables, as the\nspec would have us do.\n\nThe \"correct\" behavior is probably to drop such views on backend exit.\nPossibly we should invent the notion of temp views, and disallow\nreferences from non-temp views to temp tables. That seems like it\nmight be less likely to cause unpleasant surprises than just silently\ndropping views that reference temp tables.\n\nIn any case I'd say this is something best tackled in the context of\ngeneralized reference tracking ... which is something we know we need,\nbut no one's stepped up to make it happen yet. I don't think this\nparticular problem is bad enough to warrant a special-purpose\nimplementation mechanism.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Mar 2002 03:04:05 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: views on temp tables "
},
{
"msg_contents": "Tom Lane wrote:\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > Currently, PostgreSQL allows this -- when the session ends and the temp\n> > table is dropped, an subsequent queries on the view fail. Is this the\n> > optimal behavior?\n> \n> Well, I think it's better than refusing views on temp tables, as the\n> spec would have us do.\n> \n> The \"correct\" behavior is probably to drop such views on backend exit.\n> Possibly we should invent the notion of temp views, and disallow\n> references from non-temp views to temp tables. That seems like it\n> might be less likely to cause unpleasant surprises than just silently\n> dropping views that reference temp tables.\n\nTODO updated with:\n\t\n\t* Allow temporary views\n\t* Require view using temporary tables to be temporary views\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 25 Mar 2002 15:55:28 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: views on temp tables"
}
] |
[
{
"msg_contents": "(posted last week to pgsql-general; no responses there, so I'm seeing if\nanyone here can contribute. Thanks!)\n\n\nI'm working on a site hosted in a BSD Jail; they have MySQL installed but,\nof course, I'd rather use PostgreSQL.\n\nIt installs fine but can't initdb; get the following:\n\nFixing permissions on existing directory /usr/local/pgsql/data... ok\ncreating directory /usr/local/pgsql/data/base... ok\ncreating directory /usr/local/pgsql/data/global... ok\ncreating directory /usr/local/pgsql/data/pg_xlog... ok\ncreating directory /usr/local/pgsql/data/pg_clog... ok\ncreating template1 database in /usr/local/pgsql/data/base/1...\nIpcSemaphoreCreate: semget(key=1, num=17, 03600) failed:\nFunction not implemented\n\nEarlier message traffic suggests that SYSV IPC has not been fixed to run\nunder BSD Jails.\n\nThe last time this was raised was ~1 year ago. Has there been any changes\nhere that anyone knows of? Any hope of getting PG running in our jail? (Or,\nalternatively, can PG run on the real machine's processes so that the\ndifferent jails can access it?)\n\nAny help would be appreciated!\n\nThanks.\n\nJoel BURTON | joel@joelburton.com | joelburton.com | aim: wjoelburton\nKnowledge Management & Technology Consultant\n\n",
"msg_date": "Mon, 25 Mar 2002 10:51:19 -0500",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": true,
"msg_subject": "initdb dies during IpcSemaphoreCreate under BSD jail"
},
{
"msg_contents": "On Mon, 25 Mar 2002, Joel Burton wrote:\n\n> (posted last week to pgsql-general; no responses there, so I'm seeing if\n> anyone here can contribute. Thanks!)\n>\n>\n> I'm working on a site hosted in a BSD Jail; they have MySQL installed but,\n> of course, I'd rather use PostgreSQL.\n>\n> It installs fine but can't initdb; get the following:\n>\n> Fixing permissions on existing directory /usr/local/pgsql/data... ok\n> creating directory /usr/local/pgsql/data/base... ok\n> creating directory /usr/local/pgsql/data/global... ok\n> creating directory /usr/local/pgsql/data/pg_xlog... ok\n> creating directory /usr/local/pgsql/data/pg_clog... ok\n> creating template1 database in /usr/local/pgsql/data/base/1...\n> IpcSemaphoreCreate: semget(key=1, num=17, 03600) failed:\n> Function not implemented\n>\n> Earlier message traffic suggests that SYSV IPC has not been fixed to run\n> under BSD Jails.\n>\n> The last time this was raised was ~1 year ago. Has there been any changes\n> here that anyone knows of? Any hope of getting PG running in our jail? (Or,\n> alternatively, can PG run on the real machine's processes so that the\n> different jails can access it?)\n\nI don't know about running PG in a jail, but if you have it running\non the parent or real machine the jails can access it just fine but\nnot as localhost.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Mon, 25 Mar 2002 11:02:53 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb dies during IpcSemaphoreCreate under BSD jail"
},
{
"msg_contents": "You need to get your provider to set the sysctl jail.sysvipc_allowed to\n1 in the host environment. If they're not willing to do this for you, we\nprovide this feature on our servers, and also have a shared Postgres\ndatabase you can use.\n\n--\nAlastair D'Silva B. Sc. mob: 0413 485 733\nNetworking Consultant\nNew Millennium Networking http://www.newmillennium.net.au \n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org \n> [mailto:pgsql-hackers-owner@postgresql.org] On Behalf Of \n> Vince Vielhaber\n> Sent: Tuesday, 26 March 2002 3:03 AM\n> To: Joel Burton\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] initdb dies during IpcSemaphoreCreate \n> under BSD jail\n> \n> \n> On Mon, 25 Mar 2002, Joel Burton wrote:\n> \n> > (posted last week to pgsql-general; no responses there, so \n> I'm seeing \n> > if anyone here can contribute. Thanks!)\n> >\n> >\n> > I'm working on a site hosted in a BSD Jail; they have MySQL \n> installed \n> > but, of course, I'd rather use PostgreSQL.\n> >\n> > It installs fine but can't initdb; get the following:\n> >\n> > Fixing permissions on existing directory \n> /usr/local/pgsql/data... ok \n> > creating directory /usr/local/pgsql/data/base... ok \n> creating directory \n> > /usr/local/pgsql/data/global... ok creating directory \n> > /usr/local/pgsql/data/pg_xlog... ok creating directory \n> > /usr/local/pgsql/data/pg_clog... ok creating template1 database in \n> > /usr/local/pgsql/data/base/1...\n> > IpcSemaphoreCreate: semget(key=1, num=17, 03600) failed: \n> Function not \n> > implemented\n> >\n> > Earlier message traffic suggests that SYSV IPC has not been \n> fixed to \n> > run under BSD Jails.\n> >\n> > The last time this was raised was ~1 year ago. Has there been any \n> > changes here that anyone knows of? Any hope of getting PG \n> running in \n> > our jail? 
(Or, alternatively, can PG run on the real machine's \n> > processes so that the different jails can access it?)\n> \n> I don't know about running PG in a jail, but if you have it \n> running on the parent or real machine the jails can access it \n> just fine but not as localhost.\n> \n> Vince.\n> -- \n> ==============================================================\n> ============\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com \nhttp://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n========================================================================\n==\n\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n",
"msg_date": "Wed, 27 Mar 2002 14:51:37 +1100",
"msg_from": "\"Alastair D'Silva\" <deece@newmillennium.net.au>",
"msg_from_op": false,
"msg_subject": "Re: initdb dies during IpcSemaphoreCreate under BSD jail"
},
{
"msg_contents": "> -----Original Message-----\n> From: Alastair D'Silva [mailto:deece@newmillennium.net.au]\n> Sent: Tuesday, March 26, 2002 10:52 PM\n> To: 'Vince Vielhaber'; 'Joel Burton'\n> Cc: pgsql-hackers@postgresql.org\n> Subject: RE: [HACKERS] initdb dies during IpcSemaphoreCreate under BSD\n> jail\n>\n>\n> You need to get your provider to set the sysctl jail.sysvipc_allowed to\n> 1 in the host environment. If they're not willing to do this for you, we\n> provide this feature on our servers, and also have a shared Postgres\n> database you can use.\n\nThanks for the tip. I'm not a *BSD guru, so I'm not familiar with this\nconfiguration change, but I've written to the Powers That Be at my ISP to\nsee if this is something that they feel they could change.\n\n",
"msg_date": "Tue, 26 Mar 2002 23:19:15 -0500",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": true,
"msg_subject": "Re: initdb dies during IpcSemaphoreCreate under BSD jail"
},
{
"msg_contents": "> You need to get your provider to set the sysctl jail.sysvipc_allowed to\n> 1 in the host environment. If they're not willing to do this for you, we\n> provide this feature on our servers, and also have a shared Postgres\n> database you can use.\n\nMy ISP responds to this point:\n\n\"\"\"\n>In the thread on the pgsql-hackers list, someone wrote to me to say that\n>\"You need to get your provider to set the sysctl jail.sysvipc_allowed to\n>1 in the host environment.\" Apparently, according to this person, this will\n>allow the use of PG in the jailed environments. Is this something that\nimeme\n>can configure? If this isn't clear, I'd be happy to find out more\n>information for you about this configuration change and what other\n>ramifications it might have for your servers.\n\nThis will allow you to run a single postgres in a single jail only one\nuser would have access to it. If you try to run more then one it will\ntry to use the same shared memory and crash.\n\"\"\"\n\n\nIs this, in fact, the case?\n\nThanks!\n\n",
"msg_date": "Wed, 27 Mar 2002 01:32:56 -0500",
"msg_from": "\"Joel Burton\" <joel@joelburton.com>",
"msg_from_op": true,
"msg_subject": "Re: initdb dies during IpcSemaphoreCreate under BSD jail"
},
{
"msg_contents": "\"Joel Burton\" <joel@joelburton.com> writes:\n>> This will allow you to run a single postgres in a single jail only one\n>> user would have access to it. If you try to run more then one it will\n>> try to use the same shared memory and crash.\n\n> Is this, in fact, the case?\n\nUnless BSD jails have very bizarre shared memory behavior, this is\nnonsense. PG can easily run multiple postmasters in the same machine\n(there are currently four postmasters of different vintages alive on\nthe machine I'm typing this on). Give each one a different database\ndirectory and a unique port number, and you're good to go.\n\nIt might be that postmasters in different jails on the same machine\nwould have to be assigned different port numbers to keep them from\nconflicting. Don't know exactly how airtight a BSD jail is ...\nbut there is an interaction between port number and shared memory\nkey. I can imagine that a jail that hides processes but not shared\nmemory segments might confuse our startup logic that tries to detect\nwhether an existing shared memory segment is safe to reuse or not.\nPerhaps your ISP has seen failures of that type from trying to\nstart multiple postmasters on the same port number in different\njails.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Mar 2002 01:51:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: initdb dies during IpcSemaphoreCreate under BSD jail "
},
{
"msg_contents": "On Wed, 27 Mar 2002, Tom Lane wrote:\n\n> \"Joel Burton\" <joel@joelburton.com> writes:\n> >> This will allow you to run a single postgres in a single jail only one\n> >> user would have access to it. If you try to run more then one it will\n> >> try to use the same shared memory and crash.\n>\n> > Is this, in fact, the case?\n>\n> Unless BSD jails have very bizarre shared memory behavior, this is\n> nonsense. PG can easily run multiple postmasters in the same machine\n> (there are currently four postmasters of different vintages alive on\n> the machine I'm typing this on). Give each one a different database\n> directory and a unique port number, and you're good to go.\n>\n> It might be that postmasters in different jails on the same machine\n> would have to be assigned different port numbers to keep them from\n> conflicting. Don't know exactly how airtight a BSD jail is ...\n> but there is an interaction between port number and shared memory\n> key. I can imagine that a jail that hides processes but not shared\n> memory segments might confuse our startup logic that tries to detect\n> whether an existing shared memory segment is safe to reuse or not.\n> Perhaps your ISP has seen failures of that type from trying to\n> start multiple postmasters on the same port number in different\n> jails.\n\nFreeBSD jails are supposed to put just about everything in to different\nnamespaces/contention domains/whatever. You can't see processes running\noutside a jail from within it, you can't see files outside your jail, you\ncan only use your jail's IP address, etc. However, this doesn't work for\nSYSV IPC (not in FreeBSD-STABLE, at least) and everything goes in to one\nmachine-wide namespace - hence the sysctl to turn it on/off.\n\nPostgreSQL will run quite happily using different port numbers in\ndifferent jails - but the port numbers MUST be different. 
Since the ISP is\nprobably using jails to make multiple users as unaware of each other as\npossible this might be a problem for them...\n\nYou should probably also consider that someone in /another/ jail might be\nable to get access to your shared memory segments. This would, most\nlikely, be a bad thing to happen.\n\n",
"msg_date": "Wed, 27 Mar 2002 10:56:24 +0000 (GMT)",
"msg_from": "Alex Hayward <xelah@xelah.com>",
"msg_from_op": false,
"msg_subject": "Re: initdb dies during IpcSemaphoreCreate under BSD jail"
}
] |
[
{
"msg_contents": "Hello everybody,\n\nIf possible please add the following patch to better support NetWare.\n\nbest regards\n\nUlrich Neumann\nNovell Worldwide Developer Support\n\nbegin 666 xlog.patch\nM+2TM(%QP9W-Q;#<R+F]R9UQS<F-<8F%C:V5N9%QA8V-E<W-<=')A;G-A;5QX\nM;&]G+F,)36]N($IA;B`Q-\"`Q-CHU-3HU.\"`R,#`R#0HK*RL@7'!G<W%L-S(N\nM9&5V7'-R8UQB86-K96YD7&%C8V5S<UQT<F%N<V%M7'AL;V<N8PE-;VX@1F5B\nM(#$Q(#`P.C0P.C$P(#(P,#(-\"D!`(\"TQ-#DS+#<@*S$T.3,L-R!`0`T*(`D@\nM*B!O=F5R=W)I=&4@86X@97AI<W1I;F<@;&]G9FEL92X@($AO=V5V97(L('1H\nM97)E('-H;W5L9&XG=\"!B92!O;F4L('-O#0H@\"2`J(')E;F%M92@I(&ES(&%N\nM(&%C8V5P=&%B;&4@<W5B<W1I='5T92!E>&-E<'0@9F]R('1H92!T<G5L>2!P\nM87)A;F]I9\"X-\"B`)(\"HO#0HM(VEF;F1E9B!?7T)%3U-?7PT**R-I9B`A9&5F\nM:6YE9\"A?7T)%3U-?7RD@)B8@(61E9FEN960H3E]03$%47TY,32D-\"B`):68@\nM*&QI;FLH=&UP<&%T:\"P@<&%T:\"D@/\"`P*0T*(`D)96QO9RA35$]0+\"`B;&EN\nM:R!F<F]M(\"5S('1O(\"5S(\"AI;FET:6%L:7IA=&EO;B!O9B!L;V<@9FEL92`E\nM=2P@<V5G;65N=\"`E=2D@9F%I;&5D.B`E;2(L#0H@\"0D)('1M<'!A=&@L('!A\n0=&@L(&QO9RP@<V5G*3L-\"@``\n`\nend\n",
"msg_date": "Mon, 25 Mar 2002 23:50:00 +0200",
"msg_from": "Ulrich Neumann <u_neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "Patch for xlog.c"
},
{
"msg_contents": "Your patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nUlrich Neumann wrote:\n> Hello everybody,\n> \n> If possible please add the following patch to better support NetWare.\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\n--- \\pgsql72.org\\src\\backend\\access\\transam\\xlog.c\tMon Jan 14 16:55:58 2002\n+++ \\pgsql72.dev\\src\\backend\\access\\transam\\xlog.c\tMon Feb 11 00:40:10 2002\n@@ -1493,7 +1493,7 @@\n \t * overwrite an existing logfile. However, there shouldn't be one, so\n \t * rename() is an acceptable substitute except for the truly paranoid.\n \t */\n-#ifndef __BEOS__\n+#if !defined(__BEOS__) && !defined(N_PLAT_NLM)\n \tif (link(tmppath, path) < 0)\n \t\telog(STOP, \"link from %s to %s (initialization of log file %u, segment %u) failed: %m\",\n \t\t\t tmppath, path, log, seg);",
"msg_date": "Wed, 17 Apr 2002 19:47:22 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for xlog.c"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nUlrich Neumann wrote:\n> Hello everybody,\n> \n> If possible please add the following patch to better support NetWare.\n> \n> best regards\n> \n> Ulrich Neumann\n> Novell Worldwide Developer Support\n> \n> begin 666 xlog.patch\n> M+2TM(%QP9W-Q;#<R+F]R9UQS<F-<8F%C:V5N9%QA8V-E<W-<=')A;G-A;5QX\n> M;&]G+F,)36]N($IA;B`Q-\"`Q-CHU-3HU.\"`R,#`R#0HK*RL@7'!G<W%L-S(N\n> M9&5V7'-R8UQB86-K96YD7&%C8V5S<UQT<F%N<V%M7'AL;V<N8PE-;VX@1F5B\n> M(#$Q(#`P.C0P.C$P(#(P,#(-\"D!`(\"TQ-#DS+#<@*S$T.3,L-R!`0`T*(`D@\n> M*B!O=F5R=W)I=&4@86X@97AI<W1I;F<@;&]G9FEL92X@($AO=V5V97(L('1H\n> M97)E('-H;W5L9&XG=\"!B92!O;F4L('-O#0H@\"2`J(')E;F%M92@I(&ES(&%N\n> M(&%C8V5P=&%B;&4@<W5B<W1I='5T92!E>&-E<'0@9F]R('1H92!T<G5L>2!P\n> M87)A;F]I9\"X-\"B`)(\"HO#0HM(VEF;F1E9B!?7T)%3U-?7PT**R-I9B`A9&5F\n> M:6YE9\"A?7T)%3U-?7RD@)B8@(61E9FEN960H3E]03$%47TY,32D-\"B`):68@\n> M*&QI;FLH=&UP<&%T:\"P@<&%T:\"D@/\"`P*0T*(`D)96QO9RA35$]0+\"`B;&EN\n> M:R!F<F]M(\"5S('1O(\"5S(\"AI;FET:6%L:7IA=&EO;B!O9B!L;V<@9FEL92`E\n> M=2P@<V5G;65N=\"`E=2D@9F%I;&5D.B`E;2(L#0H@\"0D)('1M<'!A=&@L('!A\n> 0=&@L(&QO9RP@<V5G*3L-\"@``\n> `\n> end\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Apr 2002 21:56:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for xlog.c"
}
] |
[
{
"msg_contents": "Hello everybody,\n\ni've found a little bug in pqsignal.h. Attached is the patch.\n(AuthBlockSig wasn't defined in one case).\nThis problem is not specific to a special platform, it may target several\nplatfroms.\n\nbest regards\n\nUlrich Neumann\nNovell Worldwide Developer Support\n\n\nbegin 666 pqsignal.patch\nM+2TM(%QP9W-Q;#<R+F]R9UQS<F-<:6YC;'5D95QL:6)P<5QP<7-I9VYA;\"YH\nM\"4UO;B!.;W8@,#4@,38Z-#8Z,S0@,C`P,0T**RLK(%QP9W-Q;#<R+F1E=EQS\nM<F-<:6YC;'5D95QL:6)P<5QP<7-I9VYA;\"YH\"51U92!-87(@,C8@,#`Z,#`Z\nM,#`@,C`P,@T*0$`@+3(X+#<@*S(X+#@@0$`-\"B`C9&5F:6YE(%!'7U-%5$U!\nM4TLH;6%S:RD)<VEG<')O8VUA<VLH4TE'7U-%5$U!4TLL(&UA<VLL($Y53$PI\nM#0H@(V5L<V4-\"B!E>'1E<FX@:6YT\"55N0FQO8VM3:6<L#0HM\"0D)0FQO8VM3\nM:6<[#0HK\"0D)0FQO8VM3:6<L#0HK\"0D)075T:$)L;V-K4VEG.PT*(`T*(\"-D\nM969I;F4@4$=?4T5434%32RAM87-K*0ES:6=S971M87-K*\"HH*&EN=\"HI*&UA\n0<VLI*2D-\"B`C96YD:68-\"@``\n`\nend\n",
"msg_date": "Tue, 26 Mar 2002 00:17:00 +0200",
"msg_from": "Ulrich Neumann <u_neumann@gne.de>",
"msg_from_op": true,
"msg_subject": "Patch for pqsignal.h"
},
{
"msg_contents": "\nPlease check current CVS or snapshot. I believe this is fixed.\n\n---------------------------------------------------------------------------\n\nUlrich Neumann wrote:\n> Hello everybody,\n> \n> i've found a little bug in pqsignal.h. Attached is the patch.\n> (AuthBlockSig wasn't defined in one case).\n> This problem is not specific to a special platform, it may target several\n> platfroms.\n> \n> best regards\n> \n> Ulrich Neumann\n> Novell Worldwide Developer Support\n> \n> \n> begin 666 pqsignal.patch\n> M+2TM(%QP9W-Q;#<R+F]R9UQS<F-<:6YC;'5D95QL:6)P<5QP<7-I9VYA;\"YH\n> M\"4UO;B!.;W8@,#4@,38Z-#8Z,S0@,C`P,0T**RLK(%QP9W-Q;#<R+F1E=EQS\n> M<F-<:6YC;'5D95QL:6)P<5QP<7-I9VYA;\"YH\"51U92!-87(@,C8@,#`Z,#`Z\n> M,#`@,C`P,@T*0$`@+3(X+#<@*S(X+#@@0$`-\"B`C9&5F:6YE(%!'7U-%5$U!\n> M4TLH;6%S:RD)<VEG<')O8VUA<VLH4TE'7U-%5$U!4TLL(&UA<VLL($Y53$PI\n> M#0H@(V5L<V4-\"B!E>'1E<FX@:6YT\"55N0FQO8VM3:6<L#0HM\"0D)0FQO8VM3\n> M:6<[#0HK\"0D)0FQO8VM3:6<L#0HK\"0D)075T:$)L;V-K4VEG.PT*(`T*(\"-D\n> M969I;F4@4$=?4T5434%32RAM87-K*0ES:6=S971M87-K*\"HH*&EN=\"HI*&UA\n> 0<VLI*2D-\"B`C96YD:68-\"@``\n> `\n> end\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 25 Mar 2002 19:59:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Patch for pqsignal.h"
}
] |
[
{
"msg_contents": "\nJust before anyone asks where it is ... I'm just rolling v7.2.1 up right\nnow and will let everyone know once its ready for a download ... I'll do\nup an announce in the morning unless anyone finds a flaw in it ...\n\n\n\n",
"msg_date": "Mon, 25 Mar 2002 23:38:37 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Rolling v7.2.1 ..."
},
{
"msg_contents": "\nTry her out and let me know if there are any problems ... the build looks\nclean, sizes all look right ... if no visible probs, will announce in the\nmroning ...\n\nOn Mon, 25 Mar 2002, Marc G. Fournier wrote:\n\n>\n> Just before anyone asks where it is ... I'm just rolling v7.2.1 up right\n> now and will let everyone know once its ready for a download ... I'll do\n> up an announce in the morning unless anyone finds a flaw in it ...\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Tue, 26 Mar 2002 00:04:47 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Marc G. Fournier writes:\n\n> Try her out and let me know if there are any problems ... the build looks\n> clean, sizes all look right ...\n\n... but the contained documentation is for 7.3.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 25 Mar 2002 23:44:26 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Marc G. Fournier writes:\n>> Try her out and let me know if there are any problems ... the build looks\n>> clean, sizes all look right ...\n\n> ... but the contained documentation is for 7.3.\n\nOn the subject of contained documentation, I notice\n\n*** postgresql-7.2.1/doc/src/sgml/version.sgml\tMon Mar 18 18:04:11 2002\n--- REL7_2/doc/src/sgml/version.sgml\tThu May 10 21:46:33 2001\n***************\n*** 3,7 ****\n documentation. In text, use for example &version; to refer to them.\n -->\n \n! <!entity version \"7.2.1\">\n! <!entity majorversion \"7.2.1\">\n--- 3,7 ----\n documentation. In text, use for example &version; to refer to them.\n -->\n \n! <!entity version \"7.2\">\n! <!entity majorversion \"7.2\">\n\nIs this right, or should \"majorversion\" still be 7.2? Right offhand\nthe latter seems correct ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 25 Mar 2002 23:58:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ... "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Marc G. Fournier writes:\n> >> Try her out and let me know if there are any problems ... the build looks\n> >> clean, sizes all look right ...\n> \n> > ... but the contained documentation is for 7.3.\n> \n> On the subject of contained documentation, I notice\n> \n> *** postgresql-7.2.1/doc/src/sgml/version.sgml\tMon Mar 18 18:04:11 2002\n> --- REL7_2/doc/src/sgml/version.sgml\tThu May 10 21:46:33 2001\n> ***************\n> *** 3,7 ****\n> documentation. In text, use for example &version; to refer to them.\n> -->\n> \n> ! <!entity version \"7.2.1\">\n> ! <!entity majorversion \"7.2.1\">\n> --- 3,7 ----\n> documentation. In text, use for example &version; to refer to them.\n> -->\n> \n> ! <!entity version \"7.2\">\n> ! <!entity majorversion \"7.2\">\n> \n> Is this right, or should \"majorversion\" still be 7.2? Right offhand\n> the latter seems correct ...\n\nI wasn't sure what to do here. I figured if the docs were regenerated,\nit should say 7.2.1, and if they aren't, then they will stay as 7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Mar 2002 00:09:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > > Is this right, or should \"majorversion\" still be 7.2? Right offhand\n> > > the latter seems correct ...\n> >\n> > I wasn't sure what to do here. I figured if the docs were regenerated,\n> > it should say 7.2.1, and if they aren't, then they will stay as 7.2.\n> \n> But neither explanation warrants setting \"majorversion\" to 7.2.1.\n\nWhy? Why shouldn't the documentation match the release number?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Mar 2002 00:20:41 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > Is this right, or should \"majorversion\" still be 7.2? Right offhand\n> > the latter seems correct ...\n>\n> I wasn't sure what to do here. I figured if the docs were regenerated,\n> it should say 7.2.1, and if they aren't, then they will stay as 7.2.\n\nBut neither explanation warrants setting \"majorversion\" to 7.2.1.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 26 Mar 2002 00:23:19 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> But neither explanation warrants setting \"majorversion\" to 7.2.1.\n\n> Why? Why shouldn't the documentation match the release number?\n\nDid you look at how majorversion is used?\n\nSetting it to 7.2.1 is *clearly* wrong.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Mar 2002 00:30:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ... "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n<snip> \n> Why? Why shouldn't the documentation match the release number?\n\nShouldn't \"major version\" still be 7.2, and version be \"7.2.1\".\n\ni.e. \"7.2.1\" is a minor release/update/subversion of \"7.2\"?\n\nRegards and best wishes,\n\nJustin Clift\n\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Tue, 26 Mar 2002 16:32:48 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> But neither explanation warrants setting \"majorversion\" to 7.2.1.\n> \n> > Why? Why shouldn't the documentation match the release number?\n> \n> Did you look at how majorversion is used?\n> \n> Setting it to 7.2.1 is *clearly* wrong.\n\nOh, I see, there is majorversion and version. I put majorversion back to\n7.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Mar 2002 00:33:23 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Other than the documentation issues, I confirm the tarball looks good\nfrom here.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Mar 2002 00:42:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ... "
},
{
"msg_contents": "\ngood point ... but, where should I be pulling from? ~ftp/pub/doc/7.2\ncontains .pdf files, which I didn't think we wanted to put into the\ndistribution ... is there an alternative place I should be pulling docs\nfrom then ~ftp/pub/dev/doc? should there be a step in the 'build dist'\nthat builds the docs based on the sgml?\n\nOn Mon, 25 Mar 2002, Peter Eisentraut wrote:\n\n> Marc G. Fournier writes:\n>\n> > Try her out and let me know if there are any problems ... the build looks\n> > clean, sizes all look right ...\n>\n> ... but the contained documentation is for 7.3.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n>\n>\n\n",
"msg_date": "Tue, 26 Mar 2002 11:23:57 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> should there be a step in the 'build dist'\n> that builds the docs based on the sgml?\n\nIt would be a good idea to rebuild the 7.2 docs from 7.2.1 sources,\nas we made several important fixes in the documentation since 7.2\nrelease. I dunno whether it's worth trying to make it fully automatic\nright at the moment, though.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Mar 2002 10:52:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ... "
},
{
"msg_contents": "Hi Justin,\n\nI have a new updated version of the Ora2Pg tool which correct many\nproblems and add some new features, could you or someone else update\nthe contrib directory.\n(download at: http://www.samse.fr/GPL/ora2pg/ora2pg-1.8.tar.gz)\n\nI also just post a new tool in replacement of the Oracle XSQL Servlet,\nuse to create dynamic web application with XML/XSLT.\n\nLet me know if it can take place under the contrib directory.\n(http://www.samse.fr/GPL/pxsql/)\n\n",
"msg_date": "Tue, 26 Mar 2002 17:27:35 +0100",
"msg_from": "Gilles DAROLD <gilles@darold.net>",
"msg_from_op": false,
"msg_subject": "Contrib update"
},
{
"msg_contents": "Marc G. Fournier writes:\n\n> good point ... but, where should I be pulling from? ~ftp/pub/doc/7.2\n> contains .pdf files, which I didn't think we wanted to put into the\n> distribution ... is there an alternative place I should be pulling docs\n> >from then ~ftp/pub/dev/doc?\n\nNo, there currently is no place where these docs are built, because this\nis the first time we're releasing from this branch. I've been trying all\nday to build them on postgresql.org, but that machine seems to be\nincredibly slow right now. I'll try again later.\n\n> should there be a step in the 'build dist'\n> that builds the docs based on the sgml?\n\nI've been promoting that every time this problem happens. And the problem\ndoes happen every time we're making a minor release. I think it's about\ntime to clean this up. But it won't happen in the 7.2 branch anymore.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 26 Mar 2002 21:37:47 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Marc G. Fournier writes:\n> \n> > good point ... but, where should I be pulling from? ~ftp/pub/doc/7.2\n> > contains .pdf files, which I didn't think we wanted to put into the\n> > distribution ... is there an alternative place I should be pulling docs\n> > >from then ~ftp/pub/dev/doc?\n> \n> No, there currently is no place where these docs are built, because this\n> is the first time we're releasing from this branch. I've been trying all\n> day to build them on postgresql.org, but that machine seems to be\n> incredibly slow right now. I'll try again later.\n\nI can do it hear easily. Let me know and I will give you the URL. It\ntakes only 7 minutes here.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Mar 2002 21:46:04 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "El Mar 26, Peter Eisentraut escribio:\n\n> Marc G. Fournier writes:\n\n> > should there be a step in the 'build dist'\n> > that builds the docs based on the sgml?\n>\n> I've been promoting that every time this problem happens. And the problem\n> does happen every time we're making a minor release. I think it's about\n> time to clean this up. But it won't happen in the 7.2 branch anymore.\n\nHello all:\n\nI've been trying to generate HTML from the SGML source here in Mandrake\nLinux 8.1, and it needs some patching (Mdk Linux puts collateindex.pl in\n/usr/bin rather that $DOCBOOKSTYLE/bin). I posted to pgsql-patches a\none-liner that fixed a small problem, and later a patch to allow\ndiscovery of collateindex.pl, but the moderator apparently didn't bother\nto approve the second post. This still needs an autoconf macro to allow\nmanpage building, 'cause docbook2man-spec.pl lives somewhere else than\nthe Makefile expects.\n\n(Yes, I know I can set D2MDIR and DOCBOOKINDEX, but I think this is\ncleaner)\n\nThis is the output of\ncvs diff -u config/docbook.m4 src/Makefile.global.in\n\nIndex: config/docbook.m4\n===================================================================\nRCS file: /projects/cvsroot/pgsql/config/docbook.m4,v\nretrieving revision 1.1\ndiff -u -r1.1 docbook.m4\n--- config/docbook.m4\t2000/11/05 21:04:06\t1.1\n+++ config/docbook.m4\t2002/03/27 02:51:27\n@@ -57,7 +57,8 @@\n for pgac_postfix in \\\n sgml/stylesheets/nwalsh-modular \\\n sgml/stylesheets/docbook \\\n- sgml/docbook/dsssl/modular\n+ sgml/docbook/dsssl/modular \\\n+ sgml/docbook/dsssl-stylesheets\n do\n pgac_candidate=$pgac_prefix/$pgac_infix/$pgac_postfix\n if test -r \"$pgac_candidate/html/docbook.dsl\" \\\n@@ -77,3 +78,26 @@\n else\n AC_MSG_RESULT(no)\n fi])# PGAC_PATH_DOCBOOK_STYLESHEETS\n+\n+# PGAC_PATH_DOCBOOK_COLLATEINDEX\n+# ------------------------------\n+AC_DEFUN([PGAC_PATH_DOCBOOK_COLLATEINDEX],\n+[AC_MSG_CHECKING([for 
collateindex.pl])\n+AC_CACHE_VAL([pgac_cv_path_collateindex],\n+[if test -n \"$DOCBOOKINDEX\"; then\n+ pgac_cv_path_collateindex=$DOCBOOKINDEX\n+else\n+ for pgac_prefix in $DOCBOOKSTYLE/bin /usr/bin; do\n+ if test -x \"$pgac_prefix/collateindex.pl\"; then\n+ pgac_cv_path_collateindex=$pgac_prefix/collateindex.pl\n+ break\n+ fi\n+ done\n+fi])\n+DOCBOOKINDEX=$pgac_cv_path_collateindex\n+AC_SUBST([DOCBOOKINDEX])\n+if test -n \"$DOCBOOKINDEX\"; then\n+ AC_MSG_RESULT([$DOCBOOKINDEX])\n+else\n+ AC_MSG_RESULT(no)\n+fi])# PGAC_PATH_DOCBOOK_COLLATEINDEX\nIndex: src/Makefile.global.in\n===================================================================\nRCS file: /projects/cvsroot/pgsql/src/Makefile.global.in,v\nretrieving revision 1.143\ndiff -u -r1.143 Makefile.global.in\n--- src/Makefile.global.in\t2002/03/13 00:05:02\t1.143\n+++ src/Makefile.global.in\t2002/03/27 02:51:28\n@@ -149,6 +149,7 @@\n\n have_docbook\t= @have_docbook@\n DOCBOOKSTYLE\t= @DOCBOOKSTYLE@\n+DOCBOOKINDEX\t= @DOCBOOKINDEX@\n\n\n ##########################################################################\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La primera ley de las demostraciones en vivo es: no trate de usar el sistema.\nEscriba un guión que no toque nada para no causar daños.\" (Jakob Nielsen)\n\n",
"msg_date": "Tue, 26 Mar 2002 23:00:00 -0400 (CLT)",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I can do it hear easily. Let me know and I will give you the URL. It\n> > takes only 7 minutes here.\n> \n> Go ahead. Just make sure you use some reasonably recent style sheets (>=\n> 1.70) and not 1.64 that you currently have.\n\nI will not be able to upgrade the style sheets for a day. I remember\nsgml install as being quite complicated. Can you give me the URL for\nthe new style sheets?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Mar 2002 22:53:26 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Bruce Momjian writes:\n\n> I can do it hear easily. Let me know and I will give you the URL. It\n> takes only 7 minutes here.\n\nGo ahead. Just make sure you use some reasonably recent style sheets (>=\n1.70) and not 1.64 that you currently have.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 26 Mar 2002 22:53:46 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "Alvaro Herrera writes:\n\n> I've been trying to generate HTML from the SGML source here in Mandrake\n> Linux 8.1, and it needs some patching (Mdk Linux puts collateindex.pl in\n> /usr/bin rather that $DOCBOOKSTYLE/bin).\n\nI'll look at your patches soon. I've had some other ideas that I'd like\nto weave into this.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 27 Mar 2002 11:59:33 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Rolling v7.2.1 ..."
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nGilles DAROLD wrote:\n> Hi Justin,\n> \n> I have a new updated version of the Ora2Pg tool which correct many\n> problems and add some new features, could you or someone else update\n> the contrib directory.\n> (download at: http://www.samse.fr/GPL/ora2pg/ora2pg-1.8.tar.gz)\n> \n> I also just post a new tool in replacement of the Oracle XSQL Servlet,\n> use to create dynamic web application with XML/XSLT.\n> \n> Let me know if it can take place under the contrib directory.\n> (http://www.samse.fr/GPL/pxsql/)\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Apr 2002 13:30:32 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Contrib update"
},
{
"msg_contents": "Gilles DAROLD wrote:\n> Hi Justin,\n> \n> I have a new updated version of the Ora2Pg tool which correct many\n> problems and add some new features, could you or someone else update\n> the contrib directory.\n> (download at: http://www.samse.fr/GPL/ora2pg/ora2pg-1.8.tar.gz)\n\nThanks. CVS updated.\n\n> I also just post a new tool in replacement of the Oracle XSQL Servlet,\n> use to create dynamic web application with XML/XSLT.\n> \n> Let me know if it can take place under the contrib directory.\n> (http://www.samse.fr/GPL/pxsql/)\n\nI think this belongs on our interfaces page:\n\n\thttp://www.ca.postgresql.org/interfaces.html\n\nVince, would you add this?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Apr 2002 01:24:34 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Contrib update"
},
{
"msg_contents": "On Thu, 4 Apr 2002, Bruce Momjian wrote:\n\n> Gilles DAROLD wrote:\n> > Hi Justin,\n> >\n> > I have a new updated version of the Ora2Pg tool which correct many\n> > problems and add some new features, could you or someone else update\n> > the contrib directory.\n> > (download at: http://www.samse.fr/GPL/ora2pg/ora2pg-1.8.tar.gz)\n>\n> Thanks. CVS updated.\n>\n> > I also just post a new tool in replacement of the Oracle XSQL Servlet,\n> > use to create dynamic web application with XML/XSLT.\n> >\n> > Let me know if it can take place under the contrib directory.\n> > (http://www.samse.fr/GPL/pxsql/)\n>\n> I think this belongs on our interfaces page:\n>\n> \thttp://www.ca.postgresql.org/interfaces.html\n>\n> Vince, would you add this?\n\nI imagine it'll show up eventually. I added it and the file's been\ntrying to save for the last 5 minutes. The loads were up over 12\non the machine and it's extremely slow, hopefully there are no typos.\nMaybe I'll even get to commit it this week.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Thu, 4 Apr 2002 05:25:10 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Contrib update"
}
] |
[
{
"msg_contents": "Sorry for the package, but the following patch need to be applied to get the new verion compiled on SCO Openserver 5.0.5 and Unixware 7.1.1",
"msg_date": "Tue, 26 Mar 2002 17:21:31 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "build of 7.2.1 on SCO Openserver and Unixware 7.1.1"
},
{
"msg_contents": "\nWe am going to need an explaination on these changes. Why move\nthe socket test? Why change pow()? The TCL stuff is going to\neffect other platforms and probably will not be applied without a\ngood reason.\n\n---------------------------------------------------------------------------\n\nNicolas Bazin wrote:\n> Sorry for the package, but the following patch need to be applied\n> to get the new verion compiled on SCO Openserver 5.0.5 and\n> Unixware 7.1.1\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 26 Mar 2002 08:08:31 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: build of 7.2.1 on SCO Openserver and Unixware 7.1.1"
},
{
"msg_contents": "Bruce,\n\nThe reason to move the socket library is that during configuration script\nexecution, the binary created core dumps if not in the order I gave. You can\ncheck in the port list, some people have been complaining that they could\nnot even go any further than the configure step and that is the reason.\nHere is the message you get otherwise:\n\nchecking test program... failed\nconfigure: error:\n*** Could not execute a simple test program. This may be a problem\n*** related to locating shared libraries. Check the file 'config.log'\n*** for the exact reason.\n\nIn config.log the last lines are:\n\nconfigure:7516: checking test program\nconfigure:7525: gcc -o conftest -O2\n\n\n\n conftest.c -lz -lPW -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap\n1>&5\nconfigure: failed program was:\n#line 7521 \"configure\"\n#include \"confdefs.h\"\nint main() { return 0; }\n\n\n\npow is in the static library libm and SCO Openserver linker does not accept\nto link it in a so file. The modification I provide works whithout changing\nthe way the code works. If there is another way to get libm linked in so\nHere is the message I get:\n\ngcc -shared -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o\ncolumninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o\nmd5.o misc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o\nparse.o statement.o tuple.o tuplelist.o dlg_specific.o odbcapi.o\npps.o -lsocket -lnsl -lm -o libpsqlodbc.so.0.27\nrelocations referenced\n from file(s)\n /usr/ccs/lib/libm.a(pow.o)\n /usr/ccs/lib/libm.a(fmod.o)\n /usr/ccs/lib/libm.a(merr.o)\n fatal error: relocations remain against allocatable but non-writable\nsection: .text\n\ncollect2: ld returned 1 exit status\n\n\n\nThe TCL stuff is because Caldera distribution of TCL is compiled with their\ncompiler. If you happen to use another compiler on your platform (gcc) it\ndoesn't work anymore. 
Caldera compiler has -belf -Kpic options which are\nfully incompatible with gcc. That's why I though best to leave the TCL\npackages been compiled with the compiler used for postgresql.\n\nNote that I have the same issue for perl modules, but I haven't found a\nproper way to correct the make files automatically generated. I understand\nthat we would want the same compilation options but if you install TCL or\nPERL from packages you may not have the same compiler.\n\nAppart these points the regression tests work fine for these platforms. They\nare still a few warnings during the compilation process, when I get some\ntime, I'll try to correct them.\n\nNicolas\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Nicolas Bazin\" <nbazin@ingenico.com.au>\nCc: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Wednesday, March 27, 2002 12:08 AM\nSubject: Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware 7.1.1\n\n\n>\n> We am going to need an explaination on these changes. Why move\n> the socket test? Why change pow()? The TCL stuff is going to\n> effect other platforms and probably will not be applied without a\n> good reason.\n>\n> --------------------------------------------------------------------------\n-\n>\n> Nicolas Bazin wrote:\n> > Sorry for the package, but the following patch need to be applied\n> > to get the new verion compiled on SCO Openserver 5.0.5 and\n> > Unixware 7.1.1\n>\n> [ Attachment, skipping... ]\n>\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n",
"msg_date": "Wed, 27 Mar 2002 09:38:57 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Re: build of 7.2.1 on SCO Openserver and Unixware 7.1.1"
},
{
"msg_contents": "\nThanks. This is exactly the detail I needed. Let me comment on each\nitem.\n\n\nNicolas Bazin wrote:\n> Bruce,\n> \n> The reason to move the socket library is that during configuration script\n> execution, the binary created core dumps if not in the order I gave. You can\n> check in the port list, some people have been complaining that they could\n> not even go any further than the configure step and that is the reason.\n> Here is the message you get otherwise:\n> \n> checking test program... failed\n> configure: error:\n> *** Could not execute a simple test program. This may be a problem\n> *** related to locating shared libraries. Check the file 'config.log'\n> *** for the exact reason.\n> \n> In config.log the last lines are:\n> \n> configure:7516: checking test program\n> configure:7525: gcc -o conftest -O2\n> \n> \n> \n> conftest.c -lz -lPW -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap\n> 1>&5\n> configure: failed program was:\n> #line 7521 \"configure\"\n> #include \"confdefs.h\"\n> int main() { return 0; }\n\n From your link line, it seems -lnls is needed by -lsocket. What I don't\nknow is whether there are other platforms that where -lnls needs\n-lsocket.\n\n\t... $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS\n\nThat last LIBS grows as configure runs, so that is why reordering fixes\nthings for SCO.\n\nI don't see any immediate downside to moving it so I will apply the\nchange to 7.3. Any platforms problems with the reordering will show up\nduring 7.3 beta testing. I would need someone else to agree before\nmaking this change in 7.2.X.\n\n\n> pow is in the static library libm and SCO Openserver linker does not accept\n> to link it in a so file. The modification I provide works whithout changing\n> the way the code works. 
If there is another way to get libm linked in so\n> Here is the message I get:\n> \n> gcc -shared -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o bind.o\n> columninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o\n> md5.o misc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o\n> parse.o statement.o tuple.o tuplelist.o dlg_specific.o odbcapi.o\n> pps.o -lsocket -lnsl -lm -o libpsqlodbc.so.0.27\n> relocations referenced\n> from file(s)\n> /usr/ccs/lib/libm.a(pow.o)\n> /usr/ccs/lib/libm.a(fmod.o)\n> /usr/ccs/lib/libm.a(merr.o)\n> fatal error: relocations remain against allocatable but non-writable\n> section: .text\n> \n> collect2: ld returned 1 exit status\n\nYes, the patch replaces pow(8,*) with a lookup table of 4 8^X values. \nSo SCO provides a library you can't link to? Or you can't mix *.so\nlibraries and static *.a libraries? I am inclined ot add this patch to\nthe doc/FAQ_SCO file. We really try to avoid major code uglyness to\nwork around operating system things that should work on their own.\n\n\n\n> The TCL stuff is because Caldera distribution of TCL is compiled with their\n> compiler. If you happen to use another compiler on your platform (gcc) it\n> doesn't work anymore. Caldera compiler has -belf -Kpic options which are\n> fully incompatible with gcc. That's why I though best to leave the TCL\n> packages been compiled with the compiler used for postgresql.\n> \n> Note that I have the same issue for perl modules, but I haven't found a\n> proper way to correct the make files automatically generated. I understand\n> that we would want the same compilation options but if you install TCL or\n> PERL from packages you may not have the same compiler.\n\nNot sure how to deal with this one. Can you add something to FAQ_SCO or\nshould I add this patch. 
Clearly this is very OS specific and probably\nonly true for certain versions of SCO.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Mar 2002 06:21:59 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: build of 7.2.1 on SCO Openserver and Unixware 7.1.1"
},
{
"msg_contents": "\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Nicolas Bazin\" <nbazin@ingenico.com.au>\nCc: \"PostgreSQL-development\" <pgsql-hackers@postgresql.org>\nSent: Wednesday, March 27, 2002 10:21 PM\nSubject: Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware 7.1.1\n\n\n>\n> Thanks. This is exactly the detail I needed. Let me comment on each\n> item.\n>\n>\n> Nicolas Bazin wrote:\n> > Bruce,\n> >\n> > The reason to move the socket library is that during configuration\nscript\n> > execution, the binary created core dumps if not in the order I gave. You\ncan\n> > check in the port list, some people have been complaining that they\ncould\n> > not even go any further than the configure step and that is the reason.\n> > Here is the message you get otherwise:\n> >\n> > checking test program... failed\n> > configure: error:\n> > *** Could not execute a simple test program. This may be a problem\n> > *** related to locating shared libraries. Check the file 'config.log'\n> > *** for the exact reason.\n> >\n> > In config.log the last lines are:\n> >\n> > configure:7516: checking test program\n> > configure:7525: gcc -o conftest -O2\n> >\n> >\n> >\n> >\n conftest.c -lz -lPW -lgen -lld -lnsl -lsocket -ldl -lm -lreadline -ltermcap\n> > 1>&5\n> > configure: failed program was:\n> > #line 7521 \"configure\"\n> > #include \"confdefs.h\"\n> > int main() { return 0; }\n>\n> From your link line, it seems -lnls is needed by -lsocket. What I don't\n> know is whether there are other platforms that where -lnls needs\n> -lsocket.\n>\n> ... $CFLAGS $CPPFLAGS $LDFLAGS conftest.$ac_ext $LIBS\n>\n> That last LIBS grows as configure runs, so that is why reordering fixes\n> things for SCO.\n>\n> I don't see any immediate downside to moving it so I will apply the\n> change to 7.3. Any platforms problems with the reordering will show up\n> during 7.3 beta testing. 
I would need someone else to agree before\n> making this change in 7.2.X.\nThe other possibility is to have configure to test the order of the library,\nthen there won't be any effect on any platform.\nTry the existing order, if it fails try the other order.\n\n>\n>\n> > pow is in the static library libm and SCO Openserver linker does not\naccept\n> > to link it in a so file. The modification I provide works whithout\nchanging\n> > the way the code works. If there is another way to get libm linked in so\n> > Here is the message I get:\n> >\n> > gcc -shared -Wl,-z,text -Wl,-h,libpsqlodbc.so.0 -Wl,-Bsymbolic info.o\nbind.o\n> > columninfo.o connection.o convert.o drvconn.o environ.o execute.o lobj.o\n> > md5.o misc.o options.o pgtypes.o psqlodbc.o qresult.o results.o socket.o\n> > parse.o statement.o tuple.o tuplelist.o dlg_specific.o odbcapi.o\n> > pps.o -lsocket -lnsl -lm -o libpsqlodbc.so.0.27\n> > relocations referenced\n> > from file(s)\n> > /usr/ccs/lib/libm.a(pow.o)\n> > /usr/ccs/lib/libm.a(fmod.o)\n> > /usr/ccs/lib/libm.a(merr.o)\n> > fatal error: relocations remain against allocatable but non-writable\n> > section: .text\n> >\n> > collect2: ld returned 1 exit status\n>\n> Yes, the patch replaces pow(8,*) with a lookup table of 4 8^X values.\n> So SCO provides a library you can't link to? Or you can't mix *.so\n> libraries and static *.a libraries? I am inclined ot add this patch to\n> the doc/FAQ_SCO file. We really try to avoid major code uglyness to\n> work around operating system things that should work on their own.\n>\nMy guess is that this library has not been compiled as a pic code then it's\nnot relocatable. This must be a bug in SCO, and I only have seen this\nproblem with libm only yet.\n\n>\n>\n> > The TCL stuff is because Caldera distribution of TCL is compiled with\ntheir\n> > compiler. If you happen to use another compiler on your platform (gcc)\nit\n> > doesn't work anymore. 
Caldera compiler has -belf -Kpic options which are\n> > fully incompatible with gcc. That's why I though best to leave the TCL\n> > packages been compiled with the compiler used for postgresql.\n> >\n> > Note that I have the same issue for perl modules, but I haven't found a\n> > proper way to correct the make files automatically generated. I\nunderstand\n> > that we would want the same compilation options but if you install TCL\nor\n> > PERL from packages you may not have the same compiler.\n>\n> Not sure how to deal with this one. Can you add something to FAQ_SCO or\n> should I add this patch. Clearly this is very OS specific and probably\n> only true for certain versions of SCO.\n>\nI don't know much about other platforms. This is more a compiler\nincompatibility then a platform problem. The problem comes from the fact\nthat one compiler was used to create a package and another one is used to\ncompile postgres. I know that your are supposed to be able to recompile the\ncode you installl on your server, but first sometimes it helps using\npreconfigured packages and also it may not be so easy to recompile from the\npublic distribution that may need specific patches.\n\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n>\n\n\n",
"msg_date": "Wed, 27 Mar 2002 23:08:23 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Re: build of 7.2.1 on SCO Openserver and Unixware 7.1.1"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, the patch replaces pow(8,*) with a lookup table of 4 8^X values. \n> So SCO provides a library you can't link to? Or you can't mix *.so\n> libraries and static *.a libraries? I am inclined ot add this patch to\n> the doc/FAQ_SCO file. We really try to avoid major code uglyness to\n> work around operating system things that should work on their own.\n\nActually, the existing coding in odbc is just plain stupid: why are we\nusing a transcendental function call to emulate an integer shift?\nEven the table-based implementation that Nicolas proposed is doing it\nthe hard way. Try converting, eg,\n\n\tfor (i = 1; i <= 3; i++)\n\t\ty += (s[i] - 48) * (int) pow(8, 3 - i);\n\nto\n\n\tfor (i = 1; i <= 3; i++)\n\t\ty += (s[i] - '0') << (3 * (3 - i));\n\nand you can get the patch accepted just on efficiency and readability\ngrounds, never mind whether it avoids SCO library breakage.\n\n>> The TCL stuff is because Caldera distribution of TCL is compiled with their\n>> compiler. If you happen to use another compiler on your platform (gcc) it\n>> doesn't work anymore. Caldera compiler has -belf -Kpic options which are\n>> fully incompatible with gcc. That's why I though best to leave the TCL\n>> packages been compiled with the compiler used for postgresql.\n\nWe've been around on this a couple of times now; the current theory is\nthat we should stop using the TCL-supplied switches altogether. There\nis a patch in the works to change libpgtcl and pltcl to be built the\nsame way we build everything else in the distribution.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Mar 2002 10:30:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: build of 7.2.1 on SCO Openserver and Unixware 7.1.1 "
},
{
"msg_contents": "\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nCc: \"Nicolas Bazin\" <nbazin@ingenico.com.au>; \"PostgreSQL-development\"\n<pgsql-hackers@postgresql.org>\nSent: Thursday, March 28, 2002 2:30 AM\nSubject: Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware 7.1.1\n\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, the patch replaces pow(8,*) with a lookup table of 4 8^X values.\n> > So SCO provides a library you can't link to? Or you can't mix *.so\n> > libraries and static *.a libraries? I am inclined ot add this patch to\n> > the doc/FAQ_SCO file. We really try to avoid major code uglyness to\n> > work around operating system things that should work on their own.\n>\n> Actually, the existing coding in odbc is just plain stupid: why are we\n> using a transcendental function call to emulate an integer shift?\n> Even the table-based implementation that Nicolas proposed is doing it\n> the hard way. Try converting, eg,\n>\n> for (i = 1; i <= 3; i++)\n> y += (s[i] - 48) * (int) pow(8, 3 - i);\n>\n> to\n>\n> for (i = 1; i <= 3; i++)\n> y += (s[i] - '0') << (3 * (3 - i));\n>\n> and you can get the patch accepted just on efficiency and readability\n> grounds, never mind whether it avoids SCO library breakage.\n>\n> >> The TCL stuff is because Caldera distribution of TCL is compiled with\ntheir\n> >> compiler. If you happen to use another compiler on your platform (gcc)\nit\n> >> doesn't work anymore. Caldera compiler has -belf -Kpic options which\nare\n> >> fully incompatible with gcc. That's why I though best to leave the TCL\n> >> packages been compiled with the compiler used for postgresql.\n>\n> We've been around on this a couple of times now; the current theory is\n> that we should stop using the TCL-supplied switches altogether. 
There\n> is a patch in the works to change libpgtcl and pltcl to be built the\n> same way we build everything else in the distribution.\nPerls modules have the same problems. Is there a patch also ?\n\n>\n> regards, tom lane\n>\n>\n\n\n",
"msg_date": "Thu, 28 Mar 2002 09:34:53 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Re: build of 7.2.1 on SCO Openserver and Unixware 7.1.1"
},
{
"msg_contents": "Nicolas Bazin wrote:\n> Sorry for the package, but the following patch need to be applied\n> to get the new verion compiled on SCO Openserver 5.0.5 and\n> Unixware 7.1.1\n\nReworked patch attached. I reordered configure.in (autoconf will need\nto be run). I fixed the ODBC pow() call as Tom suggested, and the\nregression script. I did not touch TCL because that should be reworked\nfor 7.3 anyway.\n\n--\n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql/configure.in,v\nretrieving revision 1.178\ndiff -c -r1.178 configure.in\n*** configure.in\t14 Apr 2002 17:23:20 -0000\t1.178\n--- configure.in\t18 Apr 2002 01:14:14 -0000\n***************\n*** 696,703 ****\n AC_CHECK_LIB(util, setproctitle)\n AC_CHECK_LIB(m, main)\n AC_CHECK_LIB(dl, main)\n- AC_CHECK_LIB(socket, main)\n AC_CHECK_LIB(nsl, main)\n AC_CHECK_LIB(ipc, main)\n AC_CHECK_LIB(IPC, main)\n AC_CHECK_LIB(lc, main)\n--- 696,703 ----\n AC_CHECK_LIB(util, setproctitle)\n AC_CHECK_LIB(m, main)\n AC_CHECK_LIB(dl, main)\n AC_CHECK_LIB(nsl, main)\n+ AC_CHECK_LIB(socket, main)\n AC_CHECK_LIB(ipc, main)\n AC_CHECK_LIB(IPC, main)\n AC_CHECK_LIB(lc, main)\nIndex: src/interfaces/odbc/convert.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/odbc/convert.c,v\nretrieving revision 1.78\ndiff -c -r1.78 convert.c\n*** src/interfaces/odbc/convert.c\t1 Apr 2002 03:01:14 -0000\t1.78\n--- src/interfaces/odbc/convert.c\t18 Apr 2002 01:14:29 -0000\n***************\n*** 2717,2723 ****\n \t\t\t\ty = 0;\n \n \tfor (i = 1; i <= 3; i++)\n! \t\ty += (s[i] - 48) * (int) pow(8, 3 - i);\n \n \treturn y;\n \n--- 2717,2723 ----\n \t\t\t\ty = 0;\n \n \tfor (i = 1; i <= 3; i++)\n! 
\t\ty += (s[i] - '0') << (3 * (3 - i));\n \n \treturn y;\n \n***************\n*** 2740,2746 ****\n \t\telse\n \t\t\tval = s[i] - '0';\n \n! \t\ty += val * (int) pow(16, 2 - i);\n \t}\n \n \treturn y;\n--- 2740,2746 ----\n \t\telse\n \t\t\tval = s[i] - '0';\n \n! \t\ty += val << (4 * (2 - i));\n \t}\n \n \treturn y;\n***************\n*** 2795,2801 ****\n \n \tfor (i = 4; i > 1; i--)\n \t{\n! \t\tx[i] = (val & 7) + 48;\n \t\tval >>= 3;\n \t}\n \n--- 2795,2801 ----\n \n \tfor (i = 4; i > 1; i--)\n \t{\n! \t\tx[i] = (val & 7) + '0';\n \t\tval >>= 3;\n \t}\n \nIndex: src/test/regress/pg_regress.sh\n===================================================================\nRCS file: /cvsroot/pgsql/src/test/regress/pg_regress.sh,v\nretrieving revision 1.23\ndiff -c -r1.23 pg_regress.sh\n*** src/test/regress/pg_regress.sh\t3 Jan 2002 21:52:05 -0000\t1.23\n--- src/test/regress/pg_regress.sh\t18 Apr 2002 01:14:30 -0000\n***************\n*** 161,167 ****\n # ----------\n \n case $host_platform in\n! *-*-qnx* | *beos*)\n unix_sockets=no;;\n *)\n unix_sockets=yes;;\n--- 161,167 ----\n # ----------\n \n case $host_platform in\n! *-*-qnx* | *beos* | *-*-sco3.2v5* | *-*-sysv5)\n unix_sockets=no;;\n *)\n unix_sockets=yes;;",
"msg_date": "Wed, 17 Apr 2002 21:17:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware 7.1.1"
},
{
"msg_contents": "The have_unix_sockets test in the regression test script should NOT\ninclude sysv5. They work fine in UnixWare 7.1.1 and OU8.\n\n\n\nOn Wed, 2002-04-17 at 20:17, Bruce Momjian wrote:\n> Nicolas Bazin wrote:\n> > Sorry for the package, but the following patch need to be applied\n> > to get the new verion compiled on SCO Openserver 5.0.5 and\n> > Unixware 7.1.1\n> \n> Reworked patch attached. I reordered configure.in (autoconf will need\n> to be run). I fixed the ODBC pow() call as Tom suggested, and the\n> regression script. I did not touch TCL because that should be reworked\n> for 7.3 anyway.\n> \n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> ----\n> \n\n> Index: configure.in\n> ===================================================================\n> RCS file: /cvsroot/pgsql/configure.in,v\n> retrieving revision 1.178\n> diff -c -r1.178 configure.in\n> *** configure.in\t14 Apr 2002 17:23:20 -0000\t1.178\n> --- configure.in\t18 Apr 2002 01:14:14 -0000\n> ***************\n> *** 696,703 ****\n> AC_CHECK_LIB(util, setproctitle)\n> AC_CHECK_LIB(m, main)\n> AC_CHECK_LIB(dl, main)\n> - AC_CHECK_LIB(socket, main)\n> AC_CHECK_LIB(nsl, main)\n> AC_CHECK_LIB(ipc, main)\n> AC_CHECK_LIB(IPC, main)\n> AC_CHECK_LIB(lc, main)\n> --- 696,703 ----\n> AC_CHECK_LIB(util, setproctitle)\n> AC_CHECK_LIB(m, main)\n> AC_CHECK_LIB(dl, main)\n> AC_CHECK_LIB(nsl, main)\n> + AC_CHECK_LIB(socket, main)\n> AC_CHECK_LIB(ipc, main)\n> AC_CHECK_LIB(IPC, main)\n> AC_CHECK_LIB(lc, main)\n> Index: src/interfaces/odbc/convert.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/interfaces/odbc/convert.c,v\n> retrieving revision 1.78\n> diff -c -r1.78 convert.c\n> *** src/interfaces/odbc/convert.c\t1 Apr 2002 03:01:14 -0000\t1.78\n> --- src/interfaces/odbc/convert.c\t18 
Apr 2002 01:14:29 -0000\n> ***************\n> *** 2717,2723 ****\n> \t\t\t\ty = 0;\n> \n> \tfor (i = 1; i <= 3; i++)\n> ! \t\ty += (s[i] - 48) * (int) pow(8, 3 - i);\n> \n> \treturn y;\n> \n> --- 2717,2723 ----\n> \t\t\t\ty = 0;\n> \n> \tfor (i = 1; i <= 3; i++)\n> ! \t\ty += (s[i] - '0') << (3 * (3 - i));\n> \n> \treturn y;\n> \n> ***************\n> *** 2740,2746 ****\n> \t\telse\n> \t\t\tval = s[i] - '0';\n> \n> ! \t\ty += val * (int) pow(16, 2 - i);\n> \t}\n> \n> \treturn y;\n> --- 2740,2746 ----\n> \t\telse\n> \t\t\tval = s[i] - '0';\n> \n> ! \t\ty += val << (4 * (2 - i));\n> \t}\n> \n> \treturn y;\n> ***************\n> *** 2795,2801 ****\n> \n> \tfor (i = 4; i > 1; i--)\n> \t{\n> ! \t\tx[i] = (val & 7) + 48;\n> \t\tval >>= 3;\n> \t}\n> \n> --- 2795,2801 ----\n> \n> \tfor (i = 4; i > 1; i--)\n> \t{\n> ! \t\tx[i] = (val & 7) + '0';\n> \t\tval >>= 3;\n> \t}\n> \n> Index: src/test/regress/pg_regress.sh\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/test/regress/pg_regress.sh,v\n> retrieving revision 1.23\n> diff -c -r1.23 pg_regress.sh\n> *** src/test/regress/pg_regress.sh\t3 Jan 2002 21:52:05 -0000\t1.23\n> --- src/test/regress/pg_regress.sh\t18 Apr 2002 01:14:30 -0000\n> ***************\n> *** 161,167 ****\n> # ----------\n> \n> case $host_platform in\n> ! *-*-qnx* | *beos*)\n> unix_sockets=no;;\n> *)\n> unix_sockets=yes;;\n> --- 161,167 ----\n> # ----------\n> \n> case $host_platform in\n> ! *-*-qnx* | *beos* | *-*-sco3.2v5* | *-*-sysv5)\n> unix_sockets=no;;\n> *)\n> unix_sockets=yes;;\n> ----\n> \n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "17 Apr 2002 20:21:45 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and"
},
{
"msg_contents": "\nOK, new code is:\n\n\t! *-*-qnx* | *beos* | *-*-sco3.2v5*) \n\n---------------------------------------------------------------------------\n\nLarry Rosenman wrote:\n> The have_unix_sockets test in the regression test script should NOT\n> include sysv5. They work fine in UnixWare 7.1.1 and OU8.\n> \n> \n> \n> On Wed, 2002-04-17 at 20:17, Bruce Momjian wrote:\n> > Nicolas Bazin wrote:\n> > > Sorry for the package, but the following patch need to be applied\n> > > to get the new verion compiled on SCO Openserver 5.0.5 and\n> > > Unixware 7.1.1\n> > \n> > Reworked patch attached. I reordered configure.in (autoconf will need\n> > to be run). I fixed the ODBC pow() call as Tom suggested, and the\n> > regression script. I did not touch TCL because that should be reworked\n> > for 7.3 anyway.\n> > \n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> > ----\n> > \n> \n> > Index: configure.in\n> > ===================================================================\n> > RCS file: /cvsroot/pgsql/configure.in,v\n> > retrieving revision 1.178\n> > diff -c -r1.178 configure.in\n> > *** configure.in\t14 Apr 2002 17:23:20 -0000\t1.178\n> > --- configure.in\t18 Apr 2002 01:14:14 -0000\n> > ***************\n> > *** 696,703 ****\n> > AC_CHECK_LIB(util, setproctitle)\n> > AC_CHECK_LIB(m, main)\n> > AC_CHECK_LIB(dl, main)\n> > - AC_CHECK_LIB(socket, main)\n> > AC_CHECK_LIB(nsl, main)\n> > AC_CHECK_LIB(ipc, main)\n> > AC_CHECK_LIB(IPC, main)\n> > AC_CHECK_LIB(lc, main)\n> > --- 696,703 ----\n> > AC_CHECK_LIB(util, setproctitle)\n> > AC_CHECK_LIB(m, main)\n> > AC_CHECK_LIB(dl, main)\n> > AC_CHECK_LIB(nsl, main)\n> > + AC_CHECK_LIB(socket, main)\n> > AC_CHECK_LIB(ipc, main)\n> > AC_CHECK_LIB(IPC, main)\n> > AC_CHECK_LIB(lc, main)\n> > Index: src/interfaces/odbc/convert.c\n> > ===================================================================\n> > RCS file: /cvsroot/pgsql/src/interfaces/odbc/convert.c,v\n> > retrieving revision 1.78\n> > diff -c -r1.78 convert.c\n> > *** src/interfaces/odbc/convert.c\t1 Apr 2002 03:01:14 -0000\t1.78\n> > --- src/interfaces/odbc/convert.c\t18 Apr 2002 01:14:29 -0000\n> > ***************\n> > *** 2717,2723 ****\n> > \t\t\t\ty = 0;\n> > \n> > \tfor (i = 1; i <= 3; i++)\n> > ! \t\ty += (s[i] - 48) * (int) pow(8, 3 - i);\n> > \n> > \treturn y;\n> > \n> > --- 2717,2723 ----\n> > \t\t\t\ty = 0;\n> > \n> > \tfor (i = 1; i <= 3; i++)\n> > ! \t\ty += (s[i] - '0') << (3 * (3 - i));\n> > \n> > \treturn y;\n> > \n> > ***************\n> > *** 2740,2746 ****\n> > \t\telse\n> > \t\t\tval = s[i] - '0';\n> > \n> > ! \t\ty += val * (int) pow(16, 2 - i);\n> > \t}\n> > \n> > \treturn y;\n> > --- 2740,2746 ----\n> > \t\telse\n> > \t\t\tval = s[i] - '0';\n> > \n> > ! 
\t\ty += val << (4 * (2 - i));\n> > \t}\n> > \n> > \treturn y;\n> > ***************\n> > *** 2795,2801 ****\n> > \n> > \tfor (i = 4; i > 1; i--)\n> > \t{\n> > ! \t\tx[i] = (val & 7) + 48;\n> > \t\tval >>= 3;\n> > \t}\n> > \n> > --- 2795,2801 ----\n> > \n> > \tfor (i = 4; i > 1; i--)\n> > \t{\n> > ! \t\tx[i] = (val & 7) + '0';\n> > \t\tval >>= 3;\n> > \t}\n> > \n> > Index: src/test/regress/pg_regress.sh\n> > ===================================================================\n> > RCS file: /cvsroot/pgsql/src/test/regress/pg_regress.sh,v\n> > retrieving revision 1.23\n> > diff -c -r1.23 pg_regress.sh\n> > *** src/test/regress/pg_regress.sh\t3 Jan 2002 21:52:05 -0000\t1.23\n> > --- src/test/regress/pg_regress.sh\t18 Apr 2002 01:14:30 -0000\n> > ***************\n> > *** 161,167 ****\n> > # ----------\n> > \n> > case $host_platform in\n> > ! *-*-qnx* | *beos*)\n> > unix_sockets=no;;\n> > *)\n> > unix_sockets=yes;;\n> > --- 161,167 ----\n> > # ----------\n> > \n> > case $host_platform in\n> > ! *-*-qnx* | *beos* | *-*-sco3.2v5* | *-*-sysv5)\n> > unix_sockets=no;;\n> > *)\n> > unix_sockets=yes;;\n> > ----\n> > \n> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 21:23:19 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware"
},
{
"msg_contents": "On Wed, 2002-04-17 at 20:23, Bruce Momjian wrote:\n> \n> OK, new code is:\n> \n> \t! *-*-qnx* | *beos* | *-*-sco3.2v5*) \nThank You.\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "17 Apr 2002 20:25:27 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and"
},
{
"msg_contents": "On Wed, 2002-04-17 at 22:16, Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > OK, new code is:\n> >\n> > \t! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> ^^^^^^^^^^^^^\n> \n> I would like to see an explanation for this.\nI personally can't comment on SCO OpenServer. I do know that the\noriginal patch for sysv5 was wrong. (from personal experience). \n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "17 Apr 2002 22:14:20 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > OK, new code is:\n> >\n> > \t! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> ^^^^^^^^^^^^^\n> \n> I would like to see an explanation for this.\n\nThe patch section is below. Not knowing the platform, I have no idea\nwhy.\n\n case $host_platform in\n! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n unix_sockets=no;;\n *)\n unix_sockets=yes;;\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 23:15:30 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> OK, new code is:\n>\n> \t! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n ^^^^^^^^^^^^^\n\nI would like to see an explanation for this.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 17 Apr 2002 23:16:01 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware"
},
{
"msg_contents": "The patch from Bruce does not correct the proper thing. The original patch I\nsubmitted was :\n*** postgresql-7.2.1-rc/src/test/regress/pg_regress.sh Tue Mar 26 16:49:04\n2002\n--- postgresql-7.2.1/src/test/regress/pg_regress.sh Tue Mar 26 17:03:39 2002\n***************\n*** 173,179 ****\n# ----------\ncase $host_platform in\n! *-*-qnx*)\nDIFFFLAGS=-b;;\n*)\nDIFFFLAGS=-w;;\n--- 173,179 ----\n# ----------\ncase $host_platform in\n! *-*-qnx* | *-*-sco3.2v5* | *-*-sysv5)\nDIFFFLAGS=-b;;\n*)\nDIFFFLAGS=-w;;\n\nBecause the diff tool that comes with Openserver or Unixware does not suppor\nthe -w option but the -b option to remove blank characters.\nThere is nothing wrong with unix sockets on both platforms.\n\nNicolas\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Peter Eisentraut\" <peter_e@gmx.net>\nCc: \"Larry Rosenman\" <ler@lerctr.org>; \"Nicolas Bazin\"\n<nbazin@ingenico.com.au>; \"PostgreSQL-patches\"\n<pgsql-patches@postgresql.org>\nSent: Thursday, April 18, 2002 1:15 PM\nSubject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver and\nUnixware\n\n\n> Peter Eisentraut wrote:\n> > Bruce Momjian writes:\n> >\n> > > OK, new code is:\n> > >\n> > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > ^^^^^^^^^^^^^\n> >\n> > I would like to see an explanation for this.\n>\n> The patch section is below. Not knowing the platform, I have no idea\n> why.\n>\n> case $host_platform in\n> ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> unix_sockets=no;;\n> *)\n> unix_sockets=yes;;\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n\n\n",
"msg_date": "Thu, 18 Apr 2002 17:13:24 +1000",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware"
},
{
"msg_contents": "Per the man page on OpenUNIX 8, the UnixWare diff ***DOES*** support -w.\n\nsee the man page at http://www.lerctr.org:457/\n\nLER\n\n\nOn Thu, 2002-04-18 at 02:13, Nicolas Bazin wrote:\n> The patch from Bruce does not correct the proper thing. The original patch I\n> submitted was :\n> *** postgresql-7.2.1-rc/src/test/regress/pg_regress.sh Tue Mar 26 16:49:04\n> 2002\n> --- postgresql-7.2.1/src/test/regress/pg_regress.sh Tue Mar 26 17:03:39 2002\n> ***************\n> *** 173,179 ****\n> # ----------\n> case $host_platform in\n> ! *-*-qnx*)\n> DIFFFLAGS=-b;;\n> *)\n> DIFFFLAGS=-w;;\n> --- 173,179 ----\n> # ----------\n> case $host_platform in\n> ! *-*-qnx* | *-*-sco3.2v5* | *-*-sysv5)\n> DIFFFLAGS=-b;;\n> *)\n> DIFFFLAGS=-w;;\n> \n> Because the diff tool that comes with Openserver or Unixware does not suppor\n> the -w option but the -b option to remove blank characters.\n> There is nothing wrong with unix sockets on both platforms.\n> \n> Nicolas\n> \n> ----- Original Message -----\n> From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> To: \"Peter Eisentraut\" <peter_e@gmx.net>\n> Cc: \"Larry Rosenman\" <ler@lerctr.org>; \"Nicolas Bazin\"\n> <nbazin@ingenico.com.au>; \"PostgreSQL-patches\"\n> <pgsql-patches@postgresql.org>\n> Sent: Thursday, April 18, 2002 1:15 PM\n> Subject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver and\n> Unixware\n> \n> \n> > Peter Eisentraut wrote:\n> > > Bruce Momjian writes:\n> > >\n> > > > OK, new code is:\n> > > >\n> > > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > > ^^^^^^^^^^^^^\n> > >\n> > > I would like to see an explanation for this.\n> >\n> > The patch section is below. Not knowing the platform, I have no idea\n> > why.\n> >\n> > case $host_platform in\n> > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > unix_sockets=no;;\n> > *)\n> > unix_sockets=yes;;\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "18 Apr 2002 09:38:06 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and"
},
{
"msg_contents": "Sorry, here is an updated patch. I didn't realize there were two QNX\ntests in the file.\n\nThe patch has:\n\n\tcase $host_platform in\n\t *-*-qnx* | *-*-sco3.2v5*)\n\t DIFFFLAGS=-b;;\n\t *)\n\t DIFFFLAGS=-w;;\n\tesac\n\nIs this correct. We don't need the sysv5? It is my understanding that\nsysv5 is much more than Open Server 8.\n\n---------------------------------------------------------------------------\n\nNicolas Bazin wrote:\n> The patch from Bruce does not correct the proper thing. The original patch I\n> submitted was :\n> *** postgresql-7.2.1-rc/src/test/regress/pg_regress.sh Tue Mar 26 16:49:04\n> 2002\n> --- postgresql-7.2.1/src/test/regress/pg_regress.sh Tue Mar 26 17:03:39 2002\n> ***************\n> *** 173,179 ****\n> # ----------\n> case $host_platform in\n> ! *-*-qnx*)\n> DIFFFLAGS=-b;;\n> *)\n> DIFFFLAGS=-w;;\n> --- 173,179 ----\n> # ----------\n> case $host_platform in\n> ! *-*-qnx* | *-*-sco3.2v5* | *-*-sysv5)\n> DIFFFLAGS=-b;;\n> *)\n> DIFFFLAGS=-w;;\n> \n> Because the diff tool that comes with Openserver or Unixware does not suppor\n> the -w option but the -b option to remove blank characters.\n> There is nothing wrong with unix sockets on both platforms.\n> \n> Nicolas\n> \n> ----- Original Message -----\n> From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> To: \"Peter Eisentraut\" <peter_e@gmx.net>\n> Cc: \"Larry Rosenman\" <ler@lerctr.org>; \"Nicolas Bazin\"\n> <nbazin@ingenico.com.au>; \"PostgreSQL-patches\"\n> <pgsql-patches@postgresql.org>\n> Sent: Thursday, April 18, 2002 1:15 PM\n> Subject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver and\n> Unixware\n> \n> \n> > Peter Eisentraut wrote:\n> > > Bruce Momjian writes:\n> > >\n> > > > OK, new code is:\n> > > >\n> > > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > > ^^^^^^^^^^^^^\n> > >\n> > > I would like to see an explanation for this.\n> >\n> > The patch section is below. Not knowing the platform, I have no idea\n> > why.\n> >\n> > case $host_platform in\n> > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > unix_sockets=no;;\n> > *)\n> > unix_sockets=yes;;\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> >\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql/configure.in,v\nretrieving revision 1.178\ndiff -c -r1.178 configure.in\n*** configure.in\t14 Apr 2002 17:23:20 -0000\t1.178\n--- configure.in\t18 Apr 2002 15:26:08 -0000\n***************\n*** 696,703 ****\n AC_CHECK_LIB(util, setproctitle)\n AC_CHECK_LIB(m, main)\n AC_CHECK_LIB(dl, main)\n- AC_CHECK_LIB(socket, main)\n AC_CHECK_LIB(nsl, main)\n AC_CHECK_LIB(ipc, main)\n AC_CHECK_LIB(IPC, main)\n AC_CHECK_LIB(lc, main)\n--- 696,703 ----\n AC_CHECK_LIB(util, setproctitle)\n AC_CHECK_LIB(m, main)\n AC_CHECK_LIB(dl, main)\n AC_CHECK_LIB(nsl, main)\n+ AC_CHECK_LIB(socket, main)\n AC_CHECK_LIB(ipc, main)\n AC_CHECK_LIB(IPC, main)\n AC_CHECK_LIB(lc, main)\nIndex: src/interfaces/odbc/convert.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/interfaces/odbc/convert.c,v\nretrieving revision 1.78\ndiff -c -r1.78 convert.c\n*** src/interfaces/odbc/convert.c\t1 Apr 2002 03:01:14 -0000\t1.78\n--- src/interfaces/odbc/convert.c\t18 Apr 2002 15:26:23 -0000\n***************\n*** 2717,2723 ****\n \t\t\t\ty = 0;\n \n \tfor (i = 1; i <= 3; i++)\n! \t\ty += (s[i] - 48) * (int) pow(8, 3 - i);\n \n \treturn y;\n \n--- 2717,2723 ----\n \t\t\t\ty = 0;\n \n \tfor (i = 1; i <= 3; i++)\n! \t\ty += (s[i] - '0') << (3 * (3 - i));\n \n \treturn y;\n \n***************\n*** 2740,2746 ****\n \t\telse\n \t\t\tval = s[i] - '0';\n \n! \t\ty += val * (int) pow(16, 2 - i);\n \t}\n \n \treturn y;\n--- 2740,2746 ----\n \t\telse\n \t\t\tval = s[i] - '0';\n \n! \t\ty += val << (4 * (2 - i));\n \t}\n \n \treturn y;\n***************\n*** 2795,2801 ****\n \n \tfor (i = 4; i > 1; i--)\n \t{\n! \t\tx[i] = (val & 7) + 48;\n \t\tval >>= 3;\n \t}\n \n--- 2795,2801 ----\n \n \tfor (i = 4; i > 1; i--)\n \t{\n! \t\tx[i] = (val & 7) + '0';\n \t\tval >>= 3;\n \t}\n \nIndex: src/test/regress/pg_regress.sh\n===================================================================\nRCS file: /cvsroot/pgsql/src/test/regress/pg_regress.sh,v\nretrieving revision 1.23\ndiff -c -r1.23 pg_regress.sh\n*** src/test/regress/pg_regress.sh\t3 Jan 2002 21:52:05 -0000\t1.23\n--- src/test/regress/pg_regress.sh\t18 Apr 2002 15:26:24 -0000\n***************\n*** 173,179 ****\n # ----------\n \n case $host_platform in\n! *-*-qnx*)\n DIFFFLAGS=-b;;\n *)\n DIFFFLAGS=-w;;\n--- 173,179 ----\n # ----------\n \n case $host_platform in\n! *-*-qnx* | *-*-sco3.2v5*)\n DIFFFLAGS=-b;;\n *)\n DIFFFLAGS=-w;;",
"msg_date": "Thu, 18 Apr 2002 11:31:24 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> The patch has:\n>\n> \tcase $host_platform in\n> \t *-*-qnx* | *-*-sco3.2v5*)\n> \t DIFFFLAGS=-b;;\n> \t *)\n> \t DIFFFLAGS=-w;;\n> \tesac\n>\n> Is this correct. We don't need the sysv5? It is my understanding that\n> sysv5 is much more than Open Server 8.\n\nDo we need this at all? Why not simply fix the expected files to remove\nthe whitespace differences?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 18 Apr 2002 12:33:48 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Do we need this at all? Why not simply fix the expected files to remove\n> the whitespace differences?\n\nThere shouldn't *be* any whitespace differences in a successful test.\nThe point of the -w switch is not to make any difference in a successful\ntest, it is to reduce the amount of irrelevant stuff printed in a failed\ntest. If you have one bogus output item that is wider than the correct\nvalue, that can cause psql to reformat the display wider in all rows.\nThe point of using -w is to not show those other rows as changed.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Apr 2002 12:57:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware "
},
{
"msg_contents": "Well I use Unixware 7.1.1. I cannot check again right now whether 7.1.1\nsupports -w but if I made the effort to change the shell, I guess I had a\ngood reason. Next time I put my hand on my Unixware PC I'll double check.\n\nNicolas\n\n----- Original Message -----\nFrom: \"Larry Rosenman\" <ler@lerctr.org>\nTo: \"Nicolas Bazin\" <nbazin@ingenico.com.au>\nCc: \"Bruce Momjian\" <pgman@candle.pha.pa.us>; \"Peter Eisentraut\"\n<peter_e@gmx.net>; \"PostgreSQL-patches\" <pgsql-patches@postgresql.org>\nSent: Friday, April 19, 2002 12:38 AM\nSubject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver\nandUnixware\n\n\n> Per the man page on OpenUNIX 8, the UnixWare diff ***DOES*** support -w.\n>\n> see the man page at http://www.lerctr.org:457/\n>\n> LER\n>\n>\n> On Thu, 2002-04-18 at 02:13, Nicolas Bazin wrote:\n> > The patch from Bruce does not correct the proper thing. The original\npatch I\n> > submitted was :\n> > *** postgresql-7.2.1-rc/src/test/regress/pg_regress.sh Tue Mar 26\n16:49:04\n> > 2002\n> > --- postgresql-7.2.1/src/test/regress/pg_regress.sh Tue Mar 26 17:03:39\n2002\n> > ***************\n> > *** 173,179 ****\n> > # ----------\n> > case $host_platform in\n> > ! *-*-qnx*)\n> > DIFFFLAGS=-b;;\n> > *)\n> > DIFFFLAGS=-w;;\n> > --- 173,179 ----\n> > # ----------\n> > case $host_platform in\n> > ! *-*-qnx* | *-*-sco3.2v5* | *-*-sysv5)\n> > DIFFFLAGS=-b;;\n> > *)\n> > DIFFFLAGS=-w;;\n> >\n> > Because the diff tool that comes with Openserver or Unixware does not\nsuppor\n> > the -w option but the -b option to remove blank characters.\n> > There is nothing wrong with unix sockets on both platforms.\n> >\n> > Nicolas\n> >\n> > ----- Original Message -----\n> > From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> > To: \"Peter Eisentraut\" <peter_e@gmx.net>\n> > Cc: \"Larry Rosenman\" <ler@lerctr.org>; \"Nicolas Bazin\"\n> > <nbazin@ingenico.com.au>; \"PostgreSQL-patches\"\n> > <pgsql-patches@postgresql.org>\n> > Sent: Thursday, April 18, 2002 1:15 PM\n> > Subject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver and\n> > Unixware\n> >\n> >\n> > > Peter Eisentraut wrote:\n> > > > Bruce Momjian writes:\n> > > >\n> > > > > OK, new code is:\n> > > > >\n> > > > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > > > ^^^^^^^^^^^^^\n> > > >\n> > > > I would like to see an explanation for this.\n> > >\n> > > The patch section is below. Not knowing the platform, I have no idea\n> > > why.\n> > >\n> > > case $host_platform in\n> > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > > unix_sockets=no;;\n> > > *)\n> > > unix_sockets=yes;;\n> > >\n> > > --\n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > pgman@candle.pha.pa.us | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n> > >\n> > > ---------------------------(end of\nbroadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> > >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>\n>\n>\n>\n\n\n\n",
"msg_date": "Fri, 19 Apr 2002 08:53:59 +1000",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver andUnixware"
},
{
"msg_contents": "Well then it means that we will have to make a difference between Unixware\n7.1.1 and OpenUnix 8.0.\n\n----- Original Message -----\nFrom: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nTo: \"Nicolas Bazin\" <nbazin@ingenico.com.au>\nCc: \"Peter Eisentraut\" <peter_e@gmx.net>; \"Larry Rosenman\" <ler@lerctr.org>;\n\"PostgreSQL-patches\" <pgsql-patches@postgresql.org>\nSent: Friday, April 19, 2002 1:31 AM\nSubject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver and\nUnixware\n\n\n>\n> Sorry, here is an updated patch. I didn't realize there were two QNX\n> tests in the file.\n>\n> The patch has:\n>\n> case $host_platform in\n> *-*-qnx* | *-*-sco3.2v5*)\n> DIFFFLAGS=-b;;\n> *)\n> DIFFFLAGS=-w;;\n> esac\n>\n> Is this correct. We don't need the sysv5? It is my understanding that\n> sysv5 is much more than Open Server 8.\n>\n> --------------------------------------------------------------------------\n-\n>\n> Nicolas Bazin wrote:\n> > The patch from Bruce does not correct the proper thing. The original\npatch I\n> > submitted was :\n> > *** postgresql-7.2.1-rc/src/test/regress/pg_regress.sh Tue Mar 26\n16:49:04\n> > 2002\n> > --- postgresql-7.2.1/src/test/regress/pg_regress.sh Tue Mar 26 17:03:39\n2002\n> > ***************\n> > *** 173,179 ****\n> > # ----------\n> > case $host_platform in\n> > ! *-*-qnx*)\n> > DIFFFLAGS=-b;;\n> > *)\n> > DIFFFLAGS=-w;;\n> > --- 173,179 ----\n> > # ----------\n> > case $host_platform in\n> > ! *-*-qnx* | *-*-sco3.2v5* | *-*-sysv5)\n> > DIFFFLAGS=-b;;\n> > *)\n> > DIFFFLAGS=-w;;\n> >\n> > Because the diff tool that comes with Openserver or Unixware does not\nsuppor\n> > the -w option but the -b option to remove blank characters.\n> > There is nothing wrong with unix sockets on both platforms.\n> >\n> > Nicolas\n> >\n> > ----- Original Message -----\n> > From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> > To: \"Peter Eisentraut\" <peter_e@gmx.net>\n> > Cc: \"Larry Rosenman\" <ler@lerctr.org>; \"Nicolas Bazin\"\n> > <nbazin@ingenico.com.au>; \"PostgreSQL-patches\"\n> > <pgsql-patches@postgresql.org>\n> > Sent: Thursday, April 18, 2002 1:15 PM\n> > Subject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver and\n> > Unixware\n> >\n> >\n> > > Peter Eisentraut wrote:\n> > > > Bruce Momjian writes:\n> > > >\n> > > > > OK, new code is:\n> > > > >\n> > > > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > > > ^^^^^^^^^^^^^\n> > > >\n> > > > I would like to see an explanation for this.\n> > >\n> > > The patch section is below. Not knowing the platform, I have no idea\n> > > why.\n> > >\n> > > case $host_platform in\n> > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > > unix_sockets=no;;\n> > > *)\n> > > unix_sockets=yes;;\n> > >\n> > > --\n> > > Bruce Momjian | http://candle.pha.pa.us\n> > > pgman@candle.pha.pa.us | (610) 853-3000\n> > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > + Christ can be your backup. | Drexel Hill, Pennsylvania\n19026\n> > >\n> > > ---------------------------(end of\nbroadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> > >\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n>\n\n\n----------------------------------------------------------------------------\n----\n\n\n> Index: configure.in\n> ===================================================================\n> RCS file: /cvsroot/pgsql/configure.in,v\n> retrieving revision 1.178\n> diff -c -r1.178 configure.in\n> *** configure.in 14 Apr 2002 17:23:20 -0000 1.178\n> --- configure.in 18 Apr 2002 15:26:08 -0000\n> ***************\n> *** 696,703 ****\n> AC_CHECK_LIB(util, setproctitle)\n> AC_CHECK_LIB(m, main)\n> AC_CHECK_LIB(dl, main)\n> - AC_CHECK_LIB(socket, main)\n> AC_CHECK_LIB(nsl, main)\n> AC_CHECK_LIB(ipc, main)\n> AC_CHECK_LIB(IPC, main)\n> AC_CHECK_LIB(lc, main)\n> --- 696,703 ----\n> AC_CHECK_LIB(util, setproctitle)\n> AC_CHECK_LIB(m, main)\n> AC_CHECK_LIB(dl, main)\n> AC_CHECK_LIB(nsl, main)\n> + AC_CHECK_LIB(socket, main)\n> AC_CHECK_LIB(ipc, main)\n> AC_CHECK_LIB(IPC, main)\n> AC_CHECK_LIB(lc, main)\n> Index: src/interfaces/odbc/convert.c\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/interfaces/odbc/convert.c,v\n> retrieving revision 1.78\n> diff -c -r1.78 convert.c\n> *** src/interfaces/odbc/convert.c 1 Apr 2002 03:01:14 -0000 1.78\n> --- src/interfaces/odbc/convert.c 18 Apr 2002 15:26:23 -0000\n> ***************\n> *** 2717,2723 ****\n> y = 0;\n>\n> for (i = 1; i <= 3; i++)\n> ! y += (s[i] - 48) * (int) pow(8, 3 - i);\n>\n> return y;\n>\n> --- 2717,2723 ----\n> y = 0;\n>\n> for (i = 1; i <= 3; i++)\n> ! y += (s[i] - '0') << (3 * (3 - i));\n>\n> return y;\n>\n> ***************\n> *** 2740,2746 ****\n> else\n> val = s[i] - '0';\n>\n> ! y += val * (int) pow(16, 2 - i);\n> }\n>\n> return y;\n> --- 2740,2746 ----\n> else\n> val = s[i] - '0';\n>\n> ! y += val << (4 * (2 - i));\n> }\n>\n> return y;\n> ***************\n> *** 2795,2801 ****\n>\n> for (i = 4; i > 1; i--)\n> {\n> ! x[i] = (val & 7) + 48;\n> val >>= 3;\n> }\n>\n> --- 2795,2801 ----\n>\n> for (i = 4; i > 1; i--)\n> {\n> ! x[i] = (val & 7) + '0';\n> val >>= 3;\n> }\n>\n> Index: src/test/regress/pg_regress.sh\n> ===================================================================\n> RCS file: /cvsroot/pgsql/src/test/regress/pg_regress.sh,v\n> retrieving revision 1.23\n> diff -c -r1.23 pg_regress.sh\n> *** src/test/regress/pg_regress.sh 3 Jan 2002 21:52:05 -0000 1.23\n> --- src/test/regress/pg_regress.sh 18 Apr 2002 15:26:24 -0000\n> ***************\n> *** 173,179 ****\n> # ----------\n>\n> case $host_platform in\n> ! *-*-qnx*)\n> DIFFFLAGS=-b;;\n> *)\n> DIFFFLAGS=-w;;\n> --- 173,179 ----\n> # ----------\n>\n> case $host_platform in\n> ! *-*-qnx* | *-*-sco3.2v5*)\n> DIFFFLAGS=-b;;\n> *)\n> DIFFFLAGS=-w;;\n>\n\n\n----------------------------------------------------------------------------\n----\n\n\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n\n",
"msg_date": "Fri, 19 Apr 2002 08:56:36 +1000",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware"
},
{
"msg_contents": "Unixware 711 manual says diff supports -w as said in :\n-w\n\tIgnores all blanks (<Space> and TAB characters) and treats all\n\tother strings of blanks as equivalent; for example,\n\tif ( a == b ) will compare eaqual to if(a==b).\n\nHope it helps,\nOn Fri, 19 Apr 2002, Nicolas Bazin wrote:\n\n> Well I use Unixware 7.1.1. I cannot check again right now whether 7.1.1\n> supports -w but if I made the effort to change the shell, I guess I had a\n> good reason. Next time I put my hand on my Unixware PC I'll double check.\n> \n> Nicolas\n> \n> ----- Original Message -----\n> From: \"Larry Rosenman\" <ler@lerctr.org>\n> To: \"Nicolas Bazin\" <nbazin@ingenico.com.au>\n> Cc: \"Bruce Momjian\" <pgman@candle.pha.pa.us>; \"Peter Eisentraut\"\n> <peter_e@gmx.net>; \"PostgreSQL-patches\" <pgsql-patches@postgresql.org>\n> Sent: Friday, April 19, 2002 12:38 AM\n> Subject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver\n> andUnixware\n> \n> \n> > Per the man page on OpenUNIX 8, the UnixWare diff ***DOES*** support -w.\n> >\n> > see the man page at http://www.lerctr.org:457/\n> >\n> > LER\n> >\n> >\n> > On Thu, 2002-04-18 at 02:13, Nicolas Bazin wrote:\n> > > The patch from Bruce does not correct the proper thing. The original\n> patch I\n> > > submitted was :\n> > > *** postgresql-7.2.1-rc/src/test/regress/pg_regress.sh Tue Mar 26\n> 16:49:04\n> > > 2002\n> > > --- postgresql-7.2.1/src/test/regress/pg_regress.sh Tue Mar 26 17:03:39\n> 2002\n> > > ***************\n> > > *** 173,179 ****\n> > > # ----------\n> > > case $host_platform in\n> > > ! *-*-qnx*)\n> > > DIFFFLAGS=-b;;\n> > > *)\n> > > DIFFFLAGS=-w;;\n> > > --- 173,179 ----\n> > > # ----------\n> > > case $host_platform in\n> > > ! *-*-qnx* | *-*-sco3.2v5* | *-*-sysv5)\n> > > DIFFFLAGS=-b;;\n> > > *)\n> > > DIFFFLAGS=-w;;\n> > >\n> > > Because the diff tool that comes with Openserver or Unixware does not\n> suppor\n> > > the -w option but the -b option to remove blank characters.\n> > > There is nothing wrong with unix sockets on both platforms.\n> > >\n> > > Nicolas\n> > >\n> > > ----- Original Message -----\n> > > From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> > > To: \"Peter Eisentraut\" <peter_e@gmx.net>\n> > > Cc: \"Larry Rosenman\" <ler@lerctr.org>; \"Nicolas Bazin\"\n> > > <nbazin@ingenico.com.au>; \"PostgreSQL-patches\"\n> > > <pgsql-patches@postgresql.org>\n> > > Sent: Thursday, April 18, 2002 1:15 PM\n> > > Subject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver and\n> > > Unixware\n> > >\n> > >\n> > > > Peter Eisentraut wrote:\n> > > > > Bruce Momjian writes:\n> > > > >\n> > > > > > OK, new code is:\n> > > > > >\n> > > > > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > > > > ^^^^^^^^^^^^^\n> > > > >\n> > > > > I would like to see an explanation for this.\n> > > >\n> > > > The patch section is below. Not knowing the platform, I have no idea\n> > > > why.\n> > > >\n> > > > case $host_platform in\n> > > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > > > unix_sockets=no;;\n> > > > *)\n> > > > unix_sockets=yes;;\n> > > >\n> > > > --\n> > > > Bruce Momjian | http://candle.pha.pa.us\n> > > > pgman@candle.pha.pa.us | (610) 853-3000\n> > > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > > + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> > > >\n> > > > ---------------------------(end of\n> broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > >\n> > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > >\n> > > >\n> > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> > >\n> > --\n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> >\n> >\n> >\n> >\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n",
"msg_date": "Fri, 19 Apr 2002 13:04:16 +0200 (MET DST)",
"msg_from": "Olivier PRENANT <ohp@pyrenet.fr>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver andUnixware"
},
{
"msg_contents": "\nOK, I will apply the patch as it current stands and wait for additional\npatches.\n\n\n\n---------------------------------------------------------------------------\n\nNicolas Bazin wrote:\n> Well then it means that we will have to make a difference between Unixware\n> 7.1.1 and OpenUnix 8.0.\n> \n> ----- Original Message -----\n> From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> To: \"Nicolas Bazin\" <nbazin@ingenico.com.au>\n> Cc: \"Peter Eisentraut\" <peter_e@gmx.net>; \"Larry Rosenman\" <ler@lerctr.org>;\n> \"PostgreSQL-patches\" <pgsql-patches@postgresql.org>\n> Sent: Friday, April 19, 2002 1:31 AM\n> Subject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver and\n> Unixware\n> \n> \n> >\n> > Sorry, here is an updated patch. I didn't realize there were two QNX\n> > tests in the file.\n> >\n> > The patch has:\n> >\n> > case $host_platform in\n> > *-*-qnx* | *-*-sco3.2v5*)\n> > DIFFFLAGS=-b;;\n> > *)\n> > DIFFFLAGS=-w;;\n> > esac\n> >\n> > Is this correct. We don't need the sysv5? It is my understanding that\n> > sysv5 is much more than Open Server 8.\n> >\n> > --------------------------------------------------------------------------\n> -\n> >\n> > Nicolas Bazin wrote:\n> > > The patch from Bruce does not correct the proper thing. The original\n> patch I\n> > > submitted was :\n> > > *** postgresql-7.2.1-rc/src/test/regress/pg_regress.sh Tue Mar 26\n> 16:49:04\n> > > 2002\n> > > --- postgresql-7.2.1/src/test/regress/pg_regress.sh Tue Mar 26 17:03:39\n> 2002\n> > > ***************\n> > > *** 173,179 ****\n> > > # ----------\n> > > case $host_platform in\n> > > ! *-*-qnx*)\n> > > DIFFFLAGS=-b;;\n> > > *)\n> > > DIFFFLAGS=-w;;\n> > > --- 173,179 ----\n> > > # ----------\n> > > case $host_platform in\n> > > ! *-*-qnx* | *-*-sco3.2v5* | *-*-sysv5)\n> > > DIFFFLAGS=-b;;\n> > > *)\n> > > DIFFFLAGS=-w;;\n> > >\n> > > Because the diff tool that comes with Openserver or Unixware does not\n> suppor\n> > > the -w option but the -b option to remove blank characters.\n> > > There is nothing wrong with unix sockets on both platforms.\n> > >\n> > > Nicolas\n> > >\n> > > ----- Original Message -----\n> > > From: \"Bruce Momjian\" <pgman@candle.pha.pa.us>\n> > > To: \"Peter Eisentraut\" <peter_e@gmx.net>\n> > > Cc: \"Larry Rosenman\" <ler@lerctr.org>; \"Nicolas Bazin\"\n> > > <nbazin@ingenico.com.au>; \"PostgreSQL-patches\"\n> > > <pgsql-patches@postgresql.org>\n> > > Sent: Thursday, April 18, 2002 1:15 PM\n> > > Subject: Re: [PATCHES] [HACKERS] build of 7.2.1 on SCO Openserver and\n> > > Unixware\n> > >\n> > >\n> > > > Peter Eisentraut wrote:\n> > > > > Bruce Momjian writes:\n> > > > >\n> > > > > > OK, new code is:\n> > > > > >\n> > > > > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > > > > ^^^^^^^^^^^^^\n> > > > >\n> > > > > I would like to see an explanation for this.\n> > > >\n> > > > The patch section is below. Not knowing the platform, I have no idea\n> > > > why.\n> > > >\n> > > > case $host_platform in\n> > > > ! *-*-qnx* | *beos* | *-*-sco3.2v5*)\n> > > > unix_sockets=no;;\n> > > > *)\n> > > > unix_sockets=yes;;\n> > > >\n> > > > --\n> > > > Bruce Momjian | http://candle.pha.pa.us\n> > > > pgman@candle.pha.pa.us | (610) 853-3000\n> > > > + If your life is a hard drive, | 830 Blythe Avenue\n> > > > + Christ can be your backup. | Drexel Hill, Pennsylvania\n> 19026\n> > > >\n> > > > ---------------------------(end of\n> broadcast)---------------------------\n> > > > TIP 5: Have you checked our extensive FAQ?\n> > > >\n> > > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > > >\n> > > >\n> > >\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 4: Don't 'kill -9' the postmaster\n> > >\n> >\n> > --\n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> >\n> \n> \n> ----------------------------------------------------------------------------\n> ----\n> \n> \n> > Index: configure.in\n> > ===================================================================\n> > RCS file: /cvsroot/pgsql/configure.in,v\n> > retrieving revision 1.178\n> > diff -c -r1.178 configure.in\n> > *** configure.in 14 Apr 2002 17:23:20 -0000 1.178\n> > --- configure.in 18 Apr 2002 15:26:08 -0000\n> > ***************\n> > *** 696,703 ****\n> > AC_CHECK_LIB(util, setproctitle)\n> > AC_CHECK_LIB(m, main)\n> > AC_CHECK_LIB(dl, main)\n> > - AC_CHECK_LIB(socket, main)\n> > AC_CHECK_LIB(nsl, main)\n> > AC_CHECK_LIB(ipc, main)\n> > AC_CHECK_LIB(IPC, main)\n> > AC_CHECK_LIB(lc, main)\n> > --- 696,703 ----\n> > AC_CHECK_LIB(util, setproctitle)\n> > AC_CHECK_LIB(m, main)\n> > AC_CHECK_LIB(dl, main)\n> > AC_CHECK_LIB(nsl, main)\n> > + AC_CHECK_LIB(socket, main)\n> > AC_CHECK_LIB(ipc, main)\n> > AC_CHECK_LIB(IPC, main)\n> > AC_CHECK_LIB(lc, main)\n> > Index: src/interfaces/odbc/convert.c\n> > ===================================================================\n> > RCS file: /cvsroot/pgsql/src/interfaces/odbc/convert.c,v\n> > retrieving revision 1.78\n> > diff -c -r1.78 convert.c\n> > *** src/interfaces/odbc/convert.c 1 Apr 2002 03:01:14 -0000 1.78\n> > --- src/interfaces/odbc/convert.c 18 Apr 2002 15:26:23 -0000\n> > ***************\n> > *** 2717,2723 ****\n> > y = 0;\n> >\n> > for (i = 1; i <= 3; i++)\n> > ! y += (s[i] - 48) * (int) pow(8, 3 - i);\n> >\n> > return y;\n> >\n> > --- 2717,2723 ----\n> > y = 0;\n> >\n> > for (i = 1; i <= 3; i++)\n> > ! y += (s[i] - '0') << (3 * (3 - i));\n> >\n> > return y;\n> >\n> > ***************\n> > *** 2740,2746 ****\n> > else\n> > val = s[i] - '0';\n> >\n> > ! y += val * (int) pow(16, 2 - i);\n> > }\n> >\n> > return y;\n> > --- 2740,2746 ----\n> > else\n> > val = s[i] - '0';\n> >\n> > ! y += val << (4 * (2 - i));\n> > }\n> >\n> > return y;\n> > ***************\n> > *** 2795,2801 ****\n> >\n> > for (i = 4; i > 1; i--)\n> > {\n> > ! x[i] = (val & 7) + 48;\n> > val >>= 3;\n> > }\n> >\n> > --- 2795,2801 ----\n> >\n> > for (i = 4; i > 1; i--)\n> > {\n> > ! x[i] = (val & 7) + '0';\n> > val >>= 3;\n> > }\n> >\n> > Index: src/test/regress/pg_regress.sh\n> > ===================================================================\n> > RCS file: /cvsroot/pgsql/src/test/regress/pg_regress.sh,v\n> > retrieving revision 1.23\n> > diff -c -r1.23 pg_regress.sh\n> > *** src/test/regress/pg_regress.sh 3 Jan 2002 21:52:05 -0000 1.23\n> > --- src/test/regress/pg_regress.sh 18 Apr 2002 15:26:24 -0000\n> > ***************\n> > *** 173,179 ****\n> > # ----------\n> >\n> > case $host_platform in\n> > ! *-*-qnx*)\n> > DIFFFLAGS=-b;;\n> > *)\n> > DIFFFLAGS=-w;;\n> > --- 173,179 ----\n> > # ----------\n> >\n> > case $host_platform in\n> > ! *-*-qnx* | *-*-sco3.2v5*)\n> > DIFFFLAGS=-b;;\n> > *)\n> > DIFFFLAGS=-w;;\n> >\n> \n> \n> ----------------------------------------------------------------------------\n> ----\n> \n> \n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 6: Have you searched our list archives?\n> >\n> > http://archives.postgresql.org\n> >\n> \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Apr 2002 12:51:38 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] build of 7.2.1 on SCO Openserver and Unixware"
},
{
"msg_contents": " \nUpdated version applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nNicolas Bazin wrote:\n> Sorry for the package, but the following patch needs to be applied to get the new version compiled on SCO Openserver 5.0.5 and Unixware 7.1.1\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 23 Apr 2002 21:57:17 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: build of 7.2.1 on SCO Openserver and Unixware 7.1.1"
}
] |
[
{
"msg_contents": "Has anyone ever considered setting up LXR to cross-reference pgsql \nsource on developer.postgresql.org or maybe techdocs.postgresql.org? I \nhave seen it used in a couple of projects (originally written for the \nLinux kernel; lxr.php.net is another user), and it seems pretty useful. \nI set one up if anyone wants to take a look:\n\nhttp://www.joeconway.com/lxr.pgsql/\n\nIt just needs apache, perl, glimpse, agrep, and a small amount of \nconfiguration (and of course a fresh copy of cvs tip).\n\nJoe\n\n\n",
"msg_date": "Mon, 25 Mar 2002 22:32:25 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": true,
"msg_subject": "PostgreSQL cross-reference using LXR"
},
{
"msg_contents": "I'd love such a service...\n\nMakes it a lot easier for me to figure out what the heck all the functions\nand types are...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Joe Conway\n> Sent: Tuesday, 26 March 2002 2:32 PM\n> To: pgsql-hackers\n> Subject: [HACKERS] PostgreSQL cross-reference using LXR\n>\n>\n> Has anyone ever considered setting up LXR to cross-reference pgsql\n> source on developer.postgresql.org or maybe techdocs.postgresql.org? I\n> have seen it used in a couple of projects (originally written for the\n> Linux kernel; lxr.php.net is another user), and it seems pretty useful.\n> I set one up if anyone wants to take a look:\n>\n> http://www.joeconway.com/lxr.pgsql/\n>\n> It just needs apache, perl, glimpse, agrep, and a small amount of\n> configuration (and of course a fresh copy of cvs tip).\n>\n> Joe\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n",
"msg_date": "Tue, 26 Mar 2002 14:52:20 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL cross-reference using LXR"
},
{
"msg_contents": ">> http://www.joeconway.com/lxr.pgsql/\n>> \n>> It just needs apache, perl, glimpse, agrep, and a small amount of\n>> configuration (and of course a fresh copy of cvs tip).\n\nI've long used glimpse + an emacs macro for searching the sources.\nLXR seems to add little to the glimpse engine except for an unreliable\ngloss on what the references are --- for entertainment try searching\non a common local variable name, eg\nhttp://www.joeconway.com/lxr.pgsql/ident?i=relname\n(I think I'd better look for myself instead of trust this).\n\nAnother problem is that actually examining each reference is ungodly\npainful in LXR --- I don't even see *how* to visit multiple references\nin a single file, and you certainly can't visit a bunch of 'em without\nlots of mousing-around. Never mind editing each one once you've found\nit.\n\nThink I'll stick with Control-x backquote ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Mar 2002 02:16:47 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PostgreSQL cross-reference using LXR "
}
] |
[
{
"msg_contents": "I plan to upgrade to Autoconf 2.53 in the next couple of days. If anyone\nhas configure changes pending or other objections, speak now.\n\nAn RPM package for Autoconf 2.53 is available at\n\nhttp://www.postgresql.org/~petere/download/\n\nYou might wish to use this until your vendor comes out with one.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 26 Mar 2002 11:55:28 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Autoconf upgrade"
},
{
"msg_contents": "> I plan to upgrade to Autoconf 2.53 in the next couple of days. If anyone\n> has configure changes pending or other objections, speak now.\n\nI'm working on the integer timestamp issue, which seems to have an\nautoconf component. I was hoping for feedback on the question of setting\nHAVE_LONG_INT_64, HAVE_LONG_LONG_INT_64, and INT64_IS_BUSTED, how that\nrelates to USE_INTEGER_DATETIMES, and how much of this to push into\nautoconf. istm that most of it needs to be in autoconf so that we get\nthese defined consistently.\n\n - Thomas\n",
"msg_date": "Tue, 26 Mar 2002 09:15:33 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgrade"
}
] |
[
{
"msg_contents": "Tom Lane wrote:\n> As of CVS tip, referential integrity triggers are kinda broken: they\n> will only work for tablenames that are in the current search path.\n> I think that instead of storing just table names in the trigger\n> parameters, we should store either table OIDs or schema name + table\n> name. Do you have any preferences about this?\n>\n> An advantage of using OIDs is that we could forget the pushups that\n> ALTER TABLE RENAME presently goes through to update RI triggers.\n>\n> On the other hand, as long as the RI implementation depends on\n> generating textual queries, it'd be faster to have the names available\n> than to have to look them up from the OID. But I seem to recall Stephan\n> threatening to rewrite that code at a lower level pretty soon, so the\n> speed issue might go away. In any case it's probably a minor issue\n> compared to generating the query plan.\n>\n> So I'm leaning towards OIDs, but wanted to see if anyone had a beef\n> with that.\n\n I'd go with OIDs too, because they're unambiguous and don't\n change.\n\n Actually I'm kicking around a slightly different idea, how to\n resolve the entire problem. We could build up the\n querystring, required to do the check, at trigger creation\n time, parse it and store the querytree node-print or hand it\n to the trigger as argument. Isn't that using OIDs already,\n ignoring the names? This requires a shortcut into SPI to\n plan an existing parsetree.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 26 Mar 2002 15:17:29 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: RI triggers and schemas"
},
{
"msg_contents": "As of CVS tip, referential integrity triggers are kinda broken: they\nwill only work for tablenames that are in the current search path.\nI think that instead of storing just table names in the trigger\nparameters, we should store either table OIDs or schema name + table\nname. Do you have any preferences about this?\n\nAn advantage of using OIDs is that we could forget the pushups that\nALTER TABLE RENAME presently goes through to update RI triggers.\n\nOn the other hand, as long as the RI implementation depends on\ngenerating textual queries, it'd be faster to have the names available\nthan to have to look them up from the OID. But I seem to recall Stephan\nthreatening to rewrite that code at a lower level pretty soon, so the\nspeed issue might go away. In any case it's probably a minor issue\ncompared to generating the query plan.\n\nSo I'm leaning towards OIDs, but wanted to see if anyone had a beef\nwith that.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Mar 2002 15:32:08 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "RI triggers and schemas"
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > Actually I'm kicking around a slightly different idea, how to\n> > resolve the entire problem. We could build up the\n> > querystring, required to do the check, at trigger creation\n> > time, parse it and store the querytree node-print or hand it\n> > to the trigger as argument.\n>\n> Hm. Seems kinda bulky; and the parse step alone is not that expensive.\n> (You could only do raw grammar parsing I think, not the parse analysis\n> phase, unless you wanted to deal with having to outdate these stored\n> querytrees after changes in table schemas.)\n\n You're right, as soon as other details than the column or\n table name might change, this could get even more screwed.\n\n> I think the existing scheme of generating the plan during first use\n> in a particular backend is fine. At least as long as we're sticking\n> with standard plans at all ... IIRC Stephan was wondering about\n> bypassing the whole parse/plan mechanism in favor of heap-access-level\n> operations.\n\n I don't know if using heap-access directly in the RI triggers\n is such a good idea.\n\n It is guaranteed that there is a unique key covering all the\n referenced columns (and only them). I'm not sure though if it\n has to be in the same column order as the reference. Nor do I\n think that matters other than making the creation of the\n scankey a bit more difficult.\n\n But there could be no, some, a full matching index, maybe one\n with extra columns at the end on the foreign key. So for the\n referential action, the entire process of deciding which\n index fits best, pushing some of the qualification into the\n index scankey, and do the rest on the heap tuples, has to be\n duplicated here.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. 
#\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Tue, 26 Mar 2002 15:55:58 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: RI triggers and schemas"
},
{
"msg_contents": "On Tue, 26 Mar 2002, Tom Lane wrote:\n\n> As of CVS tip, referential integrity triggers are kinda broken: they\n> will only work for tablenames that are in the current search path.\n> I think that instead of storing just table names in the trigger\n> parameters, we should store either table OIDs or schema name + table\n> name. Do you have any preferences about this?\n>\n> An advantage of using OIDs is that we could forget the pushups that\n> ALTER TABLE RENAME presently goes through to update RI triggers.\n>\n> On the other hand, as long as the RI implementation depends on\n> generating textual queries, it'd be faster to have the names available\n> than to have to look them up from the OID. But I seem to recall Stephan\n> threatening to rewrite that code at a lower level pretty soon, so the\n> speed issue might go away. In any case it's probably a minor issue\n> compared to generating the query plan.\n>\n> So I'm leaning towards OIDs, but wanted to see if anyone had a beef\n> with that.\n\nI'd say oids are better (and probably attnos for the columns). That's\ngenerally what I've been assuming in my attempts on rewriting the\ncode. I've been working on getting something together using the\nheap_*/index_* scanning functions. I feel like I'm reimplementing a lot of\nwheels though, so I need to see what I can use from other places.\n\n",
"msg_date": "Tue, 26 Mar 2002 12:58:44 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas"
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> Actually I'm kicking around a slightly different idea, how to\n> resolve the entire problem. We could build up the\n> querystring, required to do the check, at trigger creation\n> time, parse it and store the querytree node-print or hand it\n> to the trigger as argument.\n\nHm. Seems kinda bulky; and the parse step alone is not that expensive.\n(You could only do raw grammar parsing I think, not the parse analysis\nphase, unless you wanted to deal with having to outdate these stored\nquerytrees after changes in table schemas.)\n\nI think the existing scheme of generating the plan during first use\nin a particular backend is fine. At least as long as we're sticking\nwith standard plans at all ... IIRC Stephan was wondering about\nbypassing the whole parse/plan mechanism in favor of heap-access-level\noperations.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Mar 2002 16:02:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> As of CVS tip, referential integrity triggers are kinda broken: they\n> will only work for tablenames that are in the current search path.\n> I think that instead of storing just table names in the trigger\n> parameters, we should store either table OIDs or schema name + table\n> name. Do you have any preferences about this?\n> \n> An advantage of using OIDs is that we could forget the pushups that\n> ALTER TABLE RENAME presently goes through to update RI triggers.\n\nI'm always suspicious about the spec if it makes RENAME easy.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 27 Mar 2002 12:50:41 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> An advantage of using OIDs is that we could forget the pushups that\n>> ALTER TABLE RENAME presently goes through to update RI triggers.\n\n> I'm always suspicious about the spec if it makes RENAME easy.\n\nPoint taken ;-)\n\nHowever, unless someone can give a specific reason to make life hard,\nI'm inclined to simplify this.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Mar 2002 23:33:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "> > Tom Lane wrote:\n> >> An advantage of using OIDs is that we could forget the pushups that\n> >> ALTER TABLE RENAME presently goes through to update RI triggers.\n> \n> > I'm always suspicious about the spec if it makes RENAME easy.\n> \n> Point taken ;-)\n\nI don't get it???\n\nChris\n\n",
"msg_date": "Wed, 27 Mar 2002 13:12:32 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "\nOn Tue, 26 Mar 2002, Jan Wieck wrote:\n\n> Tom Lane wrote:\n> > I think the existing scheme of generating the plan during first use\n> > in a particular backend is fine. At least as long as we're sticking\n> > with standard plans at all ... IIRC Stephan was wondering about\n> > bypassing the whole parse/plan mechanism in favor of heap-access-level\n> > operations.\n>\n> I don't know if using heap-access directly in the RI triggers\n> is such a good idea.\n>\n> It is guaranteed that there is a unique key covering all the\n> referenced columns (and only them). I'm not sure though if it\n> has to be in the same column order as the reference. Nor do I\n> think that matters other than making the creation of the\n> scankey a bit more difficult.\n>\n> But there could be no, some, a full matching index, maybe one\n> with extra columns at the end on the foreign key. So for the\n> referential action, the entire process of deciding which\n> index fits best, pushing some of the qualification into the\n> index scankey, and do the rest on the heap tuples, has to be\n> duplicated here.\n\nThat is the problem that I've run into in working on doing it. I'm\nstill trying to figure out what levels of that code can be used.\n\nThe advantage that I see is that we get more control over the time\nqualifications used for tuples which may come into play for match\npartial. I'm not sure that it's worth the effort to try doing it\nthis way, but I figured I'd try it.\n\n",
"msg_date": "Tue, 26 Mar 2002 22:30:40 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas"
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> The advantage that I see is that we get more control over the time\n> qualifications used for tuples which may come into play for match\n> partial. I'm not sure that it's worth the effort to try doing it\n> this way, but I figured I'd try it.\n\nIt might be better to address that directly, eg:\n\n- define another SnapShot value that has the semantics you want\n\n- add a field to Scan plan nodes to specify explicitly the snapshot\n you want used. Presumably by default the planner would fill this\n with the standard QuerySnapshot, but you could\n\n- find a way to override the default (if nothing else, walk the\n completed plan tree and tweak the snapshot settings).\n\nI believe it's already true that scan plan nodes lock down the target\nsnapshot during plan node startup, by copying QuerySnapshot into node\nlocal execution state. So maybe you don't even need the above hack;\nperhaps just twiddling QuerySnapshot right before ExecutorStart would\nget the job done.\n\nIt might be useful to discuss exactly what is bad or broken about the\ncurrent RI implementation, so we can get a clearer idea of what ought\nto be done. I know that y'all are dissatisfied with it but I'm not\nsure I fully understand the issues.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Mar 2002 01:40:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "On Tue, 26 Mar 2002, Stephan Szabo wrote:\n\n>\n> On Tue, 26 Mar 2002, Jan Wieck wrote:\n>\n> > Tom Lane wrote:\n> > > I think the existing scheme of generating the plan during first use\n> > > in a particular backend is fine. At least as long as we're sticking\n> > > with standard plans at all ... IIRC Stephan was wondering about\n> > > bypassing the whole parse/plan mechanism in favor of heap-access-level\n> > > operations.\n> >\n> > I don't know if using heap-access directly in the RI triggers\n> > is such a good idea.\n> >\n> > It is guaranteed that there is a unique key covering all the\n> > referenced columns (and only them). I'm not sure though if it\n> > has to be in the same column order as the reference. Nor do I\n> > think that matters other than making the creation of the\n> > scankey a bit more difficult.\n> >\n> > But there could be no, some, a full matching index, maybe one\n> > with extra columns at the end on the foreign key. So for the\n> > referential action, the entire process of deciding which\n> > index fits best, pushing some of the qualification into the\n> > index scankey, and do the rest on the heap tuples, has to be\n> > duplicated here.\n>\n> That is the problem that I've run into in working on doing it. I'm\n> still trying to figure out what levels of that code can be used.\n>\n> The advantage that I see is that we get more control over the time\n> qualifications used for tuples which may come into play for match\n> partial. I'm not sure that it's worth the effort to try doing it\n> this way, but I figured I'd try it.\n\nAnother thing to bear in mind:\n\nWe (www.seatbooker.net, that is) have had a certain amount of trouble with\ncontention for locks taken out by RI triggers - in particular triggers on\nFK tables where large numbers of rows refer to a small number of rows in\nthe PK table.\n\nHaving had a look at it one possible solution seemed to be to do two\nspecial queries in these triggers. 
Whilst checking an UPDATE/INSERT on\nthe FK table do a special 'SELECT ... FOR UPDATE BARRIER' query which is\nexactly like 'SELECT ... FOR UPDATE', including waiting for transactions\nwith the row marked for update to commit/rollback, except that it doesn't\nactually mark the row for update afterwards. Whilst checking DELETE/UPDATE\non the PK table do a 'SELECT ... FOR UPDATE INCLUDE UNCOMMITTED LIMIT 1'\n(or 'SELECT ... FOR UPDATE READ UNCOMMITTED' if READ UNCOMMITTED can be\nmade to work) which would do everything which SELECT .. FOR UPDATE does\nbut also wait for any transactions with matching uncommitted rows to\ncomplete before returning.\n\nIf the RI triggers could have more direct control over the time\nqualifications then this could be implemented without the need for these\ntwo queries....which after all are a bit of a hack.\n\nHmm, come to think of it the check which is triggered on the PK update\nprobably doesn't need to mark anything for update at all - it might work\nwith just an update barrier that could include uncommitted rows and return\na 'matching rows existed' vs 'no matching rows existed' status. Perhaps\nthis would help eliminate the possibility of deadlocking whilst checking\nthe two constraints simultaneously for concurrent updates, too...\n\nUnfortunately, whilst I've managed to write a (seemingly, anyway) working\n'SELECT .. FOR UPDATE BARRIER' I haven't really got much time to work on\nthis any more. Comments on it wouldn't go amiss, though.\n",
"msg_date": "Wed, 27 Mar 2002 13:51:28 +0000 (GMT)",
"msg_from": "Alex Hayward <xelah@xelah.com>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas"
},
{
"msg_contents": "On Wed, 27 Mar 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > The advantage that I see is that we get more control over the time\n> > qualifications used for tuples which may come into play for match\n> > partial. I'm not sure that it's worth the effort to try doing it\n> > this way, but I figured I'd try it.\n>\n> It might be better to address that directly, eg:\n>\n> - define another SnapShot value that has the semantics you want\n>\n> - add a field to Scan plan nodes to specify explicitly the snapshot\n> you want used. Presumably by default the planner would fill this\n> with the standard QuerySnapshot, but you could\n>\n> - find a way to override the default (if nothing else, walk the\n> completed plan tree and tweak the snapshot settings).\n>\n> I believe it's already true that scan plan nodes lock down the target\n> snapshot during plan node startup, by copying QuerySnapshot into node\n> local execution state. So maybe you don't even need the above hack;\n> perhaps just twiddling QuerySnapshot right before ExecutorStart would\n> get the job done.\n>\n> It might be useful to discuss exactly what is bad or broken about the\n> current RI implementation, so we can get a clearer idea of what ought\n> to be done. I know that y'all are dissatisfied with it but I'm not\n> sure I fully understand the issues.\n\nWell, let's see, the big things in the current functionality are:\n\n For update locking is much stronger than we actually need to guarantee\nthe constraint.\n\n There are some cases that the current constraints may get wrong. We\nhaven't come to an agreement on some of these cases, but...\n On the insert/update fk check, we should not check rows that\n aren't valid since the intermediate states don't need to be valid. 
In\n fact this is already patched, but it opens up another possible failure\n case below, so I'm mentioning it.\n\n On the noaction pk checks, if other rows have been added such that there\n are no failures of the constraint there shouldn't be an error. That\n was the NOT EXISTS addition to the constraint that was objected to\n in a previous patch. For match full this could be a simple check for\n an equal row, but for match partial it seems a lot more complicated\n since each fk row may have multiple matching rows in the pk table and\n those rows may be different for each fk row.\n\n On the referential actions, we need to agree on the behavior of the\n cases. If you do something like (with a deferred on delete cascade)\n begin; delete from pk; insert into fk; end;\n is it supposed to be a failure? On 7.2 it would be. Currently it\n wouldn't be because it sees the inserted row as being invalid by the\n time it checks. I think it should be, but the old check may not\n have been the right place depending on the answers to the below:\n\n If we did instead:\n begin; delete from pk; insert into fk; insert into pk; end;\n is there a row in fk at the end or not?\n\n If we did:\n begin; insert into fk; delete from pk; insert into fk; insert into pk;\n end;\n do we end up with zero, one or two rows in fk?\n\n\nSome things that would be good to add:\n Making the foreign key stuff work with inheritance.\n\n Adding match partial. This gets complicated with the referential actions\n particularly update cascade. My reading of the match partial update\n cascade says that if a row gets updated due to having all of its\n matching rows being updated by the same statement that all of the rows\n that matched this row were updated to non-distinct values for the\n columns of the fk row that were not NULL.\n\n",
"msg_date": "Wed, 27 Mar 2002 10:40:43 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "Last week I said:\n>> I think that instead of storing just table names in the trigger\n>> parameters, we should store either table OIDs or schema name + table\n>> name. [ ... ]\n>> So I'm leaning towards OIDs, but wanted to see if anyone had a beef\n>> with that.\n\nI've just realized that if we change the RI trigger arguments this way,\nwe will have a really serious problem with accepting pg_dump scripts\nfrom prior versions. The scripts' representation of foreign key\nconstraints will contain commands like\n\nCREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER UPDATE ON \"bar\" FROM \"baz\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_upd\" ('<unnamed>', 'baz', 'bar', 'UNSPECIFIED', 'f1', 'f1');\n\nwhich will absolutely not work at all if the 7.3 triggers are expecting\nto find OIDs in those arguments.\n\nI thought about allowing the triggers to take qualified names in the\nstyle of what nextval() is doing in current sources, but that's still\ngoing to have a lot of nasty compatibility issues --- mixed-case\nnames, names containing dots, etc are all going to be interpreted\ndifferently than before.\n\nI think we may have little choice except to create two sets of RI trigger\nprocedures, one that takes the old-style arguments and one that takes\nnew-style arguments. However the old-style set will be horribly fragile\nbecause they'll have to interpret their arguments based on the current\nnamespace search path.\n\nOf course the *real* problem here is that pg_dump is outputting a\nlow-level representation of the original constraints. We knew all along\nthat that would get us into trouble eventually ... and that trouble is\nnow upon us. 
We really need to fix pg_dump to emit ALTER TABLE ADD\nCONSTRAINT type commands instead of trigger definitions.\n\nA possible escape from the dilemma is to fix pg_dump so that it can emit\nADD CONSTRAINT commands when it sees RI triggers, release that in 7.2.2,\nand then *require* people to use 7.2.2 or later pg_dump when it comes\ntime to update to 7.3. I do not much like this ... but it may be better\nthan the alternative of trying to maintain backwards-compatible\ntriggers.\n\nComments? Better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 31 Mar 2002 20:43:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "If pg_upgrade was shipped with 7.3 in working order with the ability\nto convert the old foreign key commands to the new ones I don't think\nanyone would care how many funny things are involved. Just fix the\nforeign key stuff for 7.3 pg_dump and only support upgrades using that\nversion, or included pg_upgrade script (any 7.2 release to 7.3)\n\nThat said, it doesn't look like it'll be a pretty thing to do with a\nshell script.\n\nHoop jumping may be required to go from 6.5 or 7.0/1 directly to 7.3.\n\n\nDownside is pg_upgrade is fairly new (can it be trusted -- made to\nwork 100%?)\nUpside is no changes would be required to 7.2 and lots of people would\nbe really happy to have a fast upgrade process (dump / restore can\ntake quite a while on large dbs)\n\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nTo: \"Stephan Szabo\" <sszabo@megazone23.bigpanda.com>\nCc: \"Jan Wieck\" <JanWieck@Yahoo.com>; <pgsql-hackers@postgresql.org>\nSent: Sunday, March 31, 2002 8:43 PM\nSubject: Re: [HACKERS] RI triggers and schemas\n\n\n> Last week I said:\n> >> I think that instead of storing just table names in the trigger\n> >> parameters, we should store either table OIDs or schema name +\ntable\n> >> name. [ ... ]\n> >> So I'm leaning towards OIDs, but wanted to see if anyone had a\nbeef\n> >> with that.\n>\n> I've just realized that if we change the RI trigger arguments this\nway,\n> we will have a really serious problem with accepting pg_dump scripts\n> from prior versions. 
The scripts' representation of foreign key\n> constraints will contain commands like\n>\n> CREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER UPDATE ON \"bar\" FROM\n\"baz\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE\nPROCEDURE \"RI_FKey_noaction_upd\" ('<unnamed>', 'baz', 'bar',\n'UNSPECIFIED', 'f1', 'f1');\n>\n> which will absolutely not work at all if the 7.3 triggers are\nexpecting\n> to find OIDs in those arguments.\n>\n> I thought about allowing the triggers to take qualified names in the\n> style of what nextval() is doing in current sources, but that's\nstill\n> going to have a lot of nasty compatibility issues --- mixed-case\n> names, names containing dots, etc are all going to be interpreted\n> differently than before.\n>\n> I think we may have little choice except to create two sets of RI\ntrigger\n> procedures, one that takes the old-style arguments and one that\ntakes\n> new-style arguments. However the old-style set will be horribly\nfragile\n> because they'll have to interpret their arguments based on the\ncurrent\n> namespace search path.\n>\n> Of course the *real* problem here is that pg_dump is outputting a\n> low-level representation of the original constraints. We knew all\nalong\n> that that would get us into trouble eventually ... and that trouble\nis\n> now upon us. We really need to fix pg_dump to emit ALTER TABLE ADD\n> CONSTRAINT type commands instead of trigger definitions.\n>\n> A possible escape from the dilemma is to fix pg_dump so that it can\nemit\n> ADD CONSTRAINT commands when it sees RI triggers, release that in\n7.2.2,\n> and then *require* people to use 7.2.2 or later pg_dump when it\ncomes\n> time to update to 7.3. I do not much like this ... but it may be\nbetter\n> than the alternative of trying to maintain backwards-compatible\n> triggers.\n>\n> Comments? 
Better ideas?\n>\n> regards, tom lane\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n",
"msg_date": "Sun, 31 Mar 2002 22:03:41 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "> I've just realized that if we change the RI trigger arguments this way,\n> we will have a really serious problem with accepting pg_dump scripts\n> from prior versions. The scripts' representation of foreign key\n> constraints will contain commands like\n>\n> CREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER UPDATE ON \"bar\" FROM \"baz\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_upd\" ('<unnamed>', 'baz', 'bar', 'UNSPECIFIED', 'f1', 'f1');\n>\n> which will absolutely not work at all if the 7.3 triggers are expecting\n> to find OIDs in those arguments.\n\nWhy can't we just hack up the CREATE CONSTRAINT TRIGGER code to look up\nthe OIDs, etc. for the arguments and convert them internally to an ALTER\nTABLE/ADD CONSTRAINT or whatever...\n\nChris\n\n\n",
"msg_date": "Mon, 1 Apr 2002 12:06:50 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n> Why can't we just hack up the CREATE CONSTRAINT TRIGGER code to look up\n> the OIDs, etc. for the arguments and convert them internally to an ALTER\n> TABLE/ADD CONSTRAINT or whatever...\n\nHmm ... seems pretty ugly, but then again the alternatives are pretty\ndurn ugly themselves. This ugliness would at least be localized.\nMight be a plan.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 31 Mar 2002 23:18:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "> Yeah, although it'd still be a good idea probably to convert the dump form\n> to ALTER TABLE in any case. The one downside that was brought up in the\n> past was the time involved in checking dumped (presumably correct) data\n> when the constraint is added to very large tables. I can probably make\n> that faster since right now it's just running the check on each row,\n> but it'll still be slow on big tables possibly. Another option would\n> be to have an argument that would disable the check on an add constraint,\n> except that wouldn't work for unique/primary key.\n\nMaybe it could be a really evil SET CONSTRAINTS command like:\n\nSET CONSTRAINTS UNCHECKED;\n\n...\n\nChris\n\n\n",
"msg_date": "Mon, 1 Apr 2002 14:00:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "On Sun, 31 Mar 2002, Tom Lane wrote:\n\n> Christopher Kings-Lynne <chriskl@familyhealth.com.au> writes:\n> > Why can't we just hack up the CREATE CONSTRAINT TRIGGER code to look up\n> > the OIDs, etc. for the arguments and convert them internally to an ALTER\n> > TABLE/ADD CONSTRAINT or whatever...\n>\n> Hmm ... seems pretty ugly, but then again the alternatives are pretty\n> durn ugly themselves. This ugliness would at least be localized.\n> Might be a plan.\n\nYeah, although it'd still be a good idea probably to convert the dump form\nto ALTER TABLE in any case. The one downside that was brought up in the\npast was the time involved in checking dumped (presumably correct) data\nwhen the constraint is added to very large tables. I can probably make\nthat faster since right now it's just running the check on each row,\nbut it'll still be slow on big tables possibly. Another option would\nbe to have an argument that would disable the check on an add constraint,\nexcept that wouldn't work for unique/primary key.\n\n\n",
"msg_date": "Sun, 31 Mar 2002 22:00:48 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > I've just realized that if we change the RI trigger arguments this way,\n> > we will have a really serious problem with accepting pg_dump scripts\n> > from prior versions. The scripts' representation of foreign key\n> > constraints will contain commands like\n> >\n> > CREATE CONSTRAINT TRIGGER \"<unnamed>\" AFTER UPDATE ON \"bar\" FROM \"baz\" NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW EXECUTE PROCEDURE \"RI_FKey_noaction_upd\" ('<unnamed>', 'baz', 'bar', 'UNSPECIFIED', 'f1', 'f1');\n> >\n> > which will absolutely not work at all if the 7.3 triggers are expecting\n> > to find OIDs in those arguments.\n>\n> Why can't we just hack up the CREATE CONSTRAINT TRIGGER code to look up\n> the OIDs, etc. for the arguments and convert them internally to an ALTER\n> TABLE/ADD CONSTRAINT or whatever...\n\n And what language hack do you suggest to suppress the\n complete referential check of the foreign key table at ALTER\n TABLE ... time? Currently, it does a sequential scan of the\n entire table to check every single row. So adding 3\n constraints to a 10M row table might take some time.\n\n Note, that that language hack will again make the dump non-\n ANSI complient and thus, I don't consider the entire change\n to ALTER TABLE an improvement at all.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 1 Apr 2002 10:25:03 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: RI triggers and schemas"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > Yeah, although it'd still be a good idea probably to convert the dump form\n> > to ALTER TABLE in any case. The one downside that was brought up in the\n> > past was the time involved in checking dumped (presumably correct) data\n> > when the constraint is added to very large tables. I can probably make\n> > that faster since right now it's just running the check on each row,\n> > but it'll still be slow on big tables possibly. Another option would\n> > be to have an argument that would disable the check on an add constraint,\n> > except that wouldn't work for unique/primary key.\n>\n> Maybe it could be a really evil SET CONSTRAINTS command like:\n>\n> SET CONSTRAINTS UNCHECKED;\n\n\n Hmmm, I would like this one if restricted to superusers or\n database owner.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 1 Apr 2002 10:29:21 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: RI triggers and schemas"
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> Christopher Kings-Lynne wrote:\n>> Why can't we just hack up the CREATE CONSTRAINT TRIGGER code to look up\n>> the OIDs, etc. for the arguments and convert them internally to an ALTER\n>> TABLE/ADD CONSTRAINT or whatever...\n\n> And what language hack do you suggest to suppress the\n> complete referential check of the foreign key table at ALTER\n> TABLE ... time?\n\nActually, I was interpreting his idea to mean that we add intelligence\nto CREATE TRIGGER to adjust the specified trigger arguments if it sees\nthe referenced trigger procedure is one of the RI triggers. It'd be\nfairly self-contained, really, since the CREATE TRIGGER code could use\nits \"ON table\" and \"FROM table\" arguments to derive the correct OIDs\nto insert. This could be done always (whether the incoming arguments\nlook like OIDs or not), which'd also give us a short-term answer for\ndumping/reloading 7.3-style RI triggers. I'd still like to change\npg_dump to output some kind of ALTER command in place of CREATE TRIGGER,\nbut we'd have some breathing room to debate about how.\n\nI'm now inclined to leave the attribute arguments alone (stick to names\nnot numbers) just to avoid possible conversion problems there.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Apr 2002 10:39:11 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "\nOn Mon, 1 Apr 2002, Tom Lane wrote:\n\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > Christopher Kings-Lynne wrote:\n> >> Why can't we just hack up the CREATE CONSTRAINT TRIGGER code to look up\n> >> the OIDs, etc. for the arguments and convert them internally to an ALTER\n> >> TABLE/ADD CONSTRAINT or whatever...\n>\n> > And what language hack do you suggest to suppress the\n> > complete referential check of the foreign key table at ALTER\n> > TABLE ... time?\n>\n> Actually, I was interpreting his idea to mean that we add intelligence\n> to CREATE TRIGGER to adjust the specified trigger arguments if it sees\n> the referenced trigger procedure is one of the RI triggers. It'd be\n> fairly self-contained, really, since the CREATE TRIGGER code could use\n> its \"ON table\" and \"FROM table\" arguments to derive the correct OIDs\n> to insert. This could be done always (whether the incoming arguments\n> look like OIDs or not), which'd also give us a short-term answer for\n> dumping/reloading 7.3-style RI triggers. I'd still like to change\n> pg_dump to output some kind of ALTER command in place of CREATE TRIGGER,\n> but we'd have some breathing room to debate about how.\n>\n> I'm now inclined to leave the attribute arguments alone (stick to names\n> not numbers) just to avoid possible conversion problems there.\n\nWell, there is another place where the current name behavior\ncauses problems so we'd need to be sticking in the fully qualified\nname, otherwise creating a table in your search path earlier than\nthe intended table would break the constraint. This currently already\nhappens with temp tables.\n\n",
"msg_date": "Mon, 1 Apr 2002 08:46:58 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Well, there is another place where the current name behavior\n> causes problems so we'd need to be sticking in the fully qualified\n> name, otherwise creating a table in your search path earlier than\n> the intended table would break the constraint. This currently already\n> happens with temp tables.\n\nBut the point is that the table name would be resolved to OID once at\nCREATE TRIGGER time (or when the original FK constraint is created).\nAfter that, it's up to the trigger to construct queries using the\nfully-qualified table name. This should eliminate the temp table\ngotcha as well as change-of-search-path issues.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Apr 2002 12:19:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "On Mon, 1 Apr 2002, Tom Lane wrote:\n\n> Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> > Well, there is another place where the current name behavior\n> > causes problems so we'd need to be sticking in the fully qualified\n> > name, otherwise creating a table in your search path earlier than\n> > the intended table would break the constraint. This currently already\n> > happens with temp tables.\n>\n> But the point is that the table name would be resolved to OID once at\n> CREATE TRIGGER time (or when the original FK constraint is created).\n> After that, it's up to the trigger to construct queries using the\n> fully-qualified table name. This should eliminate the temp table\n> gotcha as well as change-of-search-path issues.\n\nSorry, I must have misunderstood you. I thought you were backing away\nfrom changing the arguments that were created for the trigger. Or did\nyou mean using the stored info on the two oids we already have in the\nrecord (tgrelid and tgconstrrelid)?\n\n",
"msg_date": "Mon, 1 Apr 2002 09:42:20 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "Stephan Szabo <sszabo@megazone23.bigpanda.com> writes:\n> Sorry, I must have misunderstood you. I thought you were backing away\n> from changing the arguments that were created for the trigger. Or did\n> you mean using the stored info on the two oids we already have in the\n> record (tgrelid and tgconstrrelid)?\n\nNo, I still want to put table OIDs not names into the trigger arguments.\nThe table OIDs in pg_trigger would do fine if the trigger function could\nget at them, but it can't; so we need to copy them into the trigger\narguments. (Hmm, I suppose another option is to extend the Trigger\ndata structure to include tgconstrrelid, and just ignore the table names\nin the trigger argument list.)\n\nI am backing away from changing the attribute name arguments to attnums,\nthough; I'm thinking that the potential conversion problems outweigh\nbeing able to eliminate some RENAME support code.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Apr 2002 12:50:52 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
},
{
"msg_contents": "I said:\n> The table OIDs in pg_trigger would do fine if the trigger function could\n> get at them, but it can't; so we need to copy them into the trigger\n> arguments. (Hmm, I suppose another option is to extend the Trigger\n> data structure to include tgconstrrelid, and just ignore the table names\n> in the trigger argument list.)\n\nAfter further thought, this is clearly the right approach to take,\nbecause it provides a solution path for other triggers besides the RI\nones. So we'll fix the problem at the code level. The trigger\narguments will be unchanged, but the table names therein will become\npurely decorative (or documentation, if you prefer ;-)). Perhaps\nsomeday we could eliminate them ... but not as long as pg_dump dumps\nRI constraints in the form of trigger definitions.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Apr 2002 13:32:32 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: RI triggers and schemas "
}
] |
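The two dump formats argued over in the thread above can be put side by side. The trigger form is taken from Tom Lane's quoted example; the constraint name `baz_f1_fkey` in the declarative form is invented for illustration, since the thread never settles on naming:

```sql
-- Old-style 7.2 pg_dump output: the FK constraint emitted as a low-level
-- trigger, with table names passed as bare string arguments that the
-- trigger procedure must re-resolve through the search path at run time:
CREATE CONSTRAINT TRIGGER "<unnamed>" AFTER UPDATE ON "bar" FROM "baz"
    NOT DEFERRABLE INITIALLY IMMEDIATE FOR EACH ROW
    EXECUTE PROCEDURE "RI_FKey_noaction_upd"
        ('<unnamed>', 'baz', 'bar', 'UNSPECIFIED', 'f1', 'f1');

-- The declarative form the thread wants pg_dump to emit instead; the
-- referenced table is looked up once, when the constraint is created,
-- which avoids the search-path and temp-table gotchas discussed above:
ALTER TABLE baz ADD CONSTRAINT baz_f1_fkey
    FOREIGN KEY (f1) REFERENCES bar (f1);
```

Note the downside Stephan Szabo raises: the ALTER TABLE form re-checks every existing row of `baz` at load time, which the trigger form does not.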
[
{
"msg_contents": "I'm trying to work out what to do with indexes in the context of\nschemas.\n\nAs of today's CVS tip, what the code does is that CREATE INDEX can only\nspecify an unqualified index name, and the index is automatically\ncreated in the same namespace as its parent table. Thus, index names\nstill have to be distinct from each other and from regular table names,\nbut only within a namespace (schema) not globally over the whole\ndatabase.\n\nI seem to recall someone claiming that the SQL spec requires indexes to\nbe in a different namespace from tables --- ie, index names and table\nnames should never conflict, period. I can't find any evidence of this\nin the spec; AFAICT it doesn't mention the concept of indexes at all.\nBut perhaps this is standard industry practice (what do Oracle and other\nDBMSes do?). We could imagine creating an \"auxiliary namespace\" for\neach regular namespace in which to put indexes, if anyone thinks that's\nworthwhile. Thoughts?\n\nIn any case, I intend to remove the current prohibition against user\ntable names starting with \"pg_\". Instead there will be a prohibition\nagainst user schema names starting with \"pg_\"; but within a user schema\nyou can call your tables whatever you like. The existing protection\nrestrictions associated with IsSystemRelationName() calls will migrate\nover to instead be tests on which namespace contains the table in\nquestion.\n\nThe system catalogs will still be named pg_xxx, but will live in\nnamespace \"pg_catalog\"; TOAST tables will still be named \"pg_toast_xxx\",\nbut will live in namespace \"pg_toast\". This should minimize the\ndisruption to client applications that look at the catalogs. There'll\nalso be temporary namespaces \"pg_temp_xxx\" to house temporary tables.\n\nComments, objections, better ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 26 Mar 2002 15:53:34 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Indexes, TOAST tables, and namespaces"
}
] |
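The naming rules proposed in the message above can be sketched as follows; the schema and table names are invented for illustration, and the exact error behavior is an assumption about the proposal rather than documented syntax:

```sql
-- System catalogs keep their pg_ names but live in schema pg_catalog:
SELECT relname FROM pg_catalog.pg_class WHERE relname = 'pg_class';

-- The pg_ prefix restriction moves from table names to schema names,
-- so inside a user schema this would become legal:
CREATE SCHEMA app;
CREATE TABLE app.pg_scratch (id integer);

-- ...while a user-created schema still may not start with pg_:
CREATE SCHEMA pg_mine;   -- would be rejected under the proposal
```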
[
{
"msg_contents": "OK. Here is a test posting to hackers and Ccing to Marc\n\n\tOleg\nOn Tue, 26 Mar 2002, Marc G. Fournier wrote:\n\n>\n> ah, then that's gotta be it ... can you do me a favor and email\n> pgsql-hackers a simple test, so that I can get the email out of hte\n> approved messages and show you what I'm seeing?\n>\n> On Tue, 26 Mar 2002, Oleg Bartunov wrote:\n>\n> > On Tue, 26 Mar 2002, Marc G. Fournier wrote:\n> >\n> > >\n> > > Okay, this is most strange ... you are getting confirmation that you are\n> > > subscribed, right? Can I see a copy of that message? For some reason,\n> >\n> > No, I didn't get any confirmation :-(\n> >\n> > > when I go through the moderated postings, everything for you comes in as\n> > > 'unknown@anonymous is posting for ...', so I'm wondering if maybe its\n> > > trying to subscribe you as this 'unknown@anonymous', which, of course,\n> > > won't get backt o you ...\n> > >\n> > > On Mon, 25 Mar 2002, Oleg Bartunov wrote:\n> > >\n> > > > Marc,\n> > > >\n> > > > I've resubscribed (yesterday) to pg mailing lists but didn't have\n> > > > any response. 
Perhaps, there are problem with mailing list software?\n> > > >\n> > > > \tRegards,\n> > > > \t\tOleg\n> > > > _____________________________________________________________\n> > > > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > > > Sternberg Astronomical Institute, Moscow University (Russia)\n> > > > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > > > phone: +007(095)939-16-83, +007(095)939-23-83\n> > > >\n> > > >\n> > >\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n> >\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 27 Mar 2002 00:52:54 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Re: mailing list problem"
}
] |
[
{
"msg_contents": "Is there any way to force PostgreSQL to bind to a specific IP address?\n\nThere doesn't seem to be anything about this in the docs, and if its not\nimplemented, it would be a useful feature to have (and an easy one to\nimplement).\n\n--\nAlastair D'Silva B. Sc. mob: 0413 485 733\nNetworking Consultant\nNew Millennium Networking http://www.newmillennium.net.au\n\n",
"msg_date": "Wed, 27 Mar 2002 15:39:13 +1100",
"msg_from": "\"Alastair D'Silva\" <deece@newmillennium.net.au>",
"msg_from_op": true,
"msg_subject": "Binding PostgreSQL to a specific ip address"
},
{
"msg_contents": "On Wed, 27 Mar 2002, Alastair D'Silva wrote:\n\n> Is there any way to force PostgreSQL to bind to a specific IP address?\n> \n> There doesn't seem to be anything about this in the docs, and if its not\n> implemented, it would be a useful feature to have (and an easy one to\n> implement).\n(from runtime-config.html)\n\n VIRTUAL_HOST (string)\n Specifies the TCP/IP hostname or address on which the\n postmaster is to listen for connections from client\n applications. Defaults to listening on all configured addresses\n (including localhost).\n\nGavin\n\n",
"msg_date": "Wed, 27 Mar 2002 15:45:40 +1100 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Binding PostgreSQL to a specific ip address"
},
{
"msg_contents": "Cheers, next time I'll look a bit harder :)\n\n--\nAlastair D'Silva B. Sc. mob: 0413 485 733\nNetworking Consultant\nNew Millennium Networking http://www.newmillennium.net.au \n\n> -----Original Message-----\n> From: Gavin Sherry [mailto:swm@linuxworld.com.au] \n> Sent: Wednesday, 27 March 2002 3:46 PM\n> To: Alastair D'Silva\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Binding PostgreSQL to a specific ip address\n> \n> \n> On Wed, 27 Mar 2002, Alastair D'Silva wrote:\n> \n> > Is there any way to force PostgreSQL to bind to a specific \n> IP address?\n> > \n> > There doesn't seem to be anything about this in the docs, \n> and if its \n> > not implemented, it would be a useful feature to have (and \n> an easy one \n> > to implement).\n> (from runtime-config.html)\n> \n> VIRTUAL_HOST (string)\n> Specifies the TCP/IP hostname or address on which the\n> postmaster is to listen for connections from client\n> applications. Defaults to listening on all \n> configured addresses\n> (including localhost).\n> \n> Gavin\n> \n\n",
"msg_date": "Wed, 27 Mar 2002 15:49:51 +1100",
"msg_from": "\"Alastair D'Silva\" <deece@newmillennium.net.au>",
"msg_from_op": true,
"msg_subject": "Re: Binding PostgreSQL to a specific ip address"
},
{
"msg_contents": "Note if you are trying to run more than one postgresql you also have to \nprevent the unix socket files from clashing.\n\n > On Wed, 27 Mar 2002, Alastair D'Silva wrote:\n> >\n> > > Is there any way to force PostgreSQL to bind to a specific\n> > IP address?\n> > >\n> > > There doesn't seem to be anything about this in the docs,\n> > and if its\n> > > not implemented, it would be a useful feature to have (and\n> > an easy one\n> > > to implement).\n> > (from runtime-config.html)\n> >\n> > VIRTUAL_HOST (string)\n> > Specifies the TCP/IP hostname or address on which the\n> > postmaster is to listen for connections from client\n> > applications. Defaults to listening on all\n> > configured addresses\n> > (including localhost).\n> >\n> > Gavin\n> >\n>\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n\n",
"msg_date": "Wed, 27 Mar 2002 16:05:33 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Re: Binding PostgreSQL to a specific ip address"
}
] |
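Putting the two replies together, a 7.2-era setup for a second postmaster bound to one address might look like the fragment below. The address, port, and socket directory are example values, and the parameter names are as given in the 7.2 runtime-configuration documentation the thread cites:

```
# postgresql.conf for a second postmaster on the same host
virtual_host = '192.168.1.10'               # listen only on this address
port = 5433                                 # a distinct TCP port...
unix_socket_directory = '/var/run/pgsql2'   # ...and socket dir, so the
                                            # Unix socket files don't clash
```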
[
{
"msg_contents": "I don't think this is me...\n\ngcc -pipe -O -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I\n./../include -I. -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSIO\nN=10 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/chriskl/local/include\\\" -c -o\npgc.o pgc.c\npgc.c: In function `yylex':\npgc.c:1250: warning: label `find_rule' defined but not used\npgc.l: At top level:\npgc.c:3079: warning: `yy_flex_realloc' defined but not used\n\nChris\n\n",
"msg_date": "Wed, 27 Mar 2002 13:24:13 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "compile bug in HEAD?"
},
{
"msg_contents": "It's more warnings than bugs. I also have seen that but not familiar enough\nwith bison or yacc to think more of it. Have you got an idea on how to fix\nthese warnings?\n----- Original Message -----\nFrom: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nTo: \"Hackers\" <pgsql-hackers@postgresql.org>\nSent: Wednesday, March 27, 2002 4:24 PM\nSubject: [HACKERS] compile bug in HEAD?\n\n\n> I don't think this is me...\n>\n>\ngcc -pipe -O -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I\n>\n./../include -I. -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSIO\n> N=10 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/chriskl/local/include\\\" -c -o\n> pgc.o pgc.c\n> pgc.c: In function `yylex':\n> pgc.c:1250: warning: label `find_rule' defined but not used\n> pgc.l: At top level:\n> pgc.c:3079: warning: `yy_flex_realloc' defined but not used\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n",
"msg_date": "Wed, 27 Mar 2002 16:30:21 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": false,
"msg_subject": "Re: compile bug in HEAD?"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> I don't think this is me...\n> \n> gcc -pipe -O -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I\n> ./../include -I. -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSIO\n> N=10 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/chriskl/local/include\\\" -c -o\n> pgc.o pgc.c\n> pgc.c: In function `yylex':\n> pgc.c:1250: warning: label `find_rule' defined but not used\n> pgc.l: At top level:\n> pgc.c:3079: warning: `yy_flex_realloc' defined but not used\n\nYes, I have gotten the same warning for several releases but haven't\nresearched the cause. Patch?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Mar 2002 00:35:45 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compile bug in HEAD?"
},
{
"msg_contents": "\"Nicolas Bazin\" <nbazin@ingenico.com.au> writes:\n> It's more warnings than bugs. I also have seen that but not familiar enough\n> with bison or yacc to think more of it. Have you got an idea on how to fix\n> these warnings?\n\necpg's lexer has always generated those warnings, and so has plpgsql's\nlexer. AFAICT the sloppy C code is triggered by use of yylineno.\nSuggest griping to the flex authors.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Mar 2002 01:16:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compile bug in HEAD? "
},
{
"msg_contents": "Bruce Momjian writes:\n\n> Christopher Kings-Lynne wrote:\n> > I don't think this is me...\n> >\n> > gcc -pipe -O -Wall -Wmissing-prototypes -Wmissing-declarations -Wno-error -I\n> > ./../include -I. -I../../../../src/include -DMAJOR_VERSION=2 -DMINOR_VERSIO\n> > N=10 -DPATCHLEVEL=0 -DINCLUDE_PATH=\\\"/home/chriskl/local/include\\\" -c -o\n> > pgc.o pgc.c\n> > pgc.c: In function `yylex':\n> > pgc.c:1250: warning: label `find_rule' defined but not used\n> > pgc.l: At top level:\n> > pgc.c:3079: warning: `yy_flex_realloc' defined but not used\n>\n> Yes, I have gotten the same warning for several releases but haven't\n> researched the cause. Patch?\n\nIf someone is really bored out of their mind, at least one of these\nwarnings can be gotten rid of by not using the -l option to flex. That\nmight be desirable for other reasons, too, one of which is improved speed.\n\nNo, just removing -l from the makefile is not the right fix.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 27 Mar 2002 11:06:26 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compile bug in HEAD?"
},
{
"msg_contents": "On Wed, 2002-03-27 at 11:06, Peter Eisentraut wrote:\n> If someone is really bored out of their mind, at least one of these\n> warnings can be gotten rid of by not using the -l option to flex. That\n> might be desirable for other reasons, too, one of which is improved speed.\n> \n> No, just removing -l from the makefile is not the right fix.\n\nI'm curious; why is this \"not the right fix\"? According to the manpage:\n\n-l\tturns on maximum compatibility with the original\n\tAT&T lex implementation. Note that this does not\n\tmean full compatibility. Use of this option \n\tcosts a considerable amount of performance...\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n\n",
"msg_date": "27 Mar 2002 18:53:53 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: compile bug in HEAD?"
},
{
"msg_contents": "Neil Conway writes:\n\n> I'm curious; why is this \"not the right fix\"? According to the manpage:\n>\n> -l\tturns on maximum compatibility with the original\n> \tAT&T lex implementation. Note that this does not\n> \tmean full compatibility. Use of this option\n> \tcosts a considerable amount of performance...\n\nThe manpage also lists the specific incompatibilities. I think we should\nnot be affected by them, but someone better check before removing the -l.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 27 Mar 2002 19:56:15 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compile bug in HEAD?"
},
{
"msg_contents": "On Wed, Mar 27, 2002 at 07:56:15PM -0500, Peter Eisentraut wrote:\n> Neil Conway writes:\n> \n> > I'm curious; why is this \"not the right fix\"? According to the manpage:\n> >\n> > -l\tturns on maximum compatibility with the original\n> > \tAT&T lex implementation. Note that this does not\n> > \tmean full compatibility. Use of this option\n> > \tcosts a considerable amount of performance...\n> \n> The manpage also lists the specific incompatibilities. I think we should\n> not be affected by them, but someone better check before removing the -l.\n\nAFAICT current sources don't actually use \"-l\" anywhere.\n\nHowever, it does appear that we can tweak flex for more performance\n(usually at the expense of a larger generated parser). In particular, it\nlooks like we could use \"-Cf\" or \"-CF\". Is this a good idea?\n\nWhile we're on the subject of minor optimizations, is there a reason why\nwe execute gcc with \"-O2\" rather than \"-O3\" during compilation?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Sat, 30 Mar 2002 20:11:13 -0500",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: compile bug in HEAD?"
},
{
"msg_contents": "Neil Conway writes:\n\n> However, it does appear that we can tweak flex for more performance\n> (usually at the expense of a larger generated parser). In particular, it\n> looks like we could use \"-Cf\" or \"-CF\". Is this a good idea?\n\nProbably. Run some performance tests if you like. It looks like -CFea\nmight be a reasonable candidate.\n\n> While we're on the subject of minor optimizations, is there a reason why\n> we execute gcc with \"-O2\" rather than \"-O3\" during compilation?\n\nMainly because everyone does it this way. Probably because it's a\nreasonable compromise between execution speed, compilation speed,\ndebuggability, and compiler bugs.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 30 Mar 2002 22:29:13 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: compile bug in HEAD?"
},
{
"msg_contents": "Neil Conway wrote:\n> On Wed, Mar 27, 2002 at 07:56:15PM -0500, Peter Eisentraut wrote:\n> > Neil Conway writes:\n> > \n> > > I'm curious; why is this \"not the right fix\"? According to the manpage:\n> > >\n> > > -l\tturns on maximum compatibility with the original\n> > > \tAT&T lex implementation. Note that this does not\n> > > \tmean full compatibility. Use of this option\n> > > \tcosts a considerable amount of performance...\n> > \n> > The manpage also lists the specific incompatibilities. I think we should\n> > not be affected by them, but someone better check before removing the -l.\n> \n> AFAICT current sources don't actually use \"-l\" anywhere.\n> \n> However, it does appear that we can tweak flex for more performance\n> (usually at the expense of a larger generated parser). In particular, it\n> looks like we could use \"-Cf\" or \"-CF\". Is this a good idea?\n> \n> While we're on the subject of minor optimizations, is there a reason why\n> we execute gcc with \"-O2\" rather than \"-O3\" during compilation?\n\nAdded to TODO:\n\n\t* Try flex flags -Cf and -CF to see if performance improves\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 21:41:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: compile bug in HEAD?"
}
] |
[
{
"msg_contents": "----- Original Message ----- \nFrom: Nicolas Bazin \nTo: pgsql-interfaces@postgresql.org \nSent: Wednesday, March 27, 2002 4:03 PM\nSubject: pgc.l modif. has been overwritten again\n\n\nWe need a little bit of order when several people can commit on the source code, It would be nice that they update their local copy and then commit, or check the directory or check for conflicts. \nIt's the second time that this modif has been overwritten. It's getting anoying.\n\n\nThe patch corrects a test on defines end of visibility that is performed too early by the preprocessor.\n\nHere it is again. \n\n*** pgc.l Wed Mar 27 15:52:45 2002\n--- cvs/src/interfaces/ecpg/preproc/pgc.l Fri Feb 15 17:46:57 2002\n***************\n*** 859,866 ****\n }\n \n <<EOF>> {\n- \n- if (yy_buffer == NULL) {\n if ( preproc_tos > 0 ) {\n preproc_tos = 0;\n \n--- 859,864 ----\n }\n \n <<EOF>> {\n if ( preproc_tos > 0 ) {\n preproc_tos = 0;\n \n***************\n*** 866,871 ****\n \n mmerror(PARSE_ERROR, ET_FATAL, \"Missing 'EXEC SQL ENDIF;'\");\n }\n yyterminate();\n }\n else\n--- 864,871 ----\n \n mmerror(PARSE_ERROR, ET_FATAL, \"Missing 'EXEC SQL ENDIF;'\");\n }\n+ \n+ if (yy_buffer == NULL)\n yyterminate();\n else\n {\n***************\n*** 867,873 ****\n mmerror(PARSE_ERROR, ET_FATAL, \"Missing 'EXEC SQL ENDIF;'\");\n }\n yyterminate();\n- }\n else\n {\n struct _yy_buffer *yb = yy_buffer;\n--- 867,872 ----\n \n if (yy_buffer == NULL)\n yyterminate();\n else\n {\n struct _yy_buffer *yb = yy_buffer;\n\n\n\nNicolas.",
"msg_date": "Wed, 27 Mar 2002 16:27:13 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Fw: pgc.l modif. has been overwritten again"
}
] |
[
{
"msg_contents": "This is a complete patch to implement changing the nullability of an\nattribute. It passes all regressions tests. It includes its own quite\ncomprehensive regression test suite and documentation. It prevents you from\nmodifying system tables, non-table relations, system attributes, primary\nkeys and columns containing NULLs. It fully supports inheritance. I have\nmade some small changes to TODO to reflect this new functionality, plus\ncorrected some other TODO items.\n\nThe only thing I haven't checked are my ecpg changes. I would like someone\nwith more ecpg experience to check my preproc.y changes.\n\nPlease consider for 7.3!\n\nSince I have now added two new large functions to command.c, I propose that\nsometime before 7.3 beta, command.c is refactored and an alter.c created.\nThere is lots of common code in the Alter* functions that should be reused.\n\nChris",
"msg_date": "Wed, 27 Mar 2002 14:22:09 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "SET NOT NULL/DROP NOT NULL patch"
}
] |
[
{
"msg_contents": "This README file to me seems to look like this:\n\nWas the non-english version committed by mistake?\n\nChris\n\n treequery *> tree[]\n <F0><C5><D2><D7><D9><C5> <C4><D7><C5> <CF><D0><C5><D2><C1><C3><C9><C9>\n<D7>\n<CF><DA><D7><D2><C1><DD><C1><C0><D4> true, <C5><D3><CC><C9> <D7>\n<CD><C1><D3>\n<D3><C9><D7><C5> <C5><D3><D4><D8> <DA><C1><D0><C9><D3><D8>,\n <D2><C1><D7><CE><C1><D1> <C4><D2><D5><C7><CF><CD><D5>\n<CF><D0><C5><D2><C1>\n<CE><C4><D5> <CF><D0><C5><D2><C1><C3><C9><C9>,\n<CF><D3><D4><C1><D7><DB><C9><C5>\n<D3><D1> <D7><CF><DA><D7><D2><C1><DD><C1><C0><D4> true, <C5><D3><CC><C9>\n<D7>\n <CD><C1><D3><D3><C9><D7><C5> <C5><D3><D4><D8> <DA><C1><D0><C9><D3><D8>,\n<D3>\n<CF><CF><D4><D7><C5><D4><D3><D4><D7><D5><C0><DD><C1><D1>\n<D5><CB><C1><DA><C1>\n<CE><CE><CF><CA> <CD><C1><D3><CB><C5>.\n <FC><D4><C9> <CF><D0><C5><D2><C1><C3><C9><C9> <CD><CF><C7><D5><D4>\n<C2><D9>\n<D4><D8> <D5><D3><CB><CF><D2><C5><CE><D9> <D0><D5><D4><C5><CD>\n<D0><CF><D3><D4>\n<D2><CF><C5><CE><C9><D1> GiST-<C9><CE><C4><C5><CB><D3><C1>.\n\n<FA><E1><ED><E5><FE><E1><EE><E9><F1>\n1 <FA><C1><D0><D2><CF><D3><D9>\n select * from treetbl where treefld > '1.1' and treefld < '1.2';\n select * from treetbl where treefld <* '\n\n",
"msg_date": "Wed, 27 Mar 2002 15:12:10 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "contrib/tree/README.tree"
},
{
"msg_contents": "OOps,\n\nI don't rememeber we have submitted this module :-)\nI was intend to do that after writing README in English in 7.2 beta cycle,\nbut after decision it will not go to 7.2 I lost a focus.\n\n\tOleg\nOn Wed, 27 Mar 2002, Christopher Kings-Lynne wrote:\n\n> This README file to me seems to look like this:\n>\n> Was the non-english version committed by mistake?\n>\n> Chris\n>\n> treequery *> tree[]\n> <F0><C5><D2><D7><D9><C5> <C4><D7><C5> <CF><D0><C5><D2><C1><C3><C9><C9>\n> <D7>\n> <CF><DA><D7><D2><C1><DD><C1><C0><D4> true, <C5><D3><CC><C9> <D7>\n> <CD><C1><D3>\n> <D3><C9><D7><C5> <C5><D3><D4><D8> <DA><C1><D0><C9><D3><D8>,\n> <D2><C1><D7><CE><C1><D1> <C4><D2><D5><C7><CF><CD><D5>\n> <CF><D0><C5><D2><C1>\n> <CE><C4><D5> <CF><D0><C5><D2><C1><C3><C9><C9>,\n> <CF><D3><D4><C1><D7><DB><C9><C5>\n> <D3><D1> <D7><CF><DA><D7><D2><C1><DD><C1><C0><D4> true, <C5><D3><CC><C9>\n> <D7>\n> <CD><C1><D3><D3><C9><D7><C5> <C5><D3><D4><D8> <DA><C1><D0><C9><D3><D8>,\n> <D3>\n> <CF><CF><D4><D7><C5><D4><D3><D4><D7><D5><C0><DD><C1><D1>\n> <D5><CB><C1><DA><C1>\n> <CE><CE><CF><CA> <CD><C1><D3><CB><C5>.\n> <FC><D4><C9> <CF><D0><C5><D2><C1><C3><C9><C9> <CD><CF><C7><D5><D4>\n> <C2><D9>\n> <D4><D8> <D5><D3><CB><CF><D2><C5><CE><D9> <D0><D5><D4><C5><CD>\n> <D0><CF><D3><D4>\n> <D2><CF><C5><CE><C9><D1> GiST-<C9><CE><C4><C5><CB><D3><C1>.\n>\n> <FA><E1><ED><E5><FE><E1><EE><E9><F1>\n> 1 <FA><C1><D0><D2><CF><D3><D9>\n> select * from treetbl where treefld > '1.1' and treefld < '1.2';\n> select * from treetbl where treefld <* '\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n",
"msg_date": "Wed, 27 Mar 2002 10:50:32 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": false,
"msg_subject": "Re: contrib/tree/README.tree"
},
{
"msg_contents": "Sorry, we don't make a english translate yet. Sorry. It must be done soon.\n\nChristopher Kings-Lynne wrote:\n> This README file to me seems to look like this:\n> \n> Was the non-english version committed by mistake?\n> \n> Chris\n> \n> treequery *> tree[]\n> <F0><C5><D2><D7><D9><C5> <C4><D7><C5> <CF><D0><C5><D2><C1><C3><C9><C9>\n> <D7>\n> <CF><DA><D7><D2><C1><DD><C1><C0><D4> true, <C5><D3><CC><C9> <D7>\n> <CD><C1><D3>\n> <D3><C9><D7><C5> <C5><D3><D4><D8> <DA><C1><D0><C9><D3><D8>,\n> <D2><C1><D7><CE><C1><D1> <C4><D2><D5><C7><CF><CD><D5>\n> <CF><D0><C5><D2><C1>\n> <CE><C4><D5> <CF><D0><C5><D2><C1><C3><C9><C9>,\n> <CF><D3><D4><C1><D7><DB><C9><C5>\n> <D3><D1> <D7><CF><DA><D7><D2><C1><DD><C1><C0><D4> true, <C5><D3><CC><C9>\n> <D7>\n> <CD><C1><D3><D3><C9><D7><C5> <C5><D3><D4><D8> <DA><C1><D0><C9><D3><D8>,\n> <D3>\n> <CF><CF><D4><D7><C5><D4><D3><D4><D7><D5><C0><DD><C1><D1>\n> <D5><CB><C1><DA><C1>\n> <CE><CE><CF><CA> <CD><C1><D3><CB><C5>.\n> <FC><D4><C9> <CF><D0><C5><D2><C1><C3><C9><C9> <CD><CF><C7><D5><D4>\n> <C2><D9>\n> <D4><D8> <D5><D3><CB><CF><D2><C5><CE><D9> <D0><D5><D4><C5><CD>\n> <D0><CF><D3><D4>\n> <D2><CF><C5><CE><C9><D1> GiST-<C9><CE><C4><C5><CB><D3><C1>.\n> \n> <FA><E1><ED><E5><FE><E1><EE><E9><F1>\n> 1 <FA><C1><D0><D2><CF><D3><D9>\n> select * from treetbl where treefld > '1.1' and treefld < '1.2';\n> select * from treetbl where treefld <* '\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n> \n\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Wed, 27 Mar 2002 10:56:49 +0300",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": false,
"msg_subject": "Re: contrib/tree/README.tree"
},
{
"msg_contents": "> I don't rememeber we have submitted this module :-)\n\nDoh - sorry!!!\n\nI just extracted it from your tar.gz into my contrib - it's NOT in CVS!\n\nChris\n\n",
"msg_date": "Wed, 27 Mar 2002 16:47:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: contrib/tree/README.tree"
}
] |
[
{
"msg_contents": "I just sent in this email and it will appear immediately in the list.\n\nSomewhat earlier, I have submitted a 25kb patch and then a 5kb gzipped\nversion of that patch to -hackers and -patches - it has not yet appeared on\nthe list.\n\nWhat's going on? Do posts with patches need to be approved or something???\n\nChris\n\n",
"msg_date": "Wed, 27 Mar 2002 15:17:10 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Mailing List Question"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> I just sent in this email and it will appear immediately in the list.\n> \n> Somewhat earlier, I have submitted a 25kb patch and then a 5kb gzipped\n> version of that patch to -hackers and -patches - it has not yet appeared on\n> the list.\n> \n> What's going on? Do posts with patches need to be approved or something???\n\nMy guess is that there is some delay for large patches to be approved. \nThe problem is I never get an email stating it is queued up, though I\nthink others do get such emails.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Mar 2002 05:58:29 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question"
},
{
"msg_contents": "\nchecking the moderator-to-approve listing for you, here are the reason(s):\n\nReason: GLOBAL ADMIN HEADER: /^subject:\\s*set\\b/i matched \"Subject: SET\"\n\n\nOn Wed, 27 Mar 2002, Christopher Kings-Lynne wrote:\n\n> I just sent in this email and it will appear immediately in the list.\n>\n> Somewhat earlier, I have submitted a 25kb patch and then a 5kb gzipped\n> version of that patch to -hackers and -patches - it has not yet appeared on\n> the list.\n>\n> What's going on? Do posts with patches need to be approved or something???\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n\n",
"msg_date": "Wed, 27 Mar 2002 09:12:03 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question"
},
{
"msg_contents": "Marc G. Fournier wrote:\n> \n> checking the moderator-to-approve listing for you, here are the reason(s):\n> \n> Reason: GLOBAL ADMIN HEADER: /^subject:\\s*set\\b/i matched \"Subject: SET\"\n> \n\nOK, but should posters get email stating it is in the approval queue?\nHe clearly didn't, and I don't either, but others say they do get such\nmessages.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Mar 2002 09:45:12 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question"
},
{
"msg_contents": "On Wed, 27 Mar 2002, Bruce Momjian wrote:\n\n> Marc G. Fournier wrote:\n> >\n> > checking the moderator-to-approve listing for you, here are the reason(s):\n> >\n> > Reason: GLOBAL ADMIN HEADER: /^subject:\\s*set\\b/i matched \"Subject: SET\"\n> >\n>\n> OK, but should posters get email stating it is in the approval queue?\n> He clearly didn't, and I don't either, but others say they do get such\n> messages.\n\nNot necessarily if it's an admin command.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Wed, 27 Mar 2002 10:02:37 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question"
},
{
"msg_contents": "Vince Vielhaber wrote:\n> \n> On Wed, 27 Mar 2002, Bruce Momjian wrote:\n> \n> > Marc G. Fournier wrote:\n> > >\n> > > checking the moderator-to-approve listing for you, here are the reason(s):\n> > >\n> > > Reason: GLOBAL ADMIN HEADER: /^subject:\\s*set\\b/i matched \"Subject: SET\"\n> > OK, but should posters get email stating it is in the approval queue?\n> > He clearly didn't, and I don't either, but others say they do get such\n> > messages.\n> Not necessarily if it's an admin command.\n\nistm that we should disable all administrative functions from the main\nmailing lists (this is settable in the configuration). the -request\naddresses handle administration, and it is just plain confusing to find\nthat there are some special words that should never be mentioned in the\nsubject or body of a message. That isn't appropriate behavior for those\nmailing lists!\n\n - Thomas\n",
"msg_date": "Wed, 27 Mar 2002 07:21:04 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question"
},
{
"msg_contents": "> checking the moderator-to-approve listing for you, here are the reason(s):\n>\n> Reason: GLOBAL ADMIN HEADER: /^subject:\\s*set\\b/i matched \"Subject: SET\"\n\nOH MY GOD!!!\n\nI've always has this suspicion that every time I send an email with 'SET\nNULL' in the subject it doesn't get through!!! I've even commented on that\non the list before!\n\nNow it looks like I was right!\n\nMarc - I suggest killing all those 3 patch mails I sent and I will resubmit\nthe email without 'set' in the header..\n\nChris\n\n",
"msg_date": "Thu, 28 Mar 2002 09:20:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Mailing List Question"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > checking the moderator-to-approve listing for you, here are the reason(s):\n> >\n> > Reason: GLOBAL ADMIN HEADER: /^subject:\\s*set\\b/i matched \"Subject: SET\"\n> \n> OH MY GOD!!!\n> \n> I've always has this suspicion that every time I send an email with 'SET\n> NULL' in the subject it doesn't get through!!! I've even commented on that\n> on the list before!\n> \n> Now it looks like I was right!\n> \n> Marc - I suggest killing all those 3 patch mails I sent and I will resubmit\n> the email without 'set' in the header..\n\nThe fact this is done silently is clearly unacceptable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Mar 2002 20:24:51 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Christopher Kings-Lynne wrote:\n>> I've always has this suspicion that every time I send an email with 'SET\n>> NULL' in the subject it doesn't get through!!! I've even commented on that\n>> on the list before!\n\n> The fact this is done silently is clearly unacceptable.\n\nAgreed. Curiously, though, I've always gotten notifications whenever\nany of my messages got held up for moderator approval. Seems like there\nare two questions for Marc here:\n\n1. Why is the system failing to notify some people about their messages\nbeing delayed?\n\n2. Shouldn't the filter patterns be tightened up considerably? For\nexample, I consider it sheer folly that I cannot use the word \"c*ncel\"\nin a Postgres discussion group without my posting being held up for\nseveral days.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Mar 2002 22:48:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question "
},
{
"msg_contents": "> > The fact this is done silently is clearly unacceptable.\n>\n> Agreed. Curiously, though, I've always gotten notifications whenever\n> any of my messages got held up for moderator approval. Seems like there\n> are two questions for Marc here:\n>\n> 1. Why is the system failing to notify some people about their messages\n> being delayed?\n\nI get moderator notifications if I post to -general (to which I am not\nsubscribed)\n\nChris\n\n",
"msg_date": "Thu, 28 Mar 2002 11:59:51 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Mailing List Question "
},
{
"msg_contents": "> > > The fact this is done silently is clearly unacceptable.\n> > Agreed. Curiously, though, I've always gotten notifications whenever\n> > any of my messages got held up for moderator approval. Seems like there\n> > are two questions for Marc here:\n> > 1. Why is the system failing to notify some people about their messages\n> > being delayed?\n> I get moderator notifications if I post to -general (to which I am not\n> subscribed)\n\nimho we should disable *any* special handling of posts to the mailing\nlists. Majordomo (at least the 1.x series) can be configured to respect\ncommand keywords only for the xxx-request management lists, and to\nignore command keywords in the corresponding working lists.\n\nfwiw, I got bit by this myself when setting up a couple of small mailing\nlists at home. Very annoying, and very unexpected.\n\n - Thomas\n",
"msg_date": "Wed, 27 Mar 2002 20:42:23 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> imho we should disable *any* special handling of posts to the mailing\n> lists.\n\nIt would be interesting to try that for awhile and see if the cure is\nworse than the disease or not. How many clueless \"uns*bscr*be\" requests\nwill hit the lists if there are no filters?\n\nI suspect that we need to settle on a happy medium. What we've got now\nseems to be very far over on the \"filter 'em first and sort it out later\"\nend of the spectrum. The \"no filter at all\" end of the spectrum has its\nown obvious drawbacks (though I've used it successfully for >10 years\non another mailing list that I run).\n\nIf we could reduce the occurrence of false blocks by a factor of 10 or\n100, at the price of maybe one or two misdirected administrative\nrequests per month hitting the lists, I'd consider it a great tradeoff;\nand I'd have to think that it'd reduce Marc's moderation workload a lot,\ntoo. Maybe that's an overoptimistic assessment --- Marc probably knows\nbetter than any of the rest of us what fraction of messages stopped by\nthe filters are good traffic and what are not. But it sure seems like\nthe system is not optimally tuned at the moment.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 28 Mar 2002 00:05:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question "
},
{
"msg_contents": "On Wed, 27 Mar 2002, Tom Lane wrote:\n\n> 2. Shouldn't the filter patterns be tightened up considerably? For\n> example, I consider it sheer folly that I cannot use the word \"c*ncel\"\n> in a Postgres discussion group without my posting being held up for\n> several days.\n\nI was wondering if we could in the meantime get a list of patterns that\nare causing mail delays, to help people avoid using them. I've tried to\npost to this list (the same message) going on five or six times now, and\nit doesn't go through. I'm now wondering if the problem is in the subject\nline. I guess if this post goes through, I'll know :)\n\nj\n\n---\n \"Users complain that they receive too much spam, while spammers protest\nmessages are legal.\" -InfoWorld\n \"You do not have to do everything disagreeable that you have a right to\ndo.\" -Judith Martin (Miss Manners)\n\n\n",
"msg_date": "Thu, 28 Mar 2002 09:34:47 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@arborius.net>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question "
},
{
"msg_contents": "On Wed, 27 Mar 2002, Thomas Lockhart wrote:\n\n> Vince Vielhaber wrote:\n> >\n> > On Wed, 27 Mar 2002, Bruce Momjian wrote:\n> >\n> > > Marc G. Fournier wrote:\n> > > >\n> > > > checking the moderator-to-approve listing for you, here are the reason(s):\n> > > >\n> > > > Reason: GLOBAL ADMIN HEADER: /^subject:\\s*set\\b/i matched \"Subject: SET\"\n> > > OK, but should posters get email stating it is in the approval queue?\n> > > He clearly didn't, and I don't either, but others say they do get such\n> > > messages.\n> > Not necessarily if it's an admin command.\n>\n> istm that we should disable all administrative functions from the main\n> mailing lists (this is settable in the configuration). the -request\n> addresses handle administration, and it is just plain confusing to find\n> that there are some special words that should never be mentioned in the\n> subject or body of a message. That isn't appropriate behavior for those\n> mailing lists!\n\nI can do this ... it would just mean ppl erroneously sending\nsubscribe/unsubscribe messages to the list(s) will actually get through\n...\n\nAnyone disagre with this change?\n\n",
"msg_date": "Fri, 29 Mar 2002 01:32:24 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question"
},
{
"msg_contents": "On Thu, 28 Mar 2002, Tom Lane wrote:\n\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > imho we should disable *any* special handling of posts to the mailing\n> > lists.\n>\n> It would be interesting to try that for awhile and see if the cure is\n> worse than the disease or not. How many clueless \"uns*bscr*be\" requests\n> will hit the lists if there are no filters?\n\nTo be honest, not many ... 50% of what I have to moderate are plain and\nsimply spam (and that isn't an exaggeration, I wiped out something like\n150 out of 350 messages the other day) ... maybe about 25% are duplicate\npostings ... I'd say <1% are subscribe/unsubscribe ... and the rest are\nmostly from ppl not subscribed to the lists at all ...\n\nLet me disable the administrative stuff being blocked and we'll see if it\nmakes much of a difference in the way of 'false traffic' ...\n\n",
"msg_date": "Fri, 29 Mar 2002 01:38:03 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question "
},
{
"msg_contents": "Marc G. Fournier wrote:\n> On Thu, 28 Mar 2002, Tom Lane wrote:\n> \n> > Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > > imho we should disable *any* special handling of posts to the mailing\n> > > lists.\n> >\n> > It would be interesting to try that for awhile and see if the cure is\n> > worse than the disease or not. How many clueless \"uns*bscr*be\" requests\n> > will hit the lists if there are no filters?\n> \n> To be honest, not many ... 50% of what I have to moderate are plain and\n> simply spam (and that isn't an exaggeration, I wiped out something like\n> 150 out of 350 messages the other day) ... maybe about 25% are duplicate\n> postings ... I'd say <1% are subscribe/unsubscribe ... and the rest are\n> mostly from ppl not subscribed to the lists at all ...\n\nLet me add that I have looked at some non-pg lists and it looks terrible\nto see spam in there, right in the archives. Marc's manual review is\nclearly keeping our list of a high quality. Removing the admin keyword\nblocks should fix most of our problems.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 29 Mar 2002 00:54:09 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question"
},
{
"msg_contents": "...\n> Let me disable the administrative stuff being blocked and we'll see if it\n> makes much of a difference in the way of 'false traffic' ...\n\nGreat! Thanks Marc.\n\n - Thomas\n\nUh, just to confirm: you are removing administrative blocks, and also\nremoving any scanning of messages for administrative commands, right? So\nif someone want something administrative done, they *have* to use the\n-request form of address?\n",
"msg_date": "Fri, 29 Mar 2002 06:29:49 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Mailing List Question"
}
] |
[
{
"msg_contents": "OK,\n\nhttp://members.iinet.net.au/~klfamily/alternotnull.txt.gz\n\nThis is an identical patch to what I've submitted twice now and hasn't come\nthrough...\n\nThis is a complete patch to implement changing the nullability of an\nattribute. It passes all regressions tests. It includes its own quite\ncomprehensive regression test suite and documentation. It prevents you from\nmodifying system tables, non-table relations, system attributes, primary\nkeys and columns containing NULLs. It fully supports inheritance. I have\nmade some small changes to TODO to reflect this new functionality, plus\ncorrected some other TODO items.\n\nThe only thing I haven't checked are my ecpg changes. I would like someone\nwith more ecpg experience to check my preproc.y changes.\n\nPlease consider for 7.3!\n\nSince I have now added two new large functions to command.c, I propose that\nsometime before 7.3 beta, command.c is refactored and an alter.c created.\nThere is lots of common code in the Alter* functions that should be reused.\n\nChris\n\n",
"msg_date": "Wed, 27 Mar 2002 16:49:33 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "SET NOT NULL / DROP NOT NULL as an HREF!"
}
] |
[
{
"msg_contents": "-----Forwarded Message-----\n\nFrom: rmurray@debian.org\nTo: 139003@bugs.debian.org\nCc: control@bugs.debian.org\nSubject: Bug#139003: a little bit more is needed...\nDate: 27 Mar 2002 00:21:18 -0800\n\nreopen 139003\nthanks\n\nLooks like a small patch is needed as well to do the right thing on Linux.\n\nThe patch enables the mips2 ISA for the ll/sc operations, and then restores\nit when done. The kernel/libc emulation code will take over on CPUs without\nll/sc, and on CPUs with it, it'll use the operations provided by the CPU.\n\nCombined with the earlier fix (removing -mips2), postgresql builds again on\nmips and mipsel. The patch is against 7.2-7.\n\ndiff -urN postgresql-7.2/src/backend/storage/lmgr/s_lock.c postgresql-7.2.fixed/src/backend/storage/lmgr/s_lock.c\n--- postgresql-7.2/src/backend/storage/lmgr/s_lock.c\tMon Nov 5 18:46:28 2001\n+++ postgresql-7.2.fixed/src/backend/storage/lmgr/s_lock.c\tWed Mar 27 07:46:59 2002\n@@ -173,9 +173,12 @@\n .global\ttas\t\t\t\t\t\t\\n\\\n tas:\t\t\t\t\t\t\t\\n\\\n \t\t\t.frame\t$sp, 0, $31\t\\n\\\n+\t\t\t.set push\t\t\\n\\\n+\t\t\t.set mips2\t\t\\n\\n\n \t\t\tll\t\t$14, 0($4)\t\\n\\\n \t\t\tor\t\t$15, $14, 1\t\\n\\\n \t\t\tsc\t\t$15, 0($4)\t\\n\\\n+\t\t\t.set pop\t\t\t\\n\\\n \t\t\tbeq\t\t$15, 0, fail\\n\\\n \t\t\tbne\t\t$14, 0, fail\\n\\\n \t\t\tli\t\t$2, 0\t\t\\n\\",
"msg_date": "27 Mar 2002 09:46:04 +0000",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": true,
"msg_subject": "Linux/mips compile: [Fwd: Bug#139003: a little bit more is needed...]"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nOliver Elphick wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> -----Forwarded Message-----\n> \n> From: rmurray@debian.org\n> To: 139003@bugs.debian.org\n> Cc: control@bugs.debian.org\n> Subject: Bug#139003: a little bit more is needed...\n> Date: 27 Mar 2002 00:21:18 -0800\n> \n> reopen 139003\n> thanks\n> \n> Looks like a small patch is needed as well to do the right thing on Linux.\n> \n> The patch enables the mips2 ISA for the ll/sc operations, and then restores\n> it when done. The kernel/libc emulation code will take over on CPUs without\n> ll/sc, and on CPUs with it, it'll use the operations provided by the CPU.\n> \n> Combined with the earlier fix (removing -mips2), postgresql builds again on\n> mips and mipsel. The patch is against 7.2-7.\n> \n> diff -urN postgresql-7.2/src/backend/storage/lmgr/s_lock.c postgresql-7.2.fixed/src/backend/storage/lmgr/s_lock.c\n> --- postgresql-7.2/src/backend/storage/lmgr/s_lock.c\tMon Nov 5 18:46:28 2001\n> +++ postgresql-7.2.fixed/src/backend/storage/lmgr/s_lock.c\tWed Mar 27 07:46:59 2002\n> @@ -173,9 +173,12 @@\n> .global\ttas\t\t\t\t\t\t\\n\\\n> tas:\t\t\t\t\t\t\t\\n\\\n> \t\t\t.frame\t$sp, 0, $31\t\\n\\\n> +\t\t\t.set push\t\t\\n\\\n> +\t\t\t.set mips2\t\t\\n\\n\n> \t\t\tll\t\t$14, 0($4)\t\\n\\\n> \t\t\tor\t\t$15, $14, 1\t\\n\\\n> \t\t\tsc\t\t$15, 0($4)\t\\n\\\n> +\t\t\t.set pop\t\t\t\\n\\\n> \t\t\tbeq\t\t$15, 0, fail\\n\\\n> \t\t\tbne\t\t$14, 0, fail\\n\\\n> \t\t\tli\t\t$2, 0\t\t\\n\\\n> \n> \n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Apr 2002 01:55:08 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Linux/mips compile: [Fwd: Bug#139003: a little bit more"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nOliver Elphick wrote:\n\nChecking application/pgp-signature: FAILURE\n-- Start of PGP signed section.\n> -----Forwarded Message-----\n> \n> From: rmurray@debian.org\n> To: 139003@bugs.debian.org\n> Cc: control@bugs.debian.org\n> Subject: Bug#139003: a little bit more is needed...\n> Date: 27 Mar 2002 00:21:18 -0800\n> \n> reopen 139003\n> thanks\n> \n> Looks like a small patch is needed as well to do the right thing on Linux.\n> \n> The patch enables the mips2 ISA for the ll/sc operations, and then restores\n> it when done. The kernel/libc emulation code will take over on CPUs without\n> ll/sc, and on CPUs with it, it'll use the operations provided by the CPU.\n> \n> Combined with the earlier fix (removing -mips2), postgresql builds again on\n> mips and mipsel. The patch is against 7.2-7.\n> \n> diff -urN postgresql-7.2/src/backend/storage/lmgr/s_lock.c postgresql-7.2.fixed/src/backend/storage/lmgr/s_lock.c\n> --- postgresql-7.2/src/backend/storage/lmgr/s_lock.c\tMon Nov 5 18:46:28 2001\n> +++ postgresql-7.2.fixed/src/backend/storage/lmgr/s_lock.c\tWed Mar 27 07:46:59 2002\n> @@ -173,9 +173,12 @@\n> .global\ttas\t\t\t\t\t\t\\n\\\n> tas:\t\t\t\t\t\t\t\\n\\\n> \t\t\t.frame\t$sp, 0, $31\t\\n\\\n> +\t\t\t.set push\t\t\\n\\\n> +\t\t\t.set mips2\t\t\\n\\n\n> \t\t\tll\t\t$14, 0($4)\t\\n\\\n> \t\t\tor\t\t$15, $14, 1\t\\n\\\n> \t\t\tsc\t\t$15, 0($4)\t\\n\\\n> +\t\t\t.set pop\t\t\t\\n\\\n> \t\t\tbeq\t\t$15, 0, fail\\n\\\n> \t\t\tbne\t\t$14, 0, fail\\n\\\n> \t\t\tli\t\t$2, 0\t\t\\n\\\n> \n> \n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Apr 2002 06:38:06 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Linux/mips compile: [Fwd: Bug#139003: a little bit more"
}
] |
[
{
"msg_contents": "I'm working on getting libpq to function in a multi-threaded program on\nSolaris and I was getting errors back from the library about\nbeing unable to receive from the server. It turns out that on Solaris\nyou need to compile libpq with the -D_REENTRANT flags set so that\nit defines errno to be a function call instead of a global variable.\nOnce I did this the program worked without any problems.\n\nYou want to consider making this flag standard (or at least provide\na configure option to compile a thread-ready version of libpq) as it\nmay save someone else the hassle of trying to figure out what went\nwrong.\n\nMartin\n",
"msg_date": "Wed, 27 Mar 2002 11:28:16 -0500",
"msg_from": "Martin Renters <martin@datafax.com>",
"msg_from_op": true,
"msg_subject": "Threading in libpg on Solaris"
}
] |
[
{
"msg_contents": "Patch against 7,2 submitted for comment.\n\nIt's a little messy; I had some trouble trying to reconcile the code\nstyle of libpq which I copied from, and odbc.\n\nSuggestions on what parts look ugly, and or where to send this\n(is there a separate ODBC place?) are welcome.\n\nThis seems to work just fine; Now, when our users submit a 2 hour\nquery with four million row sorts by accident, then cancel it 30 seconds\nlater, it doesn't bog down the server ...\n\nregards,\n\n-Brad",
"msg_date": "Wed, 27 Mar 2002 11:28:49 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": true,
"msg_subject": "Patch to add real cancel to ODBC driver"
}
] |
[
{
"msg_contents": "I've mentioned a while ago that I wanted to make the --enable-locale\nswitch the default (and remove the switch), and make the choice of\nlocale-awareness a run-time choice. Here is how that might work. I've\nalready explained how I plan to get around the performance problems, so\nthis will just focus on the user interface.\n\nWe currently have two kinds of locale categories: Those that must be\nfixed at initdb-time and those that may be changed at run-time.\n\nI suggest that initdb always defaults its locales to C and that we\nprovide command line options to set a different locale. E.g.,\n\ninitdb --lc-collate=en_US\n\nThis makes the change transparent for those who like the C locale. It is\nalso much clearer than figuring out which of LANG, LC_COLLATE, LC_ALL will\nget in your way.\n\nPersonally, I also find it better to separate the locale settings in your\nlogin account meant for interactive use from those meant for PostgreSQL\nservers. In other words, if I'm the \"postgres\" account and administering\na bunch of databases I'd still like to set LC_ALL=de_DE so all the shell\ncommands print their things formatted right, and I don't want to change\nthis every time I start a server from within that account.\n\nIn particular, I'd like the following set of options:\n\n--lc-collate\n--lc-ctype\n--locale (allows specifying all in one, but may be overridden by specific options)\n\nIt might actually work to say\n\ninitdb --locale=''\n\nto force inherting the settings from the environment.\n\nIn the post-initdb stage, we'd add a bunch of GUC variables, such as\n\nlc_numeric\nlc_monetary\nlc_time\nlocale\n\nThese all default to \"C\". For a start we'd make them fixed for the\nlife-time of the postmaster, but we could evaluate other options later.\n\nThis again makes this change hidden for users that didn't use locale\nsupport. 
Also, it prevents accidentally changing the locale when you\n(or someone else) fiddle with your environment variables.\n\nNote that you get the same kind of command line options as in initdb:\n--lc-numeric, --locale, etc. You can also run SHOW lc_numeric to see\nwhat's going on.\n\nComments?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 27 Mar 2002 12:05:53 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Rough sketch for locale by default"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> [ good stuff snipped ]\n\n> ... Also, it prevents accidentally changing the locale when you\n> (or someone else) fiddle with your environment variables.\n\nIf I follow this correctly, the behavior would be that PG would not pay\nattention to *any* LC_xxx environment variables? Although I agree with\nthat principle in the abstract, it bothers me that PG will be out of\nstep with every single other locale-using program in the Unix world.\nWe ought to think twice about whether that's really a good idea.\n\n> Note that you get the same kind of command line options as in initdb:\n> --lc-numeric, --locale, etc. You can also run SHOW lc_numeric to see\n> what's going on.\n\nProbably you thought of this already: please also support SHOW for the\ninitdb-time variables (lc_collate, etc), so that one can find out the\nactive locale settings without having to resort to\ncontrib/pg_controldata.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 27 Mar 2002 12:26:48 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Rough sketch for locale by default "
},
{
"msg_contents": "On Wed, 2002-03-27 at 19:26, Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > [ good stuff snipped ]\n> \n> > ... Also, it prevents accidentally changing the locale when you\n> > (or someone else) fiddle with your environment variables.\n> \n> If I follow this correctly, the behavior would be that PG would not pay\n> attention to *any* LC_xxx environment variables? Although I agree with\n> that principle in the abstract, it bothers me that PG will be out of\n> step with every single other locale-using program in the Unix world.\n\nIIRC oracle uses NLS_LANG and not any LC_* (even on unix ;)\n\nit is set to smth like NLS_LANG=ESTONIAN_ESTONIA.WE8ISO8859P15\n\n\n> We ought to think twice about whether that's really a good idea.\n> \n> > Note that you get the same kind of command line options as in initdb:\n> > --lc-numeric, --locale, etc. You can also run SHOW lc_numeric to see\n> > what's going on.\n> \n> Probably you thought of this already: please also support SHOW for the\n> initdb-time variables (lc_collate, etc), so that one can find out the\n> active locale settings without having to resort to\n> contrib/pg_controldata.\n\n------------\nHannu\n\n\n",
"msg_date": "28 Mar 2002 17:50:44 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Rough sketch for locale by default"
},
{
"msg_contents": "On Wed, 2002-03-27 at 19:05, Peter Eisentraut wrote:\n> I've mentioned a while ago that I wanted to make the --enable-locale\n> switch the default (and remove the switch), and make the choice of\n> locale-awareness a run-time choice. Here is how that might work. I've\n> already explained how I plan to get around the performance problems, so\n> this will just focus on the user interface.\n> \n> We currently have two kinds of locale categories: Those that must be\n> fixed at initdb-time and those that may be changed at run-time.\n\nAs a more radical idea we should get rid of those which are fixed at\ninitdb time (except databases storage charset) and do proper NCHAR types\nfor anything not in C locale.\n\n-----------\nHannu\n\n\n",
"msg_date": "28 Mar 2002 17:56:39 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Rough sketch for locale by default"
},
{
"msg_contents": "Tom Lane writes:\n\n> If I follow this correctly, the behavior would be that PG would not pay\n> attention to *any* LC_xxx environment variables? Although I agree with\n> that principle in the abstract, it bothers me that PG will be out of\n> step with every single other locale-using program in the Unix world.\n\nDuring earlier discussions people had objected to enabling locale support\nby default on the grounds that it is very hard to follow which locale is\ngetting activated when. Especially from Japan I heard that a lot of\npeople have some locale settings in their environment, but that most\nlocales are unsuitable (\"broken\") for use in the PostgreSQL server. So\nthis approach would keep the behavior backward compatible with the\n--disable-locale case.\n\nHere's a possible compromise for the postmaster:\n\nWe let initdb figure out what locales the user wants and then not only\ninitialize pg_control appropriately, but also write the run-time\nchangeable categories into the postgresql.conf file. That way, the\npostmaster executable could still consult the LC_* variables, but in the\ncommon case it would just be overridden when the postgresql.conf file is\nread.\n\nThis way we also hide the details of what locale category gets what\ntreatment from users that only want one locale for all categories and\ndon't want to change it. Futhermore it all but eliminates the problem I'm\nconcerned about that the locale may accidentally be changed when the\npostmaster is restarted.\n\nHow does initdb figure out what locale is wanted? I agree it makes sense\nto use the setting in the environment, because in many cases the database\nwill want to use the same locale as everything else on the system. We\ncould provide a flag --no-locale, which sets all locale categories to \"C\",\nas a clear and simple way to turn this off.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 28 Mar 2002 18:07:31 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Rough sketch for locale by default "
}
] |
[
{
"msg_contents": "I have a postgresql DB that contains a lot of HTML code. As you know, HTML \ncontains numerous backslash( \\ ) characters. When I use pg_dump to backup \nthe DB I get a \"CopyReadAttribute: end of record marker corrupted\" error. \nIs there any way to circumvent this problem so that I can backup a DB with \nHTML code stored in it?\n\nThanks,\nMatias\n\n\n>CopyReadAttribute: end of record marker corrupted\n>\n>This message comes out if the COPY data contains \\. not immediately\n>followed by newline (\\n). I'm guessing that you have some backslashes\n>that need to be doubled --- backslash is an escape character for COPY.\n\n\n\n\n_________________________________________________________________\nChat with friends online, try MSN Messenger: http://messenger.msn.com\n\n",
"msg_date": "Wed, 27 Mar 2002 18:30:35 ",
"msg_from": "\"Matias Klein\" <matiasklein@hotmail.com>",
"msg_from_op": true,
"msg_subject": "escape sequence conflicting w/ backup (i.e. pg_dump)"
}
] |
[
{
"msg_contents": "I have a postgresql DB that contains a lot of HTML code. As you know, HTML \ncontains numerous backslash( \\ ) characters. When I use pg_dump to backup \nthe DB I get a \"CopyReadAttribute: end of record marker corrupted\" error. Is \nthere any way to circumvent this problem so that I can backup a DB with HTML \ncode stored in it?\n\nThanks,\nMatias\n\n\n>CopyReadAttribute: end of record marker corrupted\n>\n>This message comes out if the COPY data contains \\. not immediately \n>followed by newline (\\n). I'm guessing that you have some backslashes that \n>need to be doubled --- backslash is an escape character for COPY.\n\n\n\n\n\n_________________________________________________________________\nJoin the world�s largest e-mail service with MSN Hotmail. \nhttp://www.hotmail.com\n\n",
"msg_date": "Wed, 27 Mar 2002 18:39:58 ",
"msg_from": "\"Matias Klein\" <matiasklein@hotmail.com>",
"msg_from_op": true,
"msg_subject": "escape sequence conflicting w/ backup (i.e. pg_dump) "
}
] |
[
{
"msg_contents": "I see in quote.c::do_quote_ident():\n\n\t*cp2++ = '\"';\n\twhile (len-- > 0)\n\t{\n\t\tif (*cp1 == '\"')\n\t\t\t*cp2++ = '\"';\n\t\tif (*cp1 == '\\\\')\n\t\t\t*cp2++ = '\\\\';\n\t\t*cp2++ = *cp1++;\n\t}\n\t*cp2++ = '\"';\n\nI am confused by the backslash handling. In my tests, a backslash in a\ndouble-quoted string does not require two backslashes:\n\t\n\ttest=> create user \"a\\d\";\n\tCREATE USER\n\ttest=> select usename, length(usename) from pg_user;\n\t usename | length \n\t----------+--------\n\t a\\d | 3\n\nThis is because a double-quote in a double-quoted string is entered as\n\"\", not \\\".\n\nIs it adding another backslash because it assumes the result will appear\nin another quoted string?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 27 Mar 2002 15:54:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Problem with do_quote_ident()"
},
{
"msg_contents": "Bruce Momjian wrote:\n> I see in quote.c::do_quote_ident():\n>\n> *cp2++ = '\"';\n> while (len-- > 0)\n> {\n> if (*cp1 == '\"')\n> *cp2++ = '\"';\n> if (*cp1 == '\\\\')\n> *cp2++ = '\\\\';\n> *cp2++ = *cp1++;\n> }\n> *cp2++ = '\"';\n>\n> I am confused by the backslash handling. In my tests, a backslash in a\n> double-quoted string does not require two backslashes:\n>\n> test=> create user \"a\\d\";\n> CREATE USER\n> test=> select usename, length(usename) from pg_user;\n> usename | length\n> ----------+--------\n> a\\d | 3\n>\n> This is because a double-quote in a double-quoted string is entered as\n> \"\", not \\\".\n>\n> Is it adding another backslash because it assumes the result will appear\n> in another quoted string?\n\n I would say it is adding another backslash because it is a\n bug. If you use quote_ident() in a plpgsql procedure to\n build querystrings for EXECUTE (and you should do it that\n way), then it'll no handle identifiers that contain\n backslashes correctly.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Wed, 27 Mar 2002 16:18:38 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Problem with do_quote_ident()"
}
] |
[
{
"msg_contents": "Here is the description:\n\nWhen a macro is replaced by the preprocessor, pgc.l reaches a end of file, which is not the actual end of the file. One side effect of that is that if you are in a ifdef block, you get a wrong error telling you that a endif is missing.\n\nThis patch corrects pgc.l and also adds a test of this problem to test1.pgc. To convince you apply the patch to test1.pgc first then try to compile the test then apply the patch to pgc.l.\n\nThe patch moves the test of the scope of an ifdef block to the end of the file beeing parsed, including all includes files, ... .\n\nFor the record, this patch was applied a first time by bruce then overwritten by Micheal and reapplied by him. But the big mystery is that there is no trace of that in CVS ????\n\nNicolas",
"msg_date": "Thu, 28 Mar 2002 10:30:21 +1100",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "Always the same ecpg bug - please (re)apply patch"
},
{
"msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nNicolas Bazin wrote:\n> Here is the description:\n> \n> When a macro is replaced by the preprocessor, pgc.l reaches a end of file, which is not the actual end of the file. One side effect of that is that if you are in a ifdef block, you get a wrong error telling you that a endif is missing.\n> \n> This patch corrects pgc.l and also adds a test of this problem to test1.pgc. To convince you apply the patch to test1.pgc first then try to compile the test then apply the patch to pgc.l.\n> \n> The patch moves the test of the scope of an ifdef block to the end of the file beeing parsed, including all includes files, ... .\n> \n> For the record, this patch was applied a first time by bruce then overwritten by Micheal and reapplied by him. But the big mystery is that there is no trace of that in CVS ????\n> \n> Nicolas\n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Apr 2002 01:36:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Always the same ecpg bug - please (re)apply patch"
},
{
"msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nNicolas Bazin wrote:\n> Here is the description:\n> \n> When a macro is replaced by the preprocessor, pgc.l reaches a end of file, which is not the actual end of the file. One side effect of that is that if you are in a ifdef block, you get a wrong error telling you that a endif is missing.\n> \n> This patch corrects pgc.l and also adds a test of this problem to test1.pgc. To convince you apply the patch to test1.pgc first then try to compile the test then apply the patch to pgc.l.\n> \n> The patch moves the test of the scope of an ifdef block to the end of the file beeing parsed, including all includes files, ... .\n> \n> For the record, this patch was applied a first time by bruce then overwritten by Micheal and reapplied by him. But the big mystery is that there is no trace of that in CVS ????\n> \n> Nicolas\n\n[ Attachment, skipping... ]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Apr 2002 06:39:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Always the same ecpg bug - please (re)apply patch"
}
] |
[
{
"msg_contents": "I've a severe problem with deadlocks in postgres, when using referential integrity it's quite easy to trigger deadlocks. I think the may be a bug in ri_trigger.c (discussed later). Here's some short example:\n\ncreate table languages (\n id integer not null,\n name text not null,\n primary key(id)\n);\n\ncreate table entry (\n id integer not null,\n lang_id integer,\n sometext text,\n primary key (id),\n foreign key ( lang_id ) references languages (id)\n);\n\ninsert into languages values (1, 'english');\ninsert into languages values (2, 'german');\n\ninsert into entry values (1, 1, 'text 1');\ninsert into entry values (2, 1, 'text 2');\n\n\ntransaction A: begin;\ntransaction A: update entry set sometext='text 1.1' where id=1;\ntransaction A: .... do more time-consuming processing here...\nmeanwhile, B: begin; \n B: update entry set sometext='text 2.1' where id=2;\n\n-- both processes hang now\n\nI think this is too much locking here, because the logfile show's something like this:\n'select 1 from \"languages\" where id=$1 for update' (2 times).\n\nNow I've a lot of tables (around 30) and use referential integrity a lot on ~10 columns (language, country....) , and with more fields it's very easy to deadlock the whole system (it happens a lot in my web applicaiton with ~20 concorrent users).\n\nIMHO the \"select ... for update\" on languages is not necessary, since I do not want to update \"lang_id\", but I might be wrong. The other problem is, that this will make postgres in benchmarks very slow (with many concurrent connections), at least if the application is not trivial.\n\nIMO the problem is in ri_trigger.c around line 390:\n\t\t/* ----------\n\t\t * The query string built is\n\t\t *\tSELECT 1 FROM ONLY <pktable> WHERE pkatt1 = $1 [AND ...]\n\t\t * The type id's for the $ parameters are those of the\n\t\t * corresponding FK attributes. 
Thus, SPI_prepare could\n\t\t * eventually fail if the parser cannot identify some way\n\t\t * how to compare these two types by '='.\n\t\t * ----------\n\t\t */\n\nAny ideas if this is a bug or simply strict SQL standard? \n\nBest regards,\n\tMario Weilguni\n\n",
"msg_date": "Thu, 28 Mar 2002 15:44:48 +0100",
"msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>",
"msg_from_op": true,
"msg_subject": "deadlock problems with foreign keys"
},
{
"msg_contents": "There was no deadlock in 7.2 with what was provided -- but the second\ntransaction was blocked from doing it's thing by the lock from the\nfirst. Perhaps a deadlock is caused by 'do other stuff'?\n\nI will agree that a FOR UPDATE is heavy. There is no intention to\nupdate the record, we just want to ensure it's NOT updated or deleted.\nA FOR PREVENT UPDATE lock may be preferable and it should block any\nother locks while allowing the lock to be 'upgraded' in the case where\nyou hold the only PREVENT UPDATE lock. It wouldn't be exclusive to\nitself, only other types of locks.\n\n\nAll that said, SET CONSTRAINTS ALL DEFERRED at the beginning of the\ntransaction also caused a block on the update with the second\ntransaction. That interests me. Why doesn't the second transaction\ngo through and block the first from using COMMIT?\n\n\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Mario Weilguni\" <mario.weilguni@icomedias.com>\nTo: \"Postgresql Mailinglist (E-Mail)\" <pgsql-hackers@postgresql.org>\nSent: Thursday, March 28, 2002 9:44 AM\nSubject: [HACKERS] deadlock problems with foreign keys\n\n\nI've a severe problem with deadlocks in postgres, when using\nreferential integrity it's quite easy to trigger deadlocks. I think\nthe may be a bug in ri_trigger.c (discussed later). Here's some short\nexample:\n\ncreate table languages (\n id integer not null,\n name text not null,\n primary key(id)\n);\n\ncreate table entry (\n id integer not null,\n lang_id integer,\n sometext text,\n primary key (id),\n foreign key ( lang_id ) references languages (id)\n);\n\ninsert into languages values (1, 'english');\ninsert into languages values (2, 'german');\n\ninsert into entry values (1, 1, 'text 1');\ninsert into entry values (2, 1, 'text 2');\n\n\ntransaction A: begin;\ntransaction A: update entry set sometext='text 1.1' where id=1;\ntransaction A: .... 
do more time-consuming processing here...\nmeanwhile, B: begin;\n B: update entry set sometext='text 2.1' where id=2;\n\n-- both processes hang now\n\nI think this is too much locking here, because the logfile show's\nsomething like this:\n'select 1 from \"languages\" where id=$1 for update' (2 times).\n\nNow I've a lot of tables (around 30) and use referential integrity a\nlot on ~10 columns (language, country....) , and with more fields it's\nvery easy to deadlock the whole system (it happens a lot in my web\napplicaiton with ~20 concorrent users).\n\nIMHO the \"select ... for update\" on languages is not necessary, since\nI do not want to update \"lang_id\", but I might be wrong. The other\nproblem is, that this will make postgres in benchmarks very slow (with\nmany concurrent connections), at least if the application is not\ntrivial.\n\nIMO the problem is in ri_trigger.c around line 390:\n/* ----------\n* The query string built is\n* SELECT 1 FROM ONLY <pktable> WHERE pkatt1 = $1 [AND ...]\n* The type id's for the $ parameters are those of the\n* corresponding FK attributes. Thus, SPI_prepare could\n* eventually fail if the parser cannot identify some way\n* how to compare these two types by '='.\n* ----------\n*/\n\nAny ideas if this is a bug or simply strict SQL standard?\n\nBest regards,\nMario Weilguni\n\n\n---------------------------(end of\nbroadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n\n",
"msg_date": "Thu, 28 Mar 2002 10:15:25 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: deadlock problems with foreign keys"
},
{
"msg_contents": "Mario Weilguni wrote:\n> I've a severe problem with deadlocks in postgres, when using referential integrity it's quite easy to trigger deadlocks. I think the may be a bug in ri_trigger.c (discussed later). Here's some short example:\n>\n> create table languages (\n> id integer not null,\n> name text not null,\n> primary key(id)\n> );\n>\n> create table entry (\n> id integer not null,\n> lang_id integer,\n> sometext text,\n> primary key (id),\n> foreign key ( lang_id ) references languages (id)\n> );\n>\n> insert into languages values (1, 'english');\n> insert into languages values (2, 'german');\n>\n> insert into entry values (1, 1, 'text 1');\n> insert into entry values (2, 1, 'text 2');\n>\n>\n> transaction A: begin;\n> transaction A: update entry set sometext='text 1.1' where id=1;\n> transaction A: .... do more time-consuming processing here...\n> meanwhile, B: begin;\n> B: update entry set sometext='text 2.1' where id=2;\n>\n> -- both processes hang now\n\n Cannot reproduce that problem in v7.2. Only B blocks until A\n either commits or rolls back. So what exactly is your \"more\n time-consuming processing\"?\n\n>\n> I think this is too much locking here, because the logfile show's something like this:\n> 'select 1 from \"languages\" where id=$1 for update' (2 times).\n>\n> Now I've a lot of tables (around 30) and use referential integrity a lot on ~10 columns (language, country....) , and with more fields it's very easy to deadlock the whole system (it happens a lot in my web applicaiton with ~20 concorrent users).\n>\n> IMHO the \"select ... for update\" on languages is not necessary, since I do not want to update \"lang_id\", but I might be wrong. 
The other problem is, that this will make postgres in benchmarks very slow (with many concurrent connections), at least if the application is not trivial.\n>\n> IMO the problem is in ri_trigger.c around line 390:\n> /* ----------\n> * The query string built is\n> * SELECT 1 FROM ONLY <pktable> WHERE pkatt1 = $1 [AND ...]\n> * The type id's for the $ parameters are those of the\n> * corresponding FK attributes. Thus, SPI_prepare could\n> * eventually fail if the parser cannot identify some way\n> * how to compare these two types by '='.\n> * ----------\n> */\n>\n> Any ideas if this is a bug or simply strict SQL standard?\n\n It does a SELECT ... FOR UPDATE because we don't have a\n SELECT ... AND PLEASE DO NOT REMOVE.\n\n If we would only check if the PK is there now, another\n concurrent transaction could delete the PK, it's own check\n cannot see our uncommitted row yet and we end up with a\n violation. And if you look at the comment a few lines up, it\n explains why we cannot skip the check even if the key value\n doesn't change.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 28 Mar 2002 10:39:35 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: deadlock problems with foreign keys"
},
{
"msg_contents": "\nOn Thu, 28 Mar 2002, Mario Weilguni wrote:\n\n> I've a severe problem with deadlocks in postgres, when using\n> referential integrity it's quite easy to trigger deadlocks. I think\n> the may be a bug in ri_trigger.c (discussed later). Here's some short\n\nYou might want to see recent messages about foreign keys for more\ninformation (in the RI triggers and schemas thread, specifically\nthe message from Alex Hayward).\n\nJan's example of a failure case is why it does what it currently does\nsince AFAIK we don't have a weaker mechanism available to us through SPI\ncurrently that is still sufficiently strong.\n\n",
"msg_date": "Thu, 28 Mar 2002 08:30:53 -0800 (PST)",
"msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>",
"msg_from_op": false,
"msg_subject": "Re: deadlock problems with foreign keys"
},
{
"msg_contents": "Rod Taylor wrote:\n> There was no deadlock in 7.2 with what was provided -- but the second\n> transaction was blocked from doing it's thing by the lock from the\n> first. Perhaps a deadlock is caused by 'do other stuff'?\n>\n> I will agree that a FOR UPDATE is heavy. There is no intention to\n> update the record, we just want to ensure it's NOT updated or deleted.\n> A FOR PREVENT UPDATE lock may be preferable and it should block any\n> other locks while allowing the lock to be 'upgraded' in the case where\n> you hold the only PREVENT UPDATE lock. It wouldn't be exclusive to\n> itself, only other types of locks.\n>\n>\n> All that said, SET CONSTRAINTS ALL DEFERRED at the beginning of the\n> transaction also caused a block on the update with the second\n> transaction. That interests me. Why doesn't the second transaction\n> go through and block the first from using COMMIT?\n\n SET CONSTRAINTS ALL DEFERRED only sets DEFERRABLE\n constraints to DEFERRED. Constraints default to NOT\n DEFERRABLE, so unless you explicitly allowed it at table\n creation, you did a noop.\n\n\nJan\n\n>\n>\n> --\n> Rod Taylor\n>\n> This message represents the official view of the voices in my head\n>\n> ----- Original Message -----\n> From: \"Mario Weilguni\" <mario.weilguni@icomedias.com>\n> To: \"Postgresql Mailinglist (E-Mail)\" <pgsql-hackers@postgresql.org>\n> Sent: Thursday, March 28, 2002 9:44 AM\n> Subject: [HACKERS] deadlock problems with foreign keys\n>\n>\n> I've a severe problem with deadlocks in postgres, when using\n> referential integrity it's quite easy to trigger deadlocks. I think\n> the may be a bug in ri_trigger.c (discussed later). 
Here's some short\n> example:\n>\n> create table languages (\n> id integer not null,\n> name text not null,\n> primary key(id)\n> );\n>\n> create table entry (\n> id integer not null,\n> lang_id integer,\n> sometext text,\n> primary key (id),\n> foreign key ( lang_id ) references languages (id)\n> );\n>\n> insert into languages values (1, 'english');\n> insert into languages values (2, 'german');\n>\n> insert into entry values (1, 1, 'text 1');\n> insert into entry values (2, 1, 'text 2');\n>\n>\n> transaction A: begin;\n> transaction A: update entry set sometext='text 1.1' where id=1;\n> transaction A: .... do more time-consuming processing here...\n> meanwhile, B: begin;\n> B: update entry set sometext='text 2.1' where id=2;\n>\n> -- both processes hang now\n>\n> I think this is too much locking here, because the logfile show's\n> something like this:\n> 'select 1 from \"languages\" where id=$1 for update' (2 times).\n>\n> Now I've a lot of tables (around 30) and use referential integrity a\n> lot on ~10 columns (language, country....) , and with more fields it's\n> very easy to deadlock the whole system (it happens a lot in my web\n> applicaiton with ~20 concorrent users).\n>\n> IMHO the \"select ... for update\" on languages is not necessary, since\n> I do not want to update \"lang_id\", but I might be wrong. The other\n> problem is, that this will make postgres in benchmarks very slow (with\n> many concurrent connections), at least if the application is not\n> trivial.\n>\n> IMO the problem is in ri_trigger.c around line 390:\n> /* ----------\n> * The query string built is\n> * SELECT 1 FROM ONLY <pktable> WHERE pkatt1 = $1 [AND ...]\n> * The type id's for the $ parameters are those of the\n> * corresponding FK attributes. 
Thus, SPI_prepare could\n> * eventually fail if the parser cannot identify some way\n> * how to compare these two types by '='.\n> * ----------\n> */\n>\n> Any ideas if this is a bug or simply strict SQL standard?\n>\n> Best regards,\n> Mario Weilguni\n>\n>\n> ---------------------------(end of\n> broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 28 Mar 2002 13:27:52 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: deadlock problems with foreign keys"
}
] |
[
{
"msg_contents": "just to clarify this, my example does not deadlock. I wanted to provide a simple example, because my application has 109 (this time I counted) tables with a few (~10) \"central\" tables like \"languages\", which a lot of other tables reference. And deadlocks are quite easy to trigger with more tables. I'll try to create a testcase and post it.\n\n-----Original Message-----\nFrom: Rod Taylor [mailto:rbt@zort.ca]\nSent: Thursday, 28 March 2002 16:15\nTo: Mario Weilguni; Hackers List\nSubject: Re: [HACKERS] deadlock problems with foreign keys\n\n\nThere was no deadlock in 7.2 with what was provided -- but the second\ntransaction was blocked from doing it's thing by the lock from the\nfirst. Perhaps a deadlock is caused by 'do other stuff'?\n\nI will agree that a FOR UPDATE is heavy. There is no intention to\nupdate the record, we just want to ensure it's NOT updated or deleted.\nA FOR PREVENT UPDATE lock may be preferable and it should block any\nother locks while allowing the lock to be 'upgraded' in the case where\nyou hold the only PREVENT UPDATE lock. It wouldn't be exclusive to\nitself, only other types of locks.\n\n\nAll that said, SET CONSTRAINTS ALL DEFERRED at the beginning of the\ntransaction also caused a block on the update with the second\ntransaction. That interests me. Why doesn't the second transaction\ngo through and block the first from using COMMIT?\n\n\n--\nRod Taylor\n\nThis message represents the official view of the voices in my head\n\n----- Original Message -----\nFrom: \"Mario Weilguni\" <mario.weilguni@icomedias.com>\nTo: \"Postgresql Mailinglist (E-Mail)\" <pgsql-hackers@postgresql.org>\nSent: Thursday, March 28, 2002 9:44 AM\nSubject: [HACKERS] deadlock problems with foreign keys\n\n\nI've a severe problem with deadlocks in postgres, when using\nreferential integrity it's quite easy to trigger deadlocks. I think\nthe may be a bug in ri_trigger.c (discussed later). 
Here's some short\nexample:\n\ncreate table languages (\n id integer not null,\n name text not null,\n primary key(id)\n);\n\ncreate table entry (\n id integer not null,\n lang_id integer,\n sometext text,\n primary key (id),\n foreign key ( lang_id ) references languages (id)\n);\n\ninsert into languages values (1, 'english');\ninsert into languages values (2, 'german');\n\ninsert into entry values (1, 1, 'text 1');\ninsert into entry values (2, 1, 'text 2');\n\n\ntransaction A: begin;\ntransaction A: update entry set sometext='text 1.1' where id=1;\ntransaction A: .... do more time-consuming processing here...\nmeanwhile, B: begin;\n B: update entry set sometext='text 2.1' where id=2;\n\n-- both processes hang now\n\nI think this is too much locking here, because the logfile show's\nsomething like this:\n'select 1 from \"languages\" where id=$1 for update' (2 times).\n\nNow I've a lot of tables (around 30) and use referential integrity a\nlot on ~10 columns (language, country....) , and with more fields it's\nvery easy to deadlock the whole system (it happens a lot in my web\napplicaiton with ~20 concorrent users).\n\nIMHO the \"select ... for update\" on languages is not necessary, since\nI do not want to update \"lang_id\", but I might be wrong. The other\nproblem is, that this will make postgres in benchmarks very slow (with\nmany concurrent connections), at least if the application is not\ntrivial.\n\nIMO the problem is in ri_trigger.c around line 390:\n/* ----------\n* The query string built is\n* SELECT 1 FROM ONLY <pktable> WHERE pkatt1 = $1 [AND ...]\n* The type id's for the $ parameters are those of the\n* corresponding FK attributes. 
Thus, SPI_prepare could\n* eventually fail if the parser cannot identify some way\n* how to compare these two types by '='.\n* ----------\n*/\n\nAny ideas if this is a bug or simply strict SQL standard?\n\nBest regards,\nMario Weilguni\n\n\n---------------------------(end of\nbroadcast)---------------------------\nTIP 5: Have you checked our extensive FAQ?\n\nhttp://www.postgresql.org/users-lounge/docs/faq.html\n\n\n",
"msg_date": "Thu, 28 Mar 2002 16:44:51 +0100",
"msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>",
"msg_from_op": true,
"msg_subject": "Re: deadlock problems with foreign keys"
}
] |
[
{
"msg_contents": "CREATE TABLE mytesting ( dosnotmatter text );\n\nCREATE INDEX myunique ON mytesting oid;\n\nwill this help to make sure the oid is unique? and is that right?\nif in fact the oid rolls over and insertion fails, will a reinsert get \na new oid or retry the same oid?\nAlex\n\n\n",
"msg_date": "Thu, 28 Mar 2002 09:58:05 -0600",
"msg_from": "Alex Lau <alex@dpcgroup.com>",
"msg_from_op": true,
"msg_subject": "None"
},
{
"msg_contents": "Alex Lau <alex@dpcgroup.com> writes:\n> CREATE TABLE mytesting ( dosnotmatter text );\n> CREATE INDEX myunique ON mytesting oid;\n\n> will this help to make sure the oid is unique?\n\nNo, but\n\nCREATE UNIQUE INDEX myunique ON mytesting (oid);\n\nwould do the trick.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 15:34:12 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: "
}
] |
[
{
"msg_contents": "> It does a SELECT ... FOR UPDATE because we don't have a\n> SELECT ... AND PLEASE DO NOT REMOVE.\n>\n> If we would only check if the PK is there now, another\n> concurrent transaction could delete the PK, it's own check\n> cannot see our uncommitted row yet and we end up with a\n> violation. And if you look at the comment a few lines up, it\n> explains why we cannot skip the check even if the key value\n> doesn't change.\n\nBut that does not apply here, since there is no \"on update set default\" here. So IMO this case should not apply if there is no \"on update set default\"? Or are there other cases where the same restriction applies?\n\nThe problem is this: at the moment there is no such thing as \"row level locking\" in postgres when you use foreign key constraints. This really hits concurrency.\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Thu, 28 Mar 2002 17:33:53 +0100",
"msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>",
"msg_from_op": true,
"msg_subject": "Re: deadlock problems with foreign keys"
}
] |
[
{
"msg_contents": "Now that create or replace function exists, what is alter function\nsupposed to do? MSSQL's alter function does the same as REPLACE. Is\nit simply an alias to the REPLACE case?\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n\n",
"msg_date": "Thu, 28 Mar 2002 11:57:14 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Alter function?"
}
] |
[
{
"msg_contents": "> Under the \"crossSectionTests(Mixed IR)\" part of an OSDB run, a\n> large number of shared_buffers causes severe slowdown on one of\n> the tests -- it goes from a little over 200 seconds to nearly\n> 2000. I suspect internal lock contention, or maybe it's just\n> that the read() path in Linux is quicker than PG's own cache?\n\nMatthew, are you using the --postgresql=no_hash_index option for OSDB? It's conceivable that you are hitting an artifact of the hash index problem here.\n\n/andy\n\n---\nAndy Riebs, andy.riebs@compaq.com High Performance Technical\n(w) 603-884-1521, (fax) 603-884-0630 Computing/Linux Group\n <http://cub.sourceforge.net/> Compaq Computer Corporation\n(h) ariebs@earthlink.net <http://www.compaq.com/linux>\n <http://osdb.sourceforge.net/> <http://opensource.compaq.com>\n",
"msg_date": "Thu, 28 Mar 2002 13:09:44 -0500",
"msg_from": "\"Riebs, Andy\" <Andy.Riebs@compaq.com>",
"msg_from_op": true,
"msg_subject": "Re: Performance Tuning Document?"
},
{
"msg_contents": "On Thu, 28 Mar 2002, Riebs, Andy wrote:\n\n> > Under the \"crossSectionTests(Mixed IR)\" part of an OSDB run, a\n> > large number of shared_buffers causes severe slowdown on one of\n> > the tests -- it goes from a little over 200 seconds to nearly\n> > 2000. I suspect internal lock contention, or maybe it's just\n> > that the read() path in Linux is quicker than PG's own cache?\n>\n> Matthew, are you using the --postgresql=no_hash_index option for OSDB?\n> It's conceivable that you are hitting an artifact of the hash index\n> problem here.\n\nAh, that would make a lot of sense. I'll do a run again with\nthat option and see what turns up.\n\nThanks,\nMatthew.\n\n",
"msg_date": "Thu, 28 Mar 2002 23:15:24 +0000 (GMT)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": false,
"msg_subject": "Re: Performance Tuning Document?"
},
{
"msg_contents": "On Thu, 28 Mar 2002, Matthew Kirkwood wrote:\n\n[ oops, forgot to send this ages ago ]\n\n> > > Under the \"crossSectionTests(Mixed IR)\" part of an OSDB run, a\n> > > large number of shared_buffers causes severe slowdown on one of\n> > > the tests -- it goes from a little over 200 seconds to nearly\n> > > 2000.\n\n> > --postgresql=no_hash_index\n\n> Ah, that would make a lot of sense. I'll do a run again with\n> that option and see what turns up.\n\nThat was right on the nose. The numbers are much better now.\n\nMy initial interest was in benchmarking different filesystems\non Linux. In case anyone is interested, here are today's\nnumbers:\n\n tuning? single ir cs-ir oltp cs-oltp\n (sec) (tps) (sec) (tps) (sec)\next3 kn 841.28 61.52 203.33 407.58 159.72\next3-wb kn 841.19 63.73 217.19 406.30 160.88\next3-jd kn 839.96 58.96 203.02 307.85 159.89\njfs kn 840.53 62.74 205.90 348.33 177.70\nminix kn 841.51 62.12 201.44 343.87 176.68\next2 kn 840.72 65.02 205.40 338.20 182.22\n\next3-wb is ext3 with the \"data=writeback\" mount option. ext3-jd\nis ext3 with \"data=journal\" and a 200Mb journal instead of the\nusual 32Mb one. All filesystems were mounted noatime.\n\npostgresql.conf for all these runs looks like:\n\ntcpip_socket = true\nshared_buffers = 10240\nmax_fsm_relations = 100\nmax_fsm_pages = 10000\nmax_locks_per_transaction = 256\nwal_buffers = 10240\nsort_mem = 5120000\nvacuum_mem = 81920\n\nWithout hash indexes, it looks like only OLTP loads can\ndifferentiate the filesystems. Sometime (once I have got\na more recent kernel going) I'll try a dataset larger than\nmemory.\n\nMatthew.\n\n",
"msg_date": "Sun, 14 Apr 2002 15:25:19 +0100 (BST)",
"msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>",
"msg_from_op": false,
"msg_subject": "Re: Performance Tuning Document?"
}
] |
[
{
"msg_contents": "\nThe archives search is not working on postgresql.org so I need to ask this\nquestion...\n\nWe are using postgresql 7.2 and when dumping one of our larger databases,\nwe get the following error:\n\nFile size limit exceeded (core dumped)\n\nWe suspect pg_dump. Is this true? Why would there be this limit in\npg_dump? Is it scheduled to be fixed?\n\nThanks,\n\n-- \nLaurette Cisneros\nDatabase Roadie\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nWhere's my bus?\n\n\n",
"msg_date": "Thu, 28 Mar 2002 16:30:00 -0800 (PST)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "pg_dump 2GB limit?"
},
{
"msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n\n> The archives search is not working on postgresql.org so I need to ask this\n> question...\n> \n> We are using postgresql 7.2 and when dumping one of our larger databases,\n> we get the following error:\n> \n> File size limit exceeded (core dumped)\n> \n> We suspect pg_dump. Is this true? Why would there be this limit in\n> pg_dump? Is it scheduled to be fixed?\n\nThis means one of two things:\n\n1) Your ulimits are set too low, or\n2) Your pg_dump wasn't compiled against a C library with large file\n support (greater than 2GB).\n\nIs this on Linux?\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "28 Mar 2002 19:35:34 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "\nAre you on linux (most likely)? If so, then your pgsql was compiled\nwithout large file support.\n\nDru Nelson\nSan Carlos, California\n\n\n> The archives search is not working on postgresql.org so I need to ask this\n> question...\n>\n> We are using postgresql 7.2 and when dumping one of our larger databases,\n> we get the following error:\n>\n> File size limit exceeded (core dumped)\n>\n> We suspect pg_dump. Is this true? Why would there be this limit in\n> pg_dump? Is it scheduled to be fixed?\n>\n> Thanks,\n>\n> --\n> Laurette Cisneros\n> Database Roadie\n> (510) 420-3137\n> NextBus Information Systems, Inc.\n> www.nextbus.com\n> Where's my bus?\n\n",
"msg_date": "Thu, 28 Mar 2002 16:42:36 -0800 (PST)",
"msg_from": "dru-sql@redwoodsoft.com",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "Laurette Cisneros writes:\n\n> We are using postgresql 7.2 and when dumping one of our larger databases,\n> we get the following error:\n>\n> File size limit exceeded (core dumped)\n>\n> We suspect pg_dump. Is this true?\n\nNo, it's your operating system.\n\nhttp://www.us.postgresql.org/users-lounge/docs/7.2/postgres/backup.html#BACKUP-DUMP-LARGE\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 28 Mar 2002 19:44:09 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "Hi,\n\nI'm on Red Hat. Here's the uname info:\nLinux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown\n\nWhat do I need to do to \"turn on large file support\" in the compile?\n\nThanks,\n\nL.\nOn 28 Mar 2002, Doug McNaught wrote:\n\n> Laurette Cisneros <laurette@nextbus.com> writes:\n> \n> > The archives search is not working on postgresql.org so I need to ask this\n> > question...\n> > \n> > We are using postgresql 7.2 and when dumping one of our larger databases,\n> > we get the following error:\n> > \n> > File size limit exceeded (core dumped)\n> > \n> > We suspect pg_dump. Is this true? Why would there be this limit in\n> > pg_dump? Is it scheduled to be fixed?\n> \n> This means one of two things:\n> \n> 1) Your ulimits are set too low, or\n> 2) Your pg_dump wasn't compiled against a C library with large file\n> support (greater than 2GB).\n> \n> Is this on Linux?\n> \n> -Doug\n> \n\n-- \nLaurette Cisneros\nDatabase Roadie\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nWhere's my bus?\n\n",
"msg_date": "Thu, 28 Mar 2002 16:46:05 -0800 (PST)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n\n> Hi,\n> \n> I'm on Red Hat. Here's the uname info:\n> Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown\n\nThat's an old and buggy kernel, BTW--you should install the errata\nupgrades.\n\n> What do I need to do to \"turn on large file support\" in the compile?\n\nMake sure you are running the latest kernel and libs, and AFAIK\n'configure' should set it up for you automatically.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "28 Mar 2002 19:50:16 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n\n> Hi,\n> \n> I'm on Red Hat. Here's the uname info:\n> Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown\n> \n> What do I need to do to \"turn on large file support\" in the compile?\n> \n\nIIRC the old version (format) of reiserFS (3.5 ??) has this limit, too. The solution is\nto reformat with the new version (kernel & reiserfsprogs). (A possible test is with _dd_.)\n\n",
"msg_date": "29 Mar 2002 02:06:06 +0100",
"msg_from": "mmc@maruska.dyndns.org (Michal Maruška)",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "\nOops, sent the wrong uname; here's the one from the machine we compiled on:\nLinux lept 2.4.16 #6 SMP Fri Feb 8 13:31:46 PST 2002 i686 unknown\n\nand has: libc-2.2.2.so \n\nWe use ./configure \n\nStill a problem?\n\nWe do compress (-Fc) right now, but are working on a backup scheme that\nrequires an uncompressed dump.\n\nThanks for the help!\n\nL.\n\nOn 28 Mar 2002, Doug McNaught wrote:\n\n> Laurette Cisneros <laurette@nextbus.com> writes:\n> \n> > Hi,\n> > \n> > I'm on Red Hat. Here's the uname info:\n> > Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown\n> \n> That's an old and buggy kernel, BTW--you should install the errata\n> upgrades, \n> \n> > What do I need to do to \"turn on large file support\" in the compile?\n> \n> Make sure you are running the latest kernel and libs, and AFAIK\n> 'configure' should set it up for you automatically.\n> \n> -Doug\n> \n\n-- \nLaurette Cisneros\nDatabase Roadie\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\nWhere's my bus?\n\n",
"msg_date": "Thu, 28 Mar 2002 17:13:31 -0800 (PST)",
"msg_from": "Laurette Cisneros <laurette@nextbus.com>",
"msg_from_op": true,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n\n> Oops sent the wrong uname, here's the one from the machine we compiled on:\n> Linux lept 2.4.16 #6 SMP Fri Feb 8 13:31:46 PST 2002 i686 unknown\n> \n> and has: libc-2.2.2.so \n> \n> We use ./configure \n> \n> Still a problem?\n\nMight be. Make sure you have up to date kernel and libs on the\ncompile machine and the one you're running on. Make sure your\nfilesystem supports files greater than 2GB.\n\nAlso, if you are using shell redirection to create the output file,\nit's possible the shell isn't using the right open() flags.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "28 Mar 2002 20:25:31 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "Laurette Cisneros <laurette@nextbus.com> writes:\n\n> Hi,\n> \n> I'm on Red Hat. Here's the uname info:\n> Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown\n\nYou should really upgrade (kernel and the rest), but this kernel\nsupports large files.\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "28 Mar 2002 22:37:22 -0500",
"msg_from": "teg@redhat.com (Trond Eivind Glomsrød)",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n\n> Laurette Cisneros writes:\n> \n> > We are using postgresql 7.2 and when dumping one of our larger databases,\n> > we get the following error:\n> >\n> > File size limit exceeded (core dumped)\n> >\n> > We suspect pg_dump. Is this true?\n> \n> No, it's your operating sytem.\n\nRed Hat Linux 7.x which he seems to be using supports this.\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "28 Mar 2002 22:45:26 -0500",
"msg_from": "teg@redhat.com (Trond Eivind Glomsrød)",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "> > File size limit exceeded (core dumped)\n> >\n> > We suspect pg_dump. Is this true? Why would there be this limit in\n> > pg_dump? Is it scheduled to be fixed?\n\nTry piping the output of pg_dump through bzip2 before writing it to disk.\nOr else, I think that pg_dump has a -z or similar parameter for turning\non compression.\n\nChris\n\n\n",
"msg_date": "Fri, 29 Mar 2002 12:18:05 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 2GB limit?"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> > > File size limit exceeded (core dumped)\n> > >\n> > > We suspect pg_dump. Is this true? Why would there be this limit in\n> > > pg_dump? Is it scheduled to be fixed?\n>\n> Try piping the output of pg_dump through bzip2 before writing it to disk.\n> Or else, I think that pg_dump has -z or something parameters for turning\n> on compression.\n\n And if that isn't enough, you can pipe the output (compressed\n or not) into split(1).\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Fri, 29 Mar 2002 14:02:43 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: pg_dump 2GB limit?"
}
] |
[
{
"msg_contents": "\nAnd available in /pub/source/v7.2.1 ... this one has both man.tar.gz and\npostgres.tar.gz in it ... someone want to make a quick confirm while the\nmirrors pick it up?\n\n",
"msg_date": "Thu, 28 Mar 2002 21:48:57 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "v7.2.1 re-rolled ..."
},
{
"msg_contents": "> And available in /pub/source/v7.2.1 ... this one has both man.tar.gz and\n> postgres.tar.gz in it ... someone want to make a quick confirm while the\n> mirrors pick it up?\n\nAt a quick glance, it seems ok for me. All regression tests\npassed. Docs version is ok. This is a Linux kernel 2.2.\n--\nTatsuo Ishii\n",
"msg_date": "Fri, 29 Mar 2002 11:33:27 +0900",
"msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: v7.2.1 re-rolled ..."
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n\n> And available in /pub/source/v7.2.1 ... this one has both man.tar.gz and\n> postgres.tar.gz in it ... someone want to make a quick confirm while the\n> mirrors pick it up?\n\nWhen rerolling something which has been on a public ftp server, upping\nthe number to avoid confusion is always a good idea.\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n",
"msg_date": "28 Mar 2002 22:48:21 -0500",
"msg_from": "teg@redhat.com (Trond Eivind Glomsrød)",
"msg_from_op": false,
"msg_subject": "Re: v7.2.1 re-rolled ..."
}
] |
[
{
"msg_contents": "I am attempting to produce a profile for a slow query run on Solaris -\n(c.f. previous thread titled \"Solaris performance\"). \n\nI am a bit confused about the output I am seeing, as the times do not get anywhere close to the 1 hour elapsed time that the query takes....(see \"notes\" \nsection at the end for detail on the query)\n\nI will confess to being a bit of a profile newbie... so if I missed something \nimportant, please don't hesitate to put me right.\n\nSo anyway I thought I would put down here what I actually had...here goes ...\nI tried 2 approaches - tracing via truss and profiling via gprof.\n\ni) Tracing\n\nBefore profiling I used truss on the backend process, latter bits of the \noutput being : (left hand column is the time delta)\n\n0.0003 read(40, \"\\0\\0\\0\\087 KC4\\b\\0\\0\\0\\t\".., 8192) = 8192\n0.0009 lseek(40, 0x14796000, SEEK_SET) = 0x14796000\n0.0002 read(40, \"\\0\\0\\0\\080 t E90\\0\\0\\0\\t\".., 8192) = 8192\n0.0094 read(43, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\".., 8192) = 8192\n0.0154 read(43, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\".., 8192) = 8192\n0.0155 read(43, \"\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\\0\".., 8192) = 8192\n4992.4387 send(7, \" P b l a n k\\0 T\\002 f 3\".., 99, 0) = 99\n0.0012 recv(7, \" Q S E L E C T c u r r\".., 8192, 0) = 22\n0.0011 lseek(36, 0, SEEK_END) = 8192\n0.0013 send(7, \" P b l a n k\\0 T\\001 t i\".., 60, 0) = 60\nrecv(7, 0x002929A8, 8192, 0) (sleeping...)\n46994.9585 recv(7, \" X\", 8192, 0) = 1\n\nIncluding more libraries in the truss'ing showed thousands of loops, consisting \nmainly of memcpy+memcmp, unfortunately this file is _huge_ ...
so I did not take \na copy of it (I am not sending this mail from the Solaris box....).\n\nii) Profiling\n\nRebuilt Postgresql with :\nCFLAGS=-pg ./configure --disable-shared --prefix=/opt/pgsql\n\nstarted up the database, ran the query and shut it down again.\n\nA raw profile gmon.out was created, which I ran gprof on.\n\n\nCall Graph for Top 10\n\ngranularity: each sample hit covers 4 byte(s) for 4.17% of 0.24 seconds\n\n called/total parents \nindex %time self descendents called+self name \tindex\n called/total children\n\n 0.00 0.23 1/1 main [2]\n[1] 95.8 0.00 0.23 1 PostmasterMain [1]\n 0.00 0.22 1/1 pgstat_start [4]\n 0.00 0.01 1/1 reset_shared [11]\n 0.00 0.00 1/1 MemoryContextInit [41]\n 0.00 0.00 1/1 checkDataDir [42]\n 0.00 0.00 1/1 CreateOptsFile [43]\n 0.00 0.00 1/12 AllocSetContextCreate [24]\n 0.00 0.00 1/1 ProcessConfigFile [46]\n 0.00 0.00 1/1 StreamServerPort [47]\n 0.00 0.00 11/25 pqsignal [72]\n 0.00 0.00 2/2 getenv [131]\n 0.00 0.00 2/2 _getopt [4862]\n 0.00 0.00 1/1 _umask [4891]\n 0.00 0.00 1/5 _getpid [4841]\n 0.00 0.00 1/1 EnableExceptionHandling [149]\n 0.00 0.00 1/1 MemoryContextSwitchTo [163]\n 0.00 0.00 1/1 ResetAllOptions [168]\n 0.00 0.00 1/1 SetDataDir [170]\n 0.00 0.00 1/1 IgnoreSystemIndexes [155]\n 0.00 0.00 1/2 FindExec [122]\n 0.00 0.00 1/1 CreateDataDirLockFile [143]\n 0.00 0.00 1/1 RemovePgTempFiles [167]\n 0.00 0.00 1/1 XLOGPathInit [172]\n 0.00 0.00 1/1 DLNewList [148]\n 0.00 0.00 1/1 pqinitmask [193]\n 0.00 0.00 1/1 _sigprocmask [4884]\n 0.00 0.00 1/1 pgstat_init [190]\n\n-----------------------------------------------\n\n 0.00 0.23 1/1 _start [3]\n[2] 95.8 0.00 0.23 1 main [2]\n 0.00 0.23 1/1 PostmasterMain [1]\n 0.00 0.00 2/5 _geteuid [4840]\n 0.00 0.00 1/1 save_ps_display_args [194]\n 0.00 0.00 1/82 malloc [57]\n 0.00 0.00 1/49 _strdup [4802]\n 0.00 0.00 1/2 _getuid [4863]\n 0.00 0.00 1/137 strlen [55]\n 0.00 0.00 1/15 strcmp [83]\n\n-----------------------------------------------\n\n <spontaneous>\n[3] 95.8 0.00 0.23 
_start [3]\n 0.00 0.23 1/1 main [2]\n 0.00 0.00 2/4 atexit [103]\n\n-----------------------------------------------\n\n 0.00 0.22 1/1 PostmasterMain [1]\n[4] 91.7 0.00 0.22 1 pgstat_start [4]\n 0.02 0.19 1/1 pgstat_main [5]\n 0.00 0.01 1/2 _fork [15]\n 0.00 0.00 2/4 fflush [104]\n 0.00 0.00 1/1 on_exit_reset [188]\n 0.00 0.00 1/1 ClosePostmasterPorts [142]\n 0.00 0.00 1/1 exit [179]\n\n-----------------------------------------------\n\n 0.02 0.19 1/1 pgstat_start [4]\n[5] 89.6 0.02 0.19 1 pgstat_main [5]\n 0.00 0.17 30497/30497 select [6]\n 0.02 0.00 30499/30556 _memset [9]\n 0.00 0.01 1/2 _fork [15]\n 0.00 0.00 1/1 pgstat_read_statsfile [32]\n 0.00 0.00 1/1 pgstat_recv_bestart [33]\n 0.00 0.00 1/8 hash_create [20]\n 0.00 0.00 4/4 pgstat_write_statsfile [38]\n 0.00 0.00 1/1 set_ps_display [49]\n 0.00 0.00 14/25 pqsignal [72]\n 0.00 0.00 6/6 _gettimeofday [4838]\n 0.00 0.00 3/8 close [91]\n 0.00 0.00 3/5 read [102]\n 0.00 0.00 2/45 .div [61]\n 0.00 0.00 2/17 .rem [75]\n 0.00 0.00 1/2 _pipe [4864]\n 0.00 0.00 1/1 init_ps_display [184]\n 0.00 0.00 1/82 malloc [57]\n 0.00 0.00 1/1 pgstat_recv_beterm [191]\n\n-----------------------------------------------\n\n 0.00 0.17 30497/30497 pgstat_main [5]\n[6] 70.8 0.00 0.17 30497 select [6]\n 0.00 0.17 30497/30497 _select [8]\n\n-----------------------------------------------\n\n 0.17 0.00 30497/30497 _select [8]\n[7] 70.8 0.17 0.00 30497 _poll [7]\n\n-----------------------------------------------\n\n 0.00 0.17 30497/30497 select [6]\n[8] 70.8 0.00 0.17 30497 _select [8]\n 0.17 0.00 30497/30497 _poll [7]\n 0.00 0.00 6/53 .mul [59]\n\n-----------------------------------------------\n\n 0.00 0.00 1/30556 CLOGShmemInit [44]\n 0.00 0.00 1/30556 StreamServerPort [47]\n 0.00 0.00 1/30556 pgstat_read_statsfile [32]\n 0.00 0.00 1/30556 InitShmemAllocation [45]\n 0.00 0.00 1/30556 LockMethodTableInit [27]\n 0.00 0.00 1/30556 set_ps_display [49]\n 0.00 0.00 2/30556 pgstat_add_backend [31]\n 0.00 0.00 3/30556 XLOGShmemInit [40]\n 0.00 
0.00 9/30556 getiop [28]\n 0.00 0.00 12/30556 MemoryContextCreate [25]\n 0.00 0.00 25/30556 seg_alloc [21]\n 0.02 0.00 30499/30556 pgstat_main [5]\n[9] 8.3 0.02 0.00 30556 _memset [9]\n\n-----------------------------------------------\n\n 0.00 0.01 1/1 reset_shared [11]\n[10] 4.2 0.00 0.01 1 CreateSharedMemoryAndSemaphores [10]\n 0.00 0.01 1/1 IpcMemoryCreate [13]\n 0.00 0.00 1/1 InitLockTable [26]\n 0.00 0.00 1/1 InitShmemIndex [37]\n 0.00 0.00 1/1 InitBufferPool [35]\n 0.00 0.00 1/1 InitFreeSpaceMap [36]\n 0.00 0.00 1/1 XLOGShmemInit [40]\n 0.00 0.00 1/1 InitShmemAllocation [45]\n 0.00 0.00 1/1 CLOGShmemInit [44]\n 0.00 0.00 1/1 BufferShmemSize [139]\n 0.00 0.00 1/1 LockShmemSize [162]\n 0.00 0.00 1/1 XLOGShmemSize [173]\n 0.00 0.00 1/1 CLOGShmemSize [140]\n 0.00 0.00 1/2 LWLockShmemSize [127]\n 0.00 0.00 1/2 SInvalShmemSize [128]\n 0.00 0.00 1/1 FreeSpaceShmemSize [150]\n 0.00 0.00 1/1 CreateSpinlocks [147]\n 0.00 0.00 1/1 CreateLWLocks [144]\n 0.00 0.00 1/1 InitLocks [157]\n 0.00 0.00 1/1 InitProcGlobal [158]\n 0.00 0.00 1/1 CreateSharedInvalidationState [145]\n 0.00 0.00 1/1 PMSignalInit [164]\n\n-----------------------------------------------\n\n\nFlat Graph\n\ngranularity: each sample hit covers 4 byte(s) for 4.17% of 0.24 seconds\n\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 70.8 0.17 0.17 30497 0.01 0.01 _poll [7]\n 8.3 0.19 0.02 30556 0.00 0.00 _memset [9]\n 8.3 0.21 0.02 1 20.00 214.98 pgstat_main [5]\n 4.2 0.22 0.01 2 5.00 5.00 _libc_fork [16]\n 4.2 0.23 0.01 2 5.00 5.00 _syscall [17]\n 4.2 0.24 0.01 internal_mcount [14]\n 0.0 0.24 0.00 30497 0.00 0.01 _select [8]\n 0.0 0.24 0.00 30497 0.00 0.01 select [6]\n 0.0 0.24 0.00 8008 0.00 0.00 LWLockAssign [51]\n 0.0 0.24 0.00 442 0.00 0.00 strcasecmp [52]\n\nIndex by function name\n\n [61] .div [4844] __open64 [4830] _xflsbuf \n [59] .mul [4809] __sigaction [174] assign_defaultxacti\n [75] .rem [4869] __sigfillset [175] assign_xlog_sync_me\n [136] .udiv [4870] 
_atexit_init [103] atexit \n [98] .umul [4821] _cerror [176] atfork_alloc \n [63] AllocSetAlloc [4871] _chmod [177] atfork_append \n [24] AllocSetContextCrea[4872] _cleanup [79] call_hash \n [137] AllocSetFree [4817] _close [42] checkDataDir \n [62] AllocSetFreeIndex [4859] _closedir [178] check_nlspath_env \n [86] AllocSetInit [4813] _doprnt [56] cleanfree \n [39] AllocateFile [4873] _doscan [91] close \n [138] BasicOpenFile [4801] _dowrite [115] element_alloc \n [139] BufferShmemSize [4825] _endopen [179] exit \n [44] CLOGShmemInit [4874] _exithandle [92] fclose \n [140] CLOGShmemSize [4810] _ferror_unlocked [180] fcntl \n [141] ClearDateCache [4819] _fflush_u [104] fflush \n [142] ClosePostmasterPort[4860] _fflush_u_iops [93] find_option \n [143] CreateDataDirLockFi[4861] _fileno [29] fopen \n [144] CreateLWLocks [4834] _findbuf [80] fputc \n [121] CreateLockFile [30] _findiop [181] fputs \n [43] CreateOptsFile [15] _fork [130] fread \n [145] CreateSharedInvalid[4875] _fprintf [73] free \n [10] CreateSharedMemoryA[4814] _free_unlocked [116] free_mem \n [146] CreateSocketLockFil[4824] _fstat64 [182] free_name_value_lis\n [147] CreateSpinlocks [4831] _fwrite_unlocked [183] fscanf \n [148] DLNewList [4835] _getc_unlocked [94] fwrite \n [70] DynaHashAlloc [4845] _getdents64 [105] get_mem \n [149] EnableExceptionHand[4876] _getegid [131] getenv \n [122] FindExec [4840] _geteuid [28] getiop \n [109] FreeFile [4877] _getgid [20] hash_create \n [150] FreeSpaceShmemSize [4862] _getopt [100] hash_estimate_size \n [151] GUC_yy_create_buffe[4841] _getpid [81] hash_search \n [123] GUC_yy_flex_alloc [4838] _gettimeofday [101] hash_select_dirsize\n [124] GUC_yy_flush_buffer[4863] _getuid [89] hash_seq_init \n [125] GUC_yy_get_next_buf[4826] _ioctl [76] hash_seq_search \n [152] GUC_yy_get_previous[4827] _isatty [95] hdefault \n [126] GUC_yy_init_buffer [4878] _libc_cond_init [22] init_htab \n [110] GUC_yy_load_buffer_[4853] _libc_fcntl [184] init_ps_display \n [53] GUC_yylex 
[16] _libc_fork [14] internal_mcount \n [153] GUC_yyrestart [4815] _libc_mutex_init [185] issock \n [154] GUC_yywrap [4822] _libc_open [186] keys_destruct \n [155] IgnoreSystemIndexes[4846] _libc_open64 [2] main \n [34] InitBufTable [4854] _libc_rwlock_init [57] malloc \n [35] InitBufferPool [4811] _libc_sigaction [187] mem_init \n [156] InitFreeList [4879] _libc_thr_keycreate [54] mutex_unlock \n [36] InitFreeSpaceMap [4880] _lseek [68] my_log2 \n [26] InitLockTable [4842] _lseek64 [132] number \n [157] InitLocks [4799] _malloc_unlocked [67] nvmatch \n [158] InitProcGlobal [4881] _memccpy [188] on_exit_reset \n [45] InitShmemAllocation[4798] _memcpy [117] on_proc_exit \n [37] InitShmemIndex [9] _memset [118] on_shmem_exit \n [12] InternalIpcMemoryCr[4818] _morecore [106] open \n [111] InternalIpcSemaphor[4797] _mutex_lock [96] parse_int \n [159] IpcInitKeyAssignmen[4847] _opendir [189] pfree \n [13] IpcMemoryCreate [4864] _pipe [31] pgstat_add_backend \n [112] IpcSemaphoreCreate [7] _poll [190] pgstat_init \n [113] IpcSemaphoreUnlock [4882] _private_sigprocmas [5] pgstat_main \n [84] LWLockAcquire [4883] _profil [32] pgstat_read_statsfi\n [51] LWLockAssign [4848] _putc_unlocked [33] pgstat_recv_bestart\n [85] LWLockRelease [4832] _read [191] pgstat_recv_beterm \n [127] LWLockShmemSize [4828] _readdir [4] pgstat_start \n [160] LockMethodInit [4829] _readdir64 [192] pgstat_sub_backend \n [27] LockMethodTableInit[4805] _realbufend [38] pgstat_write_statsf\n [161] LockMethodTableRena[4849] _rename [193] pqinitmask \n [162] LockShmemSize [4806] _return_negone [72] pqsignal \n [64] MemoryContextAlloc [4796] _return_zero [82] putc \n [25] MemoryContextCreate[4803] _sbrk [102] read \n [41] MemoryContextInit [4804] _sbrk_unlocked [60] realfree \n [163] MemoryContextSwitch [8] _select [133] realloc \n [114] NumLWLocks [4839] _semctl [107] rename \n [164] PMSignalInit [4855] _semget [11] reset_shared \n [1] PostmasterMain [4856] _semop [194] save_ps_display_arg\n [46] 
ProcessConfigFile [4823] _semsys [21] seg_alloc \n [165] ReadControlFile [4836] _setbufend [6] select \n [166] RecordSharedMemoryI[4837] _setorientation [97] set_config_option \n [167] RemovePgTempFiles [18] _shmat [49] set_ps_display \n [168] ResetAllOptions [19] _shmget [74] sigvalid \n [169] SIBufferInit [4812] _sigaction [77] snprintf \n [128] SInvalShmemSize [4816] _sigdelset [99] sprintf \n [170] SetDataDir [4808] _sigemptyset [52] strcasecmp \n [65] ShmemAlloc [4865] _sigfillset [134] strcat \n [23] ShmemInitHash [4884] _sigprocmask [135] strchr \n [87] ShmemInitStruct [4800] _smalloc [83] strcmp \n [171] StreamClose [4866] _so_bind [58] strcpy \n [47] StreamServerPort [4885] _so_connect [88] string_hash \n [129] ValidateBinary [4886] _so_getsockname [55] strlen \n [48] ValidatePgVersion [4887] _so_getsockopt [78] strncpy \n [172] XLOGPathInit [4888] _so_listen [108] strrchr \n [40] XLOGShmemInit [4889] _so_setsockopt [90] strtol \n [173] XLOGShmemSize [4857] _stat [195] t_delete \n[4807] ___errno [4802] _strdup [119] tag_hash \n[4868] __doscan_u [17] _syscall [66] 1:tas \n[4851] __fcntl [4890] _sysconfig [71] 1:tas \n[4852] __filbuf [4891] _umask [69] thr_main \n[4843] __flsbuf [4867] _unlink [120] write \n[4858] __mul64 [4833] _write [50] <cycle 1> \n[4820] __open [4850] _wrtchk \n\nObject modules\n\n 1: /opt/pgsql/bin/postmaster\n\n\nNotes\n-----\n\ni) Background\n\nThis query takes 30s on an old, slow Intel box (333Mhz) running Freebsd.\nIt takes about 1 hour on this Sparc box. 
\nThis suggested something not quite right somewhere.\n\nii) Platform Info\n\nSun E220 2x450MHz UltraSPARC CPU + 2G memory.\nOS + PostgreSQL is installed on 2x36G 15000 RPM Seagate Cheetah drives.\nRemaining IO subsystem is fibre to a Compaq SAN (RAID 5).\nThe PostgreSQL database files are located on this SAN system.\n\n\niii) Version Info\n\ndb1=# SELECT version();\n version\n----------------------------------------------------------------\n PostgreSQL 7.2 on sparc-sun-solaris2.8, compiled by GCC 2.95.2\n(1 row)\n\n\niv) Tables And Indexes\n\nCREATE TABLE dim0 ( d0key INTEGER,\n f1 TIMESTAMP,\n f2 VARCHAR(20),\n f3 VARCHAR(20)\n ); \n\nCREATE TABLE fact0 ( d0key INTEGER,\n d1key INTEGER,\n d2key INTEGER,\n val INTEGER,\n filler TEXT\n ); \n\nCREATE UNIQUE INDEX dim0_pk ON dim0 (d0key);\nCREATE UNIQUE INDEX dim0_q1 ON dim0 (f1);\nCREATE INDEX fact0_q1 ON fact0(d0key) ;\n\n\ndb1=# SELECT relname,reltuples FROM pg_class WHERE relname IN\n('dim0','fact0');\n\n relname | reltuples\n---------+-----------\n dim0 | 10080\n fact0 | 1e+07\n(2 rows) \n\nv) Query Access Plan And Elapsed Time\n\ndb1=# EXPLAIN\nSELECT\n d0.f3,\n count(f.val)\nFROM dim0 d0,\n fact0 f\nWHERE d0.d0key = f.d0key\nAND d0.f1 between '1999-12-01' AND '2000-02-29'\nGROUP BY d0.f3\n; \n\nAggregate (cost=27560.60..27594.10 rows=670 width=18)\n -> Group (cost=27560.60..27577.35 rows=6700 width=18)\n -> Sort (cost=27560.60..27560.60 rows=6700 width=18)\n -> Nested Loop (cost=0.00..27134.84 rows=6700 width=18)\n -> Index Scan using dim0_q1 on dim0 d0 (cost=0.00..4.40 \n rows=87 width=10)\n -> Index Scan using fact0_q1 on fact0 f (cost=0.00..310.06\n rows=77 width=8)\n\nEXPLAIN \ndb1=# \\i qtype1.sql\n timetz\n--------------------\n 18:54:43.366466+12\n(1 row)\n\n f3 | count\n----+-------\n 01 | 30000\n 02 | 28000\n 12 | 30000\n(3 rows)\n\n timetz\n--------------------\n 20:18:01.102664+12\n(1 row) \n\nvi) Non Default PostgreSQL Server 
Parameters\n\nshared_buffers=4000\nsort_mem=10240\nwal_buffers=100\nwal_files=10\n\n\n\n",
"msg_date": "29 Mar 2002 19:24:30 +1200",
"msg_from": "Mark kirkwood <markir@slingshot.co.nz>",
"msg_from_op": true,
"msg_subject": "Re : Solaris Performance - Profiling"
},
{
"msg_contents": "Mark kirkwood <markir@slingshot.co.nz> writes:\n> I will confess to being a bit of a profile newbie... so if I missed something\n> important, please dont hesitate to put me right.\n\nIt looks like you profiled the postmaster (parent process), not the\nbackend you were interested in. I'm aware of two ways to make that\nsort of mistake:\n\n1. Backends will drop their gmon.out files into the database\nsubdirectory they connect to ($PGDATA/base/nnn/gmon.out).\nAny gmon.out you see in the top-level directory is from the postmaster\nitself or a checkpoint process --- neither of which are likely to be\ninteresting.\n\n2. I dunno if Solaris has this problem, but on Linux profiling is not\ninherited across fork(), so the postmaster's children fail to collect\nuseful data at all. There is a patch in current sources to work around\nthis by explicitly re-enabling the ITIMER_PROF interrupt after fork.\nIf this seems like the problem then try the attached (slightly old)\npatch.\n\n\t\t\tregards, tom lane\n\n*** src/backend/postmaster/postmaster.c.orig\tWed Dec 12 14:52:03 2001\n--- src/backend/postmaster/postmaster.c\tMon Dec 17 19:38:29 2001\n***************\n*** 1823,1828 ****\n--- 1823,1829 ----\n {\n \tBackend *bn;\t\t\t\t/* for backend cleanup */\n \tpid_t\t\tpid;\n+ \tstruct itimerval svitimer;\n \n \t/*\n \t * Compute the cancel key that will be assigned to this backend. The\n***************\n*** 1858,1869 ****\n--- 1859,1874 ----\n \tbeos_before_backend_startup();\n #endif\n \n+ \tgetitimer(ITIMER_PROF, &svitimer);\n+ \n \tpid = fork();\n \n \tif (pid == 0)\t\t\t\t/* child */\n \t{\n \t\tint\t\t\tstatus;\n \n+ \t\tsetitimer(ITIMER_PROF, &svitimer, NULL);\n+ \n \t\tfree(bn);\n #ifdef __BEOS__\n \t\t/* Specific beos backend startup actions */\n",
"msg_date": "Fri, 29 Mar 2002 09:44:28 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re : Solaris Performance - Profiling "
},
{
"msg_contents": ">Previously (snippage....)\n>\n> It looks like you profiled the postmaster (parent process), not the\n> backend you were interested in....\n> 1. Backends will drop their gmon.out files into the database\n> subdirectory they connect to ($PGDATA/base/nnn/gmon.out).\n> Any gmon.out you see in the top-level directory is from the postmaster\n> itself or a checkpoint process --- neither of which are likely to be\n> interesting.\n> \nThanks Tom... I looked just about everywhere _except_ there :-(\n\nThe backend profile is in $PGDATA/base/nnnn.\n\nI am a little concerned that the CPU time recorded by the profiler seems\nto be a bit on the short side at 4047.53 seconds (67 minutes), as\nopposed to 119.2 for the backend itself. Are these two numbers supposed\nto match closely? (they did for shorter queries...)\n\nAnyway, in the hope that this captured output is interesting enough, here\nit is... or rather here is some of it (biggish file), so I am leaving a\ncomplete copy under:\n\nhttp://homepages.slingshot.co.nz/~markir/tar/solaris/\n\nLet me know if further information is likely to be helpful. 
I have\nprofiled the same query (on the same dataset) on Freebsd and have the\noutput available at :\n\nhttp://homepages.slingshot.co.nz/~markir/tar/freebsd/\n\nregards\n\nMark\n\nThe interesting bits of the flat profile looks like :\n\n\n % cumulative self self total \n time seconds seconds calls ms/call ms/call name \n 30.8 1268.83 1268.83 internal_mcount [20]\n 19.5 2070.96 802.13 1680561053 0.00 0.00 comparetup_heap [7]\n 18.8 2843.64 772.68 3361298446 0.00 0.00 nocachegetattr [24]\n 4.8 3041.20 197.56 1680926512 0.00 0.00 _memset [27]\n 4.3 3218.96 177.76 1680561053 0.00 0.00 varcharcmp [25]\n 4.1 3386.71 167.75 1680561053 0.00 0.00 qsort_comparetup [6]\n 3.8 3543.21 156.50 1680835238 0.00 0.00 FunctionCall2\t<cycle 14> [23]\n 3.4 3684.64 141.43 1680561053 0.00 0.00 ApplySortFunction [21]\n 3.4 3825.56 140.92 3361298219 0.00 0.00 pg_detoast_datum [28]\n 2.8 3941.52 115.96 1680649052 0.00 0.00 varstr_cmp [26]\n 2.0 4023.16 81.64 1680649731 0.00 0.00 strncmp [29]\n 1.7 4094.92 71.76 _mcount (5712)\n 0.5 4114.65 19.73 1 19730.00 2774314.94 qst [5]\n 0.0 4115.28 0.63 3078 0.20 0.20 _read [41]\n 0.0 4115.59 0.31 616267 0.00 0.00 ExecEvalVar [52]\n 0.0 4115.76 0.17 440367 0.00 0.00 AllocSetReset [62]\n 0.0 4115.93 0.17 264093 0.00 0.00 heap_formtuple [47]\n 0.0 4116.09 0.16 264093 0.00 0.00 ExecTargetList [38]\n 0.0 4116.24 0.15 533566 0.00 0.00 AllocSetAlloc [59]\n 0.0 4116.39 0.15 365666 0.00 0.00 LWLockAcquire [58]\n 0.0 4116.53 0.14 96464 0.00 0.00 .rem [67]\n 0.0 4116.67 0.14 88400 0.00 0.02 heap_fetch [34]\n 0.0 4116.78 0.11 365666 0.00 0.00 LWLockRelease [63]\n 0.0 4116.88 0.10 731332 0.00 0.00 tas [75]\n 0.0 4116.98 0.10 352190 0.00 0.01 ExecProcNode\t<cycle 3> [30]\n 0.0 4117.07 0.09 977319 0.00 0.00 AllocSetFreeIndex [78]\n 0.0 4117.16 0.09 264093 0.00 0.00 ComputeDataSize [82]\n 0.0 4117.25 0.09 264093 0.00 0.00 DataFill [70]\n 0.0 4117.34 0.09 120472 0.00 0.00 _memcmp [80]\n 0.0 4117.43 0.09 89883 0.00 0.01 ReadBufferInternal [36]\n 0.0 4117.52 0.09 
88491 0.00 0.00 _bt_checkkeys\t<cycle 14> [81]\n 0.0 4117.61 0.09 88088 0.00 0.00 HeapTupleSatisfiesSnapshot [74]\n 0.0 4117.70 0.09 5198 0.02 0.02 .div [79]\n 0.0 4117.77 0.07 88497 0.00 0.00 btgettuple\t<cycle 14> [56]\n 0.0 4117.84 0.07 88001 0.00 0.01 ExecNestLoop\t<cycle 3> [44]\n 0.0 4117.90 0.06 528386 0.00 0.00 ExecClearTuple [54]\n 0.0 4117.96 0.06 264093 0.00 0.00 ExecProject [37]\n 0.0 4118.02 0.06 96981 0.00 0.00 hash_search [50]\n 0.0 4118.08 0.06 88177 0.00 0.03 ExecScan [32]\n 0.0 4118.14 0.06 88092 0.00 0.00 _bt_restscan [72]\n 0.0 4118.19 0.05 533564 0.00 0.00 MemoryContextAlloc [55]\n 0.0 4118.24 0.05 443729 0.00 0.00 AllocSetFree [77]\n 0.0 4118.29 0.05 271989 0.00 0.00 _memcpy [92]\n 0.0 4118.34 0.05 264291 0.00 0.00 heap_freetuple [71]\n 0.0 4118.39 0.05 89797 0.00 0.00 UnpinBuffer [86]\n 0.0 4118.44 0.05 88177 0.00 0.02 IndexNext [33]\n 0.0 4118.49 0.05 88094 0.00 0.00 _bt_step [87]\n 0.0 4118.54 0.05 86858 0.00 0.00 PinBuffer [91]\n\n\nand similarly the call graph like :\n\ngranularity: each sample hit covers 4 byte(s) for 0.00% of 4047.53 seconds\n\n called/total parents \nindex %time self descendents called+self name \tindex\n called/total children\n\n 0.00 2778.68 1/1 main [2]\n[1] 68.7 0.00 2778.68 1 PostmasterMain [1]\n 0.00 1391.42 1/1 ServerLoop [8]\n 0.00 1387.23 1/1 load_password_cache [18]\n 0.00 0.01 1/1 load_hba_and_ident [136]\n 0.00 0.01 1/1 reset_shared [138]\n 0.00 0.00 1/1 pgstat_start [221]\n 0.00 0.00 1/1 SSDataBase [220]\n 0.00 0.00 1/1 ProcessConfigFile [335]\n 0.00 0.00 1/1 checkDataDir [345]\n 0.00 0.00 1/1 CreateOptsFile [440]\n 0.00 0.00 1/1 StreamServerPort [443]\n 0.00 0.00 1/1 CreateDataDirLockFile [445]\n 0.00 0.00 1/2 FindExec [560]\n 0.00 0.00 1/1 RemovePgTempFiles [639]\n 0.00 0.00 1/1 XLOGPathInit [696]\n 0.00 0.00 1/1 pgstat_init [697]\n 0.00 0.00 1/1 MemoryContextInit [701]\n 0.00 0.00 1/70 AllocSetContextCreate [505]\n 0.00 0.00 1/1056908 MemoryContextSwitchTo [112]\n 0.00 0.00 11/28 pqsignal [873]\n 0.00 
0.00 2/39 getenv [857]\n 0.00 0.00 2/5 _getopt [4913]\n 0.00 0.00 1/1 _umask [4973]\n 0.00 0.00 1/7 _getpid [4902]\n 0.00 0.00 1/1 EnableExceptionHandling [1096]\n 0.00 0.00 1/1 ResetAllOptions [1120]\n 0.00 0.00 1/1 SetDataDir [1124]\n 0.00 0.00 1/1 IgnoreSystemIndexes [1109]\n 0.00 0.00 1/1 DLNewList [1094]\n 0.00 0.00 1/2 pqinitmask [1077]\n 0.00 0.00 1/11 _sigprocmask [4893]\n\n-----------------------------------------------\n\n 0.00 2778.68 1/1 _start [3]\n[2] 68.7 0.00 2778.68 1 main [2]\n 0.00 2778.68 1/1 PostmasterMain [1]\n 0.00 0.00 2/5 _geteuid [4912]\n 0.00 0.00 1/1 save_ps_display_args [1173]\n 0.00 0.00 1/264 malloc [818]\n 0.00 0.00 1/53 _strdup [4875]\n 0.00 0.00 1/2 _getuid [4939]\n 0.00 0.00 1/3826 strlen [768]\n 0.00 0.00 1/403 strcmp [809]\n\n-----------------------------------------------\n\n <spontaneous>\n[3] 68.7 0.00 2778.68 _start [3]\n 0.00 2778.68 1/1 main [2]\n 0.00 0.00 2/4 atexit [987]\n\n-----------------------------------------------\n\n 0.00 1387.23 1/2 load_password_cache [18]\n 0.00 1387.23 1/2 tuplesort_performsort [19]\n[4] 68.5 0.00 2774.46 2 qsort [4]\n 19.73 2754.58 1/1 qst [5]\n 0.01 0.14 88012/1680561053 qsort_comparetup [6]\n 0.00 0.00 1/61473 .umul [154]\n\n-----------------------------------------------\n\n 19.73 2754.58 1/1 qsort [4]\n[5] 68.5 19.73 2754.58 1 qst [5]\n 167.74 2586.81 1680473041/1680561053 qsort_comparetup [6]\n 0.02 0.00 59630/59703 .udiv [114]\n 0.01 0.00 59630/61473 .umul [154]\n\n-----------------------------------------------\n\n 0.01 0.14 88012/1680561053 qsort [4]\n 167.74 2586.81 1680473041/1680561053 qst [5]\n[6] 68.1 167.75 2586.95 1680561053 qsort_comparetup [6]\n 802.13 1784.82 1680561053/1680561053 comparetup_heap [7]\n\n-----------------------------------------------\n\n 802.13 1784.82 1680561053/1680561053 qsort_comparetup [6]\n[7] 63.9 802.13 1784.82 1680561053 comparetup_heap [7]\n 141.43 870.75 1680561053/1680561053 ApplySortFunction [21]\n 772.64 0.00 3361122106/3361298446 
nocachegetattr [24]\n\n-----------------------------------------------\n\n 0.00 1391.42 1/1 PostmasterMain [1]\n[8] 34.4 0.00 1391.42 1 ServerLoop [8]\n 0.00 1391.42 1/1 BackendStartup [9]\n 0.00 0.00 1/1 ConnCreate [526]\n 0.00 0.00 4/271989 _memcpy [92]\n 0.00 0.00 1/1 initMasks [750]\n 0.00 0.00 4/11 _sigprocmask [4893]\n 0.00 0.00 2/7 _gettimeofday [4903]\n 0.00 0.00 2/2 select [1078]\n\n-----------------------------------------------\n\n 0.00 1391.42 1/1 ServerLoop [8]\n[9] 34.4 0.00 1391.42 1 BackendStartup [9]\n 0.00 1391.42 1/1 DoBackend [10]\n 0.00 0.00 1/3 _fork [152]\n 0.00 0.00 2/8 fflush [915]\n 0.00 0.00 1/5 PostmasterRandom [945]\n 0.00 0.00 1/264 malloc [818]\n 0.00 0.00 1/99 free [836]\n\n-----------------------------------------------\n\n 0.00 1391.42 1/1 BackendStartup [9]\n[10] 34.4 0.00 1391.42 1 DoBackend [10]\n 0.00 1391.41 1/1 PostgresMain [11]\n 0.01 0.00 1/1 ClientAuthentication [145]\n 0.00 0.00 1/1 ProcessStartupPacket [535]\n 0.00 0.00 1/1 enable_sigalrm_interrupt [536]\n 0.00 0.00 2/2140 strncpy [155]\n 0.00 0.00 1/9 set_ps_display [502]\n 0.00 0.00 1/1 init_ps_display [729]\n 0.00 0.00 1/62 sprintf [515]\n 0.00 0.00 1/33 MemoryContextDelete\t<cycle 15> [571]\n 0.00 0.00 1/1056908 MemoryContextSwitchTo [112]\n 0.00 0.00 3/28 pqsignal [873]\n 0.00 0.00 2/11 _sigprocmask [4893]\n 0.00 0.00 2/2 split_opts [1081]\n 0.00 0.00 1/1 on_exit_reset [1160]\n 0.00 0.00 1/1 ClosePostmasterPorts [1089]\n 0.00 0.00 1/7 _getpid [4902]\n 0.00 0.00 1/1 pq_init [1170]\n 0.00 0.00 1/1 disable_sigalrm_interrupt [1145]\n 0.00 0.00 1/7 _gettimeofday [4903]\n 0.00 0.00 1/2 srandom [1082]\n\n-----------------------------------------------\n\n 0.00 1391.41 1/1 DoBackend [10]\n[11] 34.4 0.00 1391.41 1 PostgresMain [11]\n 0.00 1391.39 3/3 pg_exec_query_string [12]\n 0.00 0.01 1/1 InitPostgres [134]\n 0.00 0.00 5/9 set_ps_display [502]\n 0.00 0.00 1/1 proc_exit [537]\n 0.00 0.00 1/1 BaseInit [582]\n 0.00 0.00 4/4 makeStringInfo [612]\n 0.00 0.00 4/7 
MemoryContextResetAndDeleteChildren [603]\n 0.00 0.00 4/4 ReadyForQuery [725]\n 0.00 0.00 1/70 AllocSetContextCreate [505]\n 0.00 0.00 1/10 pq_endmessage [593]\n 0.00 0.00 1/14 initStringInfo [577]\n 0.00 0.00 2/31 pq_sendint [587]\n 0.00 0.00 4/1056908 MemoryContextSwitchTo [112]\n 0.00 0.00 14/28 pqsignal [873]\n 0.00 0.00 7/7 pgstat_report_activity [929]\n 0.00 0.00 4/4 pgstat_report_tabstat [997]\n 0.00 0.00 4/4 IsTransactionBlock [977]\n 0.00 0.00 4/4 EnableNotifyInterrupt [969]\n 0.00 0.00 4/4 ReadCommand [978]\n 0.00 0.00 4/4 DisableNotifyInterrupt [968]\n 0.00 0.00 3/5 _getopt [4913]\n 0.00 0.00 3/3 strspn [1042]\n 0.00 0.00 2/11 _sigprocmask [4893]\n 0.00 0.00 1/1 set_default_datestyle [1175]\n 0.00 0.00 1/53 _strdup [4875]\n 0.00 0.00 1/1 atoi [1139]\n 0.00 0.00 1/2 pqinitmask [1077]\n 0.00 0.00 1/39 _sigdelset [4879]\n 0.00 0.00 1/10 pq_sendbyte [906]\n 0.00 0.00 1/1 pgstat_bestart [1164]\n 0.00 0.00 1/1 _sigsetjmp [4965]\n\n-----------------------------------------------\n\n 0.00 1391.39 3/3 PostgresMain [11]\n[12] 34.4 0.00 1391.39 3 pg_exec_query_string [12]\n 0.00 1391.35 3/3 ProcessQuery [13]\n 0.00 0.03 3/3 pg_plan_query [95]\n 0.00 0.02 3/3 pg_analyze_and_rewrite [125]\n 0.00 0.00 3/3 start_xact_command [287]\n 0.00 0.00 3/3 pg_parse_query [501]\n 0.00 0.00 3/3 finish_xact_command [516]\n 0.00 0.00 3/3 SetQuerySnapshot [585]\n 0.00 0.00 3/3 CommandCounterIncrement [626]\n 0.00 0.00 3/7 MemoryContextResetAndDeleteChildren [603]\n 0.00 0.00 12/1056908 MemoryContextSwitchTo [112]\n 0.00 0.00 3/3 IsAbortedTransactionBlockState [1017]\n\n-----------------------------------------------\n\n 0.00 1391.35 3/3 pg_exec_query_string [12]\n[13] 34.4 0.00 1391.35 3 ProcessQuery [13]\n 0.00 1391.32 3/3 ExecutorRun [14]\n 0.00 0.02 3/3 ExecutorEnd [103]\n 0.00 0.01 3/3 ExecutorStart [177]\n 0.00 0.00 3/9 set_ps_display [502]\n 0.00 0.00 3/3 BeginCommand [572]\n 0.00 0.00 3/3 EndCommand [634]\n 0.00 0.00 3/3 CreateExecutorState [674]\n 0.00 0.00 3/3 
CreateQueryDesc [685]\n 0.00 0.00 3/3 CreateOperationTag [1009]\n 0.00 0.00 3/3 UpdateCommandInfo [1020]\n\n-----------------------------------------------\n\n 0.00 1391.32 3/3 ProcessQuery [13]\n[14] 34.4 0.00 1391.32 3 ExecutorRun [14]\n 0.00 1391.32 3/3 ExecutePlan [15]\n 0.00 0.00 3/3 printtup_cleanup [671]\n 0.00 0.00 3/3 DestToFunction [686]\n 0.00 0.00 3/3 printtup_setup [1037]\n\n-----------------------------------------------\n\n 0.00 1391.32 3/3 ExecutorRun [14]\n[15] 34.4 0.00 1391.32 3 ExecutePlan [15]\n 0.20 1391.12 8/8 ExecProcNode\t<cycle 3> [30]\n 0.00 0.00 5/5 ExecRetrieve [238]\n\n-----------------------------------------------\n\n[16] 34.4 0.20 1391.12 8+704194 <cycle 3 as a whole>\t[16]\n 0.03 1387.42 88001 ExecSort\t<cycle 3> [17]\n 0.10 2.50 352190 ExecProcNode\t<cycle 3> [30]\n 0.07 0.50 88001 ExecNestLoop\t<cycle 3> [44]\n 0.00 0.55 88003 ExecGroupEveryTuple\t<cycle 3> [45]\n 0.00 0.15 4 ExecAgg\t<cycle 3> [65]\n 0.00 0.00 88003 ExecGroup\t<cycle 3> [763]\n\n-----------------------------------------------\n\n 88001 ExecProcNode\t<cycle 3> [30]\n[17] 34.3 0.03 1387.42 88001 ExecSort\t<cycle 3> [17]\n 0.00 1387.23 1/1 tuplesort_performsort [19]\n 0.02 0.12 88000/88000 tuplesort_puttuple [69]\n 0.01 0.04 88001/528274 ExecStoreTuple [53]\n 0.00 0.00 1/1 tuplesort_begin_heap [332]\n 0.00 0.00 1/1 ExtractSortKeys [705]\n 0.00 0.00 2/443727 pfree [73]\n 0.00 0.00 88001/88001 tuplesort_gettuple [765]\n 0.00 0.00 1/8 ExecGetTupType [912]\n 0.00 0.00 1/9 ExecAssignResultType [907]\n 88001 ExecProcNode\t<cycle 3> [30]\n\n-----------------------------------------------\n\n 0.00 1387.23 1/1 PostmasterMain [1]\n[18] 34.3 0.00 1387.23 1 load_password_cache [18]\n 0.00 1387.23 1/2 qsort [4]\n 0.00 0.00 2/2 fgets [384]\n 0.00 0.00 1/1 crypt_openpwdfile [444]\n 0.00 0.00 1/92 MemoryContextStrdup [486]\n 0.00 0.00 1/533564 MemoryContextAlloc [55]\n 0.00 0.00 1/3826 strlen [768]\n 0.00 0.00 1/7 FreeFile 
[925]\n\n-----------------------------------------------\n\n 0.00 1387.23 1/1 ExecSort\t<cycle 3> [17]\n[19] 34.3 0.00 1387.23 1 tuplesort_performsort [19]\n 0.00 1387.23 1/2 qsort [4]\n\n-----------------------------------------------\n\n <spontaneous>\n[20] 31.3 1268.83 0.00 internal_mcount [20]\n 0.00 0.00 1/4 atexit [987]\n\n-----------------------------------------------\n\n 141.43 870.75 1680561053/1680561053 comparetup_heap [7]\n[21] 25.0 141.43 870.75 1680561053 ApplySortFunction [21]\n 156.67 714.08 1680561053/1680739842 FunctionCall2\t<cycle 14> [23]\n\n-----------------------------------------------\n\n[22] 21.5 156.69 714.15 1680739842+368109 <cycle 14 as a whole>\t[22]\n 156.50 713.84 1680835238 FunctionCall2\t<cycle 14> [23]\n 0.07 0.17 88497 btgettuple\t<cycle 14> [56]\n 0.03 0.13 88092 _bt_next\t<cycle 14> [64]\n 0.09 0.00 88491 _bt_checkkeys\t<cycle 14> [81]\n 0.00 0.02 405 _bt_search\t<cycle 14> [117]\n 0.00 0.00 405 _bt_first\t<cycle 14> [270]\n 0.00 0.00 5666 _bt_compare\t<cycle 14> [4858]\n 0.00 0.00 781 _bt_binsrch\t<cycle 14> [4862]\n 0.00 0.00 376 _bt_moveright\t<cycle 14> [4865]\n\n-----------------------------------------------\n\n",
"msg_date": "02 Apr 2002 22:01:09 +1200",
"msg_from": "Mark kirkwood <markir@slingshot.co.nz>",
"msg_from_op": true,
"msg_subject": "Re: Re : Solaris Performance - Profiling"
},
{
"msg_contents": "Mark kirkwood <markir@slingshot.co.nz> writes:\n> I am a little concerned that the cpu time recored by the profiler seems\n> to be a bit on the short side at 4047.53 seconds (67 minutes), as\n> opposed to 119.2 for the backend itself. Are these two numbers supposed\n> to match closely ? (they did for shorter queries..)\n\nHmm. Where exactly did you get those numbers from? I see 4118.54 sec\nas the total CPU accounted for in the profile.\n\nSome versions of gprof hide the time spent in the profiler subroutine,\nleading to discrepancies between the time accounted for in the profile\nand the actual process CPU time. However, yours doesn't seem to be one\nof them --- internal_mcount is right there.\n\n> Anyway in the hope that this captured output is interesting enough, here\n> it is...or rather here is some of it (biggish file), so I am leaving a\n> complete copy under :\n> http://homepages.slingshot.co.nz/~markir/tar/solaris/\n> Let me know if further information is likely to be helpful. I have\n> profiled the same query (on the same dataset) on Freebsd and have the\n> output available at :\n> http://homepages.slingshot.co.nz/~markir/tar/freebsd/\n\nHmm. 
Assuming that the profile data is trustworthy and the queries are\nindeed the same (did you compare EXPLAIN output?), it seems that\nSolaris' problem is a spectacularly bad qsort() implementation.\nThe FreeBSD profile shows:\n\n 1 qsort [29]\n 0.00 0.31 1/2 load_password_cache [44]\n 0.00 0.31 1/2 tuplesort_performsort [45]\n[29] 9.8 0.01 0.62 2+1 qsort [29]\n 0.01 0.62 148033/148033 qsort_comparetup [30]\n 1 qsort [29]\n\nwhere Solaris has\n\n 0.00 1387.23 1/2 load_password_cache [18]\n 0.00 1387.23 1/2 tuplesort_performsort [19]\n[4] 68.5 0.00 2774.46 2 qsort [4]\n 19.73 2754.58 1/1 qst [5]\n 0.01 0.14 88012/1680561053 qsort_comparetup [6]\n 0.00 0.00 1/61473 .umul [154]\n\n-----------------------------------------------\n\n 19.73 2754.58 1/1 qsort [4]\n[5] 68.5 19.73 2754.58 1 qst [5]\n 167.74 2586.81 1680473041/1680561053 qsort_comparetup [6]\n 0.02 0.00 59630/59703 .udiv [114]\n 0.01 0.00 59630/61473 .umul [154]\n\nand all the rest of the top profile entries are explained by the fact\nthat qsort_comparetup is called 1.68 billion times instead of 148K\ntimes.\n\nCan these really be the same dataset? Can Solaris' qsort really be that\noutstandingly incompetent? How many rows are actually being sorted\nhere, anyway?\n\nIt might be entertaining to snarf a qsort off the net (from glibc,\nperhaps) and link it into Postgres to see if you get better results.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 11:02:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re : Solaris Performance - Profiling "
},
{
"msg_contents": "On Wed, 2002-04-03 at 04:02, Tom Lane wrote:\n > \n> Hmm. Where exactly did you get those numbers from? I see 4118.54 sec\n> as the total CPU accounted for in the profile.\n>\nOdd... the call graph has 4047.53 and the flat graph has 4118.54\n>\n> Hmm. Assuming that the profile data is trustworthy and the queries are\n> indeed the same (did you compare EXPLAIN output?), it seems that\n> Solaris' problem is a spectacularly bad qsort() implementation.\n>\nA bit of surfing finds heaps of unhappy Solaris qsort users... apparently\nit cannot sort lists with many repeated items... so our GROUP BY will be\ncausing it grief here\n>\n> It might be entertaining to snarf a qsort off the net (from glibc,\n> perhaps) and link it into Postgres to see if you get better results.\n> \n> \t\t\tregards, tom lane\n>\nIndeed it is - obtained qsort.c from FreeBSD CVS and rebuilt PostgreSQL:\nThe query now takes 6 seconds instead of 1 hour! Thanks for an\nexcellent suggestion.\n\nFor those in need of a quick fix:\n\nI did a cheap and dirty mod to src/backend/utils/sort/Makefile\n\nchanged OBJS = logtape.o -> OBJS = qsort.o logtape.o\n\nand copied qsort.c into this directory\n\n(had to comment out a couple of lines to compile under Solaris:\n\n/*#include <sys/cdefs.h>\n__FBSDID(\"$FreeBSD: src/lib/libc/stdlib/qsort.c,v 1.11 2002/03/22\n21:53:10 obrien Exp $\");\n*/\n\n)\n\nWhat do you think about providing something like this for the Solaris\nport? (since it's clearly quicker...)\n\nregards\n\nMark\n",
"msg_date": "03 Apr 2002 19:00:06 +1200",
"msg_from": "Mark kirkwood <markir@slingshot.co.nz>",
"msg_from_op": true,
"msg_subject": "Re: Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "Hi Tom,\n\nHow about we include this and have configure somehow ensure the Solaris\nusers get it automatically?\n\nThere are a *bunch* of Solaris users out there.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nMark kirkwood wrote:\n> \n> On Wed, 2002-04-03 at 04:02, Tom Lane wrote:\n> >\n> > Hmm. Where exactly did you get those numbers from? I see 4118.54 sec\n> > as the total CPU accounted for in the profile.\n> >\n> odd ...the call graph has 4047.53 and the flat graph has 4118.54\n> >\n> > Hmm. Assuming that the profile data is trustworthy and the queries are\n> > indeed the same (did you compare EXPLAIN output?), it seems that\n> > Solaris' problem is a spectacularly bad qsort() implementation.\n> >\n> A bit surfing finds heaps of unhappy Solaris qsort users... apparently\n> it cannot sort lists with many repeated items... so our GROUP BY will be\n> causing it grief here\n> >\n> > It might be entertaining to snarf a qsort off the net (from glibc,\n> > perhaps) and link it into Postgres to see if you get better results.\n> >\n> > regards, tom lane\n> >\n> Indeed it is - obtained qsort.c from Freebsd CVS and rebuilt Postgresql :\n> The query now takes 6 seconds instead of 1 hour ! Thanks for an\n> excellent suggestion.\n> \n> For those in need to a quick fix :\n> \n> I did a cheap and dirty mod to src/backend/utils/sort/Makefile\n> \n> changed OBJS = logtape.o -> OBJS = qsort.o logtape.o\n> \n> and copied qsort.c into this directory\n> \n> (had to comment out a couple of lines to compile under Solaris :\n> \n> /*#include <sys/cdefs.h>\n> __FBSDID(\"$FreeBSD: src/lib/libc/stdlib/qsort.c,v 1.11 2002/03/22\n> 21:53:10 obrien Exp $\");\n> */\n> \n> )\n> \n> What do you think about providing something like this for the Solaris\n> port ? 
(since its clearly quicker...)\n> \n> regards\n> \n> Mark\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 03 Apr 2002 17:59:14 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "Just curious - why is solaris qsort that way? Any good reasons? I saw a \nvery old post by a solaris guy, but it didn't seem very convincing.\n\nBy the way are there faster sorts which Postgresql can use for its sorting \nother than quick sort? e.g. BSD 4.4 radixsort (which DJB seems to keep \ngoing on about :)).\n\nWould it make a significant improvement in performance?\n\nCheerio,\nLink.\n\np.s. We have postgresql on solaris too ;).\n\nAt 05:59 PM 4/3/02 +1000, you wrote:\n>Hi Tom,\n>\n>How about we include this and have configure somehow ensure the Solaris\n>users get it automatically?\n>\n>There are a *bunch* of Solaris users out there.\n>\n>:-)\n\n\n",
"msg_date": "Wed, 03 Apr 2002 23:13:31 +0800",
"msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>",
"msg_from_op": false,
"msg_subject": "Sorting. Re: Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Justin Clift <justin@postgresql.org> writes:\n> > Mark kirkwood wrote:\n> >> Indeed it is - obtained qsort.c from Freebsd CVS and rebuilt Postgresql :\n> >> The query now takes 6 seconds instead of 1 hour ! Thanks for an\n> >> excellent suggestion.\n> \n> > How about we include this and have configure somehow ensure the Solaris\n> > users get it automatically?\n> \n> > There are a *bunch* of Solaris users out there.\n> \n> Hmm. I suppose there'd be no license issues with borrowing a BSD qsort.\n> But I can't see any reasonable way for configure to decide automatically\n> whether we should replace the system qsort. I think we'd have to put\n> a USE_PRIVATE_QSORT symbol definition into src/template/solaris.\n> \n> Can anyone see a problem with doing it that way --- are there any\n> versions of Solaris where this'd be a bad idea?\n\nI noticed poor performance on Solaris, does one see this problem when compiling\nPostgreSQL with gcc on solaris?\n\nAs a suggestion, why not find the *best* version of qsort available, anywhere,\nand always use that version on all platforms?\n",
"msg_date": "Wed, 03 Apr 2002 10:23:33 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> Mark kirkwood wrote:\n>> Indeed it is - obtained qsort.c from Freebsd CVS and rebuilt Postgresql :\n>> The query now takes 6 seconds instead of 1 hour ! Thanks for an\n>> excellent suggestion.\n\n> How about we include this and have configure somehow ensure the Solaris\n> users get it automatically?\n\n> There are a *bunch* of Solaris users out there.\n\nHmm. I suppose there'd be no license issues with borrowing a BSD qsort.\nBut I can't see any reasonable way for configure to decide automatically\nwhether we should replace the system qsort. I think we'd have to put\na USE_PRIVATE_QSORT symbol definition into src/template/solaris.\n\nCan anyone see a problem with doing it that way --- are there any\nversions of Solaris where this'd be a bad idea?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 10:23:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re : Solaris Performance - Profiling (Solved) "
},
{
"msg_contents": "On Wed, Apr 03, 2002 at 10:23:41AM -0500, Tom Lane wrote:\n> Justin Clift <justin@postgresql.org> writes:\n\n> > How about we include this and have configure somehow ensure the Solaris\n> > users get it automatically?\n> \n> Hmm. I suppose there'd be no license issues with borrowing a BSD qsort.\n> But I can't see any reasonable way for configure to decide automatically\n> whether we should replace the system qsort. I think we'd have to put\n> a USE_PRIVATE_QSORT symbol definition into src/template/solaris.\n> \n> Can anyone see a problem with doing it that way --- are there any\n> versions of Solaris where this'd be a bad idea?\n\nWould it be possible instead to make it a --configure option, or just\nto add a note to the Solaris FAQ about adding an option to CFLAGS? \nI'd be leery of automatically replacing system libraries, if only\nbecause it might surprise people. Clearly the improvement is a win\nin this case, but if Sun fixes their library, it might be yet faster.\n\n(By the way, I've been following this thread, and noticed that the\nproblem shows up with gcc 2.95; AFAIK, 2.95 couldn't generate 64 bit\nSolaris binaries, so we can be fairly certain the problem is in the\n32 bit library. Maybe the 64 bit one is better? I _may_ have time\nto check this week, but it's looking unlikely. If no-one else does,\nI'll try it out as soon as I can.)\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Wed, 3 Apr 2002 10:41:00 -0500",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n\n> I noticed poor performance on Solaris, does one see this problem\n> when compiling PostgreSQL with gcc on solaris?\n\nSince it's libc that's the culprit, I would imagine so.\n\n> As a suggestion, why not find the *best* version of qsort available,\n> anywhere, and always use that version on all platforms?\n\nBecause qsort() is *supposed* to be optimized by the vendor for their\nplatform, perhaps even written in assembler. It makes sense to trust\nthe vendor except when their implementation is provably pessimized.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "03 Apr 2002 10:49:28 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "Doug McNaught wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> \n> > I noticed poor performance on Solaris, does one see this problem\n> > when compiling PostgreSQL with gcc on solaris?\n> \n> Since it's libc that's the culprit, I would imagine so.\n\nThanks, that explains what I have seen.\n> \n> > As a suggestion, why not find the *best* version of qsort available,\n> > anywhere, and always use that version on all platforms?\n> \n> Because qsort() is *supposed* to be optimized by the vendor for their\n> platform, perhaps even written in assembler. It makes sense to trust\n> the vendor except when their implementation is provably pessimized.\n\nPerhaps *supposed* to be optimized, but, in reality, are they? Is it a\nrealistic expectation?\n\nqsort() is a great sort for very random data, when data is mostly in the\ncorrect order, it is very bad. Perhaps replacing it with an alternate sort\nwould improve performance in general.\n",
"msg_date": "Wed, 03 Apr 2002 10:57:08 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n\n> > Because qsort() is *supposed* to be optimized by the vendor for their\n> > platform, perhaps even written in assembler. It makes sense to trust\n> > the vendor except when their implementation is provably pessimized.\n> \n> Perhaps *supposed* to be optimized, but, in reality, are they? Is it a\n> realistic expectation?\n\nI think most vendors do a pretty good job. Don't forget, optimizing a\nroutine like that depends a lot on the cache size and behavior of the\nCPU and other architecture-dependent stuff. \n\n> qsort() is a great sort for very random data, when data is mostly in the\n> correct order, it is very bad. Perhaps replacing it with an alternate sort\n> would improve performance in general.\n\nActually, the C standard says nothing about what algorithm should be\nused for qsort(); it's simply supposed to be a fast in-memory sort.\nThe qsort() name is just a historical artifact.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "03 Apr 2002 11:17:35 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "Doug McNaught wrote:\n> I think most vendors do a pretty good job. Don't forget, optimizing a\n> routine like that depends a lot on the cache size and behavior of the\n> CPU and other architecture-dependent stuff. \n\n>> qsort() is a great sort for very random data, when data is mostly in the\n>> correct order, it is very bad. Perhaps replacing it with an alternate sort\n>> would improve performance in general.\n\n> Actually, the C standard says nothing about what algorithm should be\n> used for qsort(); it's simply supposed to be a fast in-memory sort.\n> The qsort() name is just a historical artifact.\n\nPerhaps so, but maybe that is the issue with Solaris, it actually may use qsort\nalgorithm.\n\nI am not too sure how one would optimize the qsort() API on a platform basis.\nThe sorting algorithm uses a callback function, this precludes any meaningful\noptimization. Yea, you can play with memory page sizes, and so on, but you\nstill have to do a function call for some multiple of the number of elements in\nthe array.\n",
"msg_date": "Wed, 03 Apr 2002 11:24:18 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "Doug McNaught <doug@wireboard.com> writes:\n> Actually, the C standard says nothing about what algorithm should be\n> used for qsort(); it's simply supposed to be a fast in-memory sort.\n> The qsort() name is just a historical artifact.\n\nIn practice I believe qsort usually is a quicksort; it's just too good\nof a general-purpose algorithm. However you do need a good heuristic\nfor selecting the median value to split on, or you can get burnt by\ncorner cases. I'm guessing that Sun was careless and got burnt ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 11:25:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re : Solaris Performance - Profiling (Solved) "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Doug McNaught <doug@wireboard.com> writes:\n> > Actually, the C standard says nothing about what algorithm should be\n> > used for qsort(); it's simply supposed to be a fast in-memory sort.\n> > The qsort() name is just a historical artifact.\n> \n> In practice I believe qsort usually is a quicksort; it's just too good\n> of a general-purpose algorithm. However you do need a good heuristic\n> for selecting the median value to split on, or you can get burnt by\n> corner cases. I'm guessing that Sun was careless and got burnt ...\n\nquicksort is a pretty poor algorithm if your data is in some kind of order\nalready. If you are sorting a list that is mostly in the order in which you\nwant, it will perform very badly indeed. If you could choose the sorting\nalgorithm based on knowledge of the order of the data, it may improve\nperformance.\n\nData retrieved from an index scan is likely to be in some sort of order. If the\norder of the data retrieved from an index scan is the same as the order to\nwhich it will be sorted at a later stage of the query, quicksort will probably\nbe worse than something like shell sort.\n",
"msg_date": "Wed, 03 Apr 2002 11:32:57 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> On Wed, Apr 03, 2002 at 10:23:41AM -0500, Tom Lane wrote:\n>> But I can't see any reasonable way for configure to decide automatically\n>> whether we should replace the system qsort. I think we'd have to put\n>> a USE_PRIVATE_QSORT symbol definition into src/template/solaris.\n\n> Would it be possible instead to make it a --configure option, or just\n> to add a note to the Solaris FAQ about adding an option to CFLAGS? \n\nI think the default should be to replace, but we could probably have a\nconfigure option to prevent it --- or to force it, in case people want\nto try a non-system qsort on other platforms besides Solaris. Whenever\nI see something like this, I wonder whether the problem is more\nwidespread than we know.\n\n> ... so we can be fairly certain the problem is in the\n> 32 bit library. Maybe the 64 bit one is better?\n\nGood point. Please check it out and let us know.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 11:35:30 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - Profiling (Solved) "
},
{
"msg_contents": "Mark kirkwood wrote:\n> > It might be entertaining to snarf a qsort off the net (from glibc,\n> > perhaps) and link it into Postgres to see if you get better results.\n> > \n> > \t\t\tregards, tom lane\n> >\n> Indeed it is - obtained qsort.c from Freebsd CVS and rebuilt Postgresql :\n> The query now takes 6 seconds instead of 1 hour ! Thanks for an\n> excellent suggestion.\n\nThat is shocking. I have the Solaris 8 source code here so if people\nwant info on exactly what is done in their qsort() routine, I can supply\nthat. How can a routine change a query from 1 hour to 6 seconds?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Apr 2002 13:41:14 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> quicksort is a pretty poor algorithm if your data is in some kind of order\n> already.\n\nOnly if you fail to take standard precautions against making a bad\nchoice of pivot element; every discussion I've ever seen of quicksort\nexplains ways to avoid that pitfall. Solaris' problem seems to be a\nmore subtle issue having to do with large numbers of equal keys. The\nform of quicksort that Knuth presents is tuned to behave well in that\nsituation, at the cost of exchanging equal records (cf. his exercise\n5.2.2.18) ... perhaps Sun overlooked that particular hack, or got it\nwrong.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 16:30:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [GENERAL] Re : Solaris Performance - Profiling (Solved) "
},
{
"msg_contents": "On Thu, 2002-04-04 at 06:41, Bruce Momjian wrote:\n\n> That is shocking. I have the Solaris 8 source code here so if people\n> want info on exactly what is done in their qsort() routine, I can supply\n> that. How can a routine change a query from 1 hour to 6 seconds?\n>\nI am considering raising a bug with Sun ( I have a simple test c\nprogram, that runs about 100x faster with BSD qsort.. enough to prove\nthe point for the support analyst)\n\nHowever considering this problem seems to have been around since 1996, I\nam not all that hopeful of persuading them to alter it.\n\nregards\n\nMark\n\n\n",
"msg_date": "04 Apr 2002 19:45:29 +1200",
"msg_from": "Mark kirkwood <markir@slingshot.co.nz>",
"msg_from_op": true,
"msg_subject": "Re: Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "\nAdded to TODO:\n\n\t* Add BSD-licensed qsort() for Solaris\n\n---------------------------------------------------------------------------\n\nMark kirkwood wrote:\n> On Wed, 2002-04-03 at 04:02, Tom Lane wrote:\n> > \n> > Hmm. Where exactly did you get those numbers from? I see 4118.54 sec\n> > as the total CPU accounted for in the profile.\n> >\n> odd ...the call graph has 4047.53 and the flat graph has 4118.54 \n> >\n> > Hmm. Assuming that the profile data is trustworthy and the queries are\n> > indeed the same (did you compare EXPLAIN output?), it seems that\n> > Solaris' problem is a spectacularly bad qsort() implementation.\n> >\n> A bit surfing finds heaps of unhappy Solaris qsort users... apparently\n> it cannot sort lists with many repeated items... so our GROUP BY will be\n> causing it grief here\n> >\n> > It might be entertaining to snarf a qsort off the net (from glibc,\n> > perhaps) and link it into Postgres to see if you get better results.\n> > \n> > \t\t\tregards, tom lane\n> >\n> Indeed it is - obtained qsort.c from Freebsd CVS and rebuilt Postgresql :\n> The query now takes 6 seconds instead of 1 hour ! Thanks for an\n> excellent suggestion.\n> \n> For those in need to a quick fix :\n> \n> I did a cheap and dirty mod to src/backend/utils/sort/Makefile\n> \n> changed OBJS = logtape.o -> OBJS = qsort.o logtape.o\n> \n> and copied qsort.c into this directory\n> \n> (had to comment out a couple of lines to compile under Solaris :\n> \n> /*#include <sys/cdefs.h>\n> __FBSDID(\"$FreeBSD: src/lib/libc/stdlib/qsort.c,v 1.11 2002/03/22\n> 21:53:10 obrien Exp $\");\n> */\n> \n> )\n> \n> What do you think about providing something like this for the Solaris\n> port ? 
(since its clearly quicker...)\n> \n> regards\n> \n> Mark\n> \n> \n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 23:06:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "On Wed, Apr 03, 2002 at 11:35:30AM -0500, Tom Lane wrote:\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> \n> > ... so we can be fairly certain the problem is in the\n> > 32 bit library. Maybe the 64 bit one is better?\n> \n> Good point. Please check it out and let us know.\n\nSorry this has taken me so long, but what with various stuff going on\nhere [1] and some vacation, I didn't have a chance to get to it\nsooner.\n\nBut now I'm wondering, is there anyone who knows of troubles with the\nprofiling of programs compiled with -pg under gcc 3.0.3 64 bit on\nSolaris 7? Here's my configure line:\n\nCFLAGS=\"-pg -mcmodel=medlow\" ./configure\n--prefix=/opt/OXRS/pgsql721-profile --with-pgport=12000\n--disable-shared\n\nBut I can't do anything with it:\n\npostgres721-profdata$ /opt/OXRS/pgsql721-profile/bin/postgres \n\nNo space for profiling buffer(s)\nFATAL 2: invalid checksum in control file\nSegmentation Fault (core dumped)\n\nAnyone with advice?\n\n[1] We were purchased by our largest customer, which is probably a\ngood thing, but meant more meetings and less, um, other work.\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 30 Apr 2002 11:55:53 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - Profiling (Solved)"
},
{
"msg_contents": "Sorry to reply to myself, but this might be useful for the archive.\n\nOn Tue, Apr 30, 2002 at 11:55:53AM -0400, Andrew Sullivan wrote:\n\n> But now I'm wondering, is there anyone who knows of troubles with the\n> profiling of programs compiled with -pg under gcc 3.0.3 64 bit on\n> Solaris 7? Here's my configure line:\n\nYou can't use the -pg cflag without -mcmodel=medlow, because gcc (at\nleast in its 64 bit incarnation on Solaris) apparently can't produce\nbinaries that way: only the medlow memory model is supported for\nprofiling. Unhappily, this appears to cause conflicting libraries to\nbe invoked (I _think_ that's what's going on, anyway). I think this\nmeans I can't build a 64-bit system with gcc for profiling. I might\nbe wrong (I'm sort of puzzling this out from two or three rather\ncryptic entries in some man pages; and as anyone who's ever seen my\nposts will attest, I'm not that bright anyway, so I may have\nmisunderstood something). In any case, I can't offer a definite\nanswer about the 64-bit qsort for now. If I have a chance to\ncome back to it, and discover anything, I'll post it here.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 30 Apr 2002 14:38:28 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - 64 bit puzzle"
},
{
"msg_contents": "Andrew Sullivan <andrew@libertyrms.info> writes:\n> ... In any case, I can't offer a definite\n> answer about the 64-bit qsort for now.\n\nDo you need to profile it? It seemed that the 32-bit behavior for\nmany-equal-keys was so bad that it'd be easy to tell whether it's\nfixed, just by rough overall timing of a test case...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 30 Apr 2002 15:28:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - 64 bit puzzle "
},
{
"msg_contents": "On Tue, Apr 30, 2002 at 03:28:13PM -0400, Tom Lane wrote:\n> Andrew Sullivan <andrew@libertyrms.info> writes:\n> > ... In any case, I can't offer a definite\n> > answer about the 64-bit qsort for now.\n> \n> Do you need to profile it? It seemed that the 32-bit behavior for\n> many-equal-keys was so bad that it'd be easy to tell whether it's\n> fixed, just by rough overall timing of a test case...\n\nYes, that's what I thought, too, so I figured I'd do that instead\n(although I didn't think of it until after I sent the mail). On the\nother hand, now I'm like a dog with a bone, because I want to know\nwhy in the world it doesn't work. No wonder I never get anything\ndone.\n\nThanks to Travis Hoyt, who pointed out that I could at least test for\nlibrary problems with truss. I did, and the interesting thing is\nthat it appears to be the profile writing that's causing the segfault\n(it's during the write to gmon.out that the segfault occurs). So my\nearlier view was wrong. But in any case, it looks like there really\nis something broken about profiling with this configuration.\n\nSince the original case was so bad, can anyone tell me roughly how\nmany equal keys were in the set, and how big the total set was? That\nway I'll be able to get something reasonably close, and I can use\nwall-clock time or something to expose whether there's a problem for\n64 bit libraries too. \n\nThanks,\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 30 Apr 2002 16:34:24 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - 64 bit puzzle"
},
{
"msg_contents": "Hi \n\nI am fairly new to PostgreSQL. I have been trying to install\nPostgreSQL-7.2.1 on my Sun server, however it fails at this point:\n\n...\n...\n\ngcc -Wall -Wmissing-prototypes -Wmissing-declarations -fPIC -I.\n-I../../../src/include -DFRONTEND -DSYSCONFDIR='\"/usr/local/pgsql/etc\"' -c\n-o pqsignal.o pqsignal.c\nar crs libpq.a fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o\nfe-lobj.o pqexpbuffer.o dllist.o md5.o pqsignal.o\nmake[3]: ar: Command not found\nmake[3]: *** [libpq.a] Error 127\nmake[3]: Leaving directory\n`/usr/local/install/postgresql-7.2/src/interfaces/libpq'\nmake[2]: *** [all] Error 2\nmake[2]: Leaving directory\n`/usr/local/install/postgresql-7.2/src/interfaces'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/usr/local/install/postgresql-7.2/src'\nmake: *** [all] Error 2\n\nSystem details:\n\nbash-2.03$ uname -a\nSunOS webdev1 5.8 Generic_108528-12 sun4u sparc SUNW,UltraAX-i2\n\nIs this a common problem on this platform? Or am I doing something wrong.\n\nAny help in getting this working would be greatly appreciated.\n\nThanks\n\nAdam\n\n",
"msg_date": "Wed, 01 May 2002 10:38:26 +0000",
"msg_from": "Adam Witney <awitney@sghms.ac.uk>",
"msg_from_op": false,
"msg_subject": "Failed compile on Sun"
},
{
"msg_contents": "Hi Adam,\n\nAre you using the Solaris compilation instructions from?\n\nhttp://techdocs.postgresql.org/installguides.php#solaris\n\nThey're known to work.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nAdam Witney wrote:\n> \n> Hi\n> \n> I am fairly new to PostgreSQL. I have been trying to install\n> PostgreSQL-7.2.1 on my Sun server, however it fails at this point:\n> \n> ...\n> ...\n> \n> gcc -Wall -Wmissing-prototypes -Wmissing-declarations -fPIC -I.\n> -I../../../src/include -DFRONTEND -DSYSCONFDIR='\"/usr/local/pgsql/etc\"' -c\n> -o pqsignal.o pqsignal.c\n> ar crs libpq.a fe-auth.o fe-connect.o fe-exec.o fe-misc.o fe-print.o\n> fe-lobj.o pqexpbuffer.o dllist.o md5.o pqsignal.o\n> make[3]: ar: Command not found\n> make[3]: *** [libpq.a] Error 127\n> make[3]: Leaving directory\n> `/usr/local/install/postgresql-7.2/src/interfaces/libpq'\n> make[2]: *** [all] Error 2\n> make[2]: Leaving directory\n> `/usr/local/install/postgresql-7.2/src/interfaces'\n> make[1]: *** [all] Error 2\n> make[1]: Leaving directory `/usr/local/install/postgresql-7.2/src'\n> make: *** [all] Error 2\n> \n> System details:\n> \n> bash-2.03$ uname -a\n> SunOS webdev1 5.8 Generic_108528-12 sun4u sparc SUNW,UltraAX-i2\n> \n> Is this a common problem on this platform? Or am I doing something wrong.\n> \n> Any help in getting this working would be greatly appreciated.\n> \n> Thanks\n> \n> Adam\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 01 May 2002 21:16:31 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: Failed compile on Sun"
},
{
"msg_contents": "On Wed, May 01, 2002 at 10:38:26AM +0000, Adam Witney wrote:\n> fe-lobj.o pqexpbuffer.o dllist.o md5.o pqsignal.o\n> make[3]: ar: Command not found\n ^^^^^^^^^^^^^^^^^^^^^\n\n'ar' isn't in your PATH. It's a common problem. It's probably in\n/usr/ccs/bin. Just append that to your path (make sure you add it\nlast, because there's probably a non-GNU 'make' in there too) and\nyou'll be set to go.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Wed, 1 May 2002 10:19:58 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: Failed compile on Sun"
},
{
"msg_contents": "On Tue, Apr 30, 2002 at 03:28:13PM -0400, Tom Lane wrote:\n\n> Do you need to profile it? It seemed that the 32-bit behavior for\n> many-equal-keys was so bad that it'd be easy to tell whether it's\n> fixed, just by rough overall timing of a test case...\n\nSorry for taking yet again so long. Fitting in little tests of this\nsort of thing can be a bit of a bear -- there's always about 50 other\nthings to do. Anyway, I've performed some simple timed tests that, I\nthink, confirm that the 64 bit library on Solaris is not so bad.\n\n version \n-----------------------------------------------------------------\n PostgreSQL 7.2.1 on sparc-sun-solaris2.7, compiled by GCC 3.0.3\n\nbin$ file postmaster \npostmaster: ELF 64-bit MSB executable SPARCV9 Version 1,\ndynamically linked, not stripped\n\nThe config file is the default\n\nI _think_ I've captured the case that was problematic. As I\nunderstood it, qsort was having trouble when hit with many equal\nkeys. I created this table: \n\nCREATE TABLE table1 (_date_stamp timestamp default current_timestamp,\nfoo text);\n\nThe table has no index. It has 5120000 records; field \"foo\" has only\nfour distinct values.\n\nNo matter whether I compiled with the system qsort or the qsort from\nFreeBSD, I got roughly equivalent results running psql under time. I\nknow that's hardly an ideal test, but as Tom suggested, the 32-bit\ncase seemed to be so astonishingly bad that it should have been\nenough. I ran the test repeatedly, and the results seem pretty\nconsistent. Here are some typical results:\n\nsystem lib:\n\nsrc$ time psql -p 12000 -o /dev/null -c \"select * from table1 order\n^by foo\" test1\n\nreal 29m23.822s\nuser 2m10.241s\nsys 0m7.432s\n\nFreeBSD lib:\n\npostgresql-7.2.1$ time psql -p 12000 -o /dev/null -c \"select * from\ntable1 order by foo\" test1\n\n\nreal 29m38.880s\nuser 2m10.571s\nsys 0m8.032s\n\n\nThis example suggests the FreeBSD library is slightly worse in the\n64-bit case. 
That's consistently the case, but the difference is not\nso great that I'd put any stock in it.\n\nI do not know whether there might be any trouble using the FreeBSD\nlibrary in a 64-bit configuration. I'd say, if you're going to use a\n64-bit postmaster, use the Solaris libraries.\n\nHope this is helpful,\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Thu, 16 May 2002 15:54:32 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - 64 bit puzzle"
},
{
"msg_contents": "\nTODO updated:\n\n\tAdd BSD-licensed qsort() for 32-bit Solaris \n\n---------------------------------------------------------------------------\n\nAndrew Sullivan wrote:\n> On Tue, Apr 30, 2002 at 03:28:13PM -0400, Tom Lane wrote:\n> \n> > Do you need to profile it? It seemed that the 32-bit behavior for\n> > many-equal-keys was so bad that it'd be easy to tell whether it's\n> > fixed, just by rough overall timing of a test case...\n> \n> Sorry for taking yet again so long. Fitting in little tests of this\n> sort of thing can be a bit of a bear -- there's always about 50 other\n> things to do. Anyway, I've performed some simple timed tests that, I\n> think, confirm that the 64 bit library on Solaris is not so bad.\n> \n> version \n> -----------------------------------------------------------------\n> PostgreSQL 7.2.1 on sparc-sun-solaris2.7, compiled by GCC 3.0.3\n> \n> bin$ file postmaster \n> postmaster: ELF 64-bit MSB executable SPARCV9 Version 1,\n> dynamically linked, not stripped\n> \n> The config file is the default\n> \n> I _think_ I've captured the case that was problematic. As I\n> understood it, qsort was having trouble when hit with many equal\n> keys. I created this table: \n> \n> CREATE TABLE table1 (_date_stamp timestamp default current_timestamp,\n> foo text);\n> \n> The table has no index. It has 5120000 records; field \"foo\" has only\n> four distinct values.\n> \n> No matter whether I compiled with the system qsort or the qsort from\n> FreeBSD, I got roughly equivalent results running psql under time. I\n> know that's hardly an ideal test, but as Tom suggested, the 32-bit\n> case seemed to be so astonishingly bad that it should have been\n> enough. I ran the test repeatedly, and the results seem pretty\n> consistent. 
Here are some typical results:\n> \n> system lib:\n> \n> src$ time psql -p 12000 -o /dev/null -c \"select * from table1 order\n> ^by foo\" test1\n> \n> real 29m23.822s\n> user 2m10.241s\n> sys 0m7.432s\n> \n> FreeBSD lib:\n> \n> postgresql-7.2.1$ time psql -p 12000 -o /dev/null -c \"select * from\n> table1 order by foo\" test1\n> \n> \n> real 29m38.880s\n> user 2m10.571s\n> sys 0m8.032s\n> \n> \n> This example suggests the FreeBSD library is slightly worse in the\n> 64-bit case. That's consistently the case, but the difference is not\n> so great that I'd put any stock in it.\n> \n> I do not know whether there might be any trouble using the FreeBSD\n> library in a 64-bit configuration. I'd say, if you're going to use a\n> 64-bit postmaster, use the Solaris libraries.\n> \n> Hope this is helpful,\n> \n> A\n> \n> -- \n> ----\n> Andrew Sullivan 87 Mowat Avenue \n> Liberty RMS Toronto, Ontario Canada\n> <andrew@libertyrms.info> M6K 3E3\n> +1 416 646 3304 x110\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 27 May 2002 21:00:43 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - 64 bit puzzle"
},
{
"msg_contents": "On Mon, 27 May 2002 21:00:43 -0400 (EDT)\n\"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> TODO updated:\n> \n> \tAdd BSD-licensed qsort() for 32-bit Solaris \n\nIs this necessary? Didn't someone say that Sun had acknowledged the\nperformance problem and were going to be releasing a patch for it?\nIf that patch exists (or will exist), it would probably be better to\nsuggest in the docs that users of 32-bit Solaris apply the patch.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Mon, 27 May 2002 21:17:31 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - 64 bit puzzle"
},
{
"msg_contents": "On Mon, May 27, 2002 at 09:00:43PM -0400, Bruce Momjian wrote:\n> \n> TODO updated:\n> \n> \tAdd BSD-licensed qsort() for 32-bit Solaris \n\nI've received an email noting that someone else ran a test program\nwith the 64 bit library, and had just as bad performance as the 32\nbit one. I haven't had a chance to look at it yet, but it suggests\nthat the result is still inconclusive. Maybe, if just one more fire\ngoes out here, I can look at it this week.\n\nA\n\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Tue, 28 May 2002 10:53:46 -0400",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - 64 bit puzzle"
},
{
"msg_contents": "Neil Conway wrote:\n> On Mon, 27 May 2002 21:00:43 -0400 (EDT)\n> \"Bruce Momjian\" <pgman@candle.pha.pa.us> wrote:\n> > TODO updated:\n> > \n> > \tAdd BSD-licensed qsort() for 32-bit Solaris \n> \n> Is this necessary? Didn't someone say that Sun had acknowledged the\n> performance problem and were going to be releasing a patch for it?\n> If that patch exists (or will exist), it would probably be better to\n> suggest in the docs that users of 32-bit Solaris apply the patch.\n\nSun said they would look at it (actually McNeeley (sp?)), but I haven't\nseen any mention of a patch.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 May 2002 20:35:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - 64 bit puzzle"
},
{
"msg_contents": "Andrew Sullivan wrote:\n> On Mon, May 27, 2002 at 09:00:43PM -0400, Bruce Momjian wrote:\n> > \n> > TODO updated:\n> > \n> > \tAdd BSD-licensed qsort() for 32-bit Solaris \n> \n> I've received an email noting that someone else ran a test program\n> with the 64 bit library, and had just as bad performance as the 32\n> bit one. I haven't had a chance to look at it yet, but it suggests\n> that the result is still inconclusive. Maybe, if just one more fire\n> goes out here, I can look at it this week.\n\nTODO reverted to be:\n\n\tAdd BSD-licensed qsort() for Solaris\n\nMy guess is that your test case didn't tickle the bug.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 28 May 2002 20:37:09 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Re : Solaris Performance - 64 bit puzzle"
}
] |
[
{
"msg_contents": "Hello.\n\nSome example:\ncompile and install contrib/tsearch module\nand now:\nwow=# create table foo (bar txtidx);\nCREATE\nwow=# \\d foo\n Table \"foo\"\n Column | Type | Modifiers\n--------+--------+-----------\n bar | txtidx | not null\n\n\nWhy field 'bar' has modifier 'not null'?\n\n-- \nTeodor Sigaev\nteodor@stack.net\n\n\n",
"msg_date": "Fri, 29 Mar 2002 13:27:54 +0300",
"msg_from": "Teodor Sigaev <teodor@stack.net>",
"msg_from_op": true,
"msg_subject": "Bug or feature in 7.3?"
},
{
"msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> Why field 'bar' has modifier 'not null'?\n\nHmm ... looks like CREATE TYPE is leaving typnotnull set TRUE for user-\ncreated types; all the types created on-the-fly in the regression tests\nhave the same problem. A bug, I agree. Don't see where the error is\nyet...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Mar 2002 10:01:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug or feature in 7.3? "
},
{
"msg_contents": "Teodor Sigaev <teodor@stack.net> writes:\n> Why field 'bar' has modifier 'not null'?\n\nOh ... last parameter of TypeCreate should be false not 'f' ...\ntwo places in define.c. Bug introduced in DOMAIN patch evidently.\nI have a ton of other changes I'm about to commit in that area,\nwill fix this one too.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Mar 2002 10:09:38 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bug or feature in 7.3? "
}
] |
[
{
"msg_contents": "Original titles:\n Patch to add real cancel to ODBC driver\n Patch to add real can--cel to ODBC driver\n\nBruce, sorry to bother you, would you forward this onto the list?\nI can't post for reasons I can't fathom.\n\n-----\nPatch against 7,2 submitted for comment.\n \nIt's a little messy; I had some trouble trying to reconcile the code\nstyle of libpq which I copied from, and odbc.\n \nSuggestions on what parts look ugly, and or where to send this\n(is there a separate ODBC place?) are welcome.\n \nThis seems to work just fine; Now, when our users submit a 2 hour\nquery with four million row sorts by accident, then cancel it 30 seconds\nlater, it doesn't bog down the server ...\n \nregards,\n \n-Brad",
"msg_date": "Fri, 29 Mar 2002 08:38:12 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": true,
"msg_subject": "hackers mail broken?"
}
] |
[
{
"msg_contents": "It's done.\n\nNote some behavioral changes:\n\n* The cache file config.cache is now off by default. If you still want\nit, run './configure -C'.\n\n* Specifying --host now means you are cross compiling. So don't do it.\nUse --build if you really have to. Which you don't.\n\n* Running 'autoconf' is now very slow. Too bad.\n\n* Running 'autoconf' will leave a directory called autom4te.cache/ around.\nDon't worry about it; make clean will remove it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 29 Mar 2002 12:39:01 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Autoconf upgraded"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> It's done.\n\nIt seems a tad broken.\n\n...\nchecking string usability... yes\nchecking string presence... yes\nchecking for string... yes\nchecking for class string in <string.h>... no\nconfigure: error: neither <string> nor <string.h> seem to define the C++ class 'string'\n$ \n\nNot having Autoconf 2.53 installed quite yet, I can't try altering the\nconfigure inputs, but I think there may be a comma missing at the end of\nline 15 of cxx.m4?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Mar 2002 14:27:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgraded "
},
{
"msg_contents": "On Fri, Mar 29, 2002 at 12:39:01PM -0500, Peter Eisentraut wrote:\n> * Running 'autoconf' is now very slow. Too bad.\n\nBut rerunning autoconf should be fast, thanks to autom4te.cache.\n\n-- \nalbert chin (china@thewrittenword.com)\n",
"msg_date": "Tue, 2 Apr 2002 00:39:32 -0600",
"msg_from": "Albert Chin <pgsql-hackers@thewrittenword.com>",
"msg_from_op": false,
"msg_subject": "Re: Autoconf upgraded"
}
] |
[
{
"msg_contents": "I have been talking with Bruce Momjian about implementing query\ntimeouts in the JDBC driver. As things stand, you can call\nsetQueryTimeout() or getQueryTimeout(), but a slow query will never\nactually timeout, even if a timeout is set. The result of a timeout\nshould be a SQLException.\n\nBruce feels that this should be implemented in the backend: set an\nalarm() in the backend on transaction start, then call the query\ncancel() code if the alarm() goes off, and reset the alam if the query\nfinishes before the timeout.\n\nI am concerned that this method does not provide a means of triggering\nthe SQLException in the driver. For an example, look at how cancel is\nimplemented (org.postgresql.Connection::cancelQuery()): we create a\nnew PG_Stream and send some integers to it which represent the cancel\nrequest. Then we close the PG_Stream. There is no point at which we\nreceive any notification from the backend that the query has been\ncancelled.\n\nI looked in postmaster.c, processCancelRequest() to see what the\nbackend does. A SIGINT is sent to the backend when the cancel request\nis successfully fulfilled, but nothing seems to be sent to the\ninterface.\n\nOne possibility is that the driver might just notice that the connection\nhas closed, and throw an Exception then. javax.sql.PooledConnection has an\naddConnectionEventListener() method; we could add a\nConnectionEventListener there which would throw an Exception when the\nconnection closes.\n\nIn practice, this may or may not be a good idea. The place to get hold\nof a PooledConnection seems to be in XAConnectionImpl (I am not sure\nhow the driver would actually request the relevant XAConnectionImpl\nobject, but I am sure I could figure that out). 
The thing is that this\nclass only allows one ConnectionEventListener to be set, so if we set\nit, the user would be out of luck if he wanted to add his own\nlistener.\n\nMy proposal, then, is that the Java driver should submit the\ntransaction request; wait for the timeout; if it goes off, submit a\ncancel request; and then throw a SQLException. We would not handle\nthis in the backend at all.\n\nBruce agreed that this was a good point to ask what the rest of the\nhackers list thought. Any input?\n\nThanks,\nJessica\n\n\n\n\n",
"msg_date": "Fri, 29 Mar 2002 15:44:34 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "timeout implementation issues"
},
{
"msg_contents": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com> writes:\n> [snip]\n> My proposal, then, is that the Java driver should submit the\n> transaction request; wait for the timeout; if it goes off, submit a\n> cancel request; and then throw a SQLException. We would not handle\n> this in the backend at all.\n\n> Bruce agreed that this was a good point to ask what the rest of the\n> hackers list thought. Any input?\n\nI guess the $64 question is whether any frontends other than JDBC want\nthis behavior. If it's JDBC-only then I'd certainly vote for making\nJDBC handle it ... but as soon as we see several different frontends\nimplementing similar behavior, I'd say it makes sense to implement it\nonce in the backend.\n\nSo, what's the market?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 29 Mar 2002 23:36:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Jessica Perry Hekman <jphekman@dynamicdiagrams.com> writes:\n> > [snip]\n> > My proposal, then, is that the Java driver should submit the\n> > transaction request; wait for the timeout; if it goes off, submit a\n> > cancel request; and then throw a SQLException. We would not handle\n> > this in the backend at all.\n> \n> > Bruce agreed that this was a good point to ask what the rest of the\n> > hackers list thought. Any input?\n> \n> I guess the $64 question is whether any frontends other than JDBC want\n> this behavior. If it's JDBC-only then I'd certainly vote for making\n> JDBC handle it ... but as soon as we see several different frontends\n> implementing similar behavior, I'd say it makes sense to implement it\n> once in the backend.\n> \n> So, what's the market?\n\nThere is clearly interest from all interfaces. This item has been\nrequested quite often, usually related to client apps or web apps.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 30 Mar 2002 00:16:36 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Sat, 30 Mar 2002, Bruce Momjian wrote:\n\n> There is clearly interest from all interfaces. This item has been\n> requested quite often, usually related to client apps or web apps.\n\nI definitely agree that implementing it in the backend would be the best\nplan, if it's feasible. I just can't figure out how to pass information\nback to the driver that the request has been cancelled (and that, in\nJDBC's case, a SQLException should be thrown). Any thoughts about that?\n\nj\n\n",
"msg_date": "Sat, 30 Mar 2002 08:43:32 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com> writes:\n> I definitely agree that implementing it in the backend would be the best\n> plan, if it's feasible. I just can't figure out how to pass information\n> back to the driver that the request has been cancelled (and that, in\n> JDBC's case, a SQLException should be thrown). Any thoughts about that?\n\nWhy would this be any different from a cancel-signal-instigated abort?\nYou'd be reporting elog(ERROR) in any case.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 30 Mar 2002 12:59:03 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "On Sat, 30 Mar 2002, Tom Lane wrote:\n\n> Why would this be any different from a cancel-signal-instigated abort?\n> You'd be reporting elog(ERROR) in any case.\n\nIf I understand the code correctly, in the case of a cancel signal, the\ndriver sends the signal and then assumes that the backend has accepted it\nand cancelled; the back end does not report back. In this case, the driver\nwould not be sending a signal, so it would not know that the process had\nreached the timeout and stopped (and it needs to know that). What we\n*could* do is have *both* the driver and the backend run timers and both\nstop when the timeout is reached. This seems like a solution just begging\nto produce ugly bugs, though -- and if we have to implement such a wait in\nthe driver, we may as well implement the whole thing in the driver and\njust have it send a cancel signal when it times out.\n\nOr am I misunderstanding the situation?\n\nj\n\n\n",
"msg_date": "Sat, 30 Mar 2002 14:20:19 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "On Sat, 30 Mar 2002, Tom Lane wrote:\n\n> Au contraire, it is not assuming anything. It is sending off a cancel\n> request and then waiting to see what happens. Maybe the query will be\n> canceled, or maybe it will complete normally, or maybe it will fail\n> because of some error unrelated to the cancel request. In any case the\n> backend *will* eventually report completion/error status, and the\n> frontend does not assume anything until it gets that report.\n\nAh, okay; this was not my understanding. I'll look at the code again.\n\n> Why does it need to know that? When it gets the error report back, it\n> can notice that the error says \"Query aborted by timeout\" (or however we\n> phrase it) ... but I'm not seeing why it should care.\n\nI just meant it needed to know that the process had stopped prematurely; I\ndidn't mean it needed to know why.\n\nI'll get back to you after doing a little more research.\n\nj\n\n",
"msg_date": "Sat, 30 Mar 2002 14:31:34 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com> writes:\n> If I understand the code correctly, in the case of a cancel signal, the\n> driver sends the signal and then assumes that the backend has accepted it\n> and cancelled; the back end does not report back.\n\nAu contraire, it is not assuming anything. It is sending off a cancel\nrequest and then waiting to see what happens. Maybe the query will be\ncanceled, or maybe it will complete normally, or maybe it will fail\nbecause of some error unrelated to the cancel request. In any case the\nbackend *will* eventually report completion/error status, and the\nfrontend does not assume anything until it gets that report.\n\n> In this case, the driver\n> would not be sending a signal, so it would not know that the process had\n> reached the timeout and stopped (and it needs to know that).\n\nWhy does it need to know that? When it gets the error report back, it\ncan notice that the error says \"Query aborted by timeout\" (or however we\nphrase it) ... but I'm not seeing why it should care.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 30 Mar 2002 14:32:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "> On Sat, 30 Mar 2002, Tom Lane wrote:\n> \n> > Au contraire, it is not assuming anything. It is sending off a cancel\n> > request and then waiting to see what happens. Maybe the query will be\n\nOkay, I see now: when processCancelRequest() is called, a return of 127 is\nsent. That would indeed work; thanks for walking me through it.\n\nMy other question was how to send the timeout value to the backend. Bruce\nsaid at one point:\n\n> Timeout can be part of BEGIN, or a SET value, which would work from\n> jdbc.\n\nI'm not sure how this would work. The timeout value would be sent as part\nof a SQL query?\n\nj\n\n",
"msg_date": "Mon, 1 Apr 2002 10:49:04 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Jessica Perry Hekman wrote:\n> > On Sat, 30 Mar 2002, Tom Lane wrote:\n> > \n> > > Au contraire, it is not assuming anything. It is sending off a cancel\n> > > request and then waiting to see what happens. Maybe the query will be\n> \n> Okay, I see now: when processCancelRequest() is called, a return of 127 is\n> sent. That would indeed work; thanks for walking me through it.\n> \n> My other question was how to send the timeout value to the backend. Bruce\n> said at one point:\n> \n> > Timeout can be part of BEGIN, or a SET value, which would work from\n> > jdbc.\n> \n> I'm not sure how this would work. The timeout value would be sent as part\n> of a SQL query?\n\nI think there are two ways of making this capability visible to users. \nFirst, you could do:\n\n\tSET query_timeout = 5;\n\t\nand all queries after that would time out at 5 seconds. Another option\nis:\n\n\tBEGIN WORK TIMEOUT 5;\n\t...\n\tCOMMIT;\n\nwhich would make the transaction timeout after 5 seconds. We never\ndecided which one we wanted, or both.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Apr 2002 11:22:21 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com> writes:\n> My other question was how to send the timeout value to the backend.\n\nI would imagine that the most convenient way to handle it would be as\na SET variable:\n\n\tSET query_timeout = n;\n\nEstablishes a time limit on subsequent queries (n expressed in\nmilliseconds, perhaps).\n\n\tSET query_timeout = 0;\n\nDisables query time limit.\n\nThis assumes that the query timeout should apply to each subsequent\nquery, individually, until explicitly canceled. If you want a timeout\nthat applies to only one query and is then forgotten, then maybe this\nwouldn't be the most convenient definition. What semantics are you\ntrying to obtain, exactly?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Apr 2002 11:26:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "On Mon, 1 Apr 2002, Tom Lane wrote:\n\n> This assumes that the query timeout should apply to each subsequent\n> query, individually, until explicitly canceled. If you want a timeout\n> that applies to only one query and is then forgotten, then maybe this\n> wouldn't be the most convenient definition. What semantics are you\n> trying to obtain, exactly?\n\nThe semantices of the JDBC API:\n\n\"Transaction::setQueryTimeout(): Sets the number of seconds the driver\n will wait for a Statement to execute to the given number of seconds.\n If the limit is exceeded, a SQLException is thrown.\"\n\nSo it should apply to all queries on a given transaction. I think that the\nabove implemenation suggestion (and Bruce's) would apply to all queries,\nregardless of which transaction they were associated with. If each\ntransaction has some kind of unique ID, maybe that could be added to the\nSET statement?\n\nDoes anyone know how someone else did this (mSQL, mySQL, etc)? It seems\nlike there ought to already exist some sort of standard. I'll poke around\nand see if I can find anything.\n\nj\n\n",
"msg_date": "Mon, 1 Apr 2002 11:50:16 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Jessica Perry Hekman <jphekman@dynamicdiagrams.com> writes:\n> > My other question was how to send the timeout value to the backend.\n>\n> I would imagine that the most convenient way to handle it would be as\n> a SET variable:\n>\n> SET query_timeout = n;\n>\n> Establishes a time limit on subsequent queries (n expressed in\n> milliseconds, perhaps).\n>\n> SET query_timeout = 0;\n>\n> Disables query time limit.\n>\n> This assumes that the query timeout should apply to each subsequent\n> query, individually, until explicitly canceled. If you want a timeout\n> that applies to only one query and is then forgotten, then maybe this\n> wouldn't be the most convenient definition. What semantics are you\n> trying to obtain, exactly?\n\n Why don't we use two separate GUC variables and leave the\n BEGIN syntax as is completely?\n\n SET transaction_timeout = m;\n SET statement_timeout = n;\n\n The alarm is set to the smaller of (what's left for) the\n transaction or statement.\n\n If you want to go sub-second, I suggest making it\n microseconds. That's what struct timeval (used in struct\n itimerval) uses. But I strongly suggest not doing so at all,\n because the usage of itimers disables the ability to profile\n with gprof completely. Compute the time spent so far in a\n transaction exactly, but round UP to full seconds for the\n alarm allways.\n\n And before someone asks, no, I don't think that a\n connection_timeout is a good thing.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 1 Apr 2002 15:08:41 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Mon, 1 Apr 2002, Jan Wieck wrote:\n\n> Why don't we use two separate GUC variables and leave the\n> BEGIN syntax as is completely?\n> \n> SET transaction_timeout = m;\n> SET statement_timeout = n;\n\nWhat's a GUC variable? Would this apply to all subsequent statements? I\nthink it needs to apply to just the specified statement.\n\nI'm sorry about the confusion earlier when I said that\nsetQueryTimeout() was transaction-level; Barry Lind correctly pointed out\nthat it is statement-level. We mostly seem to feel that we don't want to\ndo both, so is statement-only okay? Jan, do you feel strongly that you\nwant to see both implemented?\n\n> If you want to go sub-second, I suggest making it\n> microseconds. That's what struct timeval (used in struct\n\nI don't think that's necessary. JDBC only wants it specified in seconds.\n\nj\n\n",
"msg_date": "Mon, 1 Apr 2002 17:27:04 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com> writes:\n> What's a GUC variable?\n\nA parameter that you can set with SET.\n\n> Would this apply to all subsequent statements? I\n> think it needs to apply to just the specified statement.\n\nYes, if the JDBC spec expects this to be applied to just a single\nstatement, then a SET variable doesn't fit very nicely with that.\nYou'd have to have logic on the application side to reset the variable\nto \"no limit\" after the statement --- and this could be rather\ndifficult. (For example, if you are inside a transaction block and\nthe statement errors out, you won't be able to simply issue a new SET;\nso you'd have to remember that you needed a SET until after you exit\nthe transaction block. Ugh.)\n\nOn the other hand, we do not have anything in the backend now that\napplies to just one statement and then automatically resets afterwards;\nand I'm not eager to add a parameter with that behavior just for JDBC's\nconvenience. It seems like it'd be a big wart.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Apr 2002 18:15:00 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "On Mon, 1 Apr 2002, Tom Lane wrote:\n\n> On the other hand, we do not have anything in the backend now that\n> applies to just one statement and then automatically resets afterwards;\n> and I'm not eager to add a parameter with that behavior just for JDBC's\n> convenience. It seems like it'd be a big wart.\n\nDoes that leave us with implementing query timeouts in JDBC (timer in the\ndriver; then the driver sends a cancel request to the backend)?\n\nj\n\n",
"msg_date": "Tue, 2 Apr 2002 11:13:19 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Since both the JDBC and ODBC specs have essentially the same symantics \nfor this, I would hope this is done in the backend instead of both \ninterfaces.\n\n--Barry\n\nJessica Perry Hekman wrote:\n> On Mon, 1 Apr 2002, Tom Lane wrote:\n> \n> \n>>On the other hand, we do not have anything in the backend now that\n>>applies to just one statement and then automatically resets afterwards;\n>>and I'm not eager to add a parameter with that behavior just for JDBC's\n>>convenience. It seems like it'd be a big wart.\n> \n> \n> Does that leave us with implementing query timeouts in JDBC (timer in the\n> driver; then the driver sends a cancel request to the backend)?\n> \n> j\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n",
"msg_date": "Tue, 02 Apr 2002 10:35:12 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jessica Perry Hekman wrote:\n> On Mon, 1 Apr 2002, Tom Lane wrote:\n> \n> > On the other hand, we do not have anything in the backend now that\n> > applies to just one statement and then automatically resets afterwards;\n> > and I'm not eager to add a parameter with that behavior just for JDBC's\n> > convenience. It seems like it'd be a big wart.\n> \n> Does that leave us with implementing query timeouts in JDBC (timer in the\n> driver; then the driver sends a cancel request to the backend)?\n\nNo, I think we have to find a way to do this in the backend; just not\nsure how yet.\n\nI see the problem Tom is pointing out, that SET is ignored if the\ntransaction has already aborted:\n\t\n\ttest=> begin;\n\tBEGIN\n\ttest=> lkjasdf;\n\tERROR: parser: parse error at or near \"lkjasdf\"\n\ttest=> set server_min_messages = 'log';\n\tWARNING: current transaction is aborted, queries ignored until end of\n\ttransaction block\n\t*ABORT STATE*\n\ttest=> \n\nso if the transaction aborted, the reset of the statement_timeout would\nnot happen. The only way the application could code this would be with\nthis:\n\n\tBEGIN WORK;\n\tquery;\n\tSET statement_timeout = 4;\n\tquery;\n\tSET statement_timeout = 0;\n\tquery;\n\tCOMMIT;\n\tSET statement_timeout = 0;\n\nBasically, it does the reset twice, once assuming the transaction\ndoesn't abort, and another assuming it does abort. Is this something\nthat the JDBC and ODBC drivers can do automatically?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Apr 2002 13:39:30 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Tue, 2 Apr 2002, Bruce Momjian wrote:\n\n> \tBEGIN WORK;\n> \tquery;\n> \tSET statement_timeout = 4;\n> \tquery;\n> \tSET statement_timeout = 0;\n> \tquery;\n> \tCOMMIT;\n> \tSET statement_timeout = 0;\n> \n> Basically, it does the reset twice, once assuming the transaction\n> doesn't abort, and another assuming it does abort. Is this something\n> that the JDBC and ODBC drivers can do automatically?\n\nI can't speak for ODBC. Seems like in JDBC, Connection::commit() would\ncall code clearing the timeout, and Statement::executeQuery() and\nexecuteUpdate() would do the same.\n\nj\n\n",
"msg_date": "Tue, 2 Apr 2002 14:17:20 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jessica Perry Hekman wrote:\n> On Tue, 2 Apr 2002, Bruce Momjian wrote:\n> \n> > \tBEGIN WORK;\n> > \tquery;\n> > \tSET statement_timeout = 4;\n> > \tquery;\n> > \tSET statement_timeout = 0;\n> > \tquery;\n> > \tCOMMIT;\n> > \tSET statement_timeout = 0;\n> > \n> > Basically, it does the reset twice, once assuming the transaction\n> > doesn't abort, and another assuming it does abort. Is this something\n> > that the JDBC and ODBC drivers can do automatically?\n> \n> I can't speak for ODBC. Seems like in JDBC, Connection::commit() would\n> call code clearing the timeout, and Statement::executeQuery() and\n> executeUpdate() would do the same.\n\nWell, then a SET variable would work fine for statement-level queries. \nJust add the part for commit/abort transaction.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Apr 2002 14:39:08 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Tue, 2 Apr 2002, Barry Lind wrote:\n\n> Since both the JDBC and ODBC specs have essentially the same symantics \n> for this, I would hope this is done in the backend instead of both \n> interfaces.\n\nThe current plan seems to be to make changes in the backend and the JDBC\ninterface, the bulk of the implementation being in the backend.\n\nj\n\n",
"msg_date": "Wed, 3 Apr 2002 10:26:04 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jessica Perry Hekman wrote:\n> On Tue, 2 Apr 2002, Barry Lind wrote:\n> \n> > Since both the JDBC and ODBC specs have essentially the same symantics \n> > for this, I would hope this is done in the backend instead of both \n> > interfaces.\n> \n> The current plan seems to be to make changes in the backend and the JDBC\n> interface, the bulk of the implementation being in the backend.\n\nYes, ODBC and JDBC need this, and I am sure psql folks will use it too,\nnot counting libpq and all the others.\n\nWe just need a way to specify statement-level SET options inside a\ntransaction where the statement may fail and ignore the SET command that\nresets the timeout. We don't have any mechanism to reset the timeout\nparameter at the end of a transaction automatically, which would solve\nour problem with failed transactions.\n\nDoes anyone know the ramifications of allowing SET to work in an aborted\ntransaction? It is my understanding that SET doesn't really have\ntransaction semantics anyway, e.g. a SET that is done in a transaction\nthat is later aborted is still valid:\n\n\ttest=> BEGIN;\n\tBEGIN\n\ttest=> SET server_min_messages to 'debug5';\n\tSET VARIABLE\n\ttest=> ABORT;\n\tROLLBACK\n\ttest=> SHOW server_min_messages;\n\tINFO: server_min_messages is debug5\n\tSHOW VARIABLE\n\nHaving shown this, it could be argued that SET should work in an\nalready-aborted transaction. Why should having the SET before or after\nthe transaction is canceled have any effect. 
This illustrates it a\nlittle clearer:\n\t\n\ttest=> BEGIN;\n\tBEGIN\n\ttest=> SET server_min_messages to 'debug3';\n\tSET VARIABLE\n\ttest=> asdf; \n\tERROR: parser: parse error at or near \"asdf\"\n\ttest=> SET server_min_messages to 'debug1';\n\tWARNING: current transaction is aborted, queries ignored until end of\n\ttransaction block\n\t*ABORT STATE*\n\ttest=> COMMIT;\n\tCOMMIT\n\ttest=> SHOW server_min_messages;\n\tINFO: server_min_messages is debug3\n\tSHOW VARIABLE\n\ttest=> \n\nWhy should the 'debug3' be honored if the transaction aborted. And if\nit is OK that is was honored, is it OK that the 'debug1' was not\nhonored?\n\nAllowing SET to be valid after a transaction aborts would solve our SET\ntimeout problem.\n\nThere is also a feeling that people may want to set maximum counts for\ntransactions too because the transaction could be holding locks you want\nreleased.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Apr 2002 13:36:33 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Does anyone know the ramifications of allowing SET to work in an aborted\n> transaction?\n\nThis is not an option.\n\nThe case that will definitely Not Work is SET variables whose setting\nor checking requires database accesses. The new search_path variable\ncertainly works that way; not sure if there are any other cases at the\nmoment, but I'd not like to say that there can never be any such\nvariables.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 15:03:23 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Jessica Perry Hekman wrote:\n> > On Tue, 2 Apr 2002, Barry Lind wrote:\n> >\n> > > Since both the JDBC and ODBC specs have essentially the same symantics\n> > > for this, I would hope this is done in the backend instead of both\n> > > interfaces.\n> >\n> > The current plan seems to be to make changes in the backend and the JDBC\n> > interface, the bulk of the implementation being in the backend.\n> \n> Yes, ODBC and JDBC need this, and I am sure psql folks will use it too,\n> not counting libpq and all the others.\n\nI wasn't able to follow this thread sorry.\nODBC has QUERY_TIMEOUT and CONNECTION_TIMEOUT.\n\n> We just need a way to specify statement-level SET options inside a\n> transaction where the statement may fail and ignore the SET command that\n> resets the timeout. We don't have any mechanism to reset the timeout\n> parameter at the end of a transaction automatically,\n\nWhy should the timeout be reset automatically ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 04 Apr 2002 10:25:47 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "> > > The current plan seems to be to make changes in the backend and the JDBC\n> > > interface, the bulk of the implementation being in the backend.\n> > \n> > Yes, ODBC and JDBC need this, and I am sure psql folks will use it too,\n> > not counting libpq and all the others.\n> \n> I wasn't able to follow this thread sorry.\n> ODBC has QUERY_TIMEOUT and CONNECTION_TIMEOUT.\n> \n> > We just need a way to specify statement-level SET options inside a\n> > transaction where the statement may fail and ignore the SET command that\n> > resets the timeout. We don't have any mechanism to reset the timeout\n> > parameter at the end of a transaction automatically,\n> \n> Why should the timeout be reset automatically ?\n\nIt doesn't need to be reset automatically, but the problem is that if\nyou are doing a timeout for single statement in a transaction, and that\nstatement aborts the transaction, the SET command after it to reset the\ntimeout fails.\n\nI am attaching the email that describes the issue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026",
"msg_date": "Wed, 3 Apr 2002 21:21:53 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > The current plan seems to be to make changes in the backend and the JDBC\n> > > > interface, the bulk of the implementation being in the backend.\n> > >\n> > > Yes, ODBC and JDBC need this, and I am sure psql folks will use it too,\n> > > not counting libpq and all the others.\n> >\n> > I wasn't able to follow this thread sorry.\n> > ODBC has QUERY_TIMEOUT and CONNECTION_TIMEOUT.\n> >\n> > > We just need a way to specify statement-level SET options inside a\n> > > transaction where the statement may fail and ignore the SET command that\n> > > resets the timeout. We don't have any mechanism to reset the timeout\n> > > parameter at the end of a transaction automatically,\n> >\n> > Why should the timeout be reset automatically ?\n> \n> It doesn't need to be reset automatically, but the problem is that if\n> you are doing a timeout for single statement in a transaction, and that\n> statement aborts the transaction, the SET command after it to reset the\n> timeout fails.\n\nAs for ODBC, there's no state that *abort* but still inside\na transaction currently.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 04 Apr 2002 11:48:21 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > > Why should the timeout be reset automatically ?\n> > \n> > It doesn't need to be reset automatically, but the problem is that if\n> > you are doing a timeout for single statement in a transaction, and that\n> > statement aborts the transaction, the SET command after it to reset the\n> > timeout fails.\n> \n> As for ODBC, there's no state that *abort* but still inside\n> a transaction currently.\n\nYes, the strange thing is that SET inside a transaction _after_ the\ntransaction aborts is ignored, while SET before inside a transaction\nbefore the transaction aborts is accepted.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Apr 2002 23:05:07 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > > > Why should the timeout be reset automatically ?\n> > >\n> > > It doesn't need to be reset automatically, but the problem is that if\n> > > you are doing a timeout for single statement in a transaction, and that\n> > > statement aborts the transaction, the SET command after it to reset the\n> > > timeout fails.\n> >\n> > As for ODBC, there's no state that *abort* but still inside\n> > a transaction currently.\n> \n> Yes, the strange thing is that SET inside a transaction _after_ the\n> transaction aborts is ignored, while SET before inside a transaction\n> before the transaction aborts is accepted.\n\nWhat I meant is there's no such problem with psqlodbc\nat least currently because the driver issues ROLLBACK\nautomatically on abort inside a transaction.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 04 Apr 2002 13:08:59 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > > > > Why should the timeout be reset automatically ?\n> > > >\n> > > > It doesn't need to be reset automatically, but the problem is that if\n> > > > you are doing a timeout for single statement in a transaction, and that\n> > > > statement aborts the transaction, the SET command after it to reset the\n> > > > timeout fails.\n> > >\n> > > As for ODBC, there's no state that *abort* but still inside\n> > > a transaction currently.\n> > \n> > Yes, the strange thing is that SET inside a transaction _after_ the\n> > transaction aborts is ignored, while SET before inside a transaction\n> > before the transaction aborts is accepted.\n> \n> What I meant is there's no such problem with psqlodbc\n> at least currently because the driver issues ROLLBACK\n> automatically on abort inside a transaction.\n\nIf it does that, what happens with the rest of the queries in a\ntransaction? Do they get executed in their own transactions, or are\nthey somehow ignored.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Apr 2002 23:09:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > Hiroshi Inoue wrote:\n> > > > > > Why should the timeout be reset automatically ?\n> > > > >\n> > > > > It doesn't need to be reset automatically, but the problem is that if\n> > > > > you are doing a timeout for single statement in a transaction, and that\n> > > > > statement aborts the transaction, the SET command after it to reset the\n> > > > > timeout fails.\n> > > >\n> > > > As for ODBC, there's no state that *abort* but still inside\n> > > > a transaction currently.\n> > >\n> > > Yes, the strange thing is that SET inside a transaction _after_ the\n> > > transaction aborts is ignored, while SET before inside a transaction\n> > > before the transaction aborts is accepted.\n> >\n> > What I meant is there's no such problem with psqlodbc\n> > at least currently because the driver issues ROLLBACK\n> > automatically on abort inside a transaction.\n> \n> If it does that, what happens with the rest of the queries in a\n> transaction? Do they get executed in their own transactions, or are\n> they somehow ignored.\n\nThey would be executed in a new transaction. Queries shouldn't\nbe issued blindly(without error checking).\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 04 Apr 2002 13:27:41 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "\nOK, I have a few ideas on this and I think one of them will have to be\nimplemented. Basically, we have this SET problem with all our\nvariables, e.g. if you SET explain_pretty_print or enable_seqscan in a\nmulti-statement transaction, and the transaction aborts after the\nvariable is turned on but before the variable is turned off, it will\nremain on for the remainder of the session. See the attached email for\nan example. It shows this problem with timeout, but all the SET\nvariables have this issue.\n\nI think we have only a few options:\n\t\n\to Allow SET to execute even if the transaction is in ABORT\n\t state (Tom says some SET variables need db access and will\n\t fail.)\n\to If a SET is performed while in transaction ABORT state, queue\n\t up the SET commands to run after the transaction completes\n\to Issue a RESET on transaction completion (commit or abort) for any\n\t SET variable set in the transaction. (This will cause problems\n\t for API's like ecpg which are always in a transaction.)\n\to Issue a variable RESET on transaction ABORT for any SET variable\n\t modified by a transaction.\n\nI think the last one is the most reasonable option.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Jessica Perry Hekman wrote:\n> > On Tue, 2 Apr 2002, Barry Lind wrote:\n> > \n> > > Since both the JDBC and ODBC specs have essentially the same symantics \n> > > for this, I would hope this is done in the backend instead of both \n> > > interfaces.\n> > \n> > The current plan seems to be to make changes in the backend and the JDBC\n> > interface, the bulk of the implementation being in the backend.\n> \n> Yes, ODBC and JDBC need this, and I am sure psql folks will use it too,\n> not counting libpq and all the others.\n> \n> We just need a way to specify statement-level SET options inside a\n> transaction where the statement may fail and ignore the SET command that\n> resets the timeout. 
We don't have any mechanism to reset the timeout\n> parameter at the end of a transaction automatically, which would solve\n> our problem with failed transactions.\n> \n> Does anyone know the ramifications of allowing SET to work in an aborted\n> transaction? It is my understanding that SET doesn't really have\n> transaction semantics anyway, e.g. a SET that is done in a transaction\n> that is later aborted is still valid:\n> \n> \ttest=> BEGIN;\n> \tBEGIN\n> \ttest=> SET server_min_messages to 'debug5';\n> \tSET VARIABLE\n> \ttest=> ABORT;\n> \tROLLBACK\n> \ttest=> SHOW server_min_messages;\n> \tINFO: server_min_messages is debug5\n> \tSHOW VARIABLE\n> \n> Having shown this, it could be argued that SET should work in an\n> already-aborted transaction. Why should having the SET before or after\n> the transaction is canceled have any effect. This illustrates it a\n> little clearer:\n> \t\n> \ttest=> BEGIN;\n> \tBEGIN\n> \ttest=> SET server_min_messages to 'debug3';\n> \tSET VARIABLE\n> \ttest=> asdf; \n> \tERROR: parser: parse error at or near \"asdf\"\n> \ttest=> SET server_min_messages to 'debug1';\n> \tWARNING: current transaction is aborted, queries ignored until end of\n> \ttransaction block\n> \t*ABORT STATE*\n> \ttest=> COMMIT;\n> \tCOMMIT\n> \ttest=> SHOW server_min_messages;\n> \tINFO: server_min_messages is debug3\n> \tSHOW VARIABLE\n> \ttest=> \n> \n> Why should the 'debug3' be honored if the transaction aborted. And if\n> it is OK that is was honored, is it OK that the 'debug1' was not\n> honored?\n> \n> Allowing SET to be valid after a transaction aborts would solve our SET\n> timeout problem.\n> \n> There is also a feeling that people may want to set maximum counts for\n> transactions too because the transaction could be holding locks you want\n> released.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Apr 2002 09:23:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think we have only a few options:\n\nYou forgot\n\n\to Do nothing.\n\nIMHO the current behavior is not broken, and does not need fixed.\nAll of the options you suggest are surely more broken than the current\nbehavior.\n\n> \to Issue a RESET on transaction completion (commit or abort) for any\n> \t SET variable set in the transaction. (This will cause problems\n> \t for API's like ecpg which are always in a transaction.)\n\nRESET would certainly not be a desirable behavior. If we want SET vars\nto roll back on abort, then they should roll back --- ie, resume their\ntransaction-start-time values. But I doubt it's worth the trouble.\nThat behavior would do nothing to help JDBC implement timeouts, since\nthey'd still need to change the value again explicitly after successful\ntransaction completion.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 10:25:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think we have only a few options:\n> \n> You forgot\n> \n> \to Do nothing.\n> \n> IMHO the current behavior is not broken, and does not need fixed.\n> All of the options you suggest are surely more broken than the current\n> behavior.\n\nI think it is broken. What logic is there that SET before transaction\nabort is performed, but after abort it is ignored? What if someone\nwants a specific optimizer parameter for a statement in a transaction,\nlike geqo_* or enable_seqscan off, and they perform the SET before the\nstatement OK but if the statement fails, the SET after it is ignored.\nThat doesn't seem like very normal behavior to me.\n\nWe are seeing this in the timeout case, but in fact the other SET\ncommands when run in a transaction have the same problem.\n\n> > \to Issue a RESET on transaction completion (commit or abort) for any\n> > \t SET variable set in the transaction. (This will cause problems\n> > \t for API's like ecpg which are always in a transaction.)\n> \n> RESET would certainly not be a desirable behavior. If we want SET vars\n> to roll back on abort, then they should roll back --- ie, resume their\n> transaction-start-time values. But I doubt it's worth the trouble.\n> That behavior would do nothing to help JDBC implement timeouts, since\n> they'd still need to change the value again explicitly after successful\n> transaction completion.\n\nYes, I now think that saving the SET commands that are ignored in a\ntransaction and running them _after_ the transaction completes may be\nthe best thing. 
They can be stored as C strings in a stable memory\ncontext and just run on transaction completion.\n\nIf we don't somehow get this to work, how do we do timeouts, which we\nall know we should have?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Apr 2002 14:11:55 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, I now think that saving the SET commands that are ignored in a\n> transaction and running them _after_ the transaction completes may be\n> the best thing.\n\nNo, that's just plain ridiculous. If you want to change the semantics\nof SET, then make it work *correctly*, viz like an SQL statement: roll\nit back on transaction abort. Otherwise leave it alone.\n\n> If we don't somehow get this to work, how do we do timeouts, which we\n> all know we should have?\n\nThis is utterly unrelated to timeouts. With or without any changes in\nSET behavior, JDBC would need to issue a SET after completion of the\ntransaction if they wanted to revert a query_timeout variable to the\nno-timeout state.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 14:52:29 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, I now think that saving the SET commands that are ignored in a\n> > transaction and running them _after_ the transaction completes may be\n> > the best thing.\n> \n> No, that's just plain ridiculous. If you want to change the semantics\n\nNo more ridiculous than what we have now.\n\n> of SET, then make it work *correctly*, viz like an SQL statement: roll\n> it back on transaction abort. Otherwise leave it alone.\n\nI am not going to leave it alone based only on your say-so, Tom.\n\n> > If we don't somehow get this to work, how do we do timeouts, which we\n> > all know we should have?\n> \n> This is utterly unrelated to timeouts. With or without any changes in\n> SET behavior, JDBC would need to issue a SET after completion of the\n> transaction if they wanted to revert a query_timeout variable to the\n> no-timeout state.\n\n\"Utterly unrelated?\" No. If we can get SET to work properly in\ntransactions, jdbc can cleanly issue SET timeout=4, statement, SET\ntimeout=0. Without it, using SET for timeout is a problem. That's how\nwe got to this issue in the first place.\n\nI am still looking for a constructive idea on how we can get this to\nwork, rather than calling my ideas \"ridiculous\".\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Apr 2002 16:18:01 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I am still looking for a constructive idea on how we can get this to\n> work, rather than calling my ideas \"ridiculous\".\n\nWe know very well how to make it work: JDBC can issue a SET timeout = 0\nafter exiting the transaction. You're proposing to change the semantics\nof SET into something quite bizarre in order to allow JDBC to not have\nto work as hard. I think that's a bad tradeoff.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 17:33:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am still looking for a constructive idea on how we can get this to\n> > work, rather than calling my ideas \"ridiculous\".\n> \n> We know very well how to make it work: JDBC can issue a SET timeout = 0\n> after exiting the transaction. You're proposing to change the semantics\n> of SET into something quite bizarre in order to allow JDBC to not have\n> to work as hard. I think that's a bad tradeoff.\n\nOr we don't have to reset the timeout at all.\nFor example when we are about to issue a command, we\ncan check if the requested timeout is different from\nthe current server's timeout. We don't have to (re)set\nthe timeout unless they are different.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 05 Apr 2002 08:48:24 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I am still looking for a constructive idea on how we can get this to\n> > work, rather than calling my ideas \"ridiculous\".\n> \n> We know very well how to make it work: JDBC can issue a SET timeout = 0\n> after exiting the transaction. You're proposing to change the semantics\n> of SET into something quite bizarre in order to allow JDBC to not have\n> to work as hard. I think that's a bad tradeoff.\n\nIt that acceptable to the JDBC folks? It requires two \"SET timeout = 0\"\nstatements, one after the statement in the transaction, and another\nafter the transaction COMMIT WORK.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Apr 2002 22:22:57 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > Yes, I now think that saving the SET commands that are ignored in a\n> > > transaction and running them _after_ the transaction completes may be\n> > > the best thing.\n> >\n> > No, that's just plain ridiculous. If you want to change the semantics\n>\n> No more ridiculous than what we have now.\n>\n> > of SET, then make it work *correctly*, viz like an SQL statement: roll\n> > it back on transaction abort. Otherwise leave it alone.\n>\n> I am not going to leave it alone based only on your say-so, Tom.\n\n I have to agree with Tom here. It's not right to hack up SET\n to be accepted in transaction abort state. Nor is it right to\n queue up SET requests then. If those queued SET's lead to\n errors, when do you report them? On ROLLBACK?\n\n If at all, SET commands should behave like everything else.\n If done inside a transaction, they have to rollback.\n\n> > > If we don't somehow get this to work, how do we do timeouts, which we\n> > > all know we should have?\n> >\n> > This is utterly unrelated to timeouts. With or without any changes in\n> > SET behavior, JDBC would need to issue a SET after completion of the\n> > transaction if they wanted to revert a query_timeout variable to the\n> > no-timeout state.\n>\n> \"Utterly unrelated?\" No. If we can get SET to work properly in\n> transactions, jdbc can cleanly issue SET timeout=4, statement, SET\n> timeout=0. Without it, using SET for timeout is a problem. That's how\n> we got to this issue in the first place.\n\n Could we get out of this by defining that \"timeout\" is\n automatically reset at next statement end? 
So that the entire\n thing is\n\n SET timeout=4;\n SELECT ...;\n -- We're back in no-timeout\n\n And that it doesn't matter if we're in a transaction, if the\n statement aborts, yadda yadda...\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Fri, 5 Apr 2002 10:04:09 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Thu, 4 Apr 2002, Bruce Momjian wrote:\n\n> It that acceptable to the JDBC folks? It requires two \"SET timeout = 0\"\n> statements, one after the statement in the transaction, and another\n> after the transaction COMMIT WORK.\n\nThat's fine, though probably about as much work as just implementing the\nwhole thing in JDBC.\n\nj\n\n",
"msg_date": "Fri, 5 Apr 2002 10:13:48 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> Could we get out of this by defining that \"timeout\" is\n> automatically reset at next statement end?\n\nI was hoping to avoid that, because it seems like a wart. OTOH,\nit'd be less of a wart than the global changes of semantics that\nBruce is proposing :-(\n\nHow exactly would you make this happen? The simplest way I can think of\nto do it (reset timeout in outer loop in postgres.c) would not work,\nbecause it'd reset the timeout as soon as the SET statement completes.\nHow would you get the setting to survive for exactly one additional\nstatement?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Apr 2002 11:19:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "On Fri, Apr 05, 2002 at 11:19:04AM -0500, Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > Could we get out of this by defining that \"timeout\" is\n> > automatically reset at next statement end?\n> \n> I was hoping to avoid that, because it seems like a wart. OTOH,\n> it'd be less of a wart than the global changes of semantics that\n> Bruce is proposing :-(\n> \n> How exactly would you make this happen? The simplest way I can think of\n> to do it (reset timeout in outer loop in postgres.c) would not work,\n> because it'd reset the timeout as soon as the SET statement completes.\n> How would you get the setting to survive for exactly one additional\n> statement?\n\nHow about not messing with the SET, but adding it to the SELECT syntax\nitself? a \"WITH TIMEOUT\" clause?\n\nThis is the first of the (proposed) SET variables that affects query\nperformance that is not a 'twiddle with the internals because something\nis really wrong' hack (or debugging tool, if you will) Argueably,\nthose also suffer from the punching through the transaction problem:\nI'd certainly hate (for example) to have sequential scans disabled for\nan entire connection because one gnarly query that the optimizer guesses\nwrong on died, and my reset got ignored. I'd hate it, but understand\nthat it's a crufty hack to get around a problem, and just deal with\nresetting the transaction/connection.\n\nTimeouts, on the other hand, are a much more respectable mainline sort\nof extension, apparently required for certain standards (The JDBC people\nstarted this discussion, right?). They should be fully supported by the\ntransactional machinery, however that is decided. If that means all\nSETs become transactional, I don't really see a problem with that.\n\nOr, as I suggested above, extend the SELECT (and other querys?) syntax\nseems reasonable. 
More so than the non-standard 'use this index, really'\ntypes of extensions that other RDBMSs provide, that we've rightly avoided.\n\nThoughts?\n\nRoss\n",
"msg_date": "Fri, 5 Apr 2002 11:23:26 -0600",
"msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > Could we get out of this by defining that \"timeout\" is\n> > automatically reset at next statement end?\n>\n> I was hoping to avoid that, because it seems like a wart. OTOH,\n> it'd be less of a wart than the global changes of semantics that\n> Bruce is proposing :-(\n>\n> How exactly would you make this happen? The simplest way I can think of\n> to do it (reset timeout in outer loop in postgres.c) would not work,\n> because it'd reset the timeout as soon as the SET statement completes.\n> How would you get the setting to survive for exactly one additional\n> statement?\n\n I would vote for a general callback registering mechanism,\n where you can specify an event, a function and an opaque\n pointer. Possible events then would be end of statement, end\n of transaction, commit, abort, regular end of session.\n\n Sure, it looks like total overkill for this minor JDBC\n problem. But I like general support structures to be in\n place early.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Fri, 5 Apr 2002 13:53:56 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> If at all, SET commands should behave like everything else.\n> If done inside a transaction, they have to rollback.\n\nI have thought of a scenario that may be sufficient to justify fixing\nSETs to roll back on transaction abort. Consider\n\n\tBEGIN;\n\n\tCREATE SCHEMA foo;\n\n\tSET search_path = 'foo, public';\n\n\tROLLBACK;\n\nAs the code stands, this will leave you with an invalid search path.\n(What's worse, if you now execute CREATE TABLE, it will happily create\ntables belonging to the vanished namespace foo. Everything will seem\nto work fine ... until you try to find those tables again in a new\nsession ...)\n\nIt seems clear to me that SET *should* roll back on abort. Just a\nmatter of how important is it to fix.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Apr 2002 14:13:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Ross J. Reedstrom wrote:\n> On Fri, Apr 05, 2002 at 11:19:04AM -0500, Tom Lane wrote:\n> > Jan Wieck <janwieck@yahoo.com> writes:\n> > > Could we get out of this by defining that \"timeout\" is\n> > > automatically reset at next statement end?\n> >\n> > I was hoping to avoid that, because it seems like a wart. OTOH,\n> > it'd be less of a wart than the global changes of semantics that\n> > Bruce is proposing :-(\n> >\n> > How exactly would you make this happen? The simplest way I can think of\n> > to do it (reset timeout in outer loop in postgres.c) would not work,\n> > because it'd reset the timeout as soon as the SET statement completes.\n> > How would you get the setting to survive for exactly one additional\n> > statement?\n>\n> How about not messing with the SET, but adding it to the SELECT syntax\n> itself? a \"WITH TIMEOUT\" clause?\n\n Only SELECT? I thought all DML-statements should honour it.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Fri, 5 Apr 2002 14:22:02 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "> Or, as I suggested above, extend the SELECT (and other querys?) syntax\n> seems reasonable. More so than the non-standard 'use this index, really'\n> types of extensions that other RDBMSs provide, that we've rightly avoided.\n\nI think we need timeout for all statement.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Apr 2002 20:32:47 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > Could we get out of this by defining that \"timeout\" is\n> > automatically reset at next statement end?\n> \n> I was hoping to avoid that, because it seems like a wart. OTOH,\n> it'd be less of a wart than the global changes of semantics that\n> Bruce is proposing :-(\n> \n> How exactly would you make this happen? The simplest way I can think of\n> to do it (reset timeout in outer loop in postgres.c) would not work,\n> because it'd reset the timeout as soon as the SET statement completes.\n> How would you get the setting to survive for exactly one additional\n> statement?\n\nSure, you could reset it, but there are going to be cases where you want\nto do a timeout=6000 for the entire session. If it resets after the\nfirst statement, this is hard to do.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Apr 2002 20:33:37 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > If at all, SET commands should behave like everything else.\n> > If done inside a transaction, they have to rollback.\n> \n> I have thought of a scenario that may be sufficient to justify fixing\n> SETs to roll back on transaction abort. Consider\n> \n> \tBEGIN;\n> \n> \tCREATE SCHEMA foo;\n> \n> \tSET search_path = 'foo, public';\n> \n> \tROLLBACK;\n> \n> As the code stands, this will leave you with an invalid search path.\n> (What's worse, if you now execute CREATE TABLE, it will happily create\n> tables belonging to the vanished namespace foo. Everything will seem\n> to work fine ... until you try to find those tables again in a new\n> session ...)\n> \n> It seems clear to me that SET *should* roll back on abort. Just a\n> matter of how important is it to fix.\n\nThat was my point, that having SET work pre-abort and ignored post-abort\nis broken itself, whether we implement timeout or not. Before we had\ntuple-reading SET variables, it probably didn't matter, but now with\nschemas, I can see it is more of an issue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Fri, 5 Apr 2002 20:38:58 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> Jan Wieck <janwieck@yahoo.com> writes:\n> > Could we get out of this by defining that \"timeout\" is\n> > automatically reset at next statement end?\n> \n> I was hoping to avoid that, because it seems like a wart. OTOH,\n> it'd be less of a wart than the global changes of semantics that\n> Bruce is proposing :-(\n\nProbably I'm misunderstanding this thread.\nWhy must the query_timeout be reset particularly ?\nWhat's wrong with simply issueing set query_timeout\ncommand just before every query ?\n\nregards,\nHiroshi Inoue \n",
"msg_date": "Sat, 6 Apr 2002 18:14:53 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > -----Original Message-----\n> > From: Tom Lane\n> > \n> > Jan Wieck <janwieck@yahoo.com> writes:\n> > > Could we get out of this by defining that \"timeout\" is\n> > > automatically reset at next statement end?\n> > \n> > I was hoping to avoid that, because it seems like a wart. OTOH,\n> > it'd be less of a wart than the global changes of semantics that\n> > Bruce is proposing :-(\n> \n> Probably I'm misunderstanding this thread.\n> Why must the query_timeout be reset particularly ?\n> What's wrong with simply issueing set query_timeout\n> command just before every query ?\n\nYou could do that, but we also imagine cases where people would want to\nset a timeout for each query in an entire session.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Apr 2002 17:17:01 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Sat, 6 Apr 2002, Bruce Momjian wrote:\n\n> > What's wrong with simply issueing set query_timeout\n> > command just before every query ?\n> \n> You could do that, but we also imagine cases where people would want to\n> set a timeout for each query in an entire session.\n\nOne approach might be for the interface to take care of setting the query\ntimeout before each query, and just ask the backend to handle timeouts\nper-query. So from the user's perspective, session-level timeouts would\nexist, but the backend would not have to worry about rolling back\ntimeouts.\n\nj\n\n",
"msg_date": "Sat, 6 Apr 2002 17:23:00 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jessica Perry Hekman wrote:\n> On Sat, 6 Apr 2002, Bruce Momjian wrote:\n> \n> > > What's wrong with simply issueing set query_timeout\n> > > command just before every query ?\n> > \n> > You could do that, but we also imagine cases where people would want to\n> > set a timeout for each query in an entire session.\n> \n> One approach might be for the interface to take care of setting the query\n> timeout before each query, and just ask the backend to handle timeouts\n> per-query. So from the user's perspective, session-level timeouts would\n> exist, but the backend would not have to worry about rolling back\n> timeouts.\n\nYes, that would work, but libpq and psql would have trouble doing full\nsession timeouts.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Apr 2002 18:16:22 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> One approach might be for the interface to take care of setting the query\n>> timeout before each query, and just ask the backend to handle timeouts\n>> per-query. So from the user's perspective, session-level timeouts would\n>> exist, but the backend would not have to worry about rolling back\n>> timeouts.\n\n> Yes, that would work, but libpq and psql would have trouble doing full\n> session timeouts.\n\n From the backend's perspective it'd be a *lot* cleaner to support\npersistent timeouts (good 'til canceled) than one-shots. If that's\nthe choice then let's let the frontend library worry about implementing\none-shots.\n\nNote: I am now pretty well convinced that we *must* fix SET to roll back\nto start-of-transaction settings on transaction abort. If we do that,\nat least some of the difficulty disappears for JDBC to handle one-shot\ntimeouts by issuing SETs before and after the target query against a\nquery_timeout variable that otherwise acts like a good-til-canceled\nsetting. Can we all compromise on that?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Apr 2002 20:40:14 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> One approach might be for the interface to take care of setting the query\n> >> timeout before each query, and just ask the backend to handle timeouts\n> >> per-query. So from the user's perspective, session-level timeouts would\n> >> exist, but the backend would not have to worry about rolling back\n> >> timeouts.\n> \n> > Yes, that would work, but libpq and psql would have trouble doing full\n> > session timeouts.\n> \n> >From the backend's perspective it'd be a *lot* cleaner to support\n> persistent timeouts (good 'til canceled) than one-shots. If that's\n> the choice then let's let the frontend library worry about implementing\n> one-shots.\n> \n> Note: I am now pretty well convinced that we *must* fix SET to roll back\n> to start-of-transaction settings on transaction abort. If we do that,\n> at least some of the difficulty disappears for JDBC to handle one-shot\n> timeouts by issuing SETs before and after the target query against a\n> query_timeout variable that otherwise acts like a good-til-canceled\n> setting. Can we all compromise on that?\n\nAdded to TODO:\n\n\t* Abort SET changes made in aborted transactions \n\nWe do have on_shmem_exit and on_proc_exit function call queues. Seems\nwe will need SET to create a queue of function calls containing previous\nvalues of variables SEt in multi-statement transactions. If we execute\nthe queue in last-in-first-out order, the variables will be restored\nproperly.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Apr 2002 20:59:16 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> We do have on_shmem_exit and on_proc_exit function call queues. Seems\n> we will need SET to create a queue of function calls containing previous\n> values of variables SEt in multi-statement transactions. If we execute\n> the queue in last-in-first-out order, the variables will be restored\n> properly.\n\nThat's most certainly the hard way. I was planning to just make GUC\nsave a spare copy of the start-of-transaction value of each variable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Apr 2002 21:14:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > We do have on_shmem_exit and on_proc_exit function call queues. Seems\n> > we will need SET to create a queue of function calls containing previous\n> > values of variables SEt in multi-statement transactions. If we execute\n> > the queue in last-in-first-out order, the variables will be restored\n> > properly.\n> \n> That's most certainly the hard way. I was planning to just make GUC\n> save a spare copy of the start-of-transaction value of each variable.\n\nEwe, I was hoping for something with zero overhead for the non-SET case.\nCan we trigger the save for the first SET in the transaction?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Apr 2002 21:15:50 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Ewe, I was hoping for something with zero overhead for the non-SET case.\n\nWell, a function call and immediate return if no SET has been executed\nin the current xact seems low enough overhead to me. We'll just keep\na flag showing whether there's anything to do.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 06 Apr 2002 21:20:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Ewe, I was hoping for something with zero overhead for the non-SET case.\n> \n> Well, a function call and immediate return if no SET has been executed\n> in the current xact seems low enough overhead to me. We'll just keep\n> a flag showing whether there's anything to do.\n\nOh, I thought you were going to save all the GUC variables on\ntransaction start. I now assume you are going to have one field per\nvariable for the pre-xact value. That is fine.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 6 Apr 2002 21:22:27 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> Note: I am now pretty well convinced that we *must* fix SET to roll back\n> to start-of-transaction settings on transaction abort. If we do that,\n> at least some of the difficulty disappears for JDBC to handle one-shot\n> timeouts by issuing SETs before and after the target query against a\n> query_timeout variable that otherwise acts like a good-til-canceled\n> setting. Can we all compromise on that?\n> \n\nThis plan should work well for JDBC. (It actually makes the code on the \njdbc side pretty easy).\n\nthanks,\n--Barry\n\n",
"msg_date": "Sat, 06 Apr 2002 20:40:22 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane writes:\n\n> Note: I am now pretty well convinced that we *must* fix SET to roll back\n> to start-of-transaction settings on transaction abort. If we do that,\n> at least some of the difficulty disappears for JDBC to handle one-shot\n> timeouts by issuing SETs before and after the target query against a\n> query_timeout variable that otherwise acts like a good-til-canceled\n> setting. Can we all compromise on that?\n\nNo.\n\nI agree that there may be some variables that must be rolled back, or\nwhere automatic reset on transaction end may be desirable (note that these\nare two different things), but for some variables it's completely\nnonsensical. Those variables describe session characteristics, not\ndatabase state. For instance, time zone, default_transaction_isolation.\nOr consider you're raising the debug level, but it gets reset during\ncommit so you can't debug the commit process. Or in the future we may\nhave some SQL-compatible always-in-transaction mode which would mean that\nyou could never set any variable to last.\n\nIf you want something that's transaction-specific, invent a new mechanism.\nHook in the set transaction isolation level command while you're at it.\nBut don't break everything that's worked so far.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 7 Apr 2002 00:09:10 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Can we all compromise on that?\n\n> No.\n\nOh dear...\n\n> I agree that there may be some variables that must be rolled back, or\n> where automatic reset on transaction end may be desirable (note that these\n> are two different things), but for some variables it's completely\n> nonsensical. Those variables describe session characteristics, not\n> database state. For instance, time zone, default_transaction_isolation.\n\nUh, why? I do not see why it's unreasonable for\n\tBEGIN;\n\tSET time_zone = whatever;\n\tROLLBACK;\nto be a no-op. The fact that we haven't done that historically doesn't\ncount for much (unless your argument is \"backwards compatibility\" ...\nbut you didn't say that). Not long ago we couldn't roll back a DROP\nTABLE command; but that didn't make it right.\n\n> Or consider you're raising the debug level, but it gets reset during\n> commit so you can't debug the commit process.\n\nIt wouldn't get reset during commit, so I assume you really meant you\nwanted to debug an abort problem. But even there, what's the problem?\nSet the variable *before* you enter the transaction that will abort.\n\n> Or in the future we may\n> have some SQL-compatible always-in-transaction mode which would mean that\n> you could never set any variable to last.\n\nOnly if this mode prevents you from ever committing anything. Somehow\nI doubt that that's either SQL-compatible or useful.\n\n> If you want something that's transaction-specific, invent a new mechanism.\n\nI didn't say \"transaction specific\". I said that if you do a SET inside\na transaction block, and then the transaction is aborted, the effects of\nthe SET ought to roll back along with everything else you did inside\nthat transaction block. I'm not seeing what the argument is against\nthis.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Apr 2002 00:32:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane writes:\n\n> I didn't say \"transaction specific\". I said that if you do a SET inside\n> a transaction block, and then the transaction is aborted, the effects of\n> the SET ought to roll back along with everything else you did inside\n> that transaction block. I'm not seeing what the argument is against\n> this.\n\nI consider SET variables metadata that are not affected by transactions.\nI should be able to change my mind about my session preferences in the\nmiddle of a transaction, no matter what happens to the data in it. Say\nsomewhere in the middle of a long transaction I think, \"I should really be\nlogging this stuff\". I turn a knob to do so, and the next command fails.\nIs the failure logged? In which order does the rollback happen? What if\nI want to continue logging?\n\nIf anything were to change I would like to continue accepting SET commands\nafter an error. Of course, I would like to continue accepting any command\nafter an error, but that's a different debate.\n\nI guess it's a matter of definition: Do you consider SET variables\ndatabase state or session metadata? I think some are this and some are\nthat. I'm not sure how to draw the line, but throwing everything from one\ncategory into the other isn't my favorite solution.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sun, 7 Apr 2002 01:01:07 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I consider SET variables metadata that are not affected by transactions.\n\nWhy? Again, the fact that historically they've not acted that way isn't\nsufficient reason for me.\n\n> I should be able to change my mind about my session preferences in the\n> middle of a transaction, no matter what happens to the data in it. Say\n> somewhere in the middle of a long transaction I think, \"I should really be\n> logging this stuff\". I turn a knob to do so, and the next command fails.\n> Is the failure logged? In which order does the rollback happen? What if\n> I want to continue logging?\n\nHm. That's a slightly more interesting example than before ... but it\ncomes close to arguing that logging should be under transaction control.\nSurely you'd not argue that a failed transaction should erase all its\nentries from the postmaster log? Why would you expect changes in log\nlevels to be retroactive?\n\n> I guess it's a matter of definition: Do you consider SET variables\n> database state or session metadata? I think some are this and some are\n> that. I'm not sure how to draw the line, but throwing everything from one\n> category into the other isn't my favorite solution.\n\nYou seem to be suggesting that we should make a variable-by-variable\ndecision about whether SET variables roll back on ABORT or not. I think\nthat way madness lies; we could spend forever debating which vars are\nwhich, and then who will remember without consulting the documentation?\n\nI feel we should just do it. Yeah, there might be some corner cases\nwhere it's not the ideal behavior; but you haven't convinced me that\nthere are more cases where it's bad than where it's good. You sure\nhaven't convinced me that it's worth making SET's behavior\nnigh-unpredictable-without-a-manual, which is what per-variable behavior\nwould be.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sun, 07 Apr 2002 01:08:46 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "> -----Original Message-----\n> From: Peter Eisentraut [mailto:peter_e@gmx.net]\n> \n> \n> I guess it's a matter of definition: Do you consider SET variables\n> database state or session metadata? \n\nSession metadata IMHO. If there are(would be) database state\nvariables we should introduce another command for them.\nFor example I don't think QUERY_TIMEOUT is such a variable.\nAs I mentioned many times we can set QUERY_TIMEOUT before\neach query. If the overhead is an issue we can keep track of the\nvaraible and reduce the command calls to minimum easily. \n\nregards,\nHiroshi Inoue\n",
"msg_date": "Sun, 7 Apr 2002 16:59:16 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> \n> Hiroshi Inoue wrote:\n> > > -----Original Message-----\n> > > From: Tom Lane\n> > > \n> > > Jan Wieck <janwieck@yahoo.com> writes:\n> > > > Could we get out of this by defining that \"timeout\" is\n> > > > automatically reset at next statement end?\n> > > \n> > > I was hoping to avoid that, because it seems like a wart. OTOH,\n> > > it'd be less of a wart than the global changes of semantics that\n> > > Bruce is proposing :-(\n> > \n> > Probably I'm misunderstanding this thread.\n> > Why must the query_timeout be reset particularly ?\n> > What's wrong with simply issueing set query_timeout\n> > command just before every query ?\n> \n> You could do that, but we also imagine cases where people would want to\n> set a timeout for each query in an entire session.\n\nSorry I couldn't understand your point.\nIt seems the simplest and the most certain way is to call \n'SET QUERY_TIMEOUT per query. The way dosen't require\nRESET at all. Is the overhead an issue ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Sun, 7 Apr 2002 16:59:38 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "> > > Probably I'm misunderstanding this thread.\n> > > Why must the query_timeout be reset particularly ?\n> > > What's wrong with simply issueing set query_timeout\n> > > command just before every query ?\n> > \n> > You could do that, but we also imagine cases where people would want to\n> > set a timeout for each query in an entire session.\n> \n> Sorry I couldn't understand your point.\n> It seems the simplest and the most certain way is to call \n> 'SET QUERY_TIMEOUT per query. The way dosen't require\n> RESET at all. Is the overhead an issue ?\n\nWhat about psql and libpq. Doing a timeout before every query is a\npain. I realize it can be done easily in ODBC and JDBC, but we need a\ngeneral timeout mechanism.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Apr 2002 19:27:52 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "> > I guess it's a matter of definition: Do you consider SET variables\n> > database state or session metadata? I think some are this and some are\n> > that. I'm not sure how to draw the line, but throwing everything from one\n> > category into the other isn't my favorite solution.\n> \n> You seem to be suggesting that we should make a variable-by-variable\n> decision about whether SET variables roll back on ABORT or not. I think\n> that way madness lies; we could spend forever debating which vars are\n> which, and then who will remember without consulting the documentation?\n> \n> I feel we should just do it. Yeah, there might be some corner cases\n> where it's not the ideal behavior; but you haven't convinced me that\n> there are more cases where it's bad than where it's good. You sure\n> haven't convinced me that it's worth making SET's behavior\n> nigh-unpredictable-without-a-manual, which is what per-variable behavior\n> would be.\n\nI am with Tom on this one. (Nice to see he is now arguing on my side.) \nMaking different variables behave differently is clearly going to\nconfuse users. The argument that we should allow SET to work when the\ntransaction is in ABORT state seems very wierd to me because we ignore\nevery other command in that state. I think reversing out any SET's done\nin an aborted transaction is the clear way to go. If users want their\nSET to not be affected by the transaction abort, they should put their\nSET's outside a transaction; seems pretty clear to me.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Apr 2002 19:34:02 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > Probably I'm misunderstanding this thread.\n> > > > Why must the query_timeout be reset particularly ?\n> > > > What's wrong with simply issueing set query_timeout\n> > > > command just before every query ?\n> > >\n> > > You could do that, but we also imagine cases where people would want to\n> > > set a timeout for each query in an entire session.\n> >\n> > Sorry I couldn't understand your point.\n> > It seems the simplest and the most certain way is to call\n> > 'SET QUERY_TIMEOUT per query. The way dosen't require\n> > RESET at all. Is the overhead an issue ?\n> \n> What about psql and libpq. Doing a timeout before every query is a\n> pain. \n\nPsql and libpq would simply issue the query according to the\nuser's request as they currently do. What's pain with it ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 08 Apr 2002 08:55:58 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "> > > Sorry I couldn't understand your point.\n> > > It seems the simplest and the most certain way is to call\n> > > 'SET QUERY_TIMEOUT per query. The way dosen't require\n> > > RESET at all. Is the overhead an issue ?\n> > \n> > What about psql and libpq. Doing a timeout before every query is a\n> > pain. \n> \n> Psql and libpq would simply issue the query according to the\n> user's request as they currently do. What's pain with it ?\n\nIf they wanted to place a timeout on all queries in a session, they\nwould need a SET for every query, which seems like a pain.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Apr 2002 20:11:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > I guess it's a matter of definition: Do you consider SET variables\n> > > database state or session metadata? I think some are this and some are\n> > > that. I'm not sure how to draw the line, but throwing everything from one\n> > > category into the other isn't my favorite solution.\n> >\n> > You seem to be suggesting that we should make a variable-by-variable\n> > decision about whether SET variables roll back on ABORT or not. I think\n> > that way madness lies; we could spend forever debating which vars are\n> > which, and then who will remember without consulting the documentation?\n> >\n> > I feel we should just do it. Yeah, there might be some corner cases\n> > where it's not the ideal behavior; but you haven't convinced me that\n> > there are more cases where it's bad than where it's good. You sure\n> > haven't convinced me that it's worth making SET's behavior\n> > nigh-unpredictable-without-a-manual, which is what per-variable behavior\n> > would be.\n> \n> I am with Tom on this one. (Nice to see he is now arguing on my side.)\n\nI vote against you. If a variable is local to the session, you\ncan change it as you like without bothering any other user(session).\nAutomatic resetting of the varibales is rather confusing to me.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 08 Apr 2002 09:27:44 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> > > > Sorry I couldn't understand your point.\n> > > > It seems the simplest and the most certain way is to call\n> > > > 'SET QUERY_TIMEOUT per query. The way dosen't require\n> > > > RESET at all. Is the overhead an issue ?\n> > >\n> > > What about psql and libpq. Doing a timeout before every query is a\n> > > pain.\n> >\n> > Psql and libpq would simply issue the query according to the\n> > user's request as they currently do. What's pain with it ?\n> \n> If they wanted to place a timeout on all queries in a session, they\n> would need a SET for every query, which seems like a pain.\n\nOh I see. You mean users' pain ?\nIf a user wants to place a timeout on all the query, he\nwould issue SET query_timeout command only once.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Mon, 08 Apr 2002 09:33:33 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > > > I guess it's a matter of definition: Do you consider SET variables\n> > > > database state or session metadata? I think some are this and some are\n> > > > that. I'm not sure how to draw the line, but throwing everything from one\n> > > > category into the other isn't my favorite solution.\n> > >\n> > > You seem to be suggesting that we should make a variable-by-variable\n> > > decision about whether SET variables roll back on ABORT or not. I think\n> > > that way madness lies; we could spend forever debating which vars are\n> > > which, and then who will remember without consulting the documentation?\n> > >\n> > > I feel we should just do it. Yeah, there might be some corner cases\n> > > where it's not the ideal behavior; but you haven't convinced me that\n> > > there are more cases where it's bad than where it's good. You sure\n> > > haven't convinced me that it's worth making SET's behavior\n> > > nigh-unpredictable-without-a-manual, which is what per-variable behavior\n> > > would be.\n> > \n> > I am with Tom on this one. (Nice to see he is now arguing on my side.)\n> \n> I vote against you. If a variable is local to the session, you\n> can change it as you like without bothering any other user(session).\n> Automatic resetting of the varibales is rather confusing to me.\n\nI don't see how this relates to other users. All SET commands that can\nbe changed in psql are per backend, as far as I remember.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Apr 2002 23:17:45 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > > > > Sorry I couldn't understand your point.\n> > > > > It seems the simplest and the most certain way is to call\n> > > > > 'SET QUERY_TIMEOUT per query. The way dosen't require\n> > > > > RESET at all. Is the overhead an issue ?\n> > > >\n> > > > What about psql and libpq. Doing a timeout before every query is a\n> > > > pain.\n> > >\n> > > Psql and libpq would simply issue the query according to the\n> > > user's request as they currently do. What's pain with it ?\n> > \n> > If they wanted to place a timeout on all queries in a session, they\n> > would need a SET for every query, which seems like a pain.\n> \n> Oh I see. You mean users' pain ?\n\nSorry I was unclear.\n\n> If a user wants to place a timeout on all the query, he\n> would issue SET query_timeout command only once.\n\nI am confused. Above you state you want SET QUERY_TIMEOUT to be\nper-query. I assume you mean that the timeout applies for only the next\nquery and is turned off after that. If you do that, it is hard to set a\nmaximum duration for all queries in your session, especially in psql or\nlibpq.\n\nAlso, I am not saying that the timeout is for the entire session, but\nthat the timeout makes sure that any query in the session that takes\nlonger than X milliseconds is automatically cancelled.\n\nPlease reply and let me know what you think. I am sure I am missing\nsomething in your comments.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sun, 7 Apr 2002 23:22:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "\n\n\n\n\nBruce Momjian wrote:\n\nHiroshi Inoue wrote:\n\nBruce Momjian wrote:\n\n\n\nI guess it's a matter of definition: Do you consider SET variablesdatabase state or session metadata? I think some are this and some arethat. I'm not sure how to draw the line, but throwing everything from onecategory into the other isn't my favorite solution.\n\nYou seem to be suggesting that we should make a variable-by-variabledecision about whether SET variables roll back on ABORT or not. I thinkthat way madness lies; we could spend forever debating which vars arewhich, and then who will remember without consulting the documentation?I feel we should just do it. Yeah, there might be some corner caseswhere it's not the ideal behavior; but you haven't convinced me thatthere are more cases where it's bad than where it's good. You surehaven't convinced me that it's worth making SET's behaviornigh-unpredictable-without-a-manual, which is what per-variable behaviorwould be.\n\nI am with Tom on this one. (Nice to see he is now arguing on my side.)\n\nI vote against you. If a variable is local to the session, youcan change it as you like without bothering any other user(session).Automatic resetting of the varibales is rather confusing to me.\n\nI don't see how this relates to other users. All SET commands that canbe changed in psql are per backend, as far as I remember.\n\nPer backend or per session?\n\n\n\n\n\n\n\n",
"msg_date": "Sun, 07 Apr 2002 23:09:07 -0500",
"msg_from": "Thomas Swan <tswan@olemiss.edu>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Fri, Apr 05, 2002 at 08:32:47PM -0500, Bruce Momjian wrote:\n> > Or, as I suggested above, extend the SELECT (and other querys?) syntax\n> > seems reasonable. More so than the non-standard 'use this index, really'\n> > types of extensions that other RDBMSs provide, that we've rightly avoided.\n> \n> I think we need timeout for all statement.\n\n The Oracle has:\n\n CREATE PROFILE statement with for example following options:\n\n CONNECT_TIME\n IDLE_TIME\n\n\n I think system resource control per user is more useful than simple\n SET command. There is no problem add other limits like QUERY_TIMEOUT.\n \n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 8 Apr 2002 10:22:11 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Fri, Apr 05, 2002 at 02:13:26PM -0500, Tom Lane wrote:\n\n> It seems clear to me that SET *should* roll back on abort. Just a\n> matter of how important is it to fix.\n\n I want control on this :-)\n\n\n SET valname = 'vatdata' ON ROLLBACK UNSET;\n \n or\n \n SET valname = 'vatdata';\n \n\n Karel\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 8 Apr 2002 10:29:35 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Sun, Apr 07, 2002 at 01:01:07AM -0500, Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > I didn't say \"transaction specific\". I said that if you do a SET inside\n> > a transaction block, and then the transaction is aborted, the effects of\n> > the SET ought to roll back along with everything else you did inside\n> > that transaction block. I'm not seeing what the argument is against\n> > this.\n> \n> I consider SET variables metadata that are not affected by transactions.\n> I should be able to change my mind about my session preferences in the\n> middle of a transaction, no matter what happens to the data in it. Say\n\n I agree with Peter. For example I have multi-encoding client program \n that changing client-encoding in the middle of transaction and this\n change not depend on transaction. And the other thing: I have DB\n driver in an program there is not possible do SQL query outsite\n transaction.\n\n Is there some problem implement \"SET ... ON ROLLBACK UNSET\" ?\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Mon, 8 Apr 2002 10:44:37 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > > \n> > > > > > Sorry I couldn't understand your point.\n> > > > > > It seems the simplest and the most certain way is to call\n> > > > > > 'SET QUERY_TIMEOUT per query. The way dosen't require\n> > > > > > RESET at all. Is the overhead an issue ?\n> > > > >\n> > > > > What about psql and libpq. Doing a timeout before every \n> query is a\n> > > > > pain.\n> > > >\n> > > > Psql and libpq would simply issue the query according to the\n> > > > user's request as they currently do. What's pain with it ?\n> > > \n> > > If they wanted to place a timeout on all queries in a session, they\n> > > would need a SET for every query, which seems like a pain.\n> > \n> > Oh I see. You mean users' pain ?\n> \n> Sorry I was unclear.\n> \n> > If a user wants to place a timeout on all the query, he\n> > would issue SET query_timeout command only once.\n> \n> I am confused. Above you state you want SET QUERY_TIMEOUT to be\n> per-query. I assume you mean that the timeout applies for only the next\n> query and is turned off after that.\n\nHmm there seems a misunderstanding between you and I but I\ndon't see what it is. Does *SET QUERY_TIMEOUT* start a timer in\nyour scenario ? In my scenario *SET QUERY_TIMEOUT* only\nregisters the timeout value for subsequent queries.\n\nregards,\nHiroshi inoue\n\n",
"msg_date": "Mon, 8 Apr 2002 20:45:25 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > I am confused. Above you state you want SET QUERY_TIMEOUT to be\n> > per-query. I assume you mean that the timeout applies for only the next\n> > query and is turned off after that.\n> \n> Hmm there seems a misunderstanding between you and I but I\n> don't see what it is. Does *SET QUERY_TIMEOUT* start a timer in\n> your scenario ? In my scenario *SET QUERY_TIMEOUT* only\n> registers the timeout value for subsequent queries.\n\nSET QUERY_TIMEOUT does not start a timer. It makes sure each query\nafter the SET is timed and automatically canceled if the single query\nexceeds the timeout interval.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 09:02:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Karel Zak wrote:\n> I agree with Peter. For example I have multi-encoding client program \n> that changing client-encoding in the middle of transaction and this\n> change not depend on transaction. And the other thing: I have DB\n> driver in an program there is not possible do SQL query outsite\n> transaction.\n\nNo problem executing a SET inside its own transaction. The rollback\nhappens only if the SET fails, which for a single SEt command, should be\nfine.\n\n> \n> Is there some problem implement \"SET ... ON ROLLBACK UNSET\" ?\n\nSeems kind of strange. If anything, I can imagine a NO ROLLBACK\ncapability. However, because this can be easily done by executing the\nSET in its own transaction, it seems like overengineering.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 09:10:55 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> Is there some problem implement \"SET ... ON ROLLBACK UNSET\" ?\n\nYes. See my previous example concerning search_path: that variable\nMUST be rolled back at transaction abort, else we risk its value being\ninvalid. We cannot offer the user a choice.\n\nSo far I have not seen one single example against SET rollback that\nI thought was at all compelling. In all cases you can simply issue\nthe SET in a separate transaction if you want to be sure that its\neffects persist. And there seems to be no consideration of the\npossibility that applications might find SET rollback to be useful.\nISTM that the example with JDBC and query_timeout generalizes to other\nparameters that you might want to set on a per-statement basis, such\nas enable_seqscan or transform_null_equals. Consider\n\n\tBEGIN;\n\tSET enable_seqscan = false;\n\tsome-queries-that-might-fail;\n\tSET enable_seqscan = true;\n\tEND;\n\nThis does not work as intended if the initial SET doesn't roll back\nupon transaction failure. Yeah, you can restructure it to\n\n\tSET enable_seqscan = false;\n\tBEGIN;\n\tsome-queries-that-might-fail;\n\tEND;\n\tSET enable_seqscan = true;\n\nbut what was that argument about some apps/drivers finding it\ninconvenient to issue commands outside a transaction block?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Apr 2002 10:08:17 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Bruce Momjian wrote:\n> > > > Sorry I couldn't understand your point.\n> > > > It seems the simplest and the most certain way is to call\n> > > > 'SET QUERY_TIMEOUT per query. The way dosen't require\n> > > > RESET at all. Is the overhead an issue ?\n> > >\n> > > What about psql and libpq. Doing a timeout before every query is a\n> > > pain.\n> >\n> > Psql and libpq would simply issue the query according to the\n> > user's request as they currently do. What's pain with it ?\n>\n> If they wanted to place a timeout on all queries in a session, they\n> would need a SET for every query, which seems like a pain.\n\n Er, how many \"applications\" have you implemented by simply\n providing a schema and psql?\n\n I mean, users normally don't use psql. And if you do, what's\n wrong with controlling the timeout yourself and hitting ^C\n when \"you\" time out? If you do it in a script, it's\n\n yy... p p p p p.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Mon, 8 Apr 2002 10:15:18 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> > Is there some problem implement \"SET ... ON ROLLBACK UNSET\" ?\n>\n> Yes. See my previous example concerning search_path: that variable\n> MUST be rolled back at transaction abort, else we risk its value being\n> invalid. We cannot offer the user a choice.\n\n Not really on topic, but I was wondering how you ensure that\n you correct the search path in case someone drops the schema?\n\n Is an invalid search path really that critical (read security\n issue)?\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Mon, 8 Apr 2002 10:46:11 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> Is an invalid search path really that critical (read security\n> issue)?\n\nIt's not a security issue (unless the OID counter wraps around soon\nenough to let someone else get assigned the same OID for a namespace).\nBut it could be pretty annoying anyway, because the front element of\nthe search path is also the default creation target namespace. You\ncould create a bunch of tables and then be unable to access them later\nfor lack of a way to name them.\n\nI'm not really excited about establishing positive interlocks across\nbackends to prevent DROPping a namespace that someone else has in their\nsearch path ... but I do want to handle the simple local-effect cases,\nlike rollback of creation of a namespace.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Apr 2002 11:08:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Jan Wieck wrote:\n> > > Psql and libpq would simply issue the query according to the\n> > > user's request as they currently do. What's pain with it ?\n> >\n> > If they wanted to place a timeout on all queries in a session, they\n> > would need a SET for every query, which seems like a pain.\n> \n> Er, how many \"applications\" have you implemented by simply\n> providing a schema and psql?\n\nActually, I would assume nightly batch jobs are configured this way.\n\n> \n> I mean, users normally don't use psql. And if you do, what's\n> wrong with controlling the timeout yourself and hitting ^C\n> when \"you\" time out? If you do it in a script, it's\n\nYes, clearly meaningless for interactive use.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 11:28:42 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> This does not work as intended if the initial SET doesn't roll back\n> upon transaction failure. Yeah, you can restructure it to\n> \n> \tSET enable_seqscan = false;\n> \tBEGIN;\n> \tsome-queries-that-might-fail;\n> \tEND;\n> \tSET enable_seqscan = true;\n> \n> but what was that argument about some apps/drivers finding it\n> inconvenient to issue commands outside a transaction block?\n\nYes, and if you want to place the SET on a single statement in a\nmulti-statement transaction, doing SET outside the transaction will not\nwork either because it will apply to all statements in the transaction.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 11:29:58 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "> > I consider SET variables metadata that are not affected by transactions.\n> Why? Again, the fact that historically they've not acted that way isn't\n> sufficient reason for me.\n\nHmm. Historically, SET controls behaviors *out of band* with the normal\ntransaction mechanisms. There is strong precedent for this mechanism\n*because it is a useful concept*, not simply because it has always been\ndone this way.\n\n*If* some aspects of SET take on transactional behavior, then this\nshould be *in addition to* the current global scope for those commands.\n\nWhat problem are we trying to solve with this? The topic came up in a\ndiscussion on implementing timeouts for JDBC. afaik it has not come up\n*in any context* for the last seven years, so maybe we should settle\ndown a bit and refocus on the problem at hand...\n\n - Thomas\n",
"msg_date": "Mon, 08 Apr 2002 08:32:50 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Thomas Lockhart wrote:\n> > > I consider SET variables metadata that are not affected by transactions.\n> > Why? Again, the fact that historically they've not acted that way isn't\n> > sufficient reason for me.\n> \n> Hmm. Historically, SET controls behaviors *out of band* with the normal\n> transaction mechanisms. There is strong precedent for this mechanism\n> *because it is a useful concept*, not simply because it has always been\n> done this way.\n\n\nOK, probably good time for summarization. First, consider this:\n\n\tBEGIN WORK;\n\tSET something;\n\tquery fails;\n\tSET something else;\n\tCOMMIT WORK;\n\nUnder current behavior, the first SET is honored, while the second is\nignored because the transaction is in ABORT state. I can see no logical\nreason for this behavior. We ignore normal queries during an ABORT\nbecause the transaction can't possibly change any data because it is\naborted, and the previous non-SET statements in the transactions are\nrolled back. However, the SET commands are not.\n\nThe jdbc timeout issue is this:\n\n\n\tBEGIN WORK;\n\tSET query_timeout=20;\n\tquery fails;\n\tSET query_timeout=0;\n\tCOMMIT WORK;\n\nIn this case, with our current code, the first SET is done, but the\nsecond is ignored. To make this work, you would need this:\n\n\n\tBEGIN WORK;\n\tSET query_timeout=20;\n\tquery fails;\n\tSET query_timeout=0;\n\tCOMMIT WORK;\n\tSET query_timeout=0;\n\nwhich seems kind of strange. The last SET is needed because the query\nmay abort and the second SET ignored.\n\n> *If* some aspects of SET take on transactional behavior, then this\n> should be *in addition to* the current global scope for those commands.\n\nMy point is that SET already doesn't have session behavior because it is\nignored if the transaction has already aborted.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 12:07:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > OK, probably good time for summarization. First, consider this:\n> >\n> > \tBEGIN WORK;\n> > \tSET something;\n> > \tquery fails;\n> > \tSET something else;\n> > \tCOMMIT WORK;\n> >\n> > Under current behavior, the first SET is honored, while the second is\n> > ignored because the transaction is in ABORT state. I can see no logical\n> > reason for this behavior.\n> \n> But that is not a shortcoming of the SET command. The problem is that the\n> system does not accept any commands after one command has failed in a\n> transaction even though it could usefully do so.\n\nUh, yes, we could allow the second SET to succeed even in an aborted\ntransaction, but Tom says his schema stuff will not work in an aborted\nstate, so Tom/I figured the only other option was rollback of the first\nSET.\n\n> > The jdbc timeout issue is this:\n> >\n> >\n> > \tBEGIN WORK;\n> > \tSET query_timeout=20;\n> > \tquery fails;\n> > \tSET query_timeout=0;\n> > \tCOMMIT WORK;\n> >\n> > In this case, with our current code, the first SET is done, but the\n> > second is ignored.\n> \n> Given appropriate functionality, you could rewrite this thus:\n> \n> BEGIN WORK;\n> SET FOR THIS TRANSACTION ONLY query_timeout=20;\n> query;\n> COMMIT WORK;\n\nYes, but why bother with that when rollback of the first SET is cleaner\nand more predictable?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 12:27:32 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> OK, probably good time for summarization. First, consider this:\n>\n> \tBEGIN WORK;\n> \tSET something;\n> \tquery fails;\n> \tSET something else;\n> \tCOMMIT WORK;\n>\n> Under current behavior, the first SET is honored, while the second is\n> ignored because the transaction is in ABORT state. I can see no logical\n> reason for this behavior.\n\nBut that is not a shortcoming of the SET command. The problem is that the\nsystem does not accept any commands after one command has failed in a\ntransaction even though it could usefully do so.\n\n> The jdbc timeout issue is this:\n>\n>\n> \tBEGIN WORK;\n> \tSET query_timeout=20;\n> \tquery fails;\n> \tSET query_timeout=0;\n> \tCOMMIT WORK;\n>\n> In this case, with our current code, the first SET is done, but the\n> second is ignored.\n\nGiven appropriate functionality, you could rewrite this thus:\n\nBEGIN WORK;\nSET FOR THIS TRANSACTION ONLY query_timeout=20;\nquery;\nCOMMIT WORK;\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 8 Apr 2002 12:28:18 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
    "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n>\n> Hiroshi Inoue wrote:\n> > > >\n> > > > I feel we should just do it. Yeah, there might be some corner cases\n> > > > where it's not the ideal behavior; but you haven't convinced me that\n> > > > there are more cases where it's bad than where it's good. You sure\n> > > > haven't convinced me that it's worth making SET's behavior\n> > > > nigh-unpredictable-without-a-manual, which is what\n> per-variable behavior\n> > > > would be.\n> > >\n> > > I am with Tom on this one. (Nice to see he is now arguing on\n> my side.)\n> >\n> > I vote against you. If a variable is local to the session, you\n> > can change it as you like without bothering any other user(session).\n> > Automatic resetting of the variables is rather confusing to me.\n>\n> I don't see how this relates to other users. All SET commands that can\n> be changed in psql are per backend, as far as I remember.\n\nSorry for my poor explanation. What I meant is that *Rollback*\nis to cancel the changes made to SQL-data or schemas\nnot to put back the variables which are local to the session.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Tue, 9 Apr 2002 01:32:52 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
    "msg_contents": "Hiroshi Inoue wrote:\n> > > I vote against you. If a variable is local to the session, you\n> > > can change it as you like without bothering any other user(session).\n> > > Automatic resetting of the variables is rather confusing to me.\n> >\n> > I don't see how this relates to other users. All SET commands that can\n> > be changed in psql are per backend, as far as I remember.\n> \n> Sorry for my poor explanation. What I meant is that *Rollback*\n> is to cancel the changes made to SQL-data or schemas\n> not to put back the variables which are local to the session.\n\nOK, got it, so if someone makes a session change while in a transaction,\nand the transaction aborts, should the SET be rolled back too? If not,\nthen we should honor the SET's that happen after the transaction aborts.\nHowever, Tom's schema changes require a db connection, so it is hard to\nhonor the SET's once the transaction aborts. That is how we got to\naborting all SET's in an aborted transaction.\n\n-- \n Bruce Momjian                        | http://candle.pha.pa.us\n pgman@candle.pha.pa.us               | (610) 853-3000\n + If your life is a hard drive,      | 830 Blythe Avenue\n + Christ can be your backup.         | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 12:35:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> Jan Wieck <janwieck@yahoo.com> writes:\n> > Is an invalid search path really that critical (read security\n> > issue)?\n>\n> It's not a security issue (unless the OID counter wraps around soon\n> enough to let someone else get assigned the same OID for a namespace).\n> But it could be pretty annoying anyway, because the front element of\n> the search path is also the default creation target namespace. You\n> could create a bunch of tables and then be unable to access them later\n> for lack of a way to name them.\n>\n> I'm not really excited about establishing positive interlocks across\n> backends to prevent DROPping a namespace that someone else has in their\n> search path ... but I do want to handle the simple local-effect cases,\n> like rollback of creation of a namespace.\n\n How are namespaces different from any other objects? Can I\n specify a foreign key reference to a table that was there at\n some time in the past? Can I create a view using functions\n that have been there last week? Sure, I can break those\n objects once created by dropping the underlying stuff, but\n that's another issue.\n\n If namespace dropping allows for creation of objects that\n cannot be dropped afterwards any more, I would call that a\n bug or design flaw, which has to be fixed. Just preventing an\n invalid search path resulting from a rollback operation like\n in your example is totally insufficient.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Mon, 8 Apr 2002 12:57:13 -0400 (EDT)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> But that is not a shortcoming of the SET command. The problem is that the\n>> system does not accept any commands after one command has failed in a\n>> transaction even though it could usefully do so.\n\nIn a situation where the reason for failure was a syntax error, it seems\nto me quite dangerous to try to execute any further commands; you may\nnot be executing what the user thought he typed. So I'm leery of any\nproposals that we allow SETs to execute in transaction-abort state,\neven if the implementation could support it.\n\n\n> Uh, yes, we could allow the second SET to succeed even in an aborted\n> transaction, but Tom says his schema stuff will not work in an aborted\n> state, so Tom/I figured the only other option was rollback of the first\n> SET.\n\nThe search_path case is the main reason why I'm intent on changing\nthe behavior of SET; without that, I'd just leave well enough alone.\nPossibly some will suggest that search_path shouldn't be a SET variable\nbecause it needs to be able to be rolled back on error. But what else\nshould it be? It's definitely per-session status, not persistent\ndatabase state. I don't much care for the notion of having SET act\ndifferently for some variables than others, or requiring people to use\na different command for some variables than others.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Apr 2002 13:03:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> Sorry for my poor explanation. What I meant is that *Rollback*\n> is to cancel the changes made to SQL-data or schemas\n> not to put back the variables which are local to the session.\n\nUh, why? Seems to me you are asserting as a given exactly the\npoint that is under debate. Let me give a counterexample:\n\n\tBEGIN;\n\tCREATE TEMP TABLE foo;\n\tsomething-erroneous;\n\tEND;\n\nThe creation of the temp table will be rolled back on error, no?\nNow the temp table is certainly session local --- ideally our\nimplementation would not let any other session see any trace of\nit at all. (In practice it is visible if you know where to look,\nbut surely that's just an implementation artifact.)\n\nIf you argue that SETs should not roll back because they are\nsession-local, it seems to me that a logical consequence of that\nposition is that operations on temp tables should not roll back\neither ... and that can hardly be deemed desirable.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Apr 2002 13:09:44 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> If namespace dropping allows for creation of objects that\n> cannot be dropped afterwards any more, I would call that a\n> bug or design flaw, which has to be fixed.\n\nI will not require schema support to wait upon the existence of\ndependency checking, if that's what you're suggesting.\n\nThis does suggest an interesting hole in our thoughts so far about\ndependency checking. If someone is, say, trying to drop type T,\nit's not really sufficient to verify that there are no existing\ntables or functions referencing type T. What of created but as yet\nuncommitted objects? Seems like a full defense would require being\nable to obtain a lock on the object to be dropped, while creators\nof references must obtain some conflicting lock that they hold until\nthey commit. Right now we only have locks on tables ... seems like\nthat's not sufficient.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Apr 2002 13:20:21 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> \n> Hiroshi Inoue wrote:\n> > > I am confused. Above you state you want SET QUERY_TIMEOUT to be\n> > > per-query. I assume you mean that the timeout applies for \n> only the next\n> > > query and is turned off after that.\n> > \n> > Hmm there seems a misunderstanding between you and I but I\n> > don't see what it is. Does *SET QUERY_TIMEOUT* start a timer in\n> > your scenario ? In my scenario *SET QUERY_TIMEOUT* only\n> > registers the timeout value for subsequent queries.\n> \n> SET QUERY_TIMEOUT does not start a timer. It makes sure each query\n> after the SET is timed and automatically canceled if the single query\n> exceeds the timeout interval.\n\nOK using your example, one by one\n\n \tBEGIN WORK;\n \tSET query_timeout=20;\n \tquery fails;\n \tSET query_timeout=0;\n\nFor what the SET was issued ?\nWhat command is issued if the query was successful ?\n\n \tCOMMIT WORK;\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 9 Apr 2002 02:56:08 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > Sorry for my poor explanation. What I meant is that *Rollback*\n> > is to\n\n>> cancel the changes made to SQL-data or schemas\n\nThis line is a quote from SQL99 not my creation.\n \n> > not to put back the variables which are local to the session.\n> \n> Uh, why? Seems to me you are asserting as a given exactly the\n> point that is under debate. Let me give a counterexample:\n> \n> \tBEGIN;\n> \tCREATE TEMP TABLE foo;\n> \tsomething-erroneous;\n> \tEND;\n> \n> The creation of the temp table will be rolled back on error, no?\n\n??? TEMP TABLE is a SQL-data not a variable.\nI don't think rolling back SETs makes things plain.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 9 Apr 2002 02:56:27 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
    "msg_contents": "Hiroshi Inoue wrote:\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > \n> > Hiroshi Inoue wrote:\n> > > > I am confused. Above you state you want SET QUERY_TIMEOUT to be\n> > > > per-query. I assume you mean that the timeout applies for \n> > only the next\n> > > > query and is turned off after that.\n> > > \n> > > Hmm there seems a misunderstanding between you and I but I\n> > > don't see what it is. Does *SET QUERY_TIMEOUT* start a timer in\n> > > your scenario ? In my scenario *SET QUERY_TIMEOUT* only\n> > > registers the timeout value for subsequent queries.\n> > \n> > SET QUERY_TIMEOUT does not start a timer. It makes sure each query\n> > after the SET is timed and automatically canceled if the single query\n> > exceeds the timeout interval.\n> \n> OK using your example, one by one\n> \n> \tBEGIN WORK;\n> \tSET query_timeout=20;\n> \tquery fails;\n> \tSET query_timeout=0;\n> \n> For what the SET was issued ?\n> What command is issued if the query was successful ?\n> \n> \tCOMMIT WORK;\n\nHere, SET should apply only to the query labeled \"query fails\". However,\nright now, because the query failed, the second SET would not be seen,\nand the timeout would apply to all remaining queries in the session.\n\n\n-- \n Bruce Momjian                        | http://candle.pha.pa.us\n pgman@candle.pha.pa.us               | (610) 853-3000\n + If your life is a hard drive,      | 830 Blythe Avenue\n + Christ can be your backup.         | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 14:05:03 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
    "msg_contents": "> -----Original Message-----\n> From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > \n> > OK using your example, one by one\n> > \n> > \tBEGIN WORK;\n> > \tSET query_timeout=20;\n> > \tquery fails;\n> > \tSET query_timeout=0;\n> > \n> > For what the SET was issued ?\n> > What command is issued if the query was successful ?\n> > \n> > \tCOMMIT WORK;\n> \n> Here, SET should apply only to the query labeled \"query fails\". \n\nWhy should the SET query_timeout = 0 command be issued\nonly when the query failed ? Is it a JDBC driver's requirement\nor some applications' requirements which uses the JDBC driver ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 9 Apr 2002 06:14:46 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
    "msg_contents": "Hiroshi Inoue wrote:\n> > -----Original Message-----\n> > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > > \n> > > OK using your example, one by one\n> > > \n> > > \tBEGIN WORK;\n> > > \tSET query_timeout=20;\n> > > \tquery fails;\n> > > \tSET query_timeout=0;\n> > > \n> > > For what the SET was issued ?\n> > > What command is issued if the query was successful ?\n> > > \n> > > \tCOMMIT WORK;\n> > \n> > Here, SET should apply only to the query labeled \"query fails\". \n> \n> Why should the SET query_timeout = 0 command be issued\n> only when the query failed ? Is it a JDBC driver's requirement\n> or some applications' requirements which uses the JDBC driver ?\n\nThey want the timeout for only the one statement, so they have to set it\nto non-zero before the statement, and to zero after the statement. In\nour current code, if the query fails, the setting to zero is ignored,\nmeaning all following queries have the timeout, even ones outside that\ntransaction.\n\n-- \n Bruce Momjian                        | http://candle.pha.pa.us\n pgman@candle.pha.pa.us               | (610) 853-3000\n + If your life is a hard drive,      | 830 Blythe Avenue\n + Christ can be your backup.         | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 17:15:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > > -----Original Message-----\n> > > From: Bruce Momjian [mailto:pgman@candle.pha.pa.us]\n> > > >\n> > > > OK using your example, one by one\n> > > >\n> > > > BEGIN WORK;\n> > > > SET query_timeout=20;\n> > > > query fails;\n> > > > SET query_timeout=0;\n> > > >\n> > > > For what the SET was issued ?\n> > > > What command is issued if the query was successful ?\n> > > >\n> > > > COMMIT WORK;\n> > >\n> > > Here, SET should only to the query labeled \"query fails\".\n> >\n> > Why should the SET query_timeout = 0 command be issued\n> > only when the query failed ? Is it a JDBC driver's requirement\n> > or some applications' requirements which uses the JDBC driver ?\n> \n> They want the timeout for only the one statement, so they have to set it\n> to non-zero before the statement, and to zero after the statement.\n\nDoes setQueryTimeout() issue a corresponding SET QUERY_TIMEOUT\ncommand immediately in the scenario ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 09 Apr 2002 08:38:31 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > > Why should the SET query_timeout = 0 command be issued\n> > > only when the query failed ? Is it a JDBC driver's requirement\n> > > or some applications' requirements which uses the JDBC driver ?\n> > \n> > They want the timeout for only the one statement, so they have to set it\n> > to non-zero before the statement, and to zero after the statement.\n> \n> Does setQueryTimeout() issue a corresponding SET QUERY_TIMEOUT\n> command immediately in the scenario ?\n\nYes. If we don't make the SET rollback-able, we have to do all sorts of\ntricks in jdbc so aborted transactions get the proper SET value.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 20:35:48 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
    "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > > > Why should the SET query_timeout = 0 command be issued\n> > > > only when the query failed ? Is it a JDBC driver's requirement\n> > > > or some applications' requirements which uses the JDBC driver ?\n> > >\n> > > They want the timeout for only the one statement, so they have to set it\n> > > to non-zero before the statement, and to zero after the statement.\n> >\n> > Does setQueryTimeout() issue a corresponding SET QUERY_TIMEOUT\n> > command immediately in the scenario ?\n> \n> Yes. If we don't make the SET rollback-able, we have to do all sorts of\n> tricks in jdbc so aborted transactions get the proper SET value.\n\nIn my scenario, setQueryTimeout() only saves the timeout\nvalue and issues the corresponding SET QUERY_TIMEOUT command\nimmediately before each query if necessary.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 09 Apr 2002 09:44:01 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
    "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > > > > Why should the SET query_timeout = 0 command be issued\n> > > > > only when the query failed ? Is it a JDBC driver's requirement\n> > > > > or some applications' requirements which uses the JDBC driver ?\n> > > >\n> > > > They want the timeout for only the one statement, so they have to set it\n> > > > to non-zero before the statement, and to zero after the statement.\n> > >\n> > > Does setQueryTimeout() issue a corresponding SET QUERY_TIMEOUT\n> > > command immediately in the scenario ?\n> > \n> > Yes. If we don't make the SET rollback-able, we have to do all sorts of\n> > tricks in jdbc so aborted transactions get the proper SET value.\n> \n> In my scenario, setQueryTimeout() only saves the timeout\n> value and issues the corresponding SET QUERY_TIMEOUT command\n> immediately before each query if necessary.\n\nYes, we can do that, but it requires an interface like odbc or jdbc. It\nis hard to use for libpq or psql.\n\n-- \n Bruce Momjian                        | http://candle.pha.pa.us\n pgman@candle.pha.pa.us               | (610) 853-3000\n + If your life is a hard drive,      | 830 Blythe Avenue\n + Christ can be your backup.         | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 20:45:46 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
    "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n\n> > > > > They want the timeout for only the one statement, so they have to set it\n> > > > > to non-zero before the statement, and to zero after the statement.\n> > > >\n> > > > Does setQueryTimeout() issue a corresponding SET QUERY_TIMEOUT\n> > > > command immediately in the scenario ?\n> > >\n> > > Yes. If we don't make the SET rollback-able, we have to do all sorts of\n> > > tricks in jdbc so aborted transactions get the proper SET value.\n> >\n> > In my scenario, setQueryTimeout() only saves the timeout\n> > value and issues the corresponding SET QUERY_TIMEOUT command\n> > immediately before each query if necessary.\n> \n> Yes, we can do that,\n\nSomething like my scenario is needed because there could be\nmore than 1 statement objects with relatively different\nquery timeout at the same time in theory.\n\n> but it requires an interface like odbc or jdbc. It\n> is hard to use for libpq or psql.\n\nWe shouldn't expect too much on psql in the first place\nbecause it isn't procedural. I don't expect too much on\nlibpq either because it's a low level interface. However\napplications which use libpq could do like odbc or jdbc\ndoes. Or libpq could also provide a function which encap-\nsulates the query timeout handling if necessary.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 09 Apr 2002 10:06:12 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
    "msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > > Bruce Momjian wrote:\n> \n> > > > > > They want the timeout for only the one statement, so they have to set it\n> > > > > > to non-zero before the statement, and to zero after the statement.\n> > > > >\n> > > > > Does setQueryTimeout() issue a corresponding SET QUERY_TIMEOUT\n> > > > > command immediately in the scenario ?\n> > > >\n> > > > Yes. If we don't make the SET rollback-able, we have to do all sorts of\n> > > > tricks in jdbc so aborted transactions get the proper SET value.\n> > >\n> > > In my scenario, setQueryTimeout() only saves the timeout\n> > > value and issues the corresponding SET QUERY_TIMEOUT command\n> > > immediately before each query if necessary.\n> > \n> > Yes, we can do that,\n> \n> Something like my scenario is needed because there could be\n> more than 1 statement objects with relatively different\n> query timeout at the same time in theory.\n\nYes, if you want multiple timeouts, you clearly could go in that\ndirection. Right now, we are considering only single-statement timing\nand no one has asked for multiple timers.\n\n> \n> > but it requires an interface like odbc or jdbc. It\n> > is hard to use for libpq or psql.\n> \n> We shouldn't expect too much on psql in the first place\n> because it isn't procedural. I don't expect too much on\n> libpq either because it's a low level interface. However\n> applications which use libpq could do like odbc or jdbc\n> does. Or libpq could also provide a function which encap-\n> sulates the query timeout handling if necessary.\n\nI certainly would like _something_ that works in psql/libpq, and the\nsimple SET QUERY_TIMEOUT does work for them. More sophisticated stuff\nprobably should be done in the application or interface.\n\n\n-- \n Bruce Momjian                        | http://candle.pha.pa.us\n pgman@candle.pha.pa.us               | (610) 853-3000\n + If your life is a hard drive,      | 830 Blythe Avenue\n + Christ can be your backup.         | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 21:10:00 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
    "msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > Hiroshi Inoue wrote:\n> > > > Bruce Momjian wrote:\n> >\n> > > > > > > They want the timeout for only the one statement, so they have to set it\n> > > > > > > to non-zero before the statement, and to zero after the statement.\n> > > > > >\n> > > > > > Does setQueryTimeout() issue a corresponding SET QUERY_TIMEOUT\n> > > > > > command immediately in the scenario ?\n> > > > >\n> > > > > Yes. If we don't make the SET rollback-able, we have to do all sorts of\n> > > > > tricks in jdbc so aborted transactions get the proper SET value.\n> > > >\n> > > > In my scenario, setQueryTimeout() only saves the timeout\n> > > > value and issues the corresponding SET QUERY_TIMEOUT command\n> > > > immediately before each query if necessary.\n> > >\n> > > Yes, we can do that,\n> >\n> > Something like my scenario is needed because there could be\n> > more than 1 statement objects with relatively different\n> > query timeout at the same time in theory.\n> \n> Yes, if you want multiple timeouts, you clearly could go in that\n> direction. Right now, we are considering only single-statement timing\n> and no one has asked for multiple timers.\n\nI don't ask multiple timers. ODBC driver would be able\nto handle multiple timeouts without multiple timers in\nmy scenario.\n\n> > > but it requires an interface like odbc or jdbc. It\n> > > is hard to use for libpq or psql.\n> >\n> > We shouldn't expect too much on psql in the first place\n> > because it isn't procedural. I don't expect too much on\n> > libpq either because it's a low level interface. However\n> > applications which use libpq could do like odbc or jdbc\n> > does. Or libpq could also provide a function which encap-\n> > sulates the query timeout handling if necessary.\n> \n> I certainly would like _something_ that works in psql/libpq,\n\nPlease don't make things complicated by sticking to such\nlow level interfaces.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 09 Apr 2002 10:58:10 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > Yes, if you want multiple timeouts, you clearly could go in that\n> > direction. Right now, we are considering only single-statement timing\n> > and no one has asked for multiple timers.\n> \n> I don't ask multiple timers. ODBC driver would be able\n> to handle multiple timeouts without multiple timers in\n> my scenario.\n\nI understand.\n\n> > > > but it requires an interface like odbc or jdbc. It\n> > > > is hard to use for libpq or psql.\n> > >\n> > > We shouldn't expect too much on psql in the first place\n> > > because it isn't procedural. I don't expect too much on\n> > > libpq either because it's a low level interface. However\n> > > applications which use libpq could do like odbc or jdbc\n> > > does. Or libpq could also provide a function which encap-\n> > > sulates the query timeout handling if necessary.\n> > \n> > I certainly would like _something_ that works in psql/libpq,\n> \n> Please don't make things complicated by sticking to such\n> low level interfaces.\n\nOK, what is your proposal?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 8 Apr 2002 21:59:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Mon, Apr 08, 2002 at 12:28:18PM -0400, Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > OK, probably good time for summarization. First, consider this:\n> >\n> > \tBEGIN WORK;\n> > \tSET something;\n> > \tquery fails;\n> > \tSET something else;\n> > \tCOMMIT WORK;\n> >\n> > Under current behavior, the first SET is honored, while the second is\n> > ignored because the transaction is in ABORT state. I can see no logical\n> > reason for this behavior.\n> \n> But that is not a shortcoming of the SET command. The problem is that the\n> system does not accept any commands after one command has failed in a\n> transaction even though it could usefully do so.\n> \n> > The jdbc timeout issue is this:\n> >\n> >\n> > \tBEGIN WORK;\n> > \tSET query_timeout=20;\n> > \tquery fails;\n> > \tSET query_timeout=0;\n> > \tCOMMIT WORK;\n> >\n> > In this case, with our current code, the first SET is done, but the\n> > second is ignored.\n> \n> Given appropriate functionality, you could rewrite this thus:\n> \n> BEGIN WORK;\n> SET FOR THIS TRANSACTION ONLY query_timeout=20;\n> query;\n> COMMIT WORK;\n\n If I compare Peter's and Bruce's examples the Peter is still winner :-)\n\n Sorry, but a code with \"set-it-after-abort\" seems ugly.\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 9 Apr 2002 09:54:56 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Mon, Apr 08, 2002 at 01:03:41PM -0400, Tom Lane wrote:\n\n> The search_path case is the main reason why I'm intent on changing\n> the behavior of SET; without that, I'd just leave well enough alone.\n\n Is there more variables like \"search_path\"? If not, I unsure if one\n item is good consideration for change others things.\n\n> Possibly some will suggest that search_path shouldn't be a SET variable\n> because it needs to be able to be rolled back on error. But what else\n> should it be? It's definitely per-session status, not persistent\n\n It's good point. Why not make it more transparent? You want\n encapsulate it to standard and current SET statement, but if it's\n something different why not use for it different statement?\n\n SET SESSION search_path TO 'something';\n \n (...or something other)\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n",
"msg_date": "Tue, 9 Apr 2002 10:19:33 +0200",
"msg_from": "Karel Zak <zakkr@zf.jcu.cz>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
    "msg_contents": "Heh pardon me but...\n\nI was under the impression that for a transaction either all commands \nsucceed or all commands fail, at least according to everything I've ever \nread. So following that, all SETs done within the scope of a \nBEGIN/COMMIT pair should only take effect if the whole set finishes; if \nnot, the system should roll back to the way it was before the BEGIN.\n\nI might be missing something though, I just got onto the list and there \nmight be other parts of the thread I missed....\n\nKarel Zak wrote:\n\n>On Mon, Apr 08, 2002 at 01:03:41PM -0400, Tom Lane wrote:\n>\n>>The search_path case is the main reason why I'm intent on changing\n>>the behavior of SET; without that, I'd just leave well enough alone.\n>>\n>\n> Is there more variables like \"search_path\"? If not, I unsure if one\n> item is good consideration for change others things.\n>\n>>Possibly some will suggest that search_path shouldn't be a SET variable\n>>because it needs to be able to be rolled back on error. But what else\n>>should it be? It's definitely per-session status, not persistent\n>>\n>\n> It's good point. Why not make it more transparent? You want\n> encapsulate it to standard and current SET statement, but if it's\n> something different why not use for it different statement?\n>\n> SET SESSION search_path TO 'something';\n> \n> (...or something other)\n>\n> Karel\n>\n\n\n",
"msg_date": "Tue, 09 Apr 2002 01:47:53 -0700",
"msg_from": "Michael Loftis <mloftis@wgops.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Karel Zak <zakkr@zf.jcu.cz> writes:\n> It's good point. Why not make it more transparent? You want\n> encapsulate it to standard and current SET statement, but if it's\n> something different why not use for it different statement?\n\n> SET SESSION search_path TO 'something';\n\nBut a plain SET is also setting the value for the session. What's\nthe difference? Why should a user remember that he must use this\nsyntax for search_path, and not for any other variables (or perhaps\nonly one or two other ones, further down the road)?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 09 Apr 2002 09:20:22 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
    "msg_contents": "Michael Loftis writes:\n\n> I was under the impression that for a transaction either all commands\n> succeed or all commands fail, at least according to everything I've ever\n> read.\n\nThat's an urban legend.\n\nA transaction guarantees (among other things) that all modifications to\nthe database within the transaction are done atomically (either all are done or\nnone). This does not extend to the commands that supposedly initiate such\nmodifications.\n\nTake out a database other than PostgreSQL and do\n\nBEGIN; -- or whatever they use; might be implicit\nINSERT INTO existing_table VALUES ('legal value');\nbarf;\nCOMMIT;\n\nThe INSERT will most likely succeed. The reason is that \"barf\" does not\nmodify or access the data in the database, so it does not affect the\ntransactional integrity of the database.\n\nWe are trying to make the same argument for SET. SET does not modify the\ndatabase, so it doesn't have to fall under transaction control.\n\n-- \nPeter Eisentraut   peter_e@gmx.net\n\n",
"msg_date": "Tue, 9 Apr 2002 14:07:14 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Michael Loftis writes:\n> \n> > I was under the impression that for a transaction either all commands\n> > succeed or all commands fail, atleast according to everything I've ever\n> > read.\n> \n> That's an urban legend.\n> \n> A transaction guarantees (among other things) that all modifications to\n> the database with the transaction are done atomicly (either all or done or\n> none). This does not extend to the commands that supposedly initiate such\n> modifications.\n> \n> Take out a database other than PostgreSQL and do\n> \n> BEGIN; -- or whatever they use; might be implicit\n> INSERT INTO existing_table ('legal value');\n> barf;\n> COMMIT;\n> \n> The INSERT will most likely succeed. The reason is that \"barf\" does not\n> modify or access the data in the database, so it does not affect the\n> transactional integrity of the database.\n\nEww, we do fail that test.\n\n> We are trying to make the same argument for SET. SET does not modify the\n> database, so it doesn't have to fall under transaction control.\n\nOK, we have three possibilities:\n\n\to All SETs are honored in an aborted transaction\n\to No SETs are honored in an aborted transaction\n\to Some SETs are honored in an aborted transaction (current)\n\nI think the problem is our current behavior. I don't think anyone can\nsay it is correct (only honor SET before the transaction reaches\nabort state). Whether we want the first or second is the issue, I think.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 14:21:23 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> OK, we have three possibilities:\n> \n> o All SETs are honored in an aborted transaction\n> o No SETs are honored in an aborted transaction\n> o Some SETs are honored in an aborted transaction (current)\n> \n> I think the problem is our current behavior. I don't think anyone can\n> say our it is correct (only honor SET before the transaction reaches\n> abort state). Whether we want the first or second is the issue, I think.\n\nI think the current state is not that bad; at least\nit is better than the first. I don't think it's a \n*should be* kind of thing and we shouldn't stick \nto it any longer.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 10 Apr 2002 08:33:19 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Karel Zak <zakkr@zf.jcu.cz> writes:\n> > It's good point. Why not make it more transparent? You want\n> > encapsulate it to standard and current SET statement, but if it's\n> > something different why not use for it different statement?\n> \n> > SET SESSION search_path TO 'something';\n> \n> But a plain SET is also setting the value for the session. What's\n> the difference? Why should a user remember that he must use this\n> syntax for search_path, and not for any other variables (or perhaps\n> only one or two other ones, further down the road)?\n\nISTM what Karel meant is that if the search_path is a\nmuch more significant variable than others you had better\nexpress the difference using a different statement.\nI agree with Karel though I don't know how significant\nthe variable is.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 10 Apr 2002 08:42:59 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> Bruce Momjian wrote:\n> >\n> > OK, we have three possibilities:\n> >\n> > o All SETs are honored in an aborted transaction\n> > o No SETs are honored in an aborted transaction\n> > o Some SETs are honored in an aborted transaction (current)\n> >\n> > I think the problem is our current behavior. I don't think anyone can\n> > say our it is correct (only honor SET before the transaction reaches\n> > abort state). Whether we want the first or second is the issue, I think.\n> \n> I think the current state is not that bad at least\n> is better than the first.\n\nOops does the first mean rolling back the variables on abort ?\nIf so I made a mistake. The current is better than the second.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 10 Apr 2002 08:51:55 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> Hiroshi Inoue wrote:\n> > \n> > Bruce Momjian wrote:\n> > >\n> > > OK, we have three possibilities:\n> > >\n> > > o All SETs are honored in an aborted transaction\n> > > o No SETs are honored in an aborted transaction\n> > > o Some SETs are honored in an aborted transaction (current)\n> > >\n> > > I think the problem is our current behavior. I don't think anyone can\n> > > say our it is correct (only honor SET before the transaction reaches\n> > > abort state). Whether we want the first or second is the issue, I think.\n> > \n> > I think the current state is not that bad at least\n> > is better than the first.\n> \n> Oops does the first mean rolling back the variables on abort ?\n> If so I made a mistake. The current is better than the second.\n\nThe second means all SET's are rolled back on abort.\n \n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 20:13:21 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Hiroshi Inoue wrote:\n> > >\n> > > Bruce Momjian wrote:\n> > > >\n> > > > OK, we have three possibilities:\n> > > >\n> > > > o All SETs are honored in an aborted transaction\n> > > > o No SETs are honored in an aborted transaction\n> > > > o Some SETs are honored in an aborted transaction (current)\n> > > >\n> > > > I think the problem is our current behavior. I don't think anyone can\n> > > > say our it is correct (only honor SET before the transaction reaches\n> > > > abort state). Whether we want the first or second is the issue, I think.\n> > >\n> > > I think the current state is not that bad at least\n> > > is better than the first.\n> >\n> > Oops does the first mean rolling back the variables on abort ?\n> > If so I made a mistake. The current is better than the second.\n> \n> The second means all SET's are rolled back on abort.\n\nI see.\nBTW what variables are rolled back on abort currently ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 10 Apr 2002 09:25:50 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > > Oops does the first mean rolling back the variables on abort ?\n> > > If so I made a mistake. The current is better than the second.\n> > \n> > The second means all SET's are rolled back on abort.\n> \n> I see.\n> BTW what varibles are rolled back on abort currently ?\n\nCurrently, none, though the SET commands after the query aborts are\nignored, which is effectively the same as rolling them back.\n\n\tBEGIN WORK;\n\tSET x=3;\n\tfailed query;\n\tSET x=5;\n\tCOMMIT;\n\nIn this case, x=3 at end of query.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 20:27:01 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > > > Oops does the first mean rolling back the variables on abort ?\n> > > > If so I made a mistake. The current is better than the second.\n> > >\n> > > The second means all SET's are rolled back on abort.\n> >\n> > I see.\n> > BTW what varibles are rolled back on abort currently ?\n> \n> Currently, none,\n\n??? What do you mean by \n o Some SETs are honored in an aborted transaction (current)\n?\nIs the current state different from\n o All SETs are honored in an aborted transaction\n?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 10 Apr 2002 09:46:43 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> Bruce Momjian wrote:\n> > \n> > Hiroshi Inoue wrote:\n> > > > > Oops does the first mean rolling back the variables on abort ?\n> > > > > If so I made a mistake. The current is better than the second.\n> > > >\n> > > > The second means all SET's are rolled back on abort.\n> > >\n> > > I see.\n> > > BTW what varibles are rolled back on abort currently ?\n> > \n> > Currently, none,\n> \n> ??? What do you mean by \n> o Some SETs are honored in an aborted transaction (current)\n> ?\n> Is the current state different from\n> o All SETs are honored in an aborted transaction\n> ?\n\nIn the case of:\n\n\tBEGIN WORK;\n\tSET x=1;\n\tbad query that aborts transaction;\n\tSET x=2;\n\tCOMMIT WORK;\n\nOnly the first SET is done, so at the end, x = 1. If all SET's were\nhonored, x = 2. If no SETs in an aborted transaction were honored, x\nwould equal whatever it was before the BEGIN WORK above.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 20:51:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > Bruce Momjian wrote:\n> > >\n> > > Hiroshi Inoue wrote:\n> > > > > > Oops does the first mean rolling back the variables on abort ?\n> > > > > > If so I made a mistake. The current is better than the second.\n> > > > >\n> > > > > The second means all SET's are rolled back on abort.\n> > > >\n> > > > I see.\n> > > > BTW what varibles are rolled back on abort currently ?\n> > >\n> > > Currently, none,\n> >\n> > ??? What do you mean by\n> > o Some SETs are honored in an aborted transaction (current)\n> > ?\n> > Is the current state different from\n> > o All SETs are honored in an aborted transaction\n> > ?\n> \n> In the case of:\n> \n> BEGIN WORK;\n> SET x=1;\n> bad query that aborts transaction;\n> SET x=2;\n> COMMIT WORK;\n> \n> Only the first SET is done, so at the end, x = 1. If all SET's were\n> honored, x = 2. If no SETs in an aborted transaction were honored, x\n> would equal whatever it was before the BEGIN WORK above.\n\nIMHO\n o No SETs are honored in an aborted transaction(current)\n\nThe first SET isn't done in an aborted transaction.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 10 Apr 2002 11:46:04 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > > ??? What do you mean by\n> > > o Some SETs are honored in an aborted transaction (current)\n> > > ?\n> > > Is the current state different from\n> > > o All SETs are honored in an aborted transaction\n> > > ?\n> > \n> > In the case of:\n> > \n> > BEGIN WORK;\n> > SET x=1;\n> > bad query that aborts transaction;\n> > SET x=2;\n> > COMMIT WORK;\n> > \n> > Only the first SET is done, so at the end, x = 1. If all SET's were\n> > honored, x = 2. If no SETs in an aborted transaction were honored, x\n> > would equal whatever it was before the BEGIN WORK above.\n> \n> IMHO\n> o No SETs are honored in an aborted transaction(current)\n> \n> The first SET isn't done in an aborted transaction.\n\nWell, yes, when I say aborted transaction, I mean the entire\ntransaction, not just the part after the abort happens. All non-SET\ncommands in the transaction are rolled back already. I can't think of a\ngood argument for our current behavior.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 23:48:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> > > ??? What do you mean by\n> > > o Some SETs are honored in an aborted transaction (current)\n> > > ?\n> > > Is the current state different from\n> > > o All SETs are honored in an aborted transaction\n> > > ?\n> > \n> > In the case of:\n> > \n> > BEGIN WORK;\n> > SET x=1;\n> > bad query that aborts transaction;\n> > SET x=2;\n> > COMMIT WORK;\n> > \n> > Only the first SET is done, so at the end, x = 1. If all SET's were\n> > honored, x = 2. If no SETs in an aborted transaction were honored, x\n> > would equal whatever it was before the BEGIN WORK above.\n> \n> IMHO\n> o No SETs are honored in an aborted transaction(current)\n> \n> The first SET isn't done in an aborted transaction.\n\nI guess my point is that with our current code, there is a distinction\nthat SETs are executed before a transaction aborts, but are ignored\nafter a transaction aborts, even if the SETs are in the same\ntransaction.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 9 Apr 2002 23:50:28 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Take out a database other than PostgreSQL and do\n\n> BEGIN; -- or whatever they use; might be implicit\n> INSERT INTO existing_table ('legal value');\n> barf;\n> COMMIT;\n\n> The INSERT will most likely succeed. The reason is that \"barf\" does not\n> modify or access the data in the database, so it does not affect the\n> transactional integrity of the database.\n\nNo; this example is completely irrelevant to our discussion. The reason\nthat (some) other DBMSes will allow the INSERT to take effect in the\nabove case is that they have savepoints, and the failure of the \"barf\"\ncommand only rolls back to the savepoint not to the start of the\ntransaction. It's a generally-acknowledged shortcoming that we don't\nhave savepoints ... but this has no relevance to the question of whether\nSETs should be rolled back or not. If we did have savepoints then I'd\nbe saying that SETs should roll back to a savepoint just like everything\nelse.\n\nPlease note that even in those other databases, if one replaces the\nCOMMIT with ROLLBACK in the above scenario, the effects of the INSERT\n*will* roll back. Transpose this into current Postgres, and replace\nINSERT with SET, and the effects do *not* roll back. How is that a\ngood idea?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Apr 2002 00:13:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Take out a database other than PostgreSQL and do\n> \n> > BEGIN; -- or whatever they use; might be implicit\n> > INSERT INTO existing_table ('legal value');\n> > barf;\n> > COMMIT;\n> \n> > The INSERT will most likely succeed. The reason is that \"barf\" does not\n> > modify or access the data in the database, so it does not affect the\n> > transactional integrity of the database.\n> \n> No; this example is completely irrelevant to our discussion. The reason\n\nActually, we could probably prevent transaction abort on syntax(yacc)\nerrors, but the other errors like mistyped table names would be hard to\nprevent a rollback, so I guess we just roll back on any error.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 10 Apr 2002 00:22:56 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> In the case of:\n\n> \tBEGIN WORK;\n> \tSET x=1;\n> \tbad query that aborts transaction;\n> \tSET x=2;\n> \tCOMMIT WORK;\n\n> Only the first SET is done, so at the end, x = 1.\n\nPerhaps even more to the point:\n\n\tSET x=0;\n\tBEGIN;\n\tSET x=1;\n\tbad query;\n\tSET x=2;\n\tROLLBACK;\n\nNow x=1. How is this sensible?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Apr 2002 00:30:32 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Hiroshi Inoue wrote:\n> > > > ??? What do you mean by\n> > > > o Some SETs are honored in an aborted transaction (current)\n> > > > ?\n> > > > Is the current state different from\n> > > > o All SETs are honored in an aborted transaction\n> > > > ?\n> > >\n> > > In the case of:\n> > >\n> > > BEGIN WORK;\n> > > SET x=1;\n> > > bad query that aborts transaction;\n> > > SET x=2;\n> > > COMMIT WORK;\n> > >\n> > > Only the first SET is done, so at the end, x = 1. If all SET's were\n> > > honored, x = 2. If no SETs in an aborted transaction were honored, x\n> > > would equal whatever it was before the BEGIN WORK above.\n> >\n> > IMHO\n> > o No SETs are honored in an aborted transaction(current)\n> >\n> > The first SET isn't done in an aborted transaction.\n> \n> I guess my point is that with our current code, there is a distinction\n> that SETs are executed before a transaction aborts, but are ignored\n> after a transaction aborts, even if the SETs are in the same\n> transaction.\n\nNot only SET commands but also most commands are ignored\nafter a transaction aborts currently. SET commands are out\nof transactional control but it doesn't mean they are never\nignored (rejected).\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 10 Apr 2002 13:31:58 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Actually, we could probably prevent transaction abort on syntax(yacc)\n> errors, but the other errors like mistyped table names would be hard to\n> prevent a rollback, so I guess we just roll back on any error.\n\nI don't think that what we categorize as an error or not is very\nrelevant to the discussion, either. The real point is: should SET\nhave rollback behavior similar to other SQL commands, or not?\nIf we had savepoints, or ignorable syntax errors, or other frammishes\nthis question would still be the same.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Apr 2002 00:42:59 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "...\n> Please note that even in those other databases, if one replaces the\n> COMMIT with ROLLBACK in the above scenario, the effects of the INSERT\n> *will* roll back. Transpose this into current Postgres, and replace\n> INSERT with SET, and the effects do *not* roll back. How is that a\n> good idea?\n\nWell, as you should have concluded by now, \"good\" is not the same for\neveryone ;)\n\nFrankly, I've been happy with the current SET behavior, but would also\nbe willing to consider most of the alternatives which have been\nsuggested, including ones you have dismissed out of hand. Constraints\nwhich seem to have been imposed include:\n\n1) All commands starting with \"SET\" must have the same transactional\nsemantics. I'll agree that it might be nice for consistency, but imho is\nnot absolutely required.\n\n2) No commands which could be expected to start with \"SET\" will start\nwith some other keyword. If we do have \"set class\" commands which have\ndifferent transactional semantics, then we could explore alternative\nsyntax for specifying each category.\n\n3) \"SET\" commands must respect transactions. I'm happy with the idea\nthat these commands are out of band and take effect immediately. And if\nthey take effect even in the middle of a failing/failed transaction,\nthat is OK too. The surrounding code would have reset the values anyway,\nif necessary.\n\n\nI do have a concern about how to implement some of the SET commands if\nwe *do* respect transactional semantics. For example, SET TIME ZONE\nsaves the current value of an environment variable (if available), and\nwould need *at least* a \"before transaction\" and \"after transaction\nstarted\" pair of values. How would we propagate SET variables to\ntransaction-specific structures, clearing or resetting them later? Right\nnow these variables are pretty independent and can be accessed through\nglobal storage; having transactional semantics means that the\ninterdependencies between different variable types in the SET handlers\nmay increase.\n\n - Thomas\n",
"msg_date": "Tue, 09 Apr 2002 23:22:14 -0700",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Thomas Lockhart <lockhart@fourpalms.org> writes:\n> I do have a concern about how to implement some of the SET commands if\n> we *do* respect transactional semantics. For example, SET TIME ZONE\n> saves the current value of an environment variable (if available), and\n> would need *at least* a \"before transaction\" and \"after transaction\n> started\" pair of values.\n\nI intended for guc.c to manage this bookkeeping, thus freeing individual\nmodules from worrying about it. That would require us to transpose the\nlast few special-cased SET variables into generic GUC variables, but\nI consider that a Good Thing anyway.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 10 Apr 2002 10:13:51 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Thomas Lockhart writes:\n\n> 1) All commands starting with \"SET\" must have the same transactional\n> semantics. I'll agree that it might be nice for consistancy, but imho is\n> not absolutely required.\n\nThis rule is already violated anyway. SET TRANSACTION ISOLATION, SET\nCONSTRAINTS, SET SESSION AUTHORIZATION, and SET mostly_anything_else\nalready behave quite differently.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 11 Apr 2002 01:01:56 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "\nI have added this to the TODO list, with a question mark. Hope this is\nOK with everyone.\n\n o Abort SET changes made in aborted transactions (?)\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Thomas Lockhart <lockhart@fourpalms.org> writes:\n> > I do have a concern about how to implement some of the SET commands if\n> > we *do* respect transactional semantics. For example, SET TIME ZONE\n> > saves the current value of an environment variable (if available), and\n> > would need *at least* a \"before transaction\" and \"after transaction\n> > started\" pair of values.\n> \n> I intended for guc.c to manage this bookkeeping, thus freeing individual\n> modules from worrying about it. That would require us to transpose the\n> last few special-cased SET variables into generic GUC variables, but\n> I consider that a Good Thing anyway.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 23:58:07 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have added this to the TODO list, with a question mark. Hope this is\n> OK with everyone.\n\n> o Abort SET changes made in aborted transactions (?)\n\nActually, I was planning to make only search_path act that way, because\nof all the push-back I'd gotten on applying it to other SET variables.\nsearch_path really *has* to have it, but if there's anyone who agrees\nwith me about doing it for all SET vars, they didn't speak up :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Apr 2002 01:31:41 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "\n\nTom Lane wrote:\n\n>Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>\n>>I have added this to the TODO list, with a question mark. Hope this is\n>>OK with everyone.\n>>\n>\n>> o Abort SET changes made in aborted transactions (?)\n>>\n>\n>Actually, I was planning to make only search_path act that way, because\n>of all the push-back I'd gotten on applying it to other SET variables.\n>search_path really *has* to have it, but if there's anyone who agrees\n>with me about doing it for all SET vars, they didn't speak up :-(\n>\nI did and do, strongly. TRANSACTIONS are supposed to leave things as \nthey were before the BEGIN. It either all happens or it all doesn't \nhappen. If you need something inside of a transaction to go \nregardless then it shouldn't be within the transaction.\n\n>regards, tom lane\n>\n\n\n",
"msg_date": "Thu, 18 Apr 2002 01:56:43 -0700",
"msg_from": "Michael Loftis <mloftis@wgops.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Michael Loftis wrote:\n> \n> Tom Lane wrote:\n> \n> >Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >\n> >>I have added this to the TODO list, with a question mark. Hope this is\n> >>OK with everyone.\n> >>\n> >\n> >> o Abort SET changes made in aborted transactions (?)\n> >>\n> >\n> >Actually, I was planning to make only search_path act that way, because\n> >of all the push-back I'd gotten on applying it to other SET variables.\n> >search_path really *has* to have it, but if there's anyone who agrees\n> >with me about doing it for all SET vars, they didn't speak up :-(\n> >\n> I did and do, strongly. TRANSACTIONS are supposed to leave things as\n> they were before the BEGIN. It either all happens or it all doesnt'\n> happen. If you need soemthing inside of a transaction to go\n> irregardless then it shouldn't be within the transaction.\n\nOops is this issue still living ?\nI object to the TODO(why ????) strongly.\nPlease remove it from the TODO first and put it back\nto the neutral position.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Thu, 18 Apr 2002 18:43:43 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> Michael Loftis wrote:\n> > \n> > Tom Lane wrote:\n> > \n> > >Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >\n> > >>I have added this to the TODO list, with a question mark. Hope this is\n> > >>OK with everyone.\n> > >>\n> > >\n> > >> o Abort SET changes made in aborted transactions (?)\n> > >>\n> > >\n> > >Actually, I was planning to make only search_path act that way, because\n> > >of all the push-back I'd gotten on applying it to other SET variables.\n> > >search_path really *has* to have it, but if there's anyone who agrees\n> > >with me about doing it for all SET vars, they didn't speak up :-(\n> > >\n> > I did and do, strongly. TRANSACTIONS are supposed to leave things as\n> > they were before the BEGIN. It either all happens or it all doesnt'\n> > happen. If you need soemthing inside of a transaction to go\n> > irregardless then it shouldn't be within the transaction.\n> \n> Oops is this issue still living ?\n> I object to the TODO(why ????) strongly.\n> Please remove it from the TODO first and put it back\n> to the neutral position.\n\nOK, how is this:\n\n o Abort all or commit all SET changes made in an aborted transaction\n\nIs this neutral? I don't think our current behavior is defended by anyone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 18 Apr 2002 10:32:05 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have added this to the TODO list, with a question mark. Hope this is\n> > OK with everyone.\n> \n> > o Abort SET changes made in aborted transactions (?)\n> \n> Actually, I was planning to make only search_path act that way, because\n> of all the push-back I'd gotten on applying it to other SET variables.\n> search_path really *has* to have it, but if there's anyone who agrees\n> with me about doing it for all SET vars, they didn't speak up :-(\n\nWhoa, this all started because of timeout, which needs this fix too. We\ncertainly need something and I don't want to get into one of those \"we\ncan't all decide, so we do nothing\" situations.\n\nI have updated the TODO to:\n\n o Abort all or commit all SET changes made in an aborted transaction \n\nI don't think our current behavior is defended by anyone. Is abort all\nor commit all the only two choices? If so, we will take a vote and be\ndone with it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 18 Apr 2002 10:34:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have updated the TODO to:\n> o Abort all or commit all SET changes made in an aborted transaction \n> I don't think our current behavior is defended by anyone.\n\nHiroshi seems to like it ...\n\nHowever, \"commit SETs even after an error\" is most certainly NOT\nacceptable. It's not even sensible --- what if the SETs themselves\nthrow errors, or are depending on the results of failed non-SET\ncommands; will you try to commit them anyway?\n\nIt seems to me that the choices we realistically have are\n\n\t(a) leave the behavior the way it is\n\n\t(b) cause all SETs in an aborted transaction to roll back.\n\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Apr 2002 10:52:40 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have updated the TODO to:\n> > o Abort all or commit all SET changes made in an aborted transaction \n> > I don't think our current behavior is defended by anyone.\n> \n> Hiroshi seems to like it ...\n> \n> However, \"commit SETs even after an error\" is most certainly NOT\n> acceptable. It's not even sensible --- what if the SETs themselves\n> throw errors, or are depending on the results of failed non-SET\n> commands; will you try to commit them anyway?\n> \n> It seems to me that the choices we realistically have are\n> \n> \t(a) leave the behavior the way it is\n> \n> \t(b) cause all SETs in an aborted transaction to roll back.\n\nI disagree. You commit all the SET's you can, even if in aborted\ntransactions. If they throw an error, or rely on a previous non-SET\nthat aborted, oh well. That is what some are asking for.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 18 Apr 2002 11:19:41 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have updated the TODO to:\n> > o Abort all or commit all SET changes made in an aborted transaction\n> > I don't think our current behavior is defended by anyone.\n> \n> Hiroshi seems to like it ...\n\nProbably I don't love it. Honestly I don't understand\nwhat the new TODO means exactly.\nI don't think this is *all* *should be* or *all\nor nothing* kind of thing. If a SET variable has\nits reason, it would behave in its own right.\n\n> However, \"commit SETs even after an error\" is most certainly NOT\n> acceptable. \n\nWhat I've meant is that SET commands are out of transactional\ncontrol and so the word *commit SETs even after* has no meaning\nto me. Basically it's a user's responsisbilty to manage the\nerrors. He only knows what's to do with the errors.\n\nregards,\nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Fri, 19 Apr 2002 08:53:02 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I don't think this is *all* *should be* or *all\n> or nothing* kind of thing. If a SET variable has\n> its reason, it would behave in its own right.\n\nWell, we could provide some kind of escape hatch to let the behavior\nvary from one variable to the next. But can you give any specific\nexamples? Which SET variables should not roll back on error?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 18 Apr 2002 19:56:13 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I don't think this is *all* *should be* or *all\n> > or nothing* kind of thing. If a SET variable has\n> > its reason, it would behave in its own right.\n> \n> Well, we could provide some kind of escape hatch to let the behavior\n> vary from one variable to the next. But can you give any specific\n> examples? Which SET variables should not roll back on error?\n\nIt seems veeery dangerous to conclude that SET *should* \nroll back even if there's no *should not* roll back case.\nThere could be no *should not* roll back case because\na user could set the variable as he likes in the next\ntransaction.\n \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Fri, 19 Apr 2002 09:19:16 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "\n\nHiroshi Inoue wrote:\n\n>Tom Lane wrote:\n>\n>>Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>>\n>>>I don't think this is *all* *should be* or *all\n>>>or nothing* kind of thing. If a SET variable has\n>>>its reason, it would behave in its own right.\n>>>\n>>Well, we could provide some kind of escape hatch to let the behavior\n>>vary from one variable to the next. But can you give any specific\n>>examples? Which SET variables should not roll back on error?\n>>\n>\n>It seems veeery dangerous to conclude that SET *should* \n>roll back even if there's no *should not* roll back case.\n>There could be no *should not* roll back case because\n>a user could set the variable as he likes in the next\n>transaction.\n>\nIn whihc case, if I'm understanding you correctly Hiroshi-san, the\nrollback is moot anyway...\n\nIE\n\n\nBEGIN transaction_1\n...\nSET SOMEVAR=SOMETHING\n...\nCOMMIT\n\n(transaction_1 fails and rolls back)\n\nBEGIN transaction_2\n...\nSET SOMEVAR=SOMETHINGELSE\n...\nCOMMIT\n\n(transaction_2 succeeds)\n\nSOMEVAR, in either case, assuming transaction_2 succeeds, would be\nSOMETHINGELSE. If both succeed SOMEVAR is SOMETHINGELSE, if the first\nsucceeds and the second fails SOMEVAR will be SOMETHING. If neither\nsucceed SOMEVAR (for this short example) is whatever it was before the\ntwo transactions.\n\n\nAm I understanding you correctly in that this is the example you were\ntrying to point out?\n\n>\n> \n>Hiroshi Inoue\n>\thttp://w2422.nsk.ne.jp/~inoue/\n>\n\n\n",
"msg_date": "Thu, 18 Apr 2002 17:27:17 -0700",
"msg_from": "Michael Loftis <mloftis@wgops.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Michael Loftis wrote:\n> \n> Hiroshi Inoue wrote:\n> \n> >Tom Lane wrote:\n> >\n> >>Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> >>\n> >>>I don't think this is *all* *should be* or *all\n> >>>or nothing* kind of thing. If a SET variable has\n> >>>its reason, it would behave in its own right.\n> >>>\n> >>Well, we could provide some kind of escape hatch to let the behavior\n> >>vary from one variable to the next. But can you give any specific\n> >>examples? Which SET variables should not roll back on error?\n> >>\n> >\n> >It seems veeery dangerous to conclude that SET *should*\n> >roll back even if there's no *should not* roll back case.\n> >There could be no *should not* roll back case because\n> >a user could set the variable as he likes in the next\n> >transaction.\n> >\n> In whihc case, if I'm understanding you correctly Hiroshi-san, the\n> rollback is moot anyway...\n> \n> IE\n> \n> BEGIN transaction_1\n> ...\n> SET SOMEVAR=SOMETHING\n> ...\n> COMMIT\n> \n> (transaction_1 fails and rolls back)\n\nProbably you are misunderstanding my point.\nI don't think that SOMEVAR *should* be put back\non failure.\nUsers must know what value would be set to the\nSOMEVAR after an error. In some cases it must\nbe put back, in some cases the current value\nis OK, in other cases new SOMEVAR is needed.\nBasically it's a user's resposibilty to set\nthe value.\n\nregards, \nHiroshi Inoue\n\thttp://w2422.nsk.ne.jp/~inoue/\n",
"msg_date": "Fri, 19 Apr 2002 12:07:54 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
}
] |
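For context, option (b) from this thread is the behavior later PostgreSQL releases adopted: a plain SET issued inside a transaction that subsequently aborts is rolled back along with it (PostgreSQL also gained SET LOCAL for settings that should revert at transaction end even on commit). A minimal psql-style sketch of that semantics — the schema name is illustrative:

```sql
SHOW search_path;            -- "$user", public
BEGIN;
SET search_path TO myschema; -- takes effect immediately
SELECT 1/0;                  -- any error puts the transaction in aborted state
COMMIT;                      -- actually performs a ROLLBACK
SHOW search_path;            -- "$user", public again: the SET was undone
```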
[
{
"msg_contents": "\nthere, let's see if that doens't hurt things too much\n\n\n",
"msg_date": "Fri, 29 Mar 2002 16:47:34 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "administrivia = no"
}
] |
[
{
"msg_contents": "Configure fails checking for types of arguements to accept().\n\n\nFull log at: http://www.zort.ca/temp/config.log\n\nchecking sys/ipc.h presence... yes\nconfigure: WARNING: sys/ipc.h: present but cannot be compiled\nconfigure: WARNING: sys/ipc.h: check for missing prerequisite headers?\nconfigure: WARNING: sys/ipc.h: proceeding with the preprocessor's result\nchecking for sys/ipc.h... yes\nchecking sys/pstat.h usability... no\nchecking sys/pstat.h presence... no\nchecking for sys/pstat.h... no\nchecking sys/select.h usability... no\nchecking sys/select.h presence... yes\nconfigure: WARNING: sys/select.h: present but cannot be compiled\nconfigure: WARNING: sys/select.h: check for missing prerequisite\nheaders?\nconfigure: WARNING: sys/select.h: proceeding with the preprocessor's\nresult\nchecking for sys/select.h... yes\nchecking sys/sem.h usability... no\nchecking sys/sem.h presence... yes\nconfigure: WARNING: sys/sem.h: present but cannot be compiled\nconfigure: WARNING: sys/sem.h: check for missing prerequisite headers?\nconfigure: WARNING: sys/sem.h: proceeding with the preprocessor's result\nchecking for sys/sem.h... yes\nchecking sys/socket.h usability... no\nchecking sys/socket.h presence... yes\nconfigure: WARNING: sys/socket.h: present but cannot be compiled\nconfigure: WARNING: sys/socket.h: check for missing prerequisite\nheaders?\nconfigure: WARNING: sys/socket.h: proceeding with the preprocessor's\nresult\nchecking for sys/socket.h... yes\nchecking sys/shm.h usability... no\nchecking sys/shm.h presence... yes\nconfigure: WARNING: sys/shm.h: present but cannot be compiled\nconfigure: WARNING: sys/shm.h: check for missing prerequisite headers?\nconfigure: WARNING: sys/shm.h: proceeding with the preprocessor's result\nchecking for sys/shm.h... yes\nchecking sys/un.h usability... no\nchecking sys/un.h presence... 
yes\nconfigure: WARNING: sys/un.h: present but cannot be compiled\nconfigure: WARNING: sys/un.h: check for missing prerequisite headers?\nconfigure: WARNING: sys/un.h: proceeding with the preprocessor's result\nchecking for sys/un.h... yes\n...\nchecking types of arguments for accept()... configure: error: could not\ndetermine argument types\n\n\n\n\n\n",
"msg_date": "29 Mar 2002 18:33:49 -0500",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Configure issues - FreeBSD 4.5-RELEASE"
},
{
"msg_contents": "Rod Taylor writes:\n\n> Configure fails checking for types of arguements to accept().\n\nFix committed. Try again please.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 29 Mar 2002 19:25:27 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Configure issues - FreeBSD 4.5-RELEASE"
},
{
"msg_contents": "It appears to work now -- as stated.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Peter Eisentraut\" <peter_e@gmx.net>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, March 29, 2002 8:05 PM\nSubject: Re: [HACKERS] Configure issues - FreeBSD 4.5-RELEASE\n\n\n> I wrote:\n>\n> > > Configure fails checking for types of arguements to accept().\n> >\n> > Fix committed. Try again please.\n>\n> OK, it needed two more fixes, but I just ran it on FreeBSD and now\nit\n> works. I promise.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n>\n\n",
"msg_date": "Fri, 29 Mar 2002 20:05:16 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Configure issues - FreeBSD 4.5-RELEASE"
},
{
"msg_contents": "I wrote:\n\n> > Configure fails checking for types of arguements to accept().\n>\n> Fix committed. Try again please.\n\nOK, it needed two more fixes, but I just ran it on FreeBSD and now it\nworks. I promise.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 29 Mar 2002 20:05:42 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Configure issues - FreeBSD 4.5-RELEASE"
}
] |
[
{
"msg_contents": "Configure worked, compile failed :)\n\ngcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n-I../../../src/include -c -o pqcomm.o pqcomm.c\npqcomm.c: In function `StreamConnection':\npqcomm.c:395: `TCP_NODELAY' undeclared (first use in this function)\npqcomm.c:395: (Each undeclared identifier is reported only once\npqcomm.c:395: for each function it appears in.)\n\n",
"msg_date": "29 Mar 2002 20:08:56 -0500",
"msg_from": "Rod Taylor <rbt@zort.ca>",
"msg_from_op": true,
"msg_subject": "Ok, I lied about it working... TCP_NODELAY?"
},
{
"msg_contents": "Rod Taylor writes:\n\n> Configure worked, compile failed :)\n>\n> gcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n> -I../../../src/include -c -o pqcomm.o pqcomm.c\n> pqcomm.c: In function `StreamConnection':\n> pqcomm.c:395: `TCP_NODELAY' undeclared (first use in this function)\n> pqcomm.c:395: (Each undeclared identifier is reported only once\n> pqcomm.c:395: for each function it appears in.)\n\nThat was among the additional fixes I mentioned. CVS update again.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 29 Mar 2002 20:23:10 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Ok, I lied about it working... TCP_NODELAY?"
},
{
"msg_contents": "Is anonymous cvs on a time delay? I keep getting 1.172 of\nconfigure.in from HEAD.\n\nShould be 173\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Peter Eisentraut\" <peter_e@gmx.net>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: <pgsql-hackers@postgresql.org>\nSent: Friday, March 29, 2002 8:23 PM\nSubject: Re: [HACKERS] Ok, I lied about it working... TCP_NODELAY?\n\n\n> Rod Taylor writes:\n>\n> > Configure worked, compile failed :)\n> >\n> > gcc -pipe -O2 -Wall -Wmissing-prototypes -Wmissing-declarations\n> > -I../../../src/include -c -o pqcomm.o pqcomm.c\n> > pqcomm.c: In function `StreamConnection':\n> > pqcomm.c:395: `TCP_NODELAY' undeclared (first use in this\nfunction)\n> > pqcomm.c:395: (Each undeclared identifier is reported only once\n> > pqcomm.c:395: for each function it appears in.)\n>\n> That was among the additional fixes I mentioned. CVS update again.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n",
"msg_date": "Fri, 29 Mar 2002 20:42:15 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Ok, I lied about it working... TCP_NODELAY?"
},
{
"msg_contents": "Rod Taylor writes:\n\n> Is anonymous cvs on a time delay?\n\nYes. I believe it's one hour.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 29 Mar 2002 20:57:34 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Ok, I lied about it working... TCP_NODELAY?"
}
] |
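The undeclared `TCP_NODELAY` comes down to a missing header: the macro is defined in `<netinet/tcp.h>`, which has to be included explicitly on FreeBSD (it is not pulled in by `<sys/socket.h>`). A minimal standalone sketch of setting the option — this is an illustration, not PostgreSQL's actual pqcomm.c code:

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>    /* defines TCP_NODELAY; the header that was missing */
#include <unistd.h>

/* Returns 1 if TCP_NODELAY could be set and read back, -1 on error. */
int tcp_nodelay_on(void)
{
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    if (sock < 0)
        return -1;

    int on = 1, val = 0;
    socklen_t len = sizeof(val);
    int rc = -1;

    if (setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &on, sizeof(on)) == 0 &&
        getsockopt(sock, IPPROTO_TCP, TCP_NODELAY, &val, &len) == 0)
        rc = (val != 0);

    close(sock);
    return rc;
}
```

Removing the `<netinet/tcp.h>` line reproduces the "`TCP_NODELAY` undeclared" compile error reported above.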
[
{
"msg_contents": "Hi everyone,\n\nI just read the annoucement of the public beta for Red Hat Linux's new\nserver-oriented edition (code named Pensacola) :\n\nhttps://listman.redhat.com/pipermail/redhat-watch-list/2002-February/000466.html\n\nOne of the things this mentions being included/tuned in its kernel is :\n\n POSIX AIO for disk access (database helper)\n\nDoes anyone know what this does, and if it's something we'll be able to\nleverage (i.e. better performance, etc)?\n\nRegards and best wishes,\n\nJustin Clift\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Sat, 30 Mar 2002 16:29:15 +1100",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": true,
"msg_subject": "Posix AIO in new Red Hat Linux "
},
{
"msg_contents": "It doesn't really say, however, it makes me wonder if it's SGI's KAIO\n(http://oss.sgi.com/projects/kaio/) effort which is reported to provide\nup to 35% performance improvement for heavily I/O bound applications.\n\nAgain, I'm not sure it is SGI's effort that is being talked about here,\nnonetheless, it seems like a likely candidate. Code changes would be\nrequired to support AIO within Postgres. Here's a quote for the link\nprovided above:\n\n The asynchronous I/O (AIO) facility implements interfaces defined by\nthe POSIX standard, although it has not been through formal compliance\ncertification. This version of AIO is implemented with support from\nkernel modifications, and hence will be called KAIO to distinguish it\nfrom AIO facilities available from newer versions of glibc/librt. \nBecause of the kernel support, KAIO is able to perform split-phase I/O\nto maximize concurrency of I/O at the device. With split-phase I/O, the\ninitiating request (such as an aio_read) truly queues the I/O at the\ndevice as the first phase of the I/O request; a second phase of the I/O\nrequest, performed as part of the I/O completion, propagates results of\nthe request. The results may include the contents of the I/O buffer on\na read, the number of bytes read or written, and any error status.\n\nKAIO is also integrated to work well with Raw I/O, another feature\navailabe with SGI Linux Environment 1.1 or as a patch from this web\nsite.\n\nPreliminary experience with KAIO have shown over 35% improvement in\ndatabase performance tests. Unit tests (which only perform I/O) using\nKAIO and Raw I/O have been successful in achieving 93% saturation with\n12 disks hung off 2 X 40 MB/s Ultra-Wide SCSI channels. 
We believe that\nthese encouraging results are a direct result of implementing a\nsignificant part of KAIO in the kernel using split-phase I/O while\navoiding or minimizing the use of any globally contented locks.\n\n\nEnjoy,\n\tGreg\n\n\n\nOn Fri, 2002-03-29 at 23:29, Justin Clift wrote:\n> Hi everyone,\n> \n> I just read the annoucement of the public beta for Red Hat Linux's new\n> server-oriented edition (code named Pensacola) :\n> \n> https://listman.redhat.com/pipermail/redhat-watch-list/2002-February/000466.html\n> \n> One of the things this mentions being included/tuned in its kernel is :\n> \n> POSIX AIO for disk access (database helper)\n> \n> Does anyone know what this does, and if it's something we'll be able to\n> leverage (i.e. better performance, etc)?\n> \n> Regards and best wishes,\n> \n> Justin Clift\n> \n> -- \n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html",
"msg_date": "30 Mar 2002 09:59:11 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Posix AIO in new Red Hat Linux"
},
{
"msg_contents": "On Sat, Mar 30, 2002 at 09:59:11AM -0600, Greg Copeland wrote:\n> It doesn't really say, however, it makes me wonder if it's SGI's KAIO\n> (http://oss.sgi.com/projects/kaio/) effort which is reported to provide\n> up to 35% performance improvement for heavily I/O bound applications.\n\nI don't think it is. I think the RH announcement refers to their intent\nto quickly stabilize some of the upcoming Linux 2.5 code and package it\nseparately for \"enterprise\" customers. If that's correct, the Linux 2.5\nasynchronous I/O implementation will likely be Ben LaHaise's:\nhttp://www.kvack.org/~blah/aio/\n\nI agree with Greg -- it looks like it could definately improve Pg's\nperformance.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Sat, 30 Mar 2002 13:36:48 -0500",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": false,
"msg_subject": "Re: Posix AIO in new Red Hat Linux"
},
{
"msg_contents": "Cool. Thanks for the information. The only other PAIO effort that I\nknew of was the glibc user space effort...\n\nGreg\n\n\nOn Sat, 2002-03-30 at 12:36, Neil Conway wrote:\n> On Sat, Mar 30, 2002 at 09:59:11AM -0600, Greg Copeland wrote:\n> > It doesn't really say, however, it makes me wonder if it's SGI's KAIO\n> > (http://oss.sgi.com/projects/kaio/) effort which is reported to provide\n> > up to 35% performance improvement for heavily I/O bound applications.\n> \n> I don't think it is. I think the RH announcement refers to their intent\n> to quickly stabilize some of the upcoming Linux 2.5 code and package it\n> separately for \"enterprise\" customers. If that's correct, the Linux 2.5\n> asynchronous I/O implementation will likely be Ben LaHaise's:\n> http://www.kvack.org/~blah/aio/\n> \n> I agree with Greg -- it looks like it could definately improve Pg's\n> performance.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilconway@rogers.com>\n> PGP Key ID: DB3C29FC\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org",
"msg_date": "30 Mar 2002 14:55:13 -0600",
"msg_from": "Greg Copeland <greg@CopelandConsulting.Net>",
"msg_from_op": false,
"msg_subject": "Re: Posix AIO in new Red Hat Linux"
}
] |
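Whichever kernel implementation ships, the interface applications program against is the POSIX one from `<aio.h>`: queue a request with `aio_read()`, then wait for completion. A self-contained sketch of that call pattern (the temp-file path is illustrative, and this is not PostgreSQL code — the backend used plain synchronous `read()`/`write()` at the time):

```c
#include <aio.h>
#include <errno.h>
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/* Write 5 bytes synchronously, read them back asynchronously.
 * Returns 0 on success, -1 on any failure. */
int aio_demo(void)
{
    const char *path = "/tmp/aio_demo.tmp";  /* illustrative path */
    int fd = open(path, O_CREAT | O_TRUNC | O_RDWR, 0600);
    if (fd < 0)
        return -1;
    if (write(fd, "hello", 5) != 5)
        return -1;

    char buf[16] = {0};
    struct aiocb cb;
    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = 5;
    cb.aio_offset = 0;

    if (aio_read(&cb) != 0)                  /* queue the read; returns at once */
        return -1;

    const struct aiocb *list[1] = { &cb };
    while (aio_error(&cb) == EINPROGRESS)    /* wait for completion */
        aio_suspend(list, 1, NULL);

    ssize_t n = aio_return(&cb);             /* reap byte count or error */
    close(fd);
    unlink(path);
    return (n == 5 && memcmp(buf, "hello", 5) == 0) ? 0 : -1;
}
```

With glibc this is serviced by user-space threads (the librt implementation the SGI page contrasts with KAIO); on older glibc, link with `-lrt`.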
[
{
"msg_contents": "\nHello,\n\ni have a C-language function and need to escape some strings returned\nfrom SPI_getvalue to insert into another query string.\nIs there a proper way to do the escaping or should i use\nmy own functions for that?\nOr i'm totally wrong and there's a better way to get the values\nfrom a row and insert it into another table (how to handle \\0 values)?\n\n\nBest regards\n\n-- \n\t\t\t\tAndreas 'ads' Scherbaum\n\n",
"msg_date": "Sat, 30 Mar 2002 09:15:42 +0100 (CET)",
"msg_from": "Andreas Scherbaum <adsmail@htl.de>",
"msg_from_op": true,
"msg_subject": "Escaping in C-language functions"
}
] |
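At the time there was no backend-exported helper for this, so the usual approach was to escape by hand while building the query string: double any single quote (and, under the then-default backslash-escape processing, any backslash) in the value. A minimal sketch — `escape_sql_literal` is a hypothetical helper, not a PostgreSQL API. Note it cannot carry embedded `\0` bytes, since `SPI_getvalue` hands back NUL-terminated C strings; binary data needs `bytea` and a different path:

```c
#include <stdlib.h>
#include <string.h>

/* Return a freshly malloc'd copy of src with ' and \ doubled, suitable
 * for splicing between single quotes in a SQL literal. Caller frees. */
char *escape_sql_literal(const char *src)
{
    size_t len = strlen(src);
    char *dst = malloc(2 * len + 1);   /* worst case: every byte doubles */
    if (dst == NULL)
        return NULL;

    char *p = dst;
    for (const char *s = src; *s != '\0'; s++)
    {
        if (*s == '\'' || *s == '\\')
            *p++ = *s;                 /* emit the character twice */
        *p++ = *s;
    }
    *p = '\0';
    return dst;
}
```

For example, `escape_sql_literal("O'Brien")` yields `O''Brien`, ready for something like `snprintf(query, sizeof(query), "INSERT INTO t VALUES ('%s')", escaped)`.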
[
{
"msg_contents": "Hi All,\n\nRecently we got into problem of giving permission to data directory.\n\n(1) Actually we are doing project on PostgreSQL in group of two. We installed individual copy of PostgreSQL into our group directory.\n(2) When I created data directory and ran \"initdb\" it makes me( takes my login name ) as the owner of data directory.\n(3) The problem is that now my partner cannot start the postmaster since he does not have right on the data directory. Further one cannot set right on the data directory more than 700 .\n(4) For time being we hacked the postmaster.c and commented the line starting from 318 which actually test the permission on data directory. Then my partner was able to run the postmaster since now I gave him rights(770) on the data directory(But changed rights on postgresql.conf file to 744).\n\n(5) Is there a clean way by which my partner can start postmaster on data directory created by me.\n\nThanks and Regards\n\nAmit Khare\n\n\n\n\n\n\n\nHi All,\n \nRecently we got into problem of giving permission \nto data directory.\n \n(1) Actually we are doing project on PostgreSQL in \ngroup of two. We installed individual copy of PostgreSQL into our group \ndirectory.\n(2) When I created data directory and ran \"initdb\" \nit makes me( takes my login name ) as the owner of data directory.\n(3) The problem is that now my partner cannot start \nthe postmaster since he does not have right on the data directory. Further one \ncannot set right on the data directory more than 700 .\n(4) For time being we hacked the postmaster.c and \ncommented the line starting from 318 which actually test the permission on data \ndirectory. Then my partner was able to run the postmaster since now I gave him \nrights(770) on the data directory(But changed rights on postgresql.conf \nfile to 744).\n \n(5) Is there a clean way by which my partner can \nstart postmaster on data directory created by me.\n \nThanks and Regards\n \nAmit Khare",
"msg_date": "Sat, 30 Mar 2002 18:56:02 +0530",
"msg_from": "\"Amit Khare\" <skamit2000@yahoo.com>",
"msg_from_op": true,
"msg_subject": "How to give permission to others on data directory"
},
{
"msg_contents": "Amit Khare writes:\n\n> (1) Actually we are doing project on PostgreSQL in group of two. We installed individual copy of PostgreSQL into our group directory.\n> (2) When I created data directory and ran \"initdb\" it makes me( takes my login name ) as the owner of data directory.\n> (3) The problem is that now my partner cannot start the postmaster since he does not have right on the data directory. Further one cannot set right on the data directory more than 700 .\n> (4) For time being we hacked the postmaster.c and commented the line starting from 318 which actually test the permission on data directory. Then my partner was able to run the postmaster since now I gave him rights(770) on the data directory(But changed rights on postgresql.conf file to 744).\n>\n> (5) Is there a clean way by which my partner can start postmaster on data directory created by me.\n\nCreate a separate user for the server and give yourself and your partner\naccess to it.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 30 Mar 2002 17:50:21 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: How to give permission to others on data directory"
},
{
"msg_contents": "Hi Peter,\nThank you very much for your reply .\nHowever the problem is that we don't want to create separate user for\nserver. If \"initdb\" takes my login name and makes me owner of the data\ndirectory then how should I be able to give permission to other users in\nthis case my project partner?\n\nThanks again\n\nRegards\nAmit Khare\n----- Original Message -----\nFrom: Peter Eisentraut <peter_e@gmx.net>\nTo: Amit Khare <skamit2000@yahoo.com>\nCc: <pgsql-hackers@postgresql.org>\nSent: Sunday, March 31, 2002 4:20 AM\nSubject: Re: [HACKERS] How to give permission to others on data directory\n\n\n> Amit Khare writes:\n>\n> > (1) Actually we are doing project on PostgreSQL in group of two. We\ninstalled individual copy of PostgreSQL into our group directory.\n> > (2) When I created data directory and ran \"initdb\" it makes me( takes my\nlogin name ) as the owner of data directory.\n> > (3) The problem is that now my partner cannot start the postmaster since\nhe does not have right on the data directory. Further one cannot set right\non the data directory more than 700 .\n> > (4) For time being we hacked the postmaster.c and commented the line\nstarting from 318 which actually test the permission on data directory. 
Then\nmy partner was able to run the postmaster since now I gave him rights(770)\non the data directory(But changed rights on postgresql.conf file to 744).\n> >\n> > (5) Is there a clean way by which my partner can start postmaster on\ndata directory created by me.\n>\n> Create a separate user for the server and give yourself and your partner\n> access to it.\n>\n> --\n> Peter Eisentraut peter_e@gmx.net\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Sun, 31 Mar 2002 17:19:57 +0530",
"msg_from": "\"Amit Khare\" <skamit2000@yahoo.com>",
"msg_from_op": true,
"msg_subject": "Re: How to give permission to others on data directory"
},
{
"msg_contents": "Create a separate user and both of you use sudo to start the database.\nIf you're insistent on keeping yourself owner of the data then use sudo to \ngive permission to your project partner to start the database.\n\nOn Sunday 31 March 2002 05:49 am, Amit Khare wrote:\n> Hi Peter,\n> Thank you very much for your reply .\n> However the problem is that we don't want to create separate user for\n> server. If \"initdb\" takes my login name and makes me owner of the data\n> directory then how should I be able to give permission to other users in\n> this case my project partner?\n>\n> Thanks again\n>\n> Regards\n> Amit Khare\n> ----- Original Message -----\n> From: Peter Eisentraut <peter_e@gmx.net>\n> To: Amit Khare <skamit2000@yahoo.com>\n> Cc: <pgsql-hackers@postgresql.org>\n> Sent: Sunday, March 31, 2002 4:20 AM\n> Subject: Re: [HACKERS] How to give permission to others on data directory\n>\n> > Amit Khare writes:\n> > > (1) Actually we are doing project on PostgreSQL in group of two. We\n>\n> installed individual copy of PostgreSQL into our group directory.\n>\n> > > (2) When I created data directory and ran \"initdb\" it makes me( takes\n> > > my\n>\n> login name ) as the owner of data directory.\n>\n> > > (3) The problem is that now my partner cannot start the postmaster\n> > > since\n>\n> he does not have right on the data directory. 
Further one cannot set right\n> on the data directory more than 700 .\n>\n> > > (4) For time being we hacked the postmaster.c and commented the line\n>\n> starting from 318 which actually test the permission on data directory.\n> Then my partner was able to run the postmaster since now I gave him\n> rights(770) on the data directory(But changed rights on postgresql.conf\n> file to 744).\n>\n> > > (5) Is there a clean way by which my partner can start postmaster on\n>\n> data directory created by me.\n>\n> > Create a separate user for the server and give yourself and your partner\n> > access to it.\n> >\n> > --\n> > Peter Eisentraut peter_e@gmx.net\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n>\n> _________________________________________________________\n> Do You Yahoo!?\n> Get your free @yahoo.com address at http://mail.yahoo.com\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n",
"msg_date": "Sun, 31 Mar 2002 09:13:02 -0600",
"msg_from": "David Walker <dwalker@vorteon.com>",
"msg_from_op": false,
"msg_subject": "Re: How to give permission to others on data directory"
}
] |
[
{
"msg_contents": "Hi,\n\nsomeone asks me about an utility to check any PostgreSQL database\ndata to be sure that:\n1) there is not any page corrupted \n (by a memory fault or a damaged disk)\n2) re-check any constraint inserted into the database\n\nI really don't know if PostgreSQL itself has any crc check on\nits pages. Please, there is anyone able to confirm such function?\n\nI've understood that PostgreSQL trust the operating system for\ndoing its work, but I don't know if there is any operating system\nable to give warranty the memory sanity before allocation, during \nthe memory use. \n\nAccording to me, if the database is well-designed it's not \npossible to find constraint violation on data already inserted\nand accepted from the SQL engine. \nAm I in fault for this sentence?\n\n \nThank you in advance for any reply.\n\n\nBest regards, \\fer\n",
"msg_date": "Sat, 30 Mar 2002 16:08:09 +0100 (CET)",
"msg_from": "Ferruccio Zamuner <nonsolosoft@diff.org>",
"msg_from_op": true,
"msg_subject": "Data integrity and sanity check"
},
{
"msg_contents": "> 2) re-check any constraint inserted into the database\n\nThere should not be any if it was accepted, however if it's a new\nconstraint it doesn't get applied to data that already exists. A dump\nand restore will ignore these as well (with good reason).\n\nI suppose the easiest way to find if data violates current constraints\n(rather than the constraints applied during initial insertion) is to:\n\nupdate table set column = column;\n\nThat should re-process any constraints.\n\n\nPrimary keys, or other index style constraints (UNIQUE for example)\nare always guarenteed. The only way that new constraints are added is\nvia alter table commands.\n\nBTW. There are good reasons sometimes for having data that violates\ncurrent constraints. The top of a tree may have a static record with\na null parent. The NOT NULL constraint added after this entry (via\nalter table add constraint) should not affect the static record, so\nunless you know your data quite well this type of tool wouldn't be\nparticularly useful anyway.\n\nNormally I use triggers which are programmed to account for that, but\nthere are a few cases where the check constraint speed (rather than\nthe trigger) is useful and the assumption the initial record will\nnever be touched is good enough.\n\n",
"msg_date": "Sat, 30 Mar 2002 10:34:31 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Data integrity and sanity check"
},
{
"msg_contents": "> BTW. There are good reasons sometimes for having data that violates\n> current constraints. The top of a tree may have a static record with\n> a null parent. The NOT NULL constraint added after this entry (via\n> alter table add constraint) should not affect the static record, so\n> unless you know your data quite well this type of tool wouldn't be\n> particularly useful anyway.\n\nAs far as I am aware, there is no alter table add constraint syntax for\nNOT NULLs atm. I've submitted a patch that allows alter table/alter\ncolumn set/drop not null though.\n\nChris\n\n\n",
"msg_date": "Sun, 31 Mar 2002 13:31:36 +0800 (WST)",
"msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Data integrity and sanity check"
},
{
"msg_contents": "There was -- kinda\n\nalter table tab add constraint check (value not null);\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Ferruccio Zamuner\" <nonsolosoft@diff.org>;\n<pgsql-hackers@postgresql.org>\nSent: Sunday, March 31, 2002 12:31 AM\nSubject: Re: [HACKERS] Data integrity and sanity check\n\n\n> > BTW. There are good reasons sometimes for having data that\nviolates\n> > current constraints. The top of a tree may have a static record\nwith\n> > a null parent. The NOT NULL constraint added after this entry\n(via\n> > alter table add constraint) should not affect the static record,\nso\n> > unless you know your data quite well this type of tool wouldn't be\n> > particularly useful anyway.\n>\n> As far as I am aware, there is no alter table add constraint syntax\nfor\n> NOT NULLs atm. I've submitted a patch that allows alter table/alter\n> column set/drop not null though.\n>\n> Chris\n>\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Sun, 31 Mar 2002 08:16:11 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Data integrity and sanity check"
},
{
"msg_contents": "Rod Taylor wrote:\n> > 2) re-check any constraint inserted into the database\n>\n> There should not be any if it was accepted, however if it's a new\n> constraint it doesn't get applied to data that already exists. A dump\n> and restore will ignore these as well (with good reason).\n\n Please don't make up any answers. If you don't know for sure,\n look at the code in question or just don't answer.\n\n PostgreSQL does check all existing data when adding a foreign\n key contraint. It skips the check during the restore of a\n dump though.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n",
"msg_date": "Mon, 1 Apr 2002 11:14:10 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: Data integrity and sanity check"
}
] |
[
{
"msg_contents": "We've seen several reports now of 7.2 postmasters failing to start\nbecause of weird networking setups --- if it's impossible to create\na loopback UDP port on 127.0.0.1, 7.2 will exit with\n\tPGSTAT: bind(2): Cannot assign requested address\n\nIt occurs to me that a more friendly behavior would be to disable\nstatistics gathering and proceed with startup anyway. Any objections?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 30 Mar 2002 13:17:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "PGSTAT start failure probably shouldn't disable Postgres"
},
{
"msg_contents": "Tom Lane wrote:\n> We've seen several reports now of 7.2 postmasters failing to start\n> because of weird networking setups --- if it's impossible to create\n> a loopback UDP port on 127.0.0.1, 7.2 will exit with\n> \tPGSTAT: bind(2): Cannot assign requested address\n> \n> It occurs to me that a more friendly behavior would be to disable\n> statistics gathering and proceed with startup anyway. Any objections?\n\nAgreed. Throw an elog(LOG) and continue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Sat, 30 Mar 2002 15:36:18 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: PGSTAT start failure probably shouldn't disable Postgres"
}
] |
[
{
"msg_contents": "Hi all,\n\nIn IRC, \"StuckMojo\" commented that the following behavior doesn't seem\nto be ideal:\n\nnconway=> create table my_table (col1 int default 5, col2 int default\n10);\nCREATE\nnconway=> create view my_view (col1, col2) as select * from my_table;\nCREATE\nnconway=> create rule insert_rule as on insert to my_view do instead\ninsert into my_table values (new.*);\nCREATE\nnconway=> insert into my_table default values;\nINSERT 112714 1\nnconway=> insert into my_view default values;\nINSERT 0 0\nnconway=> select * from my_table;\n col1 | col2 \n------+------\n 5 | 10\n | \n(2 rows)\n\nIn other words, when the insert statement on the view is transformed by\nthe rule, the \"default value\" columns are replaced by explicit NULL\nvalues (which is the default value for the columns of the pseudo-table\ncreated by CREATE VIEW). Is this the correct behavior?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Sat, 30 Mar 2002 18:53:46 -0500",
"msg_from": "nconway@klamath.dyndns.org (Neil Conway)",
"msg_from_op": true,
"msg_subject": "rules and default values"
},
{
"msg_contents": "nconway@klamath.dyndns.org (Neil Conway) writes:\n> In other words, when the insert statement on the view is transformed by\n> the rule, the \"default value\" columns are replaced by explicit NULL\n> values (which is the default value for the columns of the pseudo-table\n> created by CREATE VIEW). Is this the correct behavior?\n\nIt's correct, from the point of view of the rule rewriter, but that\ndoesn't make the behavior useful.\n\nWhat'd make sense to me is to allow defaults to be attached to the\nview columns, say by doing ALTER TABLE ADD DEFAULT on the view.\nUnfortunately that won't do much in the current implementation,\nbecause such defaults will never get applied (the planner certainly\nwon't see them as applicable).\n\nMaybe inserting defaults should be the first phase of rewriting, just\nbefore rule substitution, rather than being left to the planner as it\nis now. We took it out of the parser for good reasons, but perhaps\nwe moved it too far downstream.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 30 Mar 2002 19:26:01 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rules and default values "
},
{
"msg_contents": "Awhile back I said:\n> nconway@klamath.dyndns.org (Neil Conway) writes:\n>> In other words, when the insert statement on the view is transformed by\n>> the rule, the \"default value\" columns are replaced by explicit NULL\n>> values (which is the default value for the columns of the pseudo-table\n>> created by CREATE VIEW). Is this the correct behavior?\n\n> It's correct, from the point of view of the rule rewriter, but that\n> doesn't make the behavior useful.\n\n> What'd make sense to me is to allow defaults to be attached to the\n> view columns, say by doing ALTER TABLE ADD DEFAULT on the view.\n> Unfortunately that won't do much in the current implementation,\n> because such defaults will never get applied (the planner certainly\n> won't see them as applicable).\n\n> Maybe inserting defaults should be the first phase of rewriting, just\n> before rule substitution, rather than being left to the planner as it\n> is now. We took it out of the parser for good reasons, but perhaps\n> we moved it too far downstream.\n\nI recently moved the default-insertion phase to fix a different bug,\nso this is now possible. Given the attached patch, it actually works.\nHowever I have not applied the patch because it needs (a) pg_dump\nsupport and (b) documentation, neither of which I have time for at the\nmoment. 
Anyone want to pick up the ball?\n\n\t\t\tregards, tom lane\n\n\nDemonstration of defaults for views (with patch):\n\nregression=# create table foo (f1 int);\nCREATE\nregression=# create view vv as select * from foo;\nCREATE\nregression=# create rule vvi as on insert to vv do instead\nregression-# insert into foo select new.*;\nCREATE\nregression=# insert into vv default values;\nINSERT 0 0\nregression=# select * from vv;\n f1\n----\n\n(1 row)\n\nregression=# alter table vv alter column f1 set default 42;\nALTER\nregression=# insert into vv default values;\nINSERT 0 0\nregression=# select * from vv;\n f1\n----\n\n 42\n(2 rows)\n\n\n*** src/backend/commands/tablecmds.c~\tMon Apr 15 01:22:03 2002\n--- src/backend/commands/tablecmds.c\tMon Apr 15 14:16:58 2002\n***************\n*** 622,629 ****\n \n \trel = heap_open(myrelid, AccessExclusiveLock);\n \n! \tif (rel->rd_rel->relkind != RELKIND_RELATION)\n! \t\telog(ERROR, \"ALTER TABLE: relation \\\"%s\\\" is not a table\",\n \t\t\t RelationGetRelationName(rel));\n \n \tif (!allowSystemTableMods\n--- 622,635 ----\n \n \trel = heap_open(myrelid, AccessExclusiveLock);\n \n! \t/*\n! \t * We allow defaults on views so that INSERT into a view can have\n! \t * default-ish behavior. This works because the rewriter substitutes\n! \t * default values into INSERTs before it expands rules.\n! \t */\n! \tif (rel->rd_rel->relkind != RELKIND_RELATION &&\n! \t\trel->rd_rel->relkind != RELKIND_VIEW)\n! \t\telog(ERROR, \"ALTER TABLE: relation \\\"%s\\\" is not a table or view\",\n \t\t\t RelationGetRelationName(rel));\n \n \tif (!allowSystemTableMods\n",
"msg_date": "Mon, 15 Apr 2002 14:25:28 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: rules and default values "
},
{
"msg_contents": "On Mon, 15 Apr 2002 14:25:28 -0400\n\"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> Awhile back I said:\n> > nconway@klamath.dyndns.org (Neil Conway) writes:\n> >> In other words, when the insert statement on the view is transformed by\n> >> the rule, the \"default value\" columns are replaced by explicit NULL\n> >> values (which is the default value for the columns of the pseudo-table\n> >> created by CREATE VIEW). Is this the correct behavior?\n> \n> > It's correct, from the point of view of the rule rewriter, but that\n> > doesn't make the behavior useful.\n> \n> > What'd make sense to me is to allow defaults to be attached to the\n> > view columns, say by doing ALTER TABLE ADD DEFAULT on the view.\n> > Unfortunately that won't do much in the current implementation,\n> > because such defaults will never get applied (the planner certainly\n> > won't see them as applicable).\n> \n> > Maybe inserting defaults should be the first phase of rewriting, just\n> > before rule substitution, rather than being left to the planner as it\n> > is now. We took it out of the parser for good reasons, but perhaps\n> > we moved it too far downstream.\n> \n> I recently moved the default-insertion phase to fix a different bug,\n> so this is now possible. Given the attached patch, it actually works.\n\nGreat!\n\n> However I have not applied the patch because it needs (a) pg_dump\n> support and (b) documentation, neither of which I have time for at the\n> moment. Anyone want to pick up the ball?\n\nSure, I'll do this stuff.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Mon, 15 Apr 2002 14:42:54 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: rules and default values"
}
] |
[
{
"msg_contents": "Create a separate user and both of you use sudo to start the database.\nIf you're insistent on keeping yourself owner of the data then use sudo to\ngive permission to your project partner to start the database.\n\nOn Sunday 31 March 2002 05:49 am, Amit Khare wrote:\n> Hi Peter,\n> Thank you very much for your reply .\n> However the problem is that we don't want to create separate user for\n> server. If \"initdb\" takes my login name and makes me owner of the data\n> directory then how should I be able to give permission to other users in\n> this case my project partner?\n>\n> Thanks again\n>\n> Regards\n> Amit Khare\n> ----- Original Message -----\n> From: Peter Eisentraut <peter_e@gmx.net>\n> To: Amit Khare <skamit2000@yahoo.com>\n> Cc: <pgsql-hackers@postgresql.org>\n> Sent: Sunday, March 31, 2002 4:20 AM\n> Subject: Re: [HACKERS] How to give permission to others on data directory\n>\n> > Amit Khare writes:\n> > > (1) Actually we are doing project on PostgreSQL in group of two. We\n>\n> installed individual copy of PostgreSQL into our group directory.\n>\n> > > (2) When I created data directory and ran \"initdb\" it makes me( takes\n> > > my\n>\n> login name ) as the owner of data directory.\n>\n> > > (3) The problem is that now my partner cannot start the postmaster\n> > > since\n>\n> he does not have right on the data directory. 
Further one cannot set right\n> on the data directory more than 700 .\n>\n> > > (4) For time being we hacked the postmaster.c and commented the line\n>\n> starting from 318 which actually test the permission on data directory.\n> Then my partner was able to run the postmaster since now I gave him\n> rights(770) on the data directory(But changed rights on postgresql.conf\n> file to 744).\n>\n> > > (5) Is there a clean way by which my partner can start postmaster on\n>\n> data directory created by me.\n>\n> > Create a separate user for the server and give yourself and your partner\n> > access to it.\n> >\n> > --\n> > Peter Eisentraut peter_e@gmx.net\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n>\n> _________________________________________________________\n> Do You Yahoo!?\n> Get your free @yahoo.com address at http://mail.yahoo.com\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-------------------------------------------------------\n",
"msg_date": "Sun, 31 Mar 2002 09:16:20 -0600",
"msg_from": "David Walker <pgsql@grax.com>",
"msg_from_op": true,
"msg_subject": "Re: How to give permission to others on data directory"
}
] |
[
{
"msg_contents": "> Yeah, although it'd still be a good idea probably to convert the dump form\n> to ALTER TABLE in any case. The one downside that was brought up in the\n> past was the time involved in checking dumped (presumably correct) data\n> when the constraint is added to very large tables. I can probably make\n> that faster since right now it's just running the check on each row,\n> but it'll still be slow on big tables possibly. Another option would\n> be to have an argument that would disable the check on an add constraint,\n> except that wouldn't work for unique/primary key.\n\nMaybe it could be a really evil SET CONSTRAINTS command like:\n\nSET CONSTRAINTS UNCHECKED;\n\n...\n\nChris\n\n",
"msg_date": "Mon, 1 Apr 2002 15:08:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: RI triggers and schemas "
}
] |
[
{
"msg_contents": "Patch against 7,2 submitted for comment.\n\nIt's a little messy; I had some trouble trying to reconcile the code\nstyle of libpq which I copied from, and odbc.\n\nSuggestions on what parts look ugly, and or where to send this\n(is there a separate ODBC place?) are welcome.\n\nThis seems to work just fine; Now, when our users submit a 2 hour\nquery with four million row sorts by accident, then cancel it 30 seconds\nlater, it doesn't bog down the server ...\n\nregards,\n\n-Brad",
"msg_date": "Mon, 1 Apr 2002 10:33:53 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": true,
"msg_subject": "Proposed patch for ODBC driver w/ C-a-n-c-e-l"
},
{
"msg_contents": "Bradley McLean wrote:\n> \n> Patch against 7,2 submitted for comment.\n> \n> It's a little messy; I had some trouble trying to reconcile the code\n> style of libpq which I copied from, and odbc.\n> \n> Suggestions on what parts look ugly, and or where to send this\n> (is there a separate ODBC place?) are welcome.\n\nPlease send it to pgsql-patches or pgsql-odbc.\nAnyway I would commit your change soon.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 02 Apr 2002 19:39:23 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Proposed patch for ODBC driver w/ C-a-n-c-e-l"
}
] |
[
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > I think there are two ways of making this capability visible to users.\n> > First, you could do:\n> >\n> > \tSET query_timeout = 5;\n> >\n> > and all queries after that would time out at 5 seconds. Another option\n> > is:\n> >\n> > \tBEGIN WORK TIMEOUT 5;\n> > \t...\n> > \tCOMMIT;\n> >\n> > which would make the transaction timeout after 5 seconds. We never\n> > decided which one we wanted, or both.\n> \n> Note that the first is a statement-level timeout and the second is a\n> transaction-level timeout. Be sure to clarify which one we want.\n\nOh, wow, that is an interesting distinction. If there is a multi-query\ntransaction, do we time each query separately or the entire transaction?\nI don't know which people want, and maybe this is why we need both GUC\nand BEGIN WORK timeouts. I don't remember this distinction in previous\ndiscussions but it may be significant. Of course, the GUC could behave\nat a transaction level as well. It will be tricky to manage multiple\nalarms in a single process, but it can be done by creating an alarm\nqueue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Apr 2002 12:48:45 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> ... It will be tricky to manage multiple\n> alarms in a single process, but it can be done by creating an alarm\n> queue.\n\nI would argue that we should only support *one* kind of timeout, either\ntransaction-level or statement-level, so as to avoid that complexity.\nI don't want to see us gilding the lily in the first implementation of\nsomething that IMHO is of dubious usefulness in the first place.\nWe can think about extending the facility later, when and if it proves\nsufficiently useful to justify more complexity.\n\nI don't have a very strong feeling about whether transaction-level or\nstatement-level is more useful; am willing to do whichever one the\nJDBC spec wants.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Apr 2002 13:00:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues "
},
{
"msg_contents": "On Mon, 1 Apr 2002, Bruce Momjian wrote:\n\n> I don't know which people want, and maybe this is why we need both GUC\n> and BEGIN WORK timeouts. I don't remember this distinction in previous\n> discussions but it may be significant. Of course, the GUC could behave\n> at a transaction level as well. It will be tricky to manage multiple\n> alarms in a single process, but it can be done by creating an alarm\n> queue.\n\nI think we should do just BEGIN WORK (transaction-level) timeouts; that is\nall that the JDBC spec asks for. Does that sound good to people?\n\nSo the work that would need to be done is asking the driver to request the\ntimeout via \"BEGIN WORK TIMEOUT 5\"; getting the backend to parse that\nrequest and set the alarm on each query in that transaction; getting the\nbackend to send a cancel request if the alarm goes off. I am right now in\nthe process of finding the place where BEGIN-level queries are parsed. Any\npointers to the right files to read would be appreciated.\n\nj\n\n\n\n",
"msg_date": "Mon, 1 Apr 2002 13:12:02 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > ... It will be tricky to manage multiple\n> > alarms in a single process, but it can be done by creating an alarm\n> > queue.\n> \n> I would argue that we should only support *one* kind of timeout, either\n> transaction-level or statement-level, so as to avoid that complexity.\n> I don't want to see us gilding the lily in the first implementation of\n> something that IMHO is of dubious usefulness in the first place.\n> We can think about extending the facility later, when and if it proves\n> sufficiently useful to justify more complexity.\n> \n> I don't have a very strong feeling about whether transaction-level or\n> statement-level is more useful; am willing to do whichever one the\n> JDBC spec wants.\n\nAgreed, only one timeout. I just considered the statement/transaction\nlevel quite interesting. We could easily do GUC for query level, and\nallow BEGIN WORK to override that for transaction level. That would\ngive us the best of both worlds, if we want it. I am not sure what\npeople are going to use this timeout for. My guess is that only\ntransaction level is the way to go.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 1 Apr 2002 13:18:39 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Jessica,\n\nMy reading of the JDBC spec would indicate that this is a statement \nlevel property (aka query level) since the method to enable this is on \nthe Statement object and is named setQueryTimeout(). There is nothing I \ncan find that would indicate that this would apply to the transaction in \nmy reading of the jdbc spec.\n\nthanks,\n--Barry\n\nJessica Perry Hekman wrote:\n> On Mon, 1 Apr 2002, Bruce Momjian wrote:\n> \n> \n>>I don't know which people want, and maybe this is why we need both GUC\n>>and BEGIN WORK timeouts. I don't remember this distinction in previous\n>>discussions but it may be significant. Of course, the GUC could behave\n>>at a transaction level as well. It will be tricky to manage multiple\n>>alarms in a single process, but it can be done by creating an alarm\n>>queue.\n> \n> \n> I think we should do just BEGIN WORK (transaction-level) timeouts; that is\n> all that the JDBC spec asks for. Does that sound good to people?\n> \n> So the work that would need to be done is asking the driver to request the\n> timeout via \"BEGIN WORK TIMEOUT 5\"; getting the backend to parse that\n> request and set the alarm on each query in that transaction; getting the\n> backend to send a cancel request if the alarm goes off. I am right now in\n> the process of finding the place where BEGIN-level queries are parsed. Any\n> pointers to the right files to read would be appreciated.\n> \n> j\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n\n",
"msg_date": "Mon, 01 Apr 2002 13:25:21 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "Barry Lind writes:\n\n> My reading of the JDBC spec would indicate that this is a statement\n> level property (aka query level) since the method to enable this is on\n> the Statement object and is named setQueryTimeout(). There is nothing I\n> can find that would indicate that this would apply to the transaction in\n> my reading of the jdbc spec.\n\nDoes it time out only queries or any kind of statement?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 1 Apr 2002 18:22:26 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Mon, 1 Apr 2002, Peter Eisentraut wrote:\n\n> Does it time out only queries or any kind of statement?\n\nAny kind, I believe.\n\nFWIW, I took a look at the recommended JDBC driver for MySQL, hoping for\nideas; it does not implement query timeouts at all. I'll take a look at\nmSQL next.\n\nj\n\n",
"msg_date": "Mon, 1 Apr 2002 18:36:37 -0500 (EST)",
"msg_from": "Jessica Perry Hekman <jphekman@dynamicdiagrams.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "The spec isn't clear on that point, but my interpretation is that it \nwould apply to all types of statements not just queries.\n\n--Barry\n\nPeter Eisentraut wrote:\n> Barry Lind writes:\n> \n> \n>>My reading of the JDBC spec would indicate that this is a statement\n>>level property (aka query level) since the method to enable this is on\n>>the Statement object and is named setQueryTimeout(). There is nothing I\n>>can find that would indicate that this would apply to the transaction in\n>>my reading of the jdbc spec.\n> \n> \n> Does it time out only queries or any kind of statement?\n> \n\n\n",
"msg_date": "Mon, 01 Apr 2002 15:41:32 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues"
},
{
"msg_contents": "On Monday 01 April 2002 20:18, Bruce Momjian wrote:\n> Tom Lane wrote:>\n> Agreed, only one timeout. \n> ...\n\nWe have (at least) two ortogonal reasons why we want \nto abort a long running transaction:\n\n- The long running transaction might compute a result \n we are not interesed anymore (because it just takes\n too long to wait for the result). We do NOT always\n know in advance how patient we will be to wait for\n the result. Therefore I think the client should tell \n the server, when his client (user?) got impatinet\n and aborted the whole transaction...\n\n- The long running transaction might hold exclusive locks \n and therefore decreases (or even nullifies) the overall \n concurrency. We want to be able to disallow this by design.\n\nI think a nice timout criteria would be a maximum lock time \nfor all resources aquired exclusivly within a transaction. \nThis would then affect transaction timeouts as well as statement \ntimeouts with the advantage, the get concurrency guaratees.\n\nRobert\n",
"msg_date": "Tue, 2 Apr 2002 10:58:13 +0200",
"msg_from": "Robert Schrem <robert.schrem@WiredMinds.de>",
"msg_from_op": false,
"msg_subject": "Re: timeout implementation issues, lock timeouts"
}
] |
[
{
"msg_contents": "I recently discovered a problem inserting a user-defined type when \ngoing through a rule. I'm not sure if it's a -hackers or -users question,\nbut since it involves the interaction of a user-defined type and rules\nI thought it envitable that I would end up here anyway.\n\nThe object in question is my X.509 type. For compelling reasons\nbeyond the scope of this discussion, I need to define a table as:\n\ncreate table certs (\n name varchar(20),\n cert x509,\n\n -- fields used with CRL lookups\n serial_number hugeint not null\n constraint c1 check (serial_number = serial_number(cert)),\n issuer principal not null\n constraint c2 check (issuer = issuer(cert)),\n subject principal not null unique\n constraint c3 check (subject = subject(cert)),\n\n ...\n);\n\nwhere the constraints guarantee that the cached attributes accurately\nreflect the contents of the cert (but these fields can be indexed and\nsearched). In practice it's impossible to get those fields right in\na query so I also defined:\n\n create view cert_insert as select name, cert from certs;\n\n create rule certi as on insert to cert_insert do instead\n insert into certs (name, cert, serial_number, subject, issuer,...\n )\n values (new.name, new.cert,\n serial_number(new.cert), subject(new.cert), issuer(new.cert),...\n );\n\nThe problem is that I can insert literal text:\n\n create table t ( cert x509 );\n insert into t values ('---- BEGIN CERTIFICATE ---- ....');\n\nbut when I try the same with cert_insert it's clear that \"new.cert\" \nisn't getting initialized properly. (It works fine when the cert is\nalready in the database.) Trying to explicitly cast the literal to \nas part of the query doesn't help - it seems that the rule just rewrites\nthe query and the cast is getting lost.\n\nWorkarounds don't seem to be viable. I can't use a trigger on a temporary\ntable since there doesn't seem to be a clean way to trigger a rule from\none. 
(I need to get parameters from the trigger to the SQL function to\nthe rule, and SQL functions don't seem to be able to take parameters --\nor its undocumented if it can take something like $1, $2, etc.) I can't\nuse a rule on the temporary table since it appears a rule still looks\nat the original parameters, not the temp table.\n\nAny ideas? Is this something addressed in 7.2? (I'm trying to stick\nwith the oldest useable version to avoid forcing DB upgrades.) Or is\nthis a genuine hole in the user type/rules/triggers model?\n\nBear\n",
"msg_date": "Mon, 1 Apr 2002 15:14:59 -0700 (MST)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "inserting user defined types through a rule?"
},
{
"msg_contents": "Bear Giles <bgiles@coyotesong.com> writes:\n> I recently discovered a problem inserting a user-defined type when \n> going through a rule. ...\n\n> The problem is that I can insert literal text:\n> create table t ( cert x509 );\n> insert into t values ('---- BEGIN CERTIFICATE ---- ....');\n> but when I try the same with cert_insert it's clear that \"new.cert\" \n> isn't getting initialized properly. (It works fine when the cert is\n> already in the database.) Trying to explicitly cast the literal to \n> as part of the query doesn't help - it seems that the rule just rewrites\n> the query and the cast is getting lost.\n\nThis seems like a bug, but I don't have much hope of being able to find\nit without a test case to step through. Could you boil things down to a\nreproducible test case?\n\nFWIW, it seems unlikely that the issue is your user-defined type per se;\nthe rule rewriter mechanisms are quite type-ignorant. You may be able\nto develop a test case that doesn't use your own type at all.\n\n> Any ideas? Is this something addressed in 7.2?\n\nCan't tell at this point. What version are you using, anyway?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 01 Apr 2002 21:58:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: inserting user defined types through a rule? "
},
{
"msg_contents": "I'm using 7.1.3 currently, but am building and installing 7.2.1 tonight\nto see if this fixes the problem.\n\nI don't know the standard types and functions well enough to be able to\nwhip out a test case, but I think I do have an idea on what the problem\nis. If I'm right, the problem is triggered by any rule with a function \nthat operates on one of the parameters. If the parameter is already the\ntype then the rule succeeds. If the parameter needs to be cast (e.g.,\nbecause it's a literal value) then the rule fails.\n\nE.g., if there is a function like\n\n function strlen(varchar) returns int4 ...\n\ntry\n\n create table test (s varchar(20), len int4);\n\n create view test_view as select s from test;\n\n create rule test_rule as on insert to test_view \n do instead insert into test (s, strlen(s));\n\nthen\n\n insert into test_view values ('crash-n-burn!');\n\nwill fail.\n\nTaken even further, you could probably use\n\n create rule test_rule2 as on insert to test_view\n do instead insert into test2 (strlen(s));\n\nThe earlier example is just an updateable view with the tweak that\nsome of the hidden underlying fields are also updated. Strictly speaking\nthis breaks 3NF, but with the consistency checks it's a useful way of \ncaching derived values while ensuring that they can't get out of sync\nwith the objects they cache.\n\nBear\n\nP.S., it just occurred to me that rules can allow multiple statements.\nMaybe the workaround is\n\n create rule...\n do instead (\n insert into temporary table;\n insert into final table from temporary table using functions;\n clear temporary table\n );\n",
"msg_date": "Mon, 1 Apr 2002 20:37:34 -0700 (MST)",
"msg_from": "Bear Giles <bgiles@coyotesong.com>",
"msg_from_op": true,
"msg_subject": "Re: inserting user defined types through a rule?"
},
{
"msg_contents": "Bear Giles <bgiles@coyotesong.com> writes:\n> I don't know the standard types and functions well enough to be able to\n> whip out a test case, but I think I do have an idea on what the problem\n> is. If I'm right, the problem is triggered by any rule with a function \n> that operates on one of the parameters. If the parameter is already the\n> type then the rule succeeds. If the parameter needs to be cast (e.g.,\n> because it's a literal value) then the rule fails.\n\nI tried this, but apparently there's more to it than that; AFAICT it\nworks in the cases where I'd expect it to work (viz, where there is a\nsuitable cast function available).\n\ntest71=# create function strlen(varchar) returns int as\ntest71-# 'select length($1)::int' language 'sql';\nCREATE\ntest71=# create table test (s varchar(20), len int4);\nCREATE\ntest71=# create view test_view as select s from test;\nCREATE\ntest71=# create rule test_rule as on insert to test_view\ntest71-# do instead insert into test values (new.s, strlen(new.s));\nCREATE\ntest71=# insert into test_view values ('crash-n-burn!');\nINSERT 1610948 1\ntest71=# insert into test_view values (33::int);\nINSERT 1610949 1\ntest71=# insert into test_view values (33::numeric);\nERROR: Attribute 's' is of type 'varchar' but expression is of type 'numeric'\n You will need to rewrite or cast the expression\ntest71=# select * from test;\n s | len\n---------------+-----\n crash-n-burn! | 13\n 33 | 2\n(2 rows)\n\nPerhaps there's a particular case where it fails, but you'll have to\ngive us more of a clue...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 01:51:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: inserting user defined types through a rule? "
}
] |
[
{
"msg_contents": "----- Original Message ----- \nFrom: Nicolas Bazin \nTo: PostgreSQL-development \nCc: Tom Lane ; Bruce Momjian ; Michael Meskes \nSent: Thursday, March 28, 2002 9:30 AM\nSubject: Always the same ecpg bug - please (re)apply patch\n\n\nHere is the description:\n\nWhen a macro is replaced by the preprocessor, pgc.l reaches an end of file, which is not the actual end of the file. One side effect of that is that if you are in an ifdef block, you get a wrong error telling you that an endif is missing.\n\nThis patch corrects pgc.l and also adds a test of this problem to test1.pgc. To convince you, apply the patch to test1.pgc first, then try to compile the test, then apply the patch to pgc.l.\n\nThe patch moves the test of the scope of an ifdef block to the end of the file being parsed, including all included files, ... .\n\nFor the record, this patch was applied a first time by Bruce, then overwritten by Michael and reapplied by him. But the big mystery is that there is no trace of that in CVS ????\n\nNicolas",
"msg_date": "Tue, 2 Apr 2002 10:41:51 +1000",
"msg_from": "\"Nicolas Bazin\" <nbazin@ingenico.com.au>",
"msg_from_op": true,
"msg_subject": "please apply patch"
}
] |
[
{
"msg_contents": "Hi,\n\nI created a schema *inoue* and tried the following.\n\n # create table inoue.t1 (id serial primary key, dt text);\n NOTICE: CREATE TABLE will create implicit sequence 't1_id_seq'\n for SERIAL column 't1.id'\n NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n 't1_pkey' for table 't1'\n CREATE\n # insert into inoue.t1 (dt) values ('abc');\n ERROR: Relation \"t1_id_seq\" does not exist\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Tue, 02 Apr 2002 13:39:57 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "serial and namespace"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> # create table inoue.t1 (id serial primary key, dt text);\n> NOTICE: CREATE TABLE will create implicit sequence 't1_id_seq'\n> for SERIAL column 't1.id'\n> NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index\n> 't1_pkey' for table 't1'\n> CREATE\n> # insert into inoue.t1 (dt) values ('abc');\n> ERROR: Relation \"t1_id_seq\" does not exist\n\nOkay, I fixed SERIAL column creation so that you get a default like\nthis:\n\nregression=# \\d t1\n Table \"t1\"\n Column | Type | Modifiers\n--------+---------+-------------------------------------------------------\n id | integer | not null default nextval('\"inoue\".\"t1_id_seq\"'::text)\n dt | text |\nIndexes: t1_pkey primary key btree (id)\n\nI'm not entirely thrilled with this solution, because it forecloses the\npossibility of dumping the table definition and then reloading it into\na different schema. We haven't yet talked much about how pg_dump should\nbehave with schemas --- but I think it will be important for pg_dump to\nbe able to choose whether to qualify object names with a schema name or\nnot in its dump output. The above approach makes it harder to do so.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 01:37:51 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: serial and namespace "
}
] |
[
{
"msg_contents": "We have a request from our customers to link two database servers through the ISDN link.\n\nWe found the dblink in the contrib directory, and it works, but there is one big problem.\nI'll try to explain it using the sample from README.dblink:\n\nSAMPLE:\n create view myremotetable as\n select dblink_tok(t1.dblink_p,0) as f1, dblink_tok(t1.dblink_p,1) as f2\n from (select dblink('hostaddr=127.0.0.1 port=5432 dbname=template1 user=postgres password=postgres'\n ,'select proname, prosrc from pg_proc') as dblink_p) as t1;\n\nselect f1, f2 from myremotetable where f1 like 'bytea%';\n\nWhen the select is executed:\n\n 1. all the data from table pg_proc are retrieved from the remote database\n 2. then the where clause is executed against that data (on the local side)\n\nThis behaviour is OK if the whole story is happening on a local network, but\nin our case data would be sent through a slow ISDN connection.\n\nIs it possible to write a rule that uses the current SQL expression and sends this expression to the remote database ? \nIn this case only the wanted data would be sent through the network.\n\nThank You in advance !",
"msg_date": "Tue, 2 Apr 2002 10:25:44 +0200",
"msg_from": "\"Darko Prenosil\" <Darko.Prenosil@finteh.hr>",
"msg_from_op": true,
"msg_subject": "Dblink and ISDN"
},
{
"msg_contents": "Darko Prenosil wrote:\n> SAMPLE:\n> \n> create view myremotetable as\n> select dblink_tok(t1.dblink_p,0) as f1, dblink_tok(t1.dblink_p,1) as f2\n> from (select dblink('hostaddr=127.0.0.1 port=5432 dbname=template1 \n> user=postgres password=postgres'\n> ,'select proname, prosrc from pg_proc') as dblink_p) \n> as t1;\n> \n> \n> \n> select f1, f2 from myremotetable where f1 like 'bytea%';\n> \n\nYou could write the query directly instead of using a view, i.e.\n\nselect dblink_tok(t1.dblink_p,0) as f1, dblink_tok(t1.dblink_p,1) as f2\nfrom (select dblink('hostaddr=127.0.0.1 port=5432 dbname=template1\nuser=postgres password=postgres','select proname, prosrc from pg_proc') \nas dblink_p WHERE proname LIKE 'bytea%') as t1;\n\n\n> \n> \n> Is it possible to write a rule that uses the current SQL expression and \n> sends this expression to the remote database ?\n> \n> In this case only wanted data would be send through the network.\n> \n\nI'm not experienced in using PostgreSQL rules, but I don't see a way to \naccess the current SQL expression. Hopefully someone more knowledgeable \nwill chime in here.\n\nJoe\n\n",
"msg_date": "Tue, 02 Apr 2002 09:49:52 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Dblink and ISDN"
},
{
"msg_contents": "Joe Conway wrote:\n> Darko Prenosil wrote:\n> \n>> SAMPLE:\n>>\n>> create view myremotetable as\n>> select dblink_tok(t1.dblink_p,0) as f1, dblink_tok(t1.dblink_p,1) as f2\n>> from (select dblink('hostaddr=127.0.0.1 port=5432 dbname=template1 \n>> user=postgres password=postgres'\n>> ,'select proname, prosrc from pg_proc') as \n>> dblink_p) as t1;\n>>\n>> \n>>\n>> select f1, f2 from myremotetable where f1 like 'bytea%';\n>>\n> \n> You could write the query directly instead of using a view, i.e.\n> \n> select dblink_tok(t1.dblink_p,0) as f1, dblink_tok(t1.dblink_p,1) as f2\n> from (select dblink('hostaddr=127.0.0.1 port=5432 dbname=template1\n> user=postgres password=postgres','select proname, prosrc from pg_proc') \n> as dblink_p WHERE proname LIKE 'bytea%') as t1;\n>\n\nOops, messed up my cut and paste, and forgot to double the quotes around \nbytea%. This one I tested ;) to work fine:\nselect dblink_tok(t1.dblink_p,0) as f1, dblink_tok(t1.dblink_p,1) as f2\nfrom (select dblink('hostaddr=127.0.0.1 port=5432 dbname=template1\nuser=postgres password=postgres','select proname, prosrc from pg_proc \nWHERE proname LIKE ''bytea%''')\nas dblink_p) as t1;\n\nJoe\n\n",
"msg_date": "Tue, 02 Apr 2002 09:58:55 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Dblink and ISDN"
},
{
"msg_contents": "Out of curiosity, what happens if the remote server is unavailable?\n\n\n\n\n----- Original Message -----\nFrom: \"Joe Conway\" <mail@joeconway.com>\nTo: \"Darko Prenosil\" <Darko.Prenosil@finteh.hr>\nCc: \"Hackers\" <pgsql-hackers@postgresql.org>\nSent: Tuesday, April 02, 2002 12:58 PM\nSubject: Re: [HACKERS] Dblink and ISDN\n\n\n> Joe Conway wrote:\n> > Darko Prenosil wrote:\n> >\n> >> SAMPLE:\n> >>\n> >> create view myremotetable as\n> >> select dblink_tok(t1.dblink_p,0) as f1,\ndblink_tok(t1.dblink_p,1) as f2\n> >> from (select dblink('hostaddr=127.0.0.1 port=5432\ndbname=template1\n> >> user=postgres password=postgres'\n> >> ,'select proname, prosrc from pg_proc') as\n> >> dblink_p) as t1;\n> >>\n> >>\n> >>\n> >> select f1, f2 from myremotetable where f1 like 'bytea%';\n> >>\n> >\n> > You could write the query directly instead of using a view, i.e.\n> >\n> > select dblink_tok(t1.dblink_p,0) as f1, dblink_tok(t1.dblink_p,1)\nas f2\n> > from (select dblink('hostaddr=127.0.0.1 port=5432 dbname=template1\n> > user=postgres password=postgres','select proname, prosrc from\npg_proc')\n> > as dblink_p WHERE proname LIKE 'bytea%') as t1;\n> >\n>\n> Oops, messed up my cut and paste, and forgot to double the quotes\naround\n> bytea%. This one I tested ;) to work fine:\n> select dblink_tok(t1.dblink_p,0) as f1, dblink_tok(t1.dblink_p,1) as\nf2\n> from (select dblink('hostaddr=127.0.0.1 port=5432 dbname=template1\n> user=postgres password=postgres','select proname, prosrc from\npg_proc\n> WHERE proname LIKE ''bytea%''')\n> as dblink_p) as t1;\n>\n> Joe\n>\n>\n> ---------------------------(end of\nbroadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to\nmajordomo@postgresql.org)\n>\n\n",
"msg_date": "Tue, 2 Apr 2002 13:16:57 -0500",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: Dblink and ISDN"
},
{
"msg_contents": "Rod Taylor wrote:\n> Out of curiousity, what happens if the remove server is unavailable?\n> \n\nI tried it against a bogus IP, and this is what I got:\n\ntest=# select dblink_tok(t1.dblink_p,0) as f1, dblink_tok(t1.dblink_p,1) \nas f2 from (select dblink('hostaddr=123.45.67.8 \ndbname=template1','select proname, prosrc from pg_proc WHERE proname \nLIKE ''bytea%''') as dblink_p) as t1;\nERROR: dblink: connection error: could not connect to server: \nConnection timed out\n Is the server running on host 123.45.67.8 and accepting\n TCP/IP connections on port 5432?\n\ntest=#\n\ndblink just uses libpq to make a client connection, and thus inherits \nlibpq's response.\n\nJoe\n\n",
"msg_date": "Tue, 02 Apr 2002 11:30:07 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Dblink and ISDN"
}
] |
[
{
"msg_contents": "\n> So the work that would need to be done is asking the driver to request the\n> timeout via \"BEGIN WORK TIMEOUT 5\"; getting the backend to parse that\n> request and set the alarm on each query in that transaction; getting the\n\nWell imho that interpretation would be completely unobvious. \nMy first guess would have been, that with this syntax the whole transaction \nmust commit or rollback within 5 seconds.\n\nThus I think we only need statement_timeout. ODBC, same as JDBC wants it at the \nstatement handle level. ODBC also provides for a default that applies to all \nstatement handles of this connection (They call the statement attr QUERY_TIMEOUT,\nso imho there is room for interpretation whether it applies to selects only, which \nI would find absurd).\n\nAndreas\n",
"msg_date": "Tue, 2 Apr 2002 11:12:27 +0200",
"msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>",
"msg_from_op": true,
"msg_subject": "Re: timeout implementation issues"
}
] |
[
{
"msg_contents": "Is PostgreSQL unicode compliant/ready?\n\nDoes it store/export text in Unicode wide-character format, or single\ncharacter strings?\n\n\n\n",
"msg_date": "Tue, 2 Apr 2002 18:53:03 +0800",
"msg_from": "\"Kevin McPherson\" <kevinmcp@en-tranz.com>",
"msg_from_op": true,
"msg_subject": "Unicode ready? "
},
{
"msg_contents": "On Tuesday 2 April 2002 12:53, you wrote:\n> Is PostgreSQL unicode compliant/ready?\n> Does it store/export text in Unicode wide-character format, or single\n> character strings?\n\n[By the way: there are several Unicode encodings (UTF-8, UTF-16, UCS2). \nUTF-8 is the most popular because wide characters are coded using 1 to 3 \nsingle ASCII characters. Thus UTF-8 extracts can be read in a normal text \neditor. Conversely, UTF-16 is coded on 16 bits, thus can't be read \neasily.]\n\nI guess your question was \"Is PostgreSQL multi-byte safe and Unicode ready?\"\n\n1) Server-side:\na) PostgreSQL needs to be compiled with \n--enable-recode\n--enable-multibyte\nb) Create a database with\nCREATE DATABASE foo WITH ENCODING ='UNICODE' (which means UTF-8 in PostgreSQL)\n\nSeveral other multi-byte encodings are available. In the case of Unicode, \ndata is stored in UTF-8 format. Data and searches are performed on \nwide-characters, not 8-bit characters.\n\n2) Client side\nBy default the connection is done with the server encoding. But it is possible to \nautomatically recode connections on the fly using:\n\nSET CLIENT_ENCODING = Latin9 (this example recodes Unicode streams to Western \nEuropean with Euro symbol). It is possible to recode several streams at the \nsame time.\n\n3) ODBC interface\nThe current ODBC interface provides UTF-8 Unicode encoding. But \nthe Microsoft platform needs UCS-2 Unicode encoding (e.g. Access 2K). Therefore, \nyou will be able to view data under OpenOffice but not Microsoft Office.\n\nThe new ODBC driver in CVS supports UCS-2.\n\n4) Server-side languages\nServer-side languages are the traditional weakness of Unicode programming. \nWhen writing code, you need to calculate the length of a string, crop the \nleft side of it, etc... In PHP, this is done using special mb_string \nlibraries. 
Usually, this breaks your code because these libraries provide \nadditional programming words.\n\nThis is not the case in PostgreSQL where all PLpgSQL functions are multi-byte \nsafe. Because of PHP instability, I ported several functions to PLpgSQL.\n\nPostgreSQL is a pure marvel.\n\nFor additional questions, please post to pgsql-general@postgresql.org.\n\nCheers,\nJean-Michel POURE\n",
"msg_date": "Tue, 2 Apr 2002 22:56:55 +0200",
"msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Unicode ready?"
}
] |
[
{
"msg_contents": "\nOver this past weekend, the PostgreSQL Global Development Group packaged\nup and put onto our ftp server PostgreSQL v7.2.1 ... a bug fix release, it\nfixes a critical bug in v7.2:\n\n sequence counters will go backwards after a crash\n\nOther fixes since v7.2 include:\n\n Fix pgaccess kanji-conversion key binding (Tatsuo)\n Optimizer improvements (Tom)\n cash I/O improvements (Tom)\n New Russian FAQ\n Compile fix for missing AuthBlockSig (Heiko)\n Additional time zones and time zone fixes (Thomas)\n Allow psql \\connect to handle mixed case database and user names (Tom)\n Return proper OID on command completion even with ON INSERT rules (Tom)\n Allow COPY FROM to use 8-bit DELIMITERS (Tatsuo)\n Fix bug in extract/date_part for milliseconds/microseconds (Tatsuo)\n Improve handling of multiple UNIONs with different lengths (Tom)\n contrib/btree_gist improvements (Teodor Sigaev)\n contrib/tsearch dictionary improvements, see README.tsearch for\n an additional installation step (Thomas T. Thai, Teodor Sigaev)\n Fix for array subscripts handling (Tom)\n Allow EXECUTE of \"CREATE TABLE AS ... SELECT\" in PL/PgSQL (Tom)\n\n\nUpgrading to v7.2.1 from v7.2 *does not* require a dump/reload, but it is\nrequired from all previous releases ...\n\nDue to the nature of the bug with the sequence counters, it is *highly*\nrecommended that anyone running v7.2 upgrade to the latest version at\ntheir earliest convenience ...\n\nMarc G. Fournier\n\n\n",
"msg_date": "Tue, 2 Apr 2002 10:08:05 -0400 (AST)",
"msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>",
"msg_from_op": true,
"msg_subject": "v7.2.1 Released: Critical Bug Fix"
},
{
"msg_contents": "I was wondering if it is documented as to exactly how to do a minor\nupgrade. I've not been able to find it in the past, and I end up doing\na full install, dump/reload. I'm running postgresql on nt/2000 using\ncygwin. Thanks -Dean\n\n-----Original Message-----\nFrom: pgsql-general-owner@postgresql.org\n[mailto:pgsql-general-owner@postgresql.org] On Behalf Of Marc G.\nFournier\nSent: Tuesday, April 02, 2002 9:08 AM\nTo: pgsql-announce@postgresql.org\nCc: pgsql-hackers@postgresql.org; pgsql-general@postgresql.org\nSubject: [GENERAL] v7.2.1 Released: Critical Bug Fix\n\n\n\nOver this past weekend, the PostgreSQL Global Development Group packaged\nup and put onto our ftp server PostgreSQL v7.2.1 ... a bug fix release,\nit fixes a critical bug in v7.2:\n\n sequence counters will go backwards after a crash\n\nOther fixes since v7.2 include:\n\n Fix pgaccess kanji-coversion key binding (Tatsuo)\n Optimizer improvements (Tom)\n cash I/O improvements (Tom)\n New Russian FAQ\n Compile fix for missing AuthBlockSig (Heiko)\n Additional time zones and time zone fixes (Thomas)\n Allow psql \\connect to handle mixed case database and user names (Tom)\nReturn proper OID on command completion even with ON INSERT rules (Tom)\nAllow COPY FROM to use 8-bit DELIMITERS (Tatsuo) Fix bug in\nextract/date_part for milliseconds/microseconds (Tatsuo) Improve\nhandling of multiple UNIONs with different lengths (Tom)\ncontrib/btree_gist improvements (Teodor Sigaev) contrib/tsearch\ndictionary improvements, see README.tsearch for\n an additional installation step (Thomas T. Thai, Teodor Sigaev) Fix\nfor array subscripts handling (Tom) Allow EXECUTE of \"CREATE TABLE AS\n... 
SELECT\" in PL/PgSQL (Tom)\n\n\nUpgrading to v7.2.1 from v7.2 *does not* require a dump/reload, but it\nis required from all previous releases ...\n\nDue to the nature of the bug with the sequence counters, it is *highly*\nrecommended that anyone running v7.2 upgrade to the latest version at\ntheir earliest convience ...\n\nMarc G. Fournier\n\n\n\n---------------------------(end of broadcast)---------------------------\nTIP 2: you can get off all lists at once with the unregister command\n (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n",
"msg_date": "Tue, 2 Apr 2002 09:39:55 -0500",
"msg_from": "\"Dean Hill\" <dean@metweld.com>",
"msg_from_op": false,
"msg_subject": "Re: v7.2.1 Released: Critical Bug Fix"
},
{
"msg_contents": "> cash I/O improvements (Tom)\n\nIf it will change the I/O cash flow to more I than O, it will definitely be \na success... :-) \n\n --\nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 14.00-18.00 Web: www.suse.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Email: kar@kakidata.dk \n",
"msg_date": "Tue, 02 Apr 2002 15:12:02 GMT",
"msg_from": "\"Kaare Rasmussen\" <kar@kakidata.dk>",
"msg_from_op": false,
"msg_subject": "Re: v7.2.1 Released: Critical Bug Fix"
},
{
"msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n> Over this past weekend, the PostgreSQL Global Development Group packaged\n> up and put onto our ftp server PostgreSQL v7.2.1 ... a bug fix release, it\n> fixes a critical bug in v7.2:\n\n> sequence counters will go backwards after a crash\n\nIt seems worth pointing out that said bug is not new in 7.2; it has\nexisted in all 7.1.* releases as well.\n\nIf you were looking for a reason to update to 7.2.* from 7.1.*, this\nmight be a good one.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 10:36:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: v7.2.1 Released: Critical Bug Fix "
},
{
"msg_contents": "\"Dean Hill\" <dean@metweld.com> writes:\n\n> I was wondering if it is documented as to exactly how to do a minor\n> upgrade. I've not been able to find it in the past, and I end up doing\n> a full install, dump/reload. I'm running postgresql on nt/2000 using\n> cygwin. Thanks -Dean\n\nMinor upgrades do not require a dump/restore; the on-disk file format\nremains the same.\n\n-Doug\n-- \nDoug McNaught Wireboard Industries http://www.wireboard.com/\n\n Custom software development, systems and network consulting.\n Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...\n",
"msg_date": "02 Apr 2002 10:56:36 -0500",
"msg_from": "Doug McNaught <doug@wireboard.com>",
"msg_from_op": false,
"msg_subject": "Re: v7.2.1 Released: Critical Bug Fix"
},
{
"msg_contents": "Generally, what is the duration between such an announcement and the appearance of\nthe RPMs found on the Download PostgreSQL page?\n\nAlso, the http://www.us.postgresql.org/news.html has no mention of this\nupgrade.\n\nThanks.\n\nRichard\n\n\n\"Marc G. Fournier\" wrote:\n\n> Over this past weekend, the PostgreSQL Global Development Group packaged\n> up and put onto our ftp server PostgreSQL v7.2.1 ... a bug fix release, it\n> fixes a critical bug in v7.2:\n>\n> sequence counters will go backwards after a crash\n>\n> Other fixes since v7.2 include:\n>\n> Fix pgaccess kanji-coversion key binding (Tatsuo)\n> Optimizer improvements (Tom)\n> cash I/O improvements (Tom)\n> New Russian FAQ\n> Compile fix for missing AuthBlockSig (Heiko)\n> Additional time zones and time zone fixes (Thomas)\n> Allow psql \\connect to handle mixed case database and user names (Tom)\n> Return proper OID on command completion even with ON INSERT rules (Tom)\n> Allow COPY FROM to use 8-bit DELIMITERS (Tatsuo)\n> Fix bug in extract/date_part for milliseconds/microseconds (Tatsuo)\n> Improve handling of multiple UNIONs with different lengths (Tom)\n> contrib/btree_gist improvements (Teodor Sigaev)\n> contrib/tsearch dictionary improvements, see README.tsearch for\n> an additional installation step (Thomas T. Thai, Teodor Sigaev)\n> Fix for array subscripts handling (Tom)\n> Allow EXECUTE of \"CREATE TABLE AS ... SELECT\" in PL/PgSQL (Tom)\n>\n> Upgrading to v7.2.1 from v7.2 *does not* require a dump/reload, but it is\n> required from all previous releases ...\n>\n> Due to the nature of the bug with the sequence counters, it is *highly*\n> recommended that anyone running v7.2 upgrade to the latest version at\n> their earliest convience ...\n>\n> Marc G. 
Fournier\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n",
"msg_date": "Tue, 02 Apr 2002 09:00:11 -0800",
"msg_from": "Richard Emberson <emberson@phc.net>",
"msg_from_op": false,
"msg_subject": "Re: v7.2.1 Released: Critical Bug Fix"
},
{
"msg_contents": "On Tue, 2 Apr 2002, Richard Emberson wrote:\n\n> Generally, what is the duration between such an announcement and the\n> appearence of\n> the RPMs found on the Download PostgreSQL page?\n>\n> Also, the http://www.us.postgresql.org/news.html has no mention of this\n\nIt takes up to 24 hours for all of the mirror sites to catch up.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 2 Apr 2002 12:04:41 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: v7.2.1 Released: Critical Bug Fix"
},
{
"msg_contents": "Vince Vielhaber <vev@michvhf.com> writes:\n> On Tue, 2 Apr 2002, Richard Emberson wrote:\n>> Generally, what is the duration between such an announcement and the\n>> appearence of\n>> the RPMs found on the Download PostgreSQL page?\n>> \n>> Also, the http://www.us.postgresql.org/news.html has no mention of this\n\n> It takes up to 24 hours for all of the mirror sites to catch up.\n\nHowever, the tarballs were uploaded to the FTP sites several days ago,\nso you should be able to fetch the code already from any FTP mirror,\neven if your favorite WWW mirror is still behind.\nLook under source/v7.2.1/ if you do not see a v7.2.1 link at the top\nlevel of your FTP mirror.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 12:26:45 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: v7.2.1 Released: Critical Bug Fix "
},
{
"msg_contents": "On Tue, 2 Apr 2002, Tom Lane wrote:\n\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > On Tue, 2 Apr 2002, Richard Emberson wrote:\n> >> Generally, what is the duration between such an announcement and the\n> >> appearence of\n> >> the RPMs found on the Download PostgreSQL page?\n> >>\n> >> Also, the http://www.us.postgresql.org/news.html has no mention of this\n>\n> > It takes up to 24 hours for all of the mirror sites to catch up.\n>\n> However, the tarballs were uploaded to the FTP sites several days ago,\n> so you should be able to fetch the code already from any FTP mirror,\n> even if your favorite WWW mirror is still behind.\n> Look under source/v7.2.1/ if you do not see a v7.2.1 link at the top\n> level of your FTP mirror.\n\nThe links didn't exist until just a little while ago so most of the\nmirrors won't have them yet. source/v7.2.1 does exist tho. RPMs aren't\navailable yet.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n",
"msg_date": "Tue, 2 Apr 2002 12:30:36 -0500 (EST)",
"msg_from": "Vince Vielhaber <vev@michvhf.com>",
"msg_from_op": false,
"msg_subject": "Re: v7.2.1 Released: Critical Bug Fix "
},
{
"msg_contents": "On Tue, 2002-04-02 at 18:26, Tom Lane wrote:\n> Vince Vielhaber <vev@michvhf.com> writes:\n> > On Tue, 2 Apr 2002, Richard Emberson wrote:\n> >> Generally, what is the duration between such an announcement and the\n> >> appearence of\n> >> the RPMs found on the Download PostgreSQL page?\n> >> \n> >> Also, the http://www.us.postgresql.org/news.html has no mention of this\n> \n> > It takes up to 24 hours for all of the mirror sites to catch up.\n> \n> However, the tarballs were uploaded to the FTP sites several days ago,\n> so you should be able to fetch the code already from any FTP mirror,\n> even if your favorite WWW mirror is still behind.\n> Look under source/v7.2.1/ if you do not see a v7.2.1 link at the top\n> level of your FTP mirror.\n\nA Debian release of 7.2.1 is now in the Debian unstable archive.\n\n-- \nOliver Elphick Oliver.Elphick@lfix.co.uk\nIsle of Wight http://www.lfix.co.uk/oliver\nGPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839 932A 614D 4C34 3E1D 0C1C\n\n \"No temptation has seized you except what is common to \n man. And God is faithful; he will not let you be \n tempted beyond what you can bear. But when you are \n tempted, he will also provide a way out so that you \n can stand up under it.\" I Corinthians 10:13",
"msg_date": "03 Apr 2002 06:48:20 +0100",
"msg_from": "Oliver Elphick <olly@lfix.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] v7.2.1 Released: Critical Bug Fix"
}
] |
[
{
"msg_contents": "Hi,\n\nWe are implementing a database for maintaining our IP addresses. Looking \nin the current documentation, it seems that INET/CIDR types only support \nIPv4 addresses until now, although \nhttp://archives.postgresql.org/pgsql-patches/2001-09/msg00236.php\nseems to suggest a patch for IPv6 has been ready for some time now.\n\nWhat is the status of IPv6 types at this moment?\n\n\n-- \n__________________________________________________\n\"Nothing is as subjective as reality\"\nReinoud van Leeuwen reinoud.v@n.leeuwen.net\nhttp://www.xs4all.nl/~reinoud\n__________________________________________________\n",
"msg_date": "Tue, 2 Apr 2002 18:09:26 +0200",
"msg_from": "Reinoud van Leeuwen <reinoud.v@n.leeuwen.net>",
"msg_from_op": true,
"msg_subject": "status of IPv6 Support for INET/CIDR types"
},
{
"msg_contents": "Reinoud van Leeuwen wrote:\n> Hi,\n> \n> We are implementing a database for maintaining our IP addresses. Looking \n> in the current documentation, it seems that INET/CIDR types only support \n> IPv4 addresses until now, although \n> http://archives.postgresql.org/pgsql-patches/2001-09/msg00236.php\n> seems to suggest a patch for IPv6 has been ready for some time now.\n> \n> What is the status of IPv6 types at this moment?\n\nSome merging of code from the BIND code and our IPv4 changes need to be\nmade. No one has done it yet.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 2 Apr 2002 14:59:44 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: status of IPv6 Support for INET/CIDR types"
}
] |
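The thread above is about inet/cidr accepting only IPv4 at the time. As an aside, not part of the PostgreSQL code under discussion, the IPv4-vs-IPv6 distinction the poster needs can be sketched with Python's standard `ipaddress` module, which handles both families and enforces cidr-style strictness (host bits below the mask are rejected, much as PostgreSQL's cidr type does compared with the looser inet type):

```python
import ipaddress

# Parse one network of each family and inspect version/prefix.
v4 = ipaddress.ip_network("192.168.0.0/24")
v6 = ipaddress.ip_network("2001:db8::/32")

print(v4.version, v4.prefixlen, v4.num_addresses)  # 4 24 256
print(v6.version, v6.prefixlen)                    # 6 32

# cidr-style strictness: a value with host bits set is not a network.
try:
    ipaddress.ip_network("192.168.0.1/24")
except ValueError as err:
    print("rejected:", err)
```

This is only an analogy for the desired semantics; the actual PostgreSQL support discussed above required merging BIND code with the existing IPv4 implementation.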
[
{
"msg_contents": "Has anyone seen this:\n\nERROR: dtoi4: integer out of range\n\n\non 7.1.3\n\nWhat worries me, is that at startup time, the log shows:\n\nDEBUG: database system was shut down at 2002-04-02 23:16:52 EEST\nDEBUG: CheckPoint record at (82, 1928435208)\nDEBUG: Redo record at (82, 1928435208); Undo record at (0, 0); Shutdown TRUE\nDEBUG: NextTransactionId: 517528628; NextOid: 2148849196\nDEBUG: database system is in production state\n\nNote the NextOid, while i /usr/include/machine/limits.h defines INT_MAX as \n2147483647. Are oid really singed ints?\n\nDaniel\n\nPS: This database indeed has an increasing oid counter in that range. Grep \nfrom the log shows\n\nDEBUG: NextTransactionId: 386003914; NextOid: 1551075952\nDEBUG: NextTransactionId: 397667914; NextOid: 1643984428\nDEBUG: NextTransactionId: 444453748; NextOid: 1864857132\nDEBUG: NextTransactionId: 450233305; NextOid: 1888540204\nDEBUG: NextTransactionId: 454987662; NextOid: 1917687340\nDEBUG: NextTransactionId: 501775621; NextOid: 2078209580\nDEBUG: NextTransactionId: 517524499; NextOid: 2148849196\nDEBUG: NextTransactionId: 517528628; NextOid: 2148849196\n\nthis is from one month ago.\n\n",
"msg_date": "Tue, 02 Apr 2002 23:25:15 +0300",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": true,
"msg_subject": "maxint reached?"
},
{
"msg_contents": "An followup to my previous post.\n\nIt turned out to be an query containing \"oid = somenumber\" called from perl script. Is it possible that the default type conversion functions do not work as expected?\n\nChanging this to \"oid = oid(somenumber)\" worked as expected.\n\nDaniel\n\n",
"msg_date": "Tue, 02 Apr 2002 23:39:33 +0300",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": true,
"msg_subject": "Re: maxint reached? "
},
{
"msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> It turned out to be an query containing \"oid = somenumber\" called from perl script. Is it possible that the default type conversion functions do not work as expected?\n\nNo, but you do have to cast an oversize value to oid explicitly to\nprevent it from being taken as int4, eg\n\nregression=# select oid = 2444444444 from int4_tbl;\nERROR: dtoi4: integer out of range\nregression=# select oid = 2444444444::oid from int4_tbl;\n<< works >>\n\n(In releases before about 7.1 you'd have had to single-quote the\nliteral, too.)\n\nThis is one of a whole raft of cases involving undesirable assignment\nof types to numeric constants; see past complaints about int4 being used\nwhere int2 or int8 was wanted, numeric vs float8 constants, etc etc.\nWe're still looking for a promotion rule that does what you want every\ntime...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 16:06:27 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: maxint reached? "
},
{
"msg_contents": ">>>Tom Lane said:\n > This is one of a whole raft of cases involving undesirable assignment\n > of types to numeric constants; see past complaints about int4 being used\n > where int2 or int8 was wanted, numeric vs float8 constants, etc etc.\n > We're still looking for a promotion rule that does what you want every\n > time...\n\nSo in essence this means that my best bet is to again dump/reload the \ndatabase... Even pgaccess has hit this problem as it uses oid=something in the \nqueries.\n\nDaniel\n\n",
"msg_date": "Wed, 03 Apr 2002 00:10:53 +0300",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": true,
"msg_subject": "Re: maxint reached? "
},
{
"msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> So in essence this means that my best bet is to again dump/reload the \n> database...\n\nEither that or fix your queries to cast the literals explicitly.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 16:14:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: maxint reached? "
},
{
"msg_contents": ">>>Tom Lane said:\n > Daniel Kalchev <daniel@digsys.bg> writes:\n > > So in essence this means that my best bet is to again dump/reload the \n > > database...\n > \n > Either that or fix your queries to cast the literals explicitly.\n\nThere is more to it:\n\ncustomer=# select max(oid) from croute;\n max \n-------------\n -2144025472\n(1 row)\n\nHow to handle this?\n\nDaniel\n\n",
"msg_date": "Wed, 03 Apr 2002 10:09:34 +0300",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": true,
"msg_subject": "Re: maxint reached? "
},
{
"msg_contents": ">>>Tom Lane said:\n > Daniel Kalchev <daniel@digsys.bg> writes:\n > > So in essence this means that my best bet is to again dump/reload the \n > > database...\n > \n > Either that or fix your queries to cast the literals explicitly.\n\nSorry for the incomplete reply:\n\nthis does not work:\n\ncustomer=# select max(oid) from croute;\n max \n-------------\n -2144025472\n(1 row)\n\nthis does work:\n\ncustomer=# select oid(max(oid)) from croute;\n oid \n------------\n 2150941824\n(1 row)\n\n\nweird, isn't it? I guess max should return the same type as it's arguments, no?\n\nDaniel\n\n",
"msg_date": "Wed, 03 Apr 2002 10:11:06 +0300",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": true,
"msg_subject": "Re: maxint reached? "
},
{
"msg_contents": "Daniel Kalchev <daniel@digsys.bg> writes:\n> There is more to it:\n\n> customer=# select max(oid) from croute;\n> max \n> -------------\n> -2144025472\n> (1 row)\n\n> How to handle this?\n\nUse a more recent Postgres release. max(oid) behaves as expected in\n7.2. Before that it was piggybacking on max(int4), which meant that\nit chose the wrong value once you had any entries with the high bit\nset...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 10:28:22 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: maxint reached? "
}
] |
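The two numbers in the thread above are the same 32-bit pattern read with different signedness: `max(oid)` returned -2144025472 because the value passed through signed-int4 code paths, while `oid(max(oid))` showed the unsigned 2150941824. A small Python sketch (helper names are ours, not PostgreSQL's) reproduces the reinterpretation:

```python
import struct

def as_signed32(u: int) -> int:
    """Reinterpret an unsigned 32-bit value as signed two's complement."""
    return struct.unpack("<i", struct.pack("<I", u & 0xFFFFFFFF))[0]

def as_unsigned32(s: int) -> int:
    """Reinterpret a signed 32-bit value as unsigned."""
    return struct.unpack("<I", struct.pack("<i", s))[0]

# Values straight from the thread: oids past 2^31 wrap negative when
# funneled through int4 arithmetic (as max(oid) did before 7.2).
print(as_signed32(2150941824))     # -2144025472, what max(oid) showed
print(as_unsigned32(-2144025472))  # 2150941824, what oid(max(oid)) showed

# The startup log's NextOid likewise exceeds the signed-int4 range,
# which is why unquoted literals hit "dtoi4: integer out of range".
print(2148849196 > 2**31 - 1)      # True
```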
[
{
"msg_contents": "Since I'm about to have to edit pg_proc.h to add a namespace column,\nI thought this would be a good time to revise the current proiscachable\ncolumn into the three-way cachability distinction we've discussed\nbefore. But I need some names for the values, and I'm not satisfied\nwith the ideas I've had so far.\n\nTo refresh people's memory: what we want is to be able to distinguish\nbetween functions that are:\n\n1. Strictly cachable (a/k/a constant-foldable): given fixed input\nvalues, the same result value will always be produced, for ever and\never, amen. Examples: addition operator, sin(x). Given a call\nof such a function with all-constant input values, the system is\nentitled to fold the function call to a constant on sight.\n\n2. Cachable within a single command: given fixed input values, the\nresult will not change if the function were to be repeatedly evaluated\nwithin a single SQL command; but the result could change over time.\nExamples: now(); datetime-related operations that depend on the current\ntimezone (or other SET-able variables); any function that looks in\ndatabase tables to determine its result.\n\n3. Totally non-cachable: result may change from one call to the next,\neven within a single SQL command. Examples: nextval(), random(),\ntimeofday(). (Yes, timeofday() and now() are in different categories.\nSee http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT)\n\nCurrently the system can only distinguish cases 1 and 3, so functions\nthat are really case 2 have to be labeled as case 3; this prevents a lot\nof useful optimizations. In particular, it is safe to use expressions\ninvolving only case-1 and case-2 functions as indexscan conditions,\nwhereas case-3 functions cannot be optimized into an indexscan. So this\nis an important fix to make.\n\nBTW, because of MVCC semantics, case 2 covers more ground than you might\nthink. We are interested in functions whose values cannot change during\na single \"scan\", ie, while the intra-transaction command counter does\nnot increment. So functions that do SELECTs are actually guaranteed to\nbe case 2, even if stuff outside the function is changing the table\nbeing looked at.\n\nMy problem is picking names for the three categories of functions.\nCurrently we use \"with (isCachable)\" to identify category 1, but it\nseems like this name might actually be more sensible for category 2.\nI'm having a hard time picking simple names that convey these meanings\naccurately, or even with a reasonable amount of suggestiveness.\n\nComments, ideas?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 16:40:57 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Suggestions please: names for function cachability attributes"
},
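Tom's three categories correspond to three familiar caching strategies. A toy Python model (illustrative only; the names and mechanics are ours, not PostgreSQL's) makes the distinction concrete — case 1 caches a result for ever, case 2 fixes one value per statement the way now() does, and case 3 re-executes on every call the way nextval() does:

```python
import time

_fold_cache = {}

def call_foldable(fn, *args):
    """Case 1: given fixed inputs, fold once and reuse the result for ever."""
    key = (fn.__name__, args)
    if key not in _fold_cache:
        _fold_cache[key] = fn(*args)
    return _fold_cache[key]

def make_nextval():
    """Case 3: a nextval()-style function; every call yields a new value."""
    n = 0
    def nextval():
        nonlocal n
        n += 1
        return n
    return nextval

def run_statement(volatile_fn, rows=3):
    """Case 2: a now()-style value is captured once for the whole statement,
    while a volatile function is re-evaluated per row."""
    stmt_now = time.time()
    return [stmt_now] * rows, [volatile_fn() for _ in range(rows)]

nextval = make_nextval()
frozen, live = run_statement(nextval)
print(len(set(frozen)), len(set(live)))  # 1 3
```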
{
"msg_contents": "Tom Lane wrote:\n> BTW, because of MVCC semantics, case 2 covers more ground than you might\n> think. We are interested in functions whose values cannot change during\n> a single \"scan\", ie, while the intra-transaction command counter does\n> not increment. So functions that do SELECTs are actually guaranteed to\n> be case 2, even if stuff outside the function is changing the table\n> being looked at.\n> \n> My problem is picking names for the three categories of functions.\n> Currently we use \"with (isCachable)\" to identify category 1, but it\n> seems like this name might actually be more sensible for category 2.\n> I'm having a hard time picking simple names that convey these meanings\n> accurately, or even with a reasonable amount of suggestiveness.\n> \n> Comments, ideas?\n> \n\n\nHow about:\n\ncase 1: Cachable\ncase 2: ScanCachable or Optimizable\ncase 3: NonCachable\n\nJoe\n\n\n\n",
"msg_date": "Tue, 02 Apr 2002 13:57:04 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability"
},
{
"msg_contents": "My 2 cents.\n\nLevel 1. with (isCachableStatic)\nLevel 2. with (isCachableDynamic)\nLevel 3. default\n\nIn my mind (isCachable) sounds like level 1\n\nOn Tuesday 02 April 2002 03:40 pm, Tom Lane wrote:\n> Since I'm about to have to edit pg_proc.h to add a namespace column,\n> I thought this would be a good time to revise the current proiscachable\n> column into the three-way cachability distinction we've discussed\n> before. But I need some names for the values, and I'm not satisfied\n> with the ideas I've had so far.\n>\n> To refresh people's memory: what we want is to be able to distinguish\n> between functions that are:\n>\n> 1. Strictly cachable (a/k/a constant-foldable): given fixed input\n> values, the same result value will always be produced, for ever and\n> ever, amen. Examples: addition operator, sin(x). Given a call\n> of such a function with all-constant input values, the system is\n> entitled to fold the function call to a constant on sight.\n>\n> 2. Cachable within a single command: given fixed input values, the\n> result will not change if the function were to be repeatedly evaluated\n> within a single SQL command; but the result could change over time.\n> Examples: now(); datetime-related operations that depend on the current\n> timezone (or other SET-able variables); any function that looks in\n> database tables to determine its result.\n>\n> 3. Totally non-cachable: result may change from one call to the next,\n> even within a single SQL command. Examples: nextval(), random(),\n> timeofday(). (Yes, timeofday() and now() are in different categories.\n> See\n> http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/functions-datet\n>ime.html#FUNCTIONS-DATETIME-CURRENT)\n>\n> Currently the system can only distinguish cases 1 and 3, so functions\n> that are really case 2 have to be labeled as case 3; this prevents a lot\n> of useful optimizations. In particular, it is safe to use expressions\n> involving only case-1 and case-2 functions as indexscan conditions,\n> whereas case-3 functions cannot be optimized into an indexscan. So this\n> is an important fix to make.\n>\n> BTW, because of MVCC semantics, case 2 covers more ground than you might\n> think. We are interested in functions whose values cannot change during\n> a single \"scan\", ie, while the intra-transaction command counter does\n> not increment. So functions that do SELECTs are actually guaranteed to\n> be case 2, even if stuff outside the function is changing the table\n> being looked at.\n>\n> My problem is picking names for the three categories of functions.\n> Currently we use \"with (isCachable)\" to identify category 1, but it\n> seems like this name might actually be more sensible for category 2.\n> I'm having a hard time picking simple names that convey these meanings\n> accurately, or even with a reasonable amount of suggestiveness.\n>\n> Comments, ideas?\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n",
"msg_date": "Tue, 2 Apr 2002 16:09:14 -0600",
"msg_from": "David Walker <pgsql@grax.com>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability attributes"
},
{
"msg_contents": "* Tom Lane (tgl@sss.pgh.pa.us) [020402 16:42]:\n> Since I'm about to have to edit pg_proc.h to add a namespace column,\n> I thought this would be a good time to revise the current proiscachable\n> column into the three-way cachability distinction we've discussed\n> before. But I need some names for the values, and I'm not satisfied\n> with the ideas I've had so far.\n\nInvariant\nCachable\nNoncachable\n\n",
"msg_date": "Tue, 2 Apr 2002 17:46:15 -0500",
"msg_from": "Bradley McLean <brad@bradm.net>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability attributes"
},
{
"msg_contents": "I am full agreement with proposal. I love it!!\n\n(1) const or constant\n(2) cacheable\n(3) volatile\n\nP.S.\nTom: My mail doesn't reach you. As an AT&T user, you block my machine's IP\naddress with the anti-spam blocking. :-(\n\nTom Lane wrote:\n> \n> Since I'm about to have to edit pg_proc.h to add a namespace column,\n> I thought this would be a good time to revise the current proiscachable\n> column into the three-way cachability distinction we've discussed\n> before. But I need some names for the values, and I'm not satisfied\n> with the ideas I've had so far.\n> \n> To refresh people's memory: what we want is to be able to distinguish\n> between functions that are:\n> \n> 1. Strictly cachable (a/k/a constant-foldable): given fixed input\n> values, the same result value will always be produced, for ever and\n> ever, amen. Examples: addition operator, sin(x). Given a call\n> of such a function with all-constant input values, the system is\n> entitled to fold the function call to a constant on sight.\n> \n> 2. Cachable within a single command: given fixed input values, the\n> result will not change if the function were to be repeatedly evaluated\n> within a single SQL command; but the result could change over time.\n> Examples: now(); datetime-related operations that depend on the current\n> timezone (or other SET-able variables); any function that looks in\n> database tables to determine its result.\n> \n> 3. Totally non-cachable: result may change from one call to the next,\n> even within a single SQL command. Examples: nextval(), random(),\n> timeofday(). (Yes, timeofday() and now() are in different categories.\n> See http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT)\n> \n> Currently the system can only distinguish cases 1 and 3, so functions\n> that are really case 2 have to be labeled as case 3; this prevents a lot\n> of useful optimizations. In particular, it is safe to use expressions\n> involving only case-1 and case-2 functions as indexscan conditions,\n> whereas case-3 functions cannot be optimized into an indexscan. So this\n> is an important fix to make.\n> \n> BTW, because of MVCC semantics, case 2 covers more ground than you might\n> think. We are interested in functions whose values cannot change during\n> a single \"scan\", ie, while the intra-transaction command counter does\n> not increment. So functions that do SELECTs are actually guaranteed to\n> be case 2, even if stuff outside the function is changing the table\n> being looked at.\n> \n> My problem is picking names for the three categories of functions.\n> Currently we use \"with (isCachable)\" to identify category 1, but it\n> seems like this name might actually be more sensible for category 2.\n> I'm having a hard time picking simple names that convey these meanings\n> accurately, or even with a reasonable amount of suggestiveness.\n> \n> Comments, ideas?\n> \n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html",
"msg_date": "Tue, 02 Apr 2002 19:51:47 -0500",
"msg_from": "mlw <markw@nospam.not>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability attributes"
},
{
"msg_contents": "Tom Lane writes:\n\n> Since I'm about to have to edit pg_proc.h to add a namespace column,\n> I thought this would be a good time to revise the current proiscachable\n> column into the three-way cachability distinction we've discussed\n> before. But I need some names for the values, and I'm not satisfied\n> with the ideas I've had so far.\n\nWell, for one thing, we might want to change the name to the correct\nspelling \"cacheable\".\n\n> 1. Strictly cachable (a/k/a constant-foldable): given fixed input\n> values, the same result value will always be produced, for ever and\n> ever, amen. Examples: addition operator, sin(x). Given a call\n> of such a function with all-constant input values, the system is\n> entitled to fold the function call to a constant on sight.\n\ndeterministic\n\n(That's how SQL99 calls it.)\n\n> 2. Cachable within a single command: given fixed input values, the\n> result will not change if the function were to be repeatedly evaluated\n> within a single SQL command; but the result could change over time.\n> Examples: now(); datetime-related operations that depend on the current\n> timezone (or other SET-able variables); any function that looks in\n> database tables to determine its result.\n\n\"cacheable\" seems OK for this.\n\n> 3. Totally non-cachable: result may change from one call to the next,\n> even within a single SQL command. Examples: nextval(), random(),\n> timeofday(). (Yes, timeofday() and now() are in different categories.\n> See http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT)\n\nnot deterministic, not cacheable\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 2 Apr 2002 19:59:35 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability"
},
{
"msg_contents": "On Tue, 2 Apr 2002, Peter Eisentraut wrote:\n\n> Tom Lane writes:\n> \n> > Since I'm about to have to edit pg_proc.h to add a namespace column,\n> > I thought this would be a good time to revise the current proiscachable\n> > column into the three-way cachability distinction we've discussed\n> > before. But I need some names for the values, and I'm not satisfied\n> > with the ideas I've had so far.\n> \n> Well, for one thing, we might want to change the name to the correct\n> spelling \"cacheable\".\n> \n> > 1. Strictly cachable (a/k/a constant-foldable): given fixed input\n> > values, the same result value will always be produced, for ever and\n> > ever, amen. Examples: addition operator, sin(x). Given a call\n> > of such a function with all-constant input values, the system is\n> > entitled to fold the function call to a constant on sight.\n> \n> deterministic\n> \n> (That's how SQL99 calls it.)\n> \n> > 2. Cachable within a single command: given fixed input values, the\n> > result will not change if the function were to be repeatedly evaluated\n> > within a single SQL command; but the result could change over time.\n> > Examples: now(); datetime-related operations that depend on the current\n> > timezone (or other SET-able variables); any function that looks in\n> > database tables to determine its result.\n> \n> \"cacheable\" seems OK for this.\n\nSQL99 suggests that there are only two types of user defined\nroutines: deterministic and 'possibly non-deterministic'. However, in\nsection 11.49 it defines \n\n<deterministic characteristic> ::= DETERMINISTIC | NOT DETERMINISTIC\n\nSo the real problem is how to qualify this.\n\nTRANSACTIONAL DETERMINISTIC\n\nor\n\nNOT DETERMINISTIC CACHEABLE\n\nare the only ways that come to mind. I'll admit that I don't like either.\n\n> \n> > 3. Totally non-cachable: result may change from one call to the next,\n> > even within a single SQL command. Examples: nextval(), random(),\n> > timeofday(). (Yes, timeofday() and now() are in different categories.\n> > See http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/functions-datetime.html#FUNCTIONS-DATETIME-CURRENT)\n> \n> not deterministic, not cacheable\n> \n> \n\nGavin\n\n",
"msg_date": "Wed, 3 Apr 2002 11:47:29 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Well, for one thing, we might want to change the name to the correct\n> spelling \"cacheable\".\n\nIs that correct?\n\nI looked in the Oxford English Dictionary, the Random House Dictionary,\nand a couple other dictionaries of less substantial heft, and could not\nfind anything authoritative at all. RH gives the derived forms \"cached\"\nand \"caching\"; OED offers nothing. I'd be interested to see an\nauthoritative reference for the spelling of the adjective form.\n\nPossibly we should avoid the issue by using another word ;-)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 23:39:35 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions please: names for function cachability "
},
{
"msg_contents": "On Tue, 02 Apr 2002 23:39:35 -0500\n\"Tom Lane\" <tgl@sss.pgh.pa.us> wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Well, for one thing, we might want to change the name to the correct\n> > spelling \"cacheable\".\n> \n> Is that correct?\n\nApparently, other people are confused as well:\n\n\thttp://www.xent.com/FoRK-archive/august97/0431.html\n\nFWIW, google has ~30,000 results for -eable, and ~8,000 results for\n-able. A couple other software projects (notably Apache Jakarta)\nuse -eable.\n\nMy preference would be for -eable, but that's just on the basis of\n\"it looks right\", which is hardly authoritative.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Tue, 2 Apr 2002 23:54:35 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability"
},
{
"msg_contents": "\nI am full agreement with proposal. I love it!!\n\n(1) const or constant\n(2) cacheable\n(3) volatile\n\nP.S.\nTom: My mail doesn't reach you. As an AT&T user, you block my machine's IP\naddress with the anti-spam blocking. :-(\n",
"msg_date": "Wed, 03 Apr 2002 10:02:15 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability attributes"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> mlw <markw@mohawksoft.com> writes:\n> > Tom: My mail doesn't reach you. As an AT&T user, you block my machine's IP\n> > address with the anti-spam blocking. :-(\n> \n> Sorry about that. I like 510sg's dnsbl list precisely because it's\n> aggressive, but sometimes it's too aggressive. I can whitelist you\n> if you have a stable IP address ... is 24.147.138.78 a permanently\n> assigned address, or not?\n\nAlas we have the irony of me trying to respond to you via email, to give you\ninformation on how to unblock me so I can respond via email. I am laughing.\n\nI wish I could say I have a fixed IP, but I do not. It is a DHCP assigned AT&T\ncable modem. Sorry.\n\nI'm not sure I'm the only one, am I?\n",
"msg_date": "Wed, 03 Apr 2002 11:11:28 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability "
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> (1) const or constant\n> (2) cacheable\n> (3) volatile\n\nI was wondering about \"const\" for case 1, also. I think there is some\nprecedent for using \"const\" with this meaning in other programming\nlanguages. \"volatile\" for case 3 seems reasonable.\n\n> Tom: My mail doesn't reach you. As an AT&T user, you block my machine's IP\n> address with the anti-spam blocking. :-(\n\nSorry about that. I like 510sg's dnsbl list precisely because it's\naggressive, but sometimes it's too aggressive. I can whitelist you\nif you have a stable IP address ... is 24.147.138.78 a permanently\nassigned address, or not?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 11:12:09 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions please: names for function cachability attributes "
},
{
"msg_contents": "...\n> I'm not sure I'm the only one, am I?\n\nNo, I was also blocked from Tom's mail a while ago. I have a static IP,\nbut my ISP's entire block of addresses made it on to the spam list Tom\nuses, and the strategy of the list maintainers seems to be to maximize\nthe collateral damage to force me to somehow force my ISP to change\ntheir policies, whatever those are. If I researched it enough, I might\nbe able to find out what my ISP does or does not do, and what I'm\nsupposed to do or not do. What a pain...\n\nNot sure if my status has changed. I'll bet not, since the anti-spam\nfolks have high enough standards that someone like me can't make the\ngrade. I suppose they don't rely on PostgreSQL for their database... ;)\n\nThat said, I'd like to block some spam myself. I'd rather find a spam\nlist which doesn't already have me disallowed however...\n\n - Thomas\n",
"msg_date": "Wed, 03 Apr 2002 08:45:03 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Tom Lane writes:\n> \n> > mlw <markw@mohawksoft.com> writes:\n> > > (1) const or constant\n> > > (2) cacheable\n> > > (3) volatile\n> >\n> > I was wondering about \"const\" for case 1, also. I think there is some\n> > precedent for using \"const\" with this meaning in other programming\n> > languages.\n> \n> I think the meaning of \"const\" tends to be \"cannot change the result\" --\n> which may actually make sense in SQL in a future life if you can pass\n> around table descriptors or cursor references.\n\nA function, such as sin(x) could be considered constant for the result based on\nvalue 'x'\n",
"msg_date": "Wed, 03 Apr 2002 11:46:37 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function "
},
{
"msg_contents": "Tom Lane writes:\n\n> mlw <markw@mohawksoft.com> writes:\n> > (1) const or constant\n> > (2) cacheable\n> > (3) volatile\n>\n> I was wondering about \"const\" for case 1, also. I think there is some\n> precedent for using \"const\" with this meaning in other programming\n> languages.\n\nI think the meaning of \"const\" tends to be \"cannot change the result\" --\nwhich may actually make sense in SQL in a future life if you can pass\naround table descriptors or cursor references.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 11:54:20 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability"
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> That said, I'd like to block some spam myself. I'd rather find a spam\n> list which doesn't already have me disallowed however...\n\nIn case it makes you feel better: my *own* address was on the 510sg list\nfor awhile last month. But I still use the list ;-). Nothing to stop\nyou from using some less-aggressive list though; see\nhttp://relays.osirusoft.com/ for links to a dozen or more possibilities.\n\nIn practice, any DNSBL list can cause denial-of-service problems.\n(The original and still most conservatively run one, MAPS RBL, had a\nmemorable episode where someone put 127.0.0.1 into the blacklist for\na few hours...) I deal with this by installing local whitelist\nexceptions for people I talk to regularly. Otherwise, there's always\nthe mailing lists.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 11:57:26 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions please: names for function cachability "
},
{
"msg_contents": "On Wed, Apr 03, 2002 at 11:11:28AM -0500, mlw wrote:\n> \n> I'm not sure I'm the only one, am I?\n\nNope, we're on AT&T lines, too.\n\nA\n\n-- \n----\nAndrew Sullivan 87 Mowat Avenue \nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M6K 3E3\n +1 416 646 3304 x110\n\n",
"msg_date": "Wed, 3 Apr 2002 12:00:35 -0500",
"msg_from": "Andrew Sullivan <andrew@libertyrms.info>",
"msg_from_op": false,
"msg_subject": "Re: [HACKERS] Suggestions please: names for function cachability"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> Tom Lane wrote:\n>> Sorry about that. I like 510sg's dnsbl list precisely because it's\n>> aggressive, but sometimes it's too aggressive. I can whitelist you\n>> if you have a stable IP address ... is 24.147.138.78 a permanently\n>> assigned address, or not?\n\n> I wish I could say I have a fixed IP, but I do not. It is a DHCP assigned AT&T\n> cable modem. Sorry.\n\nCable modem IPs are more stable than you might think --- a quick look in\nthe list archives shows you've had this one since early January. In\npractice you'll keep the same IP as long as you don't lose connectivity.\n\nI'll whitelist 24.147.138.* and hope for the best...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 12:10:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions please: names for function cachability "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> Tom Lane writes:\n> \n> > mlw <markw@mohawksoft.com> writes:\n> > > (1) const or constant\n> > > (2) cacheable\n> > > (3) volatile\n> >\n> > I was wondering about \"const\" for case 1, also. I think there is some\n> > precedent for using \"const\" with this meaning in other programming\n> > languages.\n> \n> I think the meaning of \"const\" tends to be \"cannot change the result\" --\n> which may actually make sense in SQL in a future life if you can pass\n> around table descriptors or cursor references.\n\nI can buy that. Ok, const isn't a good name.\n\nHow about 'immutable' ?\n",
"msg_date": "Wed, 03 Apr 2002 12:10:45 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability"
},
{
"msg_contents": "On Wed, Apr 03, 2002 at 08:45:03AM -0800, Thomas Lockhart wrote:\n> ...\n> > I'm not sure I'm the only one, am I?\n> \n> No, I was also blocked from Tom's mail a while ago. I have a static IP,\n> but my ISP's entire block of addresses made it on to the spam list Tom\n> uses, and the strategy of the list maintainers seems to be to maximize\n> the collateral damage to force me to somehow force my ISP to change\n> their policies, whatever those are. If I researched it enough, I might\n> be able to find out what my ISP does or does not do, and what I'm\n> supposed to do or not do. What a pain...\n\nWe had the same problem here. I spoke to the 5-10 list provider and\ngot our ISP delisted since they seem to have kicked the 3 or so spammers\noff their network. It seems a little unreasonable to blacklist an entire\nlarge ISP's netblock just because they have a very small number of spam\nsites. It is also pretty unreasonable to think that any company is\ngoing to switch providers because of one blacklist or somehow complain\nto their ISP about the spammers the ISP is hosting without any more\ndetail than:\n\n\t\"Blacklist X says you provide spam support and/or have too many\n\t spammers on your network. Please remove them so I can send\n\t my email.\"\n\n\nMartin\n",
"msg_date": "Wed, 3 Apr 2002 12:16:21 -0500",
"msg_from": "Martin Renters <martin@datafax.com>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability"
},
{
"msg_contents": "mlw writes:\n\n> A function, such as sin(x) could be considered constant for the result based on\n> value 'x'\n\nIt could also be considered deterministic, strict, cacheable,\nmathematically sensible, real, pleasant, or good. ;-)\n\nOut of those, I believe \"const\" is the worst term, because saying \"sin(x)\nis a constant function\" sounds pretty wrong.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 12:16:41 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachabilityattributes"
},
{
"msg_contents": "Martin Renters <martin@datafax.com> writes:\n> It is also pretty unreasonable to think that any company is\n> going to switch providers because of one blacklist or somehow complain\n> to their ISP about the spammers the ISP is hosting without any more\n> detail than:\n\n> \t\"Blacklist X says you provide spam support and/or have too many\n> \t spammers on your network. Please remove them so I can send\n> \t my email.\"\n\nFWIW, all the blacklists I use (and 510sg is only the first line of\ndefense ;-)) have documentation available about the reasons for listing\nIP blocks. F'r instance, looking up Thomas' IP I get:\n\n xo.com.spam-support.blackholes.five-ten-sg.com. 23h32m50s IN TXT \"added 2002-01-05; spam support - dns server at 64.1.121.57 supporting http://www.poxteam2001.com\"\n xo.com.spam-support.blackholes.five-ten-sg.com. 23h32m50s IN TXT \"added 2002-01-07; spam support - dns server at 64.1.121.57 supporting http://compower.numberop.com\"\n xo.com.spam-support.blackholes.five-ten-sg.com. 23h32m50s IN TXT \"added 2002-03-07; spam support - hosting http://207.88.179.193 - terminated\"\n xo.com.spam-support.blackholes.five-ten-sg.com. 23h32m50s IN TXT \"added 2002-03-10; spam support - hosting http://thecottagemonitor.com\"\n xo.com.spam-support.blackholes.five-ten-sg.com. 23h32m50s IN TXT \"added 2002-03-13; spam support - hosting http://shortcuts2learning.com\"\n xo.com.spam-support.blackholes.five-ten-sg.com. 23h32m50s IN TXT \"added 2002-03-24; spam support - hosting http://209.164.32.75/consumer_first_funding\"\n\nBut this is getting pretty far off-topic for the PG lists.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 12:24:37 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions please: names for function cachability "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> It could also be considered deterministic, strict, cacheable,\n> mathematically sensible, real, pleasant, or good. ;-)\n\n> Out of those, I believe \"const\" is the worst term, because saying \"sin(x)\n> is a constant function\" sounds pretty wrong.\n\nYeah, that was my problem with \"const\" too. But \"deterministic\" has the\nsame problem --- in the ordinary meaning of the term, a function doesn't\nbecome nondeterministic just because it depends on SET TIMEZONE as well\nas its explicit parameters. It's also too long and too hard to spell\ncorrectly ;-).\n\n\"cacheable\" (whatever spelling; I suppose we should consider accepting\nboth, cf analyze/analyse) is a fine term except that it isn't quite clear\nwhether to use it for case 1 or case 2, which means that people won't be\nable to remember which case it applies to ... unless we come up with\nsome really memorable choice for the other case.\n\nSo far the only suggestion I've seen that really makes me happy is\n\"volatile\" for case 3. Brad's idea of \"invariant\" for case 1 isn't\ntoo bad, but as a partner for \"cacheable\" it seems a bit weak;\nif you haven't looked at the manual lately, will you remember which\nis which?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 12:38:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions please: names for function cachabilityattributes "
},
{
"msg_contents": "It occurs to me that we also need a better term for the overall concept.\n\"cacheability\" has misled at least two people (that I can recall) into\nthinking that we maintain some kind of function result cache --- which\nis not true, and if it were true we'd need the term \"cacheable\" for\ncontrol parameters for the cache, which this categorization is not.\n\nI am thinking that \"mutability\" might be a good starting point instead\nof \"cacheability\". This leads immediately to what seems like a fairly\nreasonable set of names:\n\npg_proc column: promutable or proismutable\n\ncase 1: \"immutable\"\ncase 2: \"mutable\", or perhaps \"stable\"\ncase 3: \"volatile\"\n\nThoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 13:29:40 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions please: names for function cachabilityattributes "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> It occurs to me that we also need a better term for the overall concept.\n> \"cacheability\" has misled at least two people (that I can recall) into\n> thinking that we maintain some kind of function result cache --- which\n> is not true, and if it were true we'd need the term \"cacheable\" for\n> control parameters for the cache, which this categorization is not.\n> \n> I am thinking that \"mutability\" might be a good starting point instead\n> of \"cacheability\". This leads immediately to what seems like a fairly\n> reasonable set of names:\n> \n> pg_proc column: promutable or proismutable\n> \n> case 1: \"immutable\"\n> case 2: \"mutable\", or perhaps \"stable\"\n> case 3: \"volatile\"\n\nI like 1 and 3 :-) \n\nI think 2 should be something like \"stable.\" Mutable and volitile have very\nsimilar meanings.\n\nI'm not sure, the word stable is right, though. Cacheable has the best meaning,\nbut implies something that isn't. How about \"persistent\" or \"fixed?\"\n",
"msg_date": "Wed, 03 Apr 2002 13:43:21 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function "
},
{
"msg_contents": "> FWIW, all the blacklists I use (and 510sg is only the first line of\n> defense ;-)) have documentation available about the reasons for listing\n> IP blocks. F'r instance, looking up Thomas' IP I get:\n...\n> But this is getting pretty far off-topic for the PG lists.\n\nI'll guess that the list of reasons for the blacklisting I find today is\ndifferent than the list I found a few months ago when this first came\nup. What is relevant to me is that I absolutely cannot get my machine\nremoved from this blacklist, no matter what I do to secure that machine.\nAnd that, istm, reduces the relevance of that particular blacklisting\nstrategy.\n\nI was just using this as an example (I happen to send mail directly to\nyou so have run across it in this context).\n\nI'm interested because spam has affected me in other contexts too, and\nevery time it takes time away from PostgreSQL.\n\nWe could sent up Yet Another List, say pgsql-spam-whiners, and I could\nbe a charter member, and maybe y'all would suggest I should also be on\npgsql-clueless-spam-whiners. But maybe it is better to have an\noccasional discussion on topics that people find affecting their use of\nthe mailing list(s) ;)\n\nI have to say that spam is bumming me out more now than it ever has in\nthe past. So let's hope that the blacklists *do* help somehow!\n\n - Thomas\n",
"msg_date": "Wed, 03 Apr 2002 14:26:44 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachability"
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > It could also be considered deterministic, strict, cacheable,\n> > mathematically sensible, real, pleasant, or good. ;-)\n>\n> > Out of those, I believe \"const\" is the worst term, because saying \"sin(x)\n> > is a constant function\" sounds pretty wrong.\n>\n> Yeah, that was my problem with \"const\" too. But \"deterministic\" has the\n> same problem --- in the ordinary meaning of the term, a function doesn't\n> become nondeterministic just because it depends on SET TIMEZONE as well\n> as its explicit parameters. It's also too long and too hard to spell\n> correctly ;-).\n\nAs it turns out, Oracle, IBM, and Microsoft use it for exactly the same\npurpose, and it is standard ...\n\nIf you're not happy with labelling case 2 nondeterministic, add an\nadditional clause, like USES EXTERNAL STATE. We could dig through all the\nadjectives in the world, but I don't think any will catch the situation\nquite like saying what's actually going on.\n\n> So far the only suggestion I've seen that really makes me happy is\n> \"volatile\" for case 3.\n\nVolatile means \"subject to rapid or unexpected change\", which is not\nreally what case 3 is.\n\n> Brad's idea of \"invariant\" for case 1 isn't too bad, but as a partner\n> for \"cacheable\" it seems a bit weak; if you haven't looked at the\n> manual lately, will you remember which is which?\n\nActually, IBM has VARIANT as an alias for NOT DETERMINISTIC (and NOT\nVARIANT for DETERMINISTIC).\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 18:19:31 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachabilityattributes"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> case 1: \"immutable\"\n>> case 2: \"mutable\", or perhaps \"stable\"\n>> case 3: \"volatile\"\n\n> Since they've changed anyway, how about dropping the silly \"is\" in front\n> of the names?\n\n\"volatile\" would conflict with a C keyword. Possibly we could get away\nwith this at the SQL level, but I was worried...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Apr 2002 02:05:54 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions please: names for function cachabilityattributes "
},
{
"msg_contents": "Tom Lane writes:\n\n> case 1: \"immutable\"\n> case 2: \"mutable\", or perhaps \"stable\"\n> case 3: \"volatile\"\n\nSince they've changed anyway, how about dropping the silly \"is\" in front\nof the names?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 13 Apr 2002 02:08:08 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachabilityattributes"
},
{
"msg_contents": "Tom Lane writes:\n\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n> >> case 1: \"immutable\"\n> >> case 2: \"mutable\", or perhaps \"stable\"\n> >> case 3: \"volatile\"\n>\n> > Since they've changed anyway, how about dropping the silly \"is\" in front\n> > of the names?\n>\n> \"volatile\" would conflict with a C keyword. Possibly we could get away\n> with this at the SQL level, but I was worried...\n\nIn general, I was thinking about migrating the CREATE FUNCTION syntax more\ninto consistency with other commmands and with the SQL standard.\nBasically I'd like to write\n\n CREATE FUNCTION name (args, ...) RETURNS type\n AS '...'\n LANGUAGE foo\n STATIC\n IMPLICIT CAST\n\n(where everything after RETURNS can be in random order).\n\nOK, so the key words are not the same as SQL, but it looks a lot\nfriendlier this way. We're already migrating CREATE DATABASE, I think,\nand the names of the options have changed, too, so this might be a good\ntime.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Sat, 13 Apr 2002 02:33:51 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Suggestions please: names for function cachabilityattributes"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Basically I'd like to write\n\n> CREATE FUNCTION name (args, ...) RETURNS type\n> AS '...'\n> LANGUAGE foo\n> STATIC\n> IMPLICIT CAST\n\n> (where everything after RETURNS can be in random order).\n\nNo strong objection here; but you'll still have to accept the old syntax\nfor backwards compatibility with existing dump scripts. I also worry\nthat this will end up forcing us to reserve a lot more keywords. Not so\nmuch for CREATE FUNCTION, but in CREATE OPERATOR, CREATE DOMAIN and\nfriends I do not think you'll be able to do this without making the\nkeywords reserved (else how do you tell 'em from parts of typenames\nand expressions?).\n\nGiven that it's not gonna be SQL-spec anyway, I'm not entirely sure\nI see the point of changing.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Sat, 13 Apr 2002 11:34:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": true,
"msg_subject": "Re: Suggestions please: names for function cachabilityattributes "
}
]
[
{
"msg_contents": "Hi,\n\nWould it be an idea to have pg_dump append an ANALYZE; command to the end of\nits dumps to assist newbies / inexperienced admins?\n\nReason being is that I noticed that when I just restored a 50MB dump that\nthe pg_statistic table had no contents...\n\nI think it'd be an idea...\n\nChris\n\n",
"msg_date": "Wed, 3 Apr 2002 09:40:13 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "ANALYZE after restore"
},
{
"msg_contents": "On Wed, 3 Apr 2002 09:40:13 +0800\n\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> wrote:\n> Hi,\n> \n> Would it be an idea to have pg_dump append an ANALYZE; command to the end of\n> its dumps to assist newbies / inexperienced admins?\n\nThat strikes me as a good idea; a lot of the questions we get on\n-general and on IRC are solved by suggesting \"have you run ANALYZE?\"\nAnd that is only the sub-section of the user community that takes the\ntime to track down the problem and posts about it to the mailing\nlist -- I shudder to think how many people have never taken the time\nto tune their database at all.\n\nGiven that ANALYZE is now a separate command, so there is no need to\nrun a VACUUM (which could be much more expensive); furthermore, since\nANALYZE now only takes a statistical sampling of the full table, it\nshouldn't take very long, even on large tables. However, I'd say we\nshould make this behavior optional, controlled by a command-line\nswitch, but it should be enabled by default.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Tue, 2 Apr 2002 20:51:47 -0500",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore"
},
{
"msg_contents": "On Wed, 3 Apr 2002, Christopher Kings-Lynne wrote:\n\n> Hi,\n> \n> Would it be an idea to have pg_dump append an ANALYZE; command to the end of\n> its dumps to assist newbies / inexperienced admins?\n\nI do not think this is desired behaviour. Firstly, pg_dump is not just for\nrestoring data to the system. Presumably another flag would need to be\nadded to pg_dump to prevent an ANALYZE being appended. This is messing\nand, in my opinion, it goes against the 'does what it says it does' nature\nof Postgres. Secondly, in experienced admins are not going to get\nexperienced with database management unless they see that their database\nruns like a dog and they have to read the manual.\n\nGavin\n\n",
"msg_date": "Wed, 3 Apr 2002 11:52:45 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore"
},
{
"msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> On Wed, 3 Apr 2002, Christopher Kings-Lynne wrote:\n>> Would it be an idea to have pg_dump append an ANALYZE; command to the end of\n>> its dumps to assist newbies / inexperienced admins?\n\n> I do not think this is desired behaviour.\n\nI agree with Gavin here ... a forced VACUUM or ANALYZE after a restore\nwill just get in the way of people who know what they're doing, and it's\nnot at all clear that it will help people who do not.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 02 Apr 2002 23:09:55 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore "
},
{
"msg_contents": "On Wed, 2002-04-03 at 06:52, Gavin Sherry wrote:\n> On Wed, 3 Apr 2002, Christopher Kings-Lynne wrote:\n> \n> > Hi,\n> > \n> > Would it be an idea to have pg_dump append an ANALYZE; command to the end of\n> > its dumps to assist newbies / inexperienced admins?\n> \n> I do not think this is desired behaviour. Firstly, pg_dump is not just for\n> restoring data to the system. Presumably another flag would need to be\n> added to pg_dump to prevent an ANALYZE being appended.\n\nYes.\n\n> This is messing and, in my opinion, it goes against the 'does what it says> it does' nature of Postgres.\n\nWhat does pg_dump say it does ?\n\nOr should pg_dump append ANALYZE only if it determines that ANALYZE has\nbeen run on the database being dumped ?\n\nDo you have any tools that will break when ANALYZE is added, (and which\ndon't break on the weird way of dumping foreign keys ;) ?\n\n> Secondly, in experienced admins are not going to get\n> experienced with database management unless they see that their database\n> runs like a dog and they have to read the manual.\n\nRather they think that the database is indeed designed to run like a\ndog.\n\nFor _forcing_ them newbies to learn we could append a new UNANALYZE\ncommand that inserts delibarately bogus info into pg_statistic to make\nit perform even worse by default ;)\n\nIn general, I'd prefer a database that has no need to be explicitly\nmaintained. How many experienced file-system managers do you know ?\n\n---------------------\nHannu\n\n\n",
"msg_date": "03 Apr 2002 12:59:16 +0500",
"msg_from": "Hannu Krosing <hannu@krosing.net>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > On Wed, 3 Apr 2002, Christopher Kings-Lynne wrote:\n> >> Would it be an idea to have pg_dump append an ANALYZE; command to the end of\n> >> its dumps to assist newbies / inexperienced admins?\n> \n> > I do not think this is desired behaviour.\n> \n> I agree with Gavin here ... a forced VACUUM or ANALYZE after a restore\n> will just get in the way of people who know what they're doing, and it's\n> not at all clear that it will help people who do not.\n\nSorry Tom and Gavin, but I feel it really comes down to our idea of what\nwe're\ntrying to do here :\n\na) A database which is very self-maintaining, so people DON'T HAVE to\n learn it's intricacies in order to be getting decent performance.\n (They'll have to learn the intricacies if they want *better*\nperformance)\n\nb) A database which works. But if you want decent performance, you'd\nbetter\n take the time and effort to learn it.\n (This is the approach the commercial vendors take)\n\nI feel we should always target a) where it's possible to without it\nseriously\ngetting in the way of people who've take the time to learn the skills.\n\nThe far majority of people who use PostgreSQL are in the category which\nwill\nbenefit from a) so they can put their time to other uses instead of\nhaving to\nlearn and keep-up-to-date-with PostgreSQL. This will *always* be the\ncase.\n\nHaving decent performance by default should definitely be an important\nobjective, so having an ANALYZE command run at the end of a restore - by\ndefault only - is a good idea.\n\nRegards and best wishes,\n\nJustin Clift\n\n> regards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Wed, 03 Apr 2002 18:16:11 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore"
},
{
"msg_contents": "Justin Clift wrote:\n> Tom Lane wrote:\n> >\n> > Gavin Sherry <swm@linuxworld.com.au> writes:\n> > > On Wed, 3 Apr 2002, Christopher Kings-Lynne wrote:\n> > >> Would it be an idea to have pg_dump append an ANALYZE; command to the end of\n> > >> its dumps to assist newbies / inexperienced admins?\n> >\n> > > I do not think this is desired behaviour.\n> >\n> > I agree with Gavin here ... a forced VACUUM or ANALYZE after a restore\n> > will just get in the way of people who know what they're doing, and it's\n> > not at all clear that it will help people who do not.\n>\n> Sorry Tom and Gavin, but I feel it really comes down to our idea of what\n> we're\n> trying to do here :\n>\n> a) A database which is very self-maintaining, so people DON'T HAVE to\n> learn it's intricacies in order to be getting decent performance.\n> (They'll have to learn the intricacies if they want *better*\n> performance)\n\n The defaults after a restore should result in index scans\n most of the time, resulting in some medium decent\n performance. And PostgreSQL needs some frequent VACUUM\n anyway, so after a while this problem solves itself for the\n average user.\n\n A database wide forced VACUUM on the other hand can make\n things worse. I have seen scenarios, where you have to\n explicitly leave out ANALYZE for specific tables in order to\n keep them index-scanned. So what you're proposing is to force\n professional PostgreSQL users to wait after restore for a\n useless ANALYZE to complete, before they can reset things\n with a normal VACUUM to get their required performance back?\n And all that just to make dummies happier?\n\n\nJan\n\n> b) A database which works. But if you want decent performance, you'd\n> better\n> take the time and effort to learn it.\n> (This is the approach the commercial vendors take)\n>\n> I feel we should always target a) where it's possible to without it\n> seriously\n> getting in the way of people who've take the time to learn the skills.\n>\n> The far majority of people who use PostgreSQL are in the category which\n> will\n> benefit from a) so they can put their time to other uses instead of\n> having to\n> learn and keep-up-to-date-with PostgreSQL. This will *always* be the\n> case.\n>\n> Having decent performance by default should definitely be an important\n> objective, so having an ANALYZE command run at the end of a restore - by\n> default only - is a good idea.\n>\n> Regards and best wishes,\n>\n> Justin Clift\n>\n> > regards, tom lane\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n> --\n> \"My grandfather once told me that there are two kinds of people: those\n> who work and those who take the credit. He told me to try to be in the\n> first group; there was less competition there.\"\n> - Indira Gandhi\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n",
"msg_date": "Wed, 3 Apr 2002 11:41:19 -0500 (EST)",
"msg_from": "Jan Wieck <janwieck@yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore"
},
{
"msg_contents": "Hi Jan,\n\nJan Wieck wrote:\n> \n<snip>\n> The defaults after a restore should result in index scans\n> most of the time, resulting in some medium decent\n> performance. And PostgreSQL needs some frequent VACUUM\n> anyway, so after a while this problem solves itself for the\n> average user.\n> \n> A database wide forced VACUUM on the other hand can make\n> things worse. I have seen scenarios, where you have to\n> explicitly leave out ANALYZE for specific tables in order to\n> keep them index-scanned. So what you're proposing is to force\n> professional PostgreSQL users to wait after restore for a\n> useless ANALYZE to complete, before they can reset things\n> with a normal VACUUM to get their required performance back?\n> And all that just to make dummies happier?\n> \n> Jan\n\nNope, I'm figuring that if it's an option, and the option is on by\ndefault, then for the majority of people that will be a good thing.\n\nAnyone that's a professional PostgreSQL user will know about to turn the\noption off i.e. pg_dump --something (etc). Sure, we all make mistakes\nand will forget now and again, but I don't think that should stop us\nfrom taking into account that the majority of users out there are fairly\nPostgreSQL clue-less.\n\nIf we can make it easy without much inconvenience and without\nsacrificing the power of the database, we should.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n<snip>\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Thu, 04 Apr 2002 03:06:02 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore"
},
{
"msg_contents": "Jan Wieck <janwieck@yahoo.com> writes:\n> ... And PostgreSQL needs some frequent VACUUM\n> anyway, so after a while this problem solves itself for the\n> average user.\n\nYes, that's the key point for me too. Anyone who doesn't set up for\nroutine vacuums/analyzes is going to have performance problems anyway.\nAttacking that by making pg_dump force a vacuum is attacking the wrong\nplace.\n\nThere's been discussion of adding automatic background vacuums to\nPostgres; that seems like a more useful response to the issue.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 12:56:06 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Jan Wieck <janwieck@yahoo.com> writes:\n> > ... And PostgreSQL needs some frequent VACUUM\n> > anyway, so after a while this problem solves itself for the\n> > average user.\n> \n> Yes, that's the key point for me too. Anyone who doesn't set up for\n> routine vacuums/analyzes is going to have performance problems anyway.\n> Attacking that by making pg_dump force a vacuum is attacking the wrong\n> place.\n\nHi Tom,\n\nGood point. Although I also think we're talking about two different\nthings here.\n\nNo-one is proposing running a VACCUM after the load, but instead getting\nsome accurate statistics about the data which was loaded.\n\nI agree adding an automatic background vacuum thread/process/something\nwill be really, really useful too. \nShould we instead have this proposed automatic background something also\nupdate the statistics every now and again?\n\nIf so, I think this will all be a moot point.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n \n> There's been discussion of adding automatic background vacuums to\n> Postgres; that seems like a more useful response to the issue.\n> \n> regards, tom lane\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Thu, 04 Apr 2002 05:19:57 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore"
},
{
"msg_contents": "Justin Clift <justin@postgresql.org> writes:\n> I agree adding an automatic background vacuum thread/process/something\n> will be really, really useful too. \n> Should we instead have this proposed automatic background something also\n> update the statistics every now and again?\n\nYes, I had always assumed that would be part of the feature ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 14:30:10 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Justin Clift <justin@postgresql.org> writes:\n> > I agree adding an automatic background vacuum thread/process/something\n> > will be really, really useful too.\n> > Should we instead have this proposed automatic background something also\n> > update the statistics every now and again?\n>\n> Yes, I had always assumed that would be part of the feature ...\n\nHi Tom,\n\nCool. I wasn't sure of that (probably haven't been following the\ncorrect threads).\n\nThat makes way more sense then.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\n> \n> regards, tom lane\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n",
"msg_date": "Thu, 04 Apr 2002 05:41:18 +1000",
"msg_from": "Justin Clift <justin@postgresql.org>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore"
},
{
"msg_contents": "On 3 Apr 2002, Hannu Krosing wrote:\n\n> On Wed, 2002-04-03 at 06:52, Gavin Sherry wrote:\n> > On Wed, 3 Apr 2002, Christopher Kings-Lynne wrote:\n> > \n> > > Hi,\n> > > \n> > > Would it be an idea to have pg_dump append an ANALYZE; command to the end of\n> > > its dumps to assist newbies / inexperienced admins?\n> > \n> > I do not think this is desired behaviour. Firstly, pg_dump is not just for\n> > restoring data to the system. Presumably another flag would need to be\n> > added to pg_dump to prevent an ANALYZE being appended.\n> \n> Yes.\n> \n> > This is messing and, in my opinion, it goes against the 'does what it says> it does' nature of Postgres.\n> \n> What does pg_dump say it does ?\n\nfrom man pg_dump:\n\npg_dump - extract a PostgreSQL database into a script file or other \narchive file\n\nPretty simple really.\n\nI've been using postgresql for about three years now, and it only took me \nabout 15 minutes of reading the docs to find the vacuum and vacuum \nanalyze command. It was far harder to figure out subselects, \ntransactions, outer joins, unions, and a dozen other things than vacuum. \nI was a total database newbie back then, by the way.\n\nOne of the things I liked about postgresql was that it wasn't stuffed full \nof marketing fluff to try and impress the PHBs at the top of the corporate \nladder, but was full of useful extensibility and was very much a \"do what \nit said it would\" database.\n\nwhile I agree that postgresql could do with some automated housekeeping \nroutines that would allow joe sixpack to grab it and go, no database that \nhas real power is going to run very well without some administration, \nperiod.\n\nThe last place to put house keeping is in the end of my data dumps. \npg_dump's job is to dump the data from my database in a format that is as \ntransportable as possible. not to hold my hand the next time I need to \nload data into my own database. \n\nWhile I fully support a switch like -z on pg_dump that puts an analyze on \nthe end of my dumps if I so choose, I don't want them showing up \nautomatically and me wondering if the data feeds I make for other will \nwork. \n\nI can see junior dbas who don't understand vacuum and analyze recommending \nto people that they need to dump / restore their whole database once a \nweek to get good performance if we add aht analyze switch to the end of \nthe pg_dump file. NOT a good thing. :-)\n\nanywho, I don't post much here, cause I don't hack postgresql that much, \nbut I love this database, and I don't want it filled up with useless \nmarketing cruft like analyze being haphazardly tacked onto the pg_dump \noutput, so my vote is a great big NO.\n\n\n",
"msg_date": "Thu, 4 Apr 2002 11:27:55 -0700 (MST)",
"msg_from": "Scott Marlowe <smarlowe@ihs.com>",
"msg_from_op": false,
"msg_subject": "Re: ANALYZE after restore"
}
] |
[
{
"msg_contents": "Hi All,\n\nNow that Tom's modified the EXPLAIN output to appear as a query result,\nmaybe SHOW and SHOW ALL should also be modified in that way. The current\nNOTICE: business is a bit messy, and it sure would assist projects just as\npgAccess, phpPgAdmin and pgAdmin with displaying configuration!\n\nAlso, what else could be usefully modified?\n\nChris\n\nps.\n\n>>BTW, see: ~/pgsql/src/backend/commands/explain.c\n>>for the new functions Tom Lane wrote which send explain results to the\n>>front end as if they were from a select statement. Very informative.\n>>Specifically see:\n>> begin_text_output(CommandDest dest, char *title);\n>> do_text_output(TextOutputState *tstate, char *aline);\n>> do_text_output_multiline(TextOutputState *tstate, char *text);\n>> end_text_output(TextOutputState *tstate);\n>\n\n",
"msg_date": "Wed, 3 Apr 2002 09:49:32 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "SHOW ALL as a query result"
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> Now that Tom's modified the EXPLAIN output to appear as a query result,\n> maybe SHOW and SHOW ALL should also be modified in that way. The current\n> NOTICE: business is a bit messy, and it sure would assist projects just as\n> pgAccess, phpPgAdmin and pgAdmin with displaying configuration!\n\nYes, I was going to suggest this myself. It would be very useful to have\nthis information available to the JDBC driver so you could query, say, the\ndefault transaction isolation.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 2 Apr 2002 22:01:16 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: SHOW ALL as a query result"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n > Hi All,\n >\n > Now that Tom's modified the EXPLAIN output to appear as a query\n > result, maybe SHOW and SHOW ALL should also be modified in that way.\n > The current NOTICE: business is a bit messy, and it sure would\n > assist projects just as pgAccess, phpPgAdmin and pgAdmin with\n > displaying configuration!\n >\n > Also, what else could be usefully modified?\n >\n > Chris\n >\n > ps.\n >\n >\n >>> BTW, see: ~/pgsql/src/backend/commands/explain.c for the new\n >>> functions Tom Lane wrote which send explain results to the front\n >>> end as if they were from a select statement. Very informative.\n >>> Specifically see: begin_text_output(CommandDest dest, char\n >>> *title); do_text_output(TextOutputState *tstate, char *aline);\n >>> do_text_output_multiline(TextOutputState *tstate, char *text);\n >>> end_text_output(TextOutputState *tstate);\n\n\nI was also thinking about this, but the EXPLAIN approach is only useful\nif you never want to select on the output. Another approach might be to \nwrite a function, say show_all(), and then modify gram.y to make:\n\nSHOW ALL;\n - equivalent to -\nSELECT show_all();\n\nso that you could do:\n\nSELECT show_var() FROM (SELECT show_all()) as s WHERE show_var_name() \nLIKE 'wal%';\n\nor something like that.\n\nJoe\n\n\n",
"msg_date": "Tue, 02 Apr 2002 19:44:24 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: SHOW ALL as a query result"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> Hi All,\n> \n> Now that Tom's modified the EXPLAIN output to appear as a query result,\n> maybe SHOW and SHOW ALL should also be modified in that way. The current\n> NOTICE: business is a bit messy, and it sure would assist projects just as\n> pgAccess, phpPgAdmin and pgAdmin with displaying configuration!\n> \n> Also, what else could be usefully modified?\n\nAdded to TODO:\n\n o Allow SHOW to output as a query result, like EXPLAIN\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 17 Apr 2002 22:58:29 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: SHOW ALL as a query result"
}
] |
[
{
"msg_contents": "\nThere was a message posted in March regarding this. Bruce replied that this \nissue did not come up often. However, I think there is more to it than \nthat. I think one reason that it does not come up is because most Oracle \nDBAs are not going to dig through mailing lists and take the time to post \nquestions. Once they discover that PL/pgSQL != PL/SQL they just move on. \n\nI think that the limitations of PL/pgSQL is a huge factor in people not being \nable to use Postgres instead of Oracle. My company is quite small, but we \nhave several very large insurance companies for clients that we develop web \nbased applications for. Currently I have 5 schemas totaling about 1500 \ntables and about as many stored procedures and functions. The applications \ndo not even have any permissions on a single table. All selects are done on \nviews and all inserts/updates/deletes are done through stored procedures. \nOur procs have many parameters, one per column or more. Most of the app \ndevelopers do not even know that much about the schema. They just know the \nexposed procedural interface.\n\nOther issues similar to this with regards to PL/SQL are the need for packages \nand the ability to declare cursors ahead of time, like in a package so that \nthey can be shared and opened when needed. This also makes much cleaner \ncode since the select statement for many cursors clouds the code where it is \nused if it is inline like PL/pgSQL.\n\nNamed parameters would also be nice and at least allowing the use of giving \nnames to parameters in the declarations instead of $1, $2, etc.\n\nAlso, the inablity to trap database \"exceptions\" is too limiting. In \nOracle, we trap every single exception, start an autonomous transacation, log \nthe exception to an exception table along with the procedure name, the \noperation being performed and some marker to mke it easy to locate the \noffending statement. 
This also allows us to recover, which is very important \nfor imports and data loads.\n\nI work with many other Oracle DBAs and I think many have interest in \nPostgres, but also know that without a procedural language on par with PL/SQL \nthat it is not possible to switch. All of the Oracle shops that I know of \nare very big on PL/SQL and write almost all business logic and table \ninterfaces in it. It also seems that Microsoft SQLServer shops are moving \nin the same direction now that the procedural support for it is getting much \nbetter.\n\n\nI am not complaining about Postgres at all. I think it is fantastic and I \nenjoy using it for personal projects. However, I think it might be a bit \nmisleading to assume that lack of posts regarding the limits of PL/pgSQL \nequate to it being adequate for most large applications. It is the number \none reason that I could not use Postgres in 4 large insurance companies.\n\n\nJohn Proctor\n\n\n\n",
"msg_date": "Tue, 2 Apr 2002 21:50:52 -0500",
"msg_from": "John Proctor <jproctor@prium.net>",
"msg_from_op": true,
"msg_subject": "16 parameter limit"
},
{
"msg_contents": "John,\n\nYou bring up some interesting points. I agree with you in some parts,\n but some of your difficulties with PL/pgSQL are based on\n misinformation, which would be good to correct.\n\nFirst, some prefaces: PL/pgSQL does not currently have a real devoted\n project head. It was mostly the brainchild of Jan Wieck, who I\n believe developed it as a \"side effect\" of creating PL/pgTCL. So one\n of the reasons that the capabilites of PL/pgSQL have been limited is\n that nobody with the required skills has stepped forward from the\n community to take PL/pgSQL to the next stage of development. The 6\n core developers are a little busy.\n\nSecond, with the robustness of Java, J2EE, C++, and Perl::DBI, I\n believe that it has long been assumed by the core developers and a\n majority of the community that any large application would be\n programmed using a seperate middleware langauge and full-blown n-tier\n development. Thus, for a lot of people, if PL/pgSQL is adequate for\n complex triggers and rules, it is sufficient; if you need incapsulated\n business logic, use Perl or Java.\n\nI'm not putting this forward as what I necessarily believe in, but the\n logic that drives the current \"lightweight\" nature of PL/pgSQL as\n compared with PL/SQL. It's an open-source project, though ... hire a\n C programmer and you can change that.\n\n> I think one reason that it does not come up is because most\n> Oracle \n> DBAs are not going to dig through mailing lists and take the time to\n> post \n> questions. Once they discover that PL/pgSQL != PL/SQL they just\n> move on. \n\nYes, but we're not going to interest those people anyway. If they\n can't handle using mailing lists as your knowledge base, IMNSHO they\n have no place in the Open Source world. Stick to expensive,\n well-documented proprietary products.\n\n> I think that the limitations of PL/pgSQL is a huge factor in people\n> not being \n> able to use Postgres instead of Oracle. \n\nSee above. 
IMHO, Great Bridge was mistaken to target Oracle instead of\n targeting MS SQL Server as their main competitor, something they paid\n the price for. I still reccommend Oracle to some (but very few) of my\n customers who need some of the add-ons that come with Oracle and have\n more money than time.\n\n> The\n> applications \n> do not even have any permissions on a single table. All selects are\n> done on \n> views and all inserts/updates/deletes are done through stored\n> procedures. \n> Our procs have many parameters, one per column or more. Most of the\n> app \n> developers do not even know that much about the schema. They just\n> know the \n> exposed procedural interface.\n\nI've done this on a smaller scale with Postgres + PHP. It's a good\n rapid development approach for intranet apps, and relatively secure.\n I just don't try to get PL/pgSQL to do anything it can't, and do my\n error handling in interface code.\n\n> Other issues similar to this with regards to PL/SQL are the need for\n> packages \n> and the ability to declare cursors ahead of time, like in a package\n> so that \n> they can be shared and opened when needed. This also makes much\n> cleaner \n> code since the select statement for many cursors clouds the code\n> where it is \n> used if it is inline like PL/pgSQL.\n\nIf you feel strongly enough about this, I am sure that Jan would\n happily give you all of his PL/pgSQL development notes so that you can\n expand the language.\n\n> Named parameters would also be nice and at least allowing the use of\n> giving \n> names to parameters in the declarations instead of $1, $2, etc.\n\nPL/pgSQL has had parameter aliases since Postgres 7.0.0. 
\n\n> Also, the inablity to trap database \"exceptions\" is too limiting.\n> In \n> Oracle, we trap every single exception, start an autonomous\n> transacation, log \n> the exception to an exception table along with the procedure name,\n> the \n> operation being performed and some marker to mke it easy to locate\n> the \n> offending statement. This also allows us to recover, which is very\n> important \n> for imports and data loads.\n\nThis is a singnificant failing. Once again, I can only point out the\n Postgres team's shortage of skilled manpower. Wanna donate a\n programmer? I'd love to see cursor and error handling in PL/pgSQL\n improved, and I can't think that anybody would object.\n\n> It also seems that Microsoft SQLServer shops are\n> moving \n> in the same direction now that the procedural support for it is\n> getting much \n> better.\n\nHere, I disagree. I am a certified MS SQL Server admin, and PL/pgSQL\n is already miles ahead of Transact-SQL. Further, Microsoft is not\n improving the procedural elements of T-SQL in new versions because MS\n wants you to use .NET objects and not stored procedures that might be\n portable to another platform. Perhaps more importantly, MS did not\n write T-SQL (Sybase did), and as a result has trouble modifying it.\n\n> I am not complaining about Postgres at all. I think it is fantastic\n> and I \n> enjoy using it for personal projects. However, I think it might be\n> a bit \n> misleading to assume that lack of posts regarding the limits of\n> PL/pgSQL \n> equate to it being adequate for most large applications. \n\nYes, but without the posts, we don't know what's wrong, now, do we? \n\n Postgres is an Open Source project. We depend on the community to\n donate resources so that we can continue to offer a great database\n (IMHO, better than anything but Oracle and better than Oracle on a\n couple of issues) for free. 
At a minimum, that participation must\n include providing detailed and well-considered requests for changes.\n Contributing code, documentation, and/or money is better and more\n likely to realize your goals.\n\nYour post is extremely useful, and will no doubt be seized upon by Red\n Hat as strategic to their RHDB program if they know what's good for\n them. However, it's a mistake to regard the Postgres project as if it\n was a vendor, from whom one expects program improvements just because\n one is a good customer. \n\nFrankly, considering the Oracle DBAs you refer to who can't even be\n bothered to join the mailing list ... I, for one, don't want them as\n part of the Postgres product and don't feel that there is any reason\n for the Postgres developers to consider their needs. \n\nFor anyone else who is lurking on the mailing list, though ... SPEAK\n UP! nobody will address your needs if you never communicate them.\n\n-Josh Berkus\n\n",
"msg_date": "Thu, 04 Apr 2002 09:20:05 -0800",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: 16 parameter limit"
},
{
"msg_contents": "John Proctor wrote:\n>\n> RE: 16 parameter limit\n> \n> There was a message posted in March regarding this. Bruce replied that this \n> issue did not come up often. However, I think there is more to it than \n> that. I think one reason that it does not come up is because most Oracle \n> DBAs are not going to dig through mailing lists and take the time to post \n> questions. Once they discover that PL/pgSQL != PL/SQL they just move on. \n\nActually, I said it didn't come up much, but I know of several heavy\nPL/pgSQL users who do have trouble with the 16 parameter limit, and I am\nlooking into increasing it. If someone wants to do some legwork, go\nahead. I do think it needs to be increases. The lack of complains\nmakes it hard for me to advocate increasing it, especially if there is a\ndisk space penalty, but personally, I do think it needs increasing.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Thu, 4 Apr 2002 21:40:02 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 16 parameter limit"
},
{
"msg_contents": "Bruce,\n\n> Actually, I said it didn't come up much, but I know of several heavy\n> PL/pgSQL users who do have trouble with the 16 parameter limit, and I\n> am\n> looking into increasing it. If someone wants to do some legwork, go\n> ahead. I do think it needs to be increases. The lack of complains\n> makes it hard for me to advocate increasing it, especially if there\n> is a\n> disk space penalty, but personally, I do think it needs increasing.\n\nPersonally, as a heavy user of PL/pgSQL procedures, I'm not sure you\n need to increase the *default* number of parameters. Postgres just\n needs to implement a parameter number change as part of a documented\n command-line compile-time option, i.e. \"--with-parameters=32\".\n Currently, increasing the number of parameters requires altering the\n C config files before compilation, a rather user-hostile process. \n\nI've raised this point 3 or 4 times on this list now, and have not seen\n a respons from you or Thomas on this suggestion. If I had the\n skills, I'd do it myself and upload the changes, but C is not my\n strong suit.\n\nAlso, what is the practical maximum number of parameters?\n\n-Josh Berkus\n",
"msg_date": "Fri, 05 Apr 2002 08:30:08 -0800",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: 16 parameter limit"
},
{
"msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n> Personally, as a heavy user of PL/pgSQL procedures, I'm not sure you\n> need to increase the *default* number of parameters. Postgres just\n> needs to implement a parameter number change as part of a documented\n> command-line compile-time option, i.e. \"--with-parameters=32\".\n\nI would not object to providing such a configure option; it seems a\nreasonable thing to do. But the real debate here seems to be what\nthe default should be. The ACS people would like their code to run\non a \"stock\" Postgres installation, so they've been lobbying to change\nthe default, not just to make it fractionally easier to build a\nnon-default configuration.\n\n> Also, what is the practical maximum number of parameters?\n\nIf you tried to make it more than perhaps 500, you'd start to see\nindex-tuple-too-big failures in the pg_proc indexes. Realistically,\nthough, I can't see people calling procedures with hundreds of\npositionally-specified parameters --- such code would be unmanageably\nerror-prone.\n\nI was surprised that people were dissatisfied with 16 (it was 8 not very\nlong ago...). Needing more strikes me as a symptom of either bad coding\npractices or missing features of other sorts.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Apr 2002 13:21:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 16 parameter limit "
},
{
"msg_contents": "Tom,\n\n> I was surprised that people were dissatisfied with 16 (it was 8 not\n> very\n> long ago...). Needing more strikes me as a symptom of either bad\n> coding\n> practices or missing features of other sorts.\n\nNo, not really. It's just people wanting to use PL/pgSQL procedures as\n data filters. For example, I have a database with complex\n dependancies and validation rules that I started under 7.0.3, when\n RULES were not an option for such things and triggers were harder to\n write. As a result, I have the interface push new records for, say,\n the CLIENTS table through a PL/pgSQL procedure rather than writing to\n the table directly. Since the table has 18 columns, I need (18 + 2\n for session & user) 20 parameters for this procedure. \n\nAs John has discussed, this kind of data structure is relatively common\n in both Oracle and Informix shops. As such, Postgres emulating this\n ability allows DBAs from those worlds to consider moving to Postgres\n and RHDB. While the same kind of business logic can be implemented\n through Rules and Triggers, the Postgres structure for these things is\n unique and as a result not very portable.\n\n-Josh Berkus\n\n\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n",
"msg_date": "Fri, 05 Apr 2002 14:29:14 -0800",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: 16 parameter limit "
},
{
"msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n> Tom,\n>> I was surprised that people were dissatisfied with 16 (it was 8 not\n>> very long ago...). Needing more strikes me as a symptom of either bad\n>> coding practices or missing features of other sorts.\n\n> No, not really. It's just people wanting to use PL/pgSQL procedures as\n> data filters. For example, I have a database with complex\n> dependancies and validation rules that I started under 7.0.3, when\n> RULES were not an option for such things and triggers were harder to\n> write. As a result, I have the interface push new records for, say,\n> the CLIENTS table through a PL/pgSQL procedure rather than writing to\n> the table directly. Since the table has 18 columns, I need (18 + 2\n> for session & user) 20 parameters for this procedure. \n\nYeah, but if we had slightly better support for rowtype parameters in\nplpgsql, you could do it with *three* parameters: session, user, and\ncontents of record as a clients%rowtype structure. And it'd probably\nbe a lot easier to read, and more maintainable in the face of changes\nto the clients table structure. This is why I say that needing lots\nof parameters may be a symptom of missing features rather than an\nindication that we ought to push up FUNC_MAX_ARGS.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Apr 2002 18:18:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 16 parameter limit "
},
{
"msg_contents": "Tom,\n\n> Yeah, but if we had slightly better support for rowtype parameters in\n> plpgsql, you could do it with *three* parameters: session, user, and\n> contents of record as a clients%rowtype structure. And it'd probably\n> be a lot easier to read, and more maintainable in the face of changes\n> to the clients table structure. This is why I say that needing lots\n> of parameters may be a symptom of missing features rather than an\n> indication that we ought to push up FUNC_MAX_ARGS.\n\nYou're right for my databases. For that matter, better support for\n rowtype is on the laundry list of PL/SQL compatibility issues.\n\nHowever, we also want to support users who are porting their PL/SQL\n applications, which may not be easily translated into %rowtype\n paramters. As I've said before, all this requires is a good\n compile-time option; increasing the default is unnecessary.\n\nWhat do you (personally) think about trying to get RH involved in\n expanding PL/pgSQL's capabilites as a way fo targeting Oracle's users\n for RHDB?\n\n-Josh Berkus\n\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n",
"msg_date": "Fri, 05 Apr 2002 15:26:13 -0800",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: 16 parameter limit "
},
{
"msg_contents": "\"Josh Berkus\" <josh@agliodbs.com> writes:\n> However, we also want to support users who are porting their PL/SQL\n> applications, which may not be easily translated into %rowtype\n> paramters.\n\nWell, probably the $64 question there is: what is Oracle's limit on\nnumber of parameters?\n\n> What do you (personally) think about trying to get RH involved in\n> expanding PL/pgSQL's capabilites as a way fo targeting Oracle's users\n> for RHDB?\n\nSeems like a good idea in the abstract ... but the hard question is what\nare you willing to see *not* get done in order to put cycles on plpgsql.\nAnd there's not a large supply of cycles.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Apr 2002 18:33:59 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: 16 parameter limit "
},
{
"msg_contents": "Tom,\n\n> Seems like a good idea in the abstract ... but the hard question is\n> what\n> are you willing to see *not* get done in order to put cycles on\n> plpgsql.\n> And there's not a large supply of cycles.\n\nWell, it's back to the idea of raising money, then.\n\n-Josh\n",
"msg_date": "Fri, 05 Apr 2002 15:51:35 -0800",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: 16 parameter limit "
},
{
"msg_contents": "\n\nTom Lane wrote:\n> \"Josh Berkus\" <josh@agliodbs.com> writes:\n> \n>>However, we also want to support users who are porting their PL/SQL\n>> applications, which may not be easily translated into %rowtype\n>> paramters.\n>\n> Well, probably the $64 question there is: what is Oracle's limit on\n> number of parameters?\n\nAccording to the Oracle 9 documentation the limit for number of \nparameters to a function is 64K.\n\n--Barry\n\n",
"msg_date": "Fri, 05 Apr 2002 21:35:49 -0800",
"msg_from": "Barry Lind <barry@xythos.com>",
"msg_from_op": false,
"msg_subject": "Re: 16 parameter limit"
},
{
"msg_contents": "The following patch adds --maxindfuncparams to configure to allow you to\nmore easily set the maximum number of function parameters and columns\nin an index. (Can someone come up with a better name?)\n\nThe patch also removes --def_maxbackends, which Tom reported a few weeks\nago he wanted to remove. Can people review this? To test it, you have\nto run autoconf.\n\nAre we staying at 16 as the default? I personally think we can\nincrease it to 32 with little penalty, and that we should increase\nNAMEDATALEN to 64.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> \"Josh Berkus\" <josh@agliodbs.com> writes:\n> > Personally, as a heavy user of PL/pgSQL procedures, I'm not sure you\n> > need to increase the *default* number of parameters. Postgres just\n> > needs to implement a parameter number change as part of a documented\n> > command-line compile-time option, i.e. \"--with-parameters=32\".\n> \n> I would not object to providing such a configure option; it seems a\n> reasonable thing to do. But the real debate here seems to be what\n> the default should be. The ACS people would like their code to run\n> on a \"stock\" Postgres installation, so they've been lobbying to change\n> the default, not just to make it fractionally easier to build a\n> non-default configuration.\n> \n> > Also, what is the practical maximum number of parameters?\n> \n> If you tried to make it more than perhaps 500, you'd start to see\n> index-tuple-too-big failures in the pg_proc indexes. Realistically,\n> though, I can't see people calling procedures with hundreds of\n> positionally-specified parameters --- such code would be unmanageably\n> error-prone.\n> \n> I was surprised that people were dissatisfied with 16 (it was 8 not very\n> long ago...). 
Needing more strikes me as a symptom of either bad coding\n> practices or missing features of other sorts.\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql/configure.in,v\nretrieving revision 1.178\ndiff -c -r1.178 configure.in\n*** configure.in\t14 Apr 2002 17:23:20 -0000\t1.178\n--- configure.in\t16 Apr 2002 01:47:00 -0000\n***************\n*** 215,229 ****\n AC_SUBST(default_port)\n \n #\n! # Maximum number of allowed connections (--with-maxbackends), default 32\n #\n! AC_MSG_CHECKING([for default soft limit on number of connections])\n! PGAC_ARG_REQ(with, maxbackends, [ --with-maxbackends=N set default maximum number of connections [32]],\n [],\n! [with_maxbackends=32])\n! AC_MSG_RESULT([$with_maxbackends])\n! AC_DEFINE_UNQUOTED([DEF_MAXBACKENDS], [$with_maxbackends],\n! [The default soft limit on the number of concurrent connections, i.e., the default for the postmaster -N switch (--with-maxbackends)])\n \n \n #\n--- 215,229 ----\n AC_SUBST(default_port)\n \n #\n! # Maximum number of index/function parameters (--with-maxindfuncparams), default 16\n #\n! AC_MSG_CHECKING([maximum number of index/function parameters])\n! PGAC_ARG_REQ(with, maxindfuncparams, [ --with-maxindfuncparams=N maximum number of index/function parameters [16]],\n [],\n! [with_maxindfuncparams=16])\n! AC_MSG_RESULT([$with_maxindfuncparams])\n! AC_DEFINE_UNQUOTED([MAXINDFUNCPARAMS], [$with_maxindfuncparams],\n! 
[The maximum number of index/function parameters])\n \n \n #\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /cvsroot/pgsql/src/backend/utils/misc/guc.c,v\nretrieving revision 1.65\ndiff -c -r1.65 guc.c\n*** src/backend/utils/misc/guc.c\t3 Apr 2002 05:39:32 -0000\t1.65\n--- src/backend/utils/misc/guc.c\t16 Apr 2002 01:47:02 -0000\n***************\n*** 408,419 ****\n \t */\n \t{\n \t\t\"max_connections\", PGC_POSTMASTER, PGC_S_DEFAULT, &MaxBackends,\n! \t\tDEF_MAXBACKENDS, 1, INT_MAX, NULL, NULL\n \t},\n \n \t{\n \t\t\"shared_buffers\", PGC_POSTMASTER, PGC_S_DEFAULT, &NBuffers,\n! \t\tDEF_NBUFFERS, 16, INT_MAX, NULL, NULL\n \t},\n \n \t{\n--- 408,419 ----\n \t */\n \t{\n \t\t\"max_connections\", PGC_POSTMASTER, PGC_S_DEFAULT, &MaxBackends,\n! \t\t32, 1, INT_MAX, NULL, NULL\n \t},\n \n \t{\n \t\t\"shared_buffers\", PGC_POSTMASTER, PGC_S_DEFAULT, &NBuffers,\n! \t\t64, 16, INT_MAX, NULL, NULL\n \t},\n \n \t{\nIndex: src/include/pg_config.h.in\n===================================================================\nRCS file: /cvsroot/pgsql/src/include/pg_config.h.in,v\nretrieving revision 1.21\ndiff -c -r1.21 pg_config.h.in\n*** src/include/pg_config.h.in\t10 Apr 2002 22:47:09 -0000\t1.21\n--- src/include/pg_config.h.in\t16 Apr 2002 01:47:03 -0000\n***************\n*** 77,87 ****\n #undef DEF_PGPORT_STR\n \n /*\n! * Default soft limit on number of backend server processes per postmaster;\n! * this is just the default setting for the postmaster's -N switch.\n! * (--with-maxbackends=N)\n */\n! #undef DEF_MAXBACKENDS\n \n /* --enable-nls */\n #undef ENABLE_NLS\n--- 77,86 ----\n #undef DEF_PGPORT_STR\n \n /*\n! * The maximum number of columns in an index and the maximum number of\n! * parameters to a function. This controls the length of oidvector.\n */\n! 
#undef MAXINDFUNCPARAMS\n \n /* --enable-nls */\n #undef ENABLE_NLS\n***************\n*** 107,121 ****\n */\n \n /*\n- * Default number of buffers in shared buffer pool (each of size BLCKSZ).\n- * This is just the default setting for the postmaster's -B switch.\n- * Perhaps it ought to be configurable from a configure switch.\n- * NOTE: default setting corresponds to the minimum number of buffers\n- * that postmaster.c will allow for the default MaxBackends value.\n- */\n- #define DEF_NBUFFERS (DEF_MAXBACKENDS > 8 ? DEF_MAXBACKENDS * 2 : 16)\n- \n- /*\n * Size of a disk block --- this also limits the size of a tuple.\n * You can set it bigger if you need bigger tuples (although TOAST\n * should reduce the need to have large tuples, since fields can now\n--- 106,111 ----\n***************\n*** 162,169 ****\n * switch statement in fmgr_oldstyle() in src/backend/utils/fmgr/fmgr.c.\n * But consider converting such functions to new-style instead...\n */\n! #define INDEX_MAX_KEYS\t\t16\n! #define FUNC_MAX_ARGS\t\tINDEX_MAX_KEYS\n \n /*\n * System default value for pg_attribute.attstattarget\n--- 152,159 ----\n * switch statement in fmgr_oldstyle() in src/backend/utils/fmgr/fmgr.c.\n * But consider converting such functions to new-style instead...\n */\n! #define INDEX_MAX_KEYS\t\tMAXINDFUNCPARAMS\n! #define FUNC_MAX_ARGS\t\tMAXINDFUNCPARAMS\n \n /*\n * System default value for pg_attribute.attstattarget",
"msg_date": "Mon, 15 Apr 2002 22:58:51 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "On the note of NAMEDATALEN, a view in the INFORMATION_SCHEMA\ndefinition is exactly 2 characters over the current limit.\n\nADMINISTRABLE_ROLE_AUTHORIZATIONS\n\nNot that it's a great reason, but it isn't a bad one for increasing\nthe limit ;)\n\n--\nRod Taylor\n\n> Are we staying at 16 as the default? I personally think we can\n> increase it to 32 with little penalty, and that we should increase\n> NAMEDATALEN to 64.\n\n\n",
"msg_date": "Mon, 15 Apr 2002 23:19:45 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [SQL] 16 parameter limit"
},
{
"msg_contents": "Bruce,\n\n> The following patch adds --maxindfuncparams to configure to allow you\n> to\n> more easily set the maximum number of function parameters and columns\n> in an index. (Can someone come up with a better name?)\n\nHow about simply --max_params ?\n\n> Are we staying at 16 as the default? I personally think we can\n> increase it to 32 with little penalty, \n\nI'd vote for that. But then, you knew that. John Proctor wants 128.\n\n>and that we should increase\n> NAMEDATALEN to 64.\n\nI don't even know what that is. \n\n-Josh\n",
"msg_date": "Mon, 15 Apr 2002 20:25:20 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "On Mon, 15 Apr 2002 23:19:45 -0400\n\"Rod Taylor\" <rbt@zort.ca> wrote:\n\n> On the note of NAMEDATALEN, a view in the INFORMATION_SCHEMA\n> definition is exactly 2 characters over the current limit.\n> \n> ADMINISTRABLE_ROLE_AUTHORIZATIONS\n> \n> Not that it's a great reason, but it isn't a bad one for increasing\n> the limit ;)\n\nhttp://archives.postgresql.org/pgsql-general/2002-01/msg00939.php\n\n(Tom Lane says both SQL92 and SQL99 specify 128 as the maximum\nidentifier length)\n\nAnyway, how does one measure the performance impact of such a change?\nBy merely changing the constant definition, or also by actually using\nlong identifiers? I can do that if it's of any help, for various values\nperhaps.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Things are good or bad as our opinion makes them\" (Lisias)\n",
"msg_date": "Mon, 15 Apr 2002 23:34:04 -0400",
"msg_from": "Alvaro Herrera <alvherre@atentus.com>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [SQL] 16 parameter limit"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> The following patch adds --maxindfuncparams to configure to allow you to\n> more easily set the maximum number of function parameters and columns\n> in an index. (Can someone come up with a better name?)\n\n> Are we staying at 16 as the default? I personally think we can\n> increase it to 32 with little penalty,\n\nIf you want to increase it, let's just increase it and not add any more\nconfigure options. If someone wants more than 32 then we really need to\nstart talking about design issues.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Mon, 15 Apr 2002 23:34:06 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "> > Are we staying at 16 as the default? I personally think we can\n> > increase it to 32 with little penalty,\n>\n> If you want to increase it, let's just increase it and not add any more\n> configure options. If someone wants more than 32 then we really need to\n> start talking about design issues.\n\nWhy not give them the configure option? It's not good HCI to impose\narbitrary limits on people...?\n\nWe can default it to 32, since there's demand for it. If a particular user\ndecided to configure it higher, then they do that knowing that it may cause\nperformance degradation. It's good to give them that choice though.\n\nChris\n\n",
"msg_date": "Tue, 16 Apr 2002 11:35:57 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "On Mon, 15 Apr 2002 23:34:04 -0400\n\"Alvaro Herrera\" <alvherre@atentus.com> wrote:\n> On Mon, 15 Apr 2002 23:19:45 -0400\n> \"Rod Taylor\" <rbt@zort.ca> wrote:\n> \n> > On the note of NAMEDATALEN, a view in the INFORMATION_SCHEMA\n> > definition is exactly 2 characters over the current limit.\n> > \n> > ADMINISTRABLE_ROLE_AUTHORIZATIONS\n> > \n> > Not that it's a great reason, but it isn't a bad one for increasing\n> > the limit ;)\n> \n> http://archives.postgresql.org/pgsql-general/2002-01/msg00939.php\n> \n> (Tom Lane says both SQL92 and SQL99 specify 128 as the maximum\n> identifier length)\n> \n> Anyway, how does one measure the performance impact of such a change?\n> By merely changing the constant definition, or also by actually using\n> long identifiers?\n\nName values are stored NULL-padded up to NAMEDATALEN bytes, so\nthere is no need to actually use long identifiers, just change\nthe value of NAMEDATALEN, recompile and run some benchmarks\n(perhaps OSDB? http://osdb.sf.net).\n\nIf you do decide to run some benchmarks (and some more data\nwould be good), please use the current CVS code. I sent in a\npatch a little while ago that should somewhat reduce the\npenalty for increasing NAMEDATALEN.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Mon, 15 Apr 2002 23:42:35 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [SQL] 16 parameter limit"
},
{
"msg_contents": "Alvaro Herrera wrote:\n> (Tom Lane says both SQL92 and SQL99 specify 128 as the maximun\n> identifier length)\n> \n> Anyway, how does one measure the perfomance impact of such a change?\n> By merely changing the constant definition, or also by actually using\n> long identifiers? I can do that if it's of any help, for various values\n> perhaps.\n\nI think I would measure disk size change in a newly created database,\nand run regression for various values. That uses a lot of identifier\nlookups.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Apr 2002 23:44:16 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [SQL] 16 parameter limit"
},
{
"msg_contents": "> > Anyway, how does one measure the perfomance impact of such a change?\n> > By merely changing the constant definition, or also by actually using\n> > long identifiers? I can do that if it's of any help, for various values\n> > perhaps.\n>\n> I think I would measure disk size change in a newly created database,\n> and run regression for various values. That uses a lot of identifier\n> lookups.\n\nWith schemas, maybe there'd be less name lookups and comparisons anyway,\nsince there's more reliance on oids instead of names?\n\nChris\n\n",
"msg_date": "Tue, 16 Apr 2002 11:49:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [SQL] 16 parameter limit"
},
{
"msg_contents": "On Tue, 16 Apr 2002 11:35:57 +0800\n\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> wrote:\n> > > Are we staying at 16 as the default? I personally think we can\n> > > increase it to 32 with little penalty,\n> >\n> > If you want to increase it, let's just increase it and not add any more\n> > configure options. If someone wants more than 32 then we really need to\n> > start talking about design issues.\n> \n> Why not give them the configure option? It's not good HCI to impose\n> arbitrary limits on people...?\n\nIt's not an arbitrary limit -- users can easily change pg_config.h.\n\n> We can default it to 32, since there's demand for it. If a particular user\n> decided to configure it higher, then they do that knowing that it may cause\n> performance degradation. It's good to give them that choice though.\n\nWhat if someone actually uses functions with more than 32\narguments? Their code will no longer be portable among\nPostgreSQL installations, and they'll need to get the local\nadmin to recompile.\n\nI could see adding a configure option if there was a justifiable\nreason for using functions with more than 32 arguments -- but\nIMHO that is quite a bizarre situation anyway, as Peter said.\n\nMy vote is to set the default # of function args to some\nreasonable default (32 sounds good), and leave it at that.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Mon, 15 Apr 2002 23:52:16 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "> What if someone actually uses functions with more than 32\n> arguments? Their code will no longer be portable among\n> PostgreSQL installations, and they'll need to get the local\n> admin to recompile.\n>\n> I could see adding a configure option if there was a justifiable\n> reason for using functions with more than 32 arguments -- but\n> IMHO that is quite a bizarre situation anyway, as Peter said.\n>\n> My vote is to set the default # of function args to some\n> reasonable default (32 sounds good), and leave it at that.\n\nOK, agreed. Then they at least are forced to write functions that will work\non all Postgres 7.3 and above...\n\nChris\n",
"msg_date": "Tue, 16 Apr 2002 11:57:04 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "Neil Conway wrote:\n> On Tue, 16 Apr 2002 11:35:57 +0800\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> wrote:\n> > > > Are we staying at 16 as the default? I personally think we can\n> > > > increase it to 32 with little penalty,\n> > >\n> > > If you want to increase it, let's just increase it and not add any more\n> > > configure options. If someone wants more than 32 then we really need to\n> > > start talking about design issues.\n> > \n> > Why not give them the configure option? It's not good HCI to impose\n> > arbitrary limits on people...?\n> \n> It's not an arbitrary limit -- users can easily change pg_config.h.\n\nLet me just point out that you have to change pg_config.h.in and run\nconfigure _or_ change pg_config.h and _never_ run configure again. It\nis this complexity that makes a configure option look acceptable.\n\nMaybe we should pull some of the hard-coded, non-configure stuff from\npg_config.h into a separate file and just include it from pg_config.h.\n\n> > We can default it to 32, since there's demand for it. If a particular user\n> > decided to configure it higher, then they do that knowing that it may cause\n> > performance degradation. It's good to give them that choice though.\n> \n> What if someone actually uses functions with more than 32\n> arguments? Their code will no longer be portable among\n> PostgreSQL installations, and they'll need to get the local\n> admin to recompile.\n\n\nIt is usually C++ overloading functions that use lots of args, or\nfunctions that pass every table column into the function. In those\ncases, I can easily see 32 params.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Mon, 15 Apr 2002 23:57:20 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "Peter,\n\n> If you want to increase it, let's just increase it and not add any\n> more\n> configure options. If someone wants more than 32 then we really need\n> to\n> start talking about design issues.\n\nActually, many Oracle DBAs use functions/procedures with up to 300\nparameters. If we want them to take PostgreSQL seriously as an\nalternative to Oracle, we need to be able to accommodate that, at the\nvery least through an accessible configure-time option.\n\nAlso, this is a very frequent request on the SQL list. The fact that\ncurrently the default is 16 and pg_config.h is not documented anywhere,\nis rather unfriendly to developers who like to use their functions as\npseudo-middleware.\n\nJohn, please speak up here so the core team knows this isn't \"just me.\"\n\n-Josh Berkus\n",
"msg_date": "Mon, 15 Apr 2002 21:06:44 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "Neil Conway <nconway@klamath.dyndns.org> writes:\n> My vote is to set the default # of function args to some\n> reasonable default (32 sounds good), and leave it at that.\n\nBear in mind that s/32/16/ gives you the exact state of the discussion\nwhen we raised the limit from 8 to 16 ;-)\n\nStill, I do not really see the value of adding a configure argument.\nAnyone who can't figure out how to tweak this in pg_config.h is probably\nnot ready to run a non-default configuration anyhow.\n\nIf the consensus is to raise the default from 16 to 32, I won't object.\nBeyond that, I'd start asking questions about who's measured the\nperformance hit and what they found.\n\nOn the NAMEDATALEN part of the argument: SQL92 clearly expects that\nNAMEDATALEN should be 128. But the first report of the performance\ncost looked rather grim. Has anyone retried that experiment since\nwe tweaked hashname to not hash all the trailing zeroes?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Apr 2002 00:13:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit "
},
{
"msg_contents": "Tom Lane writes:\n\n> Neil Conway <nconway@klamath.dyndns.org> writes:\n> > My vote is to set the default # of function args to some\n> > reasonable default (32 sounds good), and leave it at that.\n>\n> Bear in mind that s/32/16/ gives you the exact state of the discussion\n> when we raised the limit from 8 to 16 ;-)\n\nHow about this: We store the first 16 parameters in some fixed array for\nfast access like now, and when you have more than 16 then 17 and beyond\nget stored in some variable array in pg_proc. This way procedures with\nfew arguments don't lose any performance but we could support an\n\"infinite\" number of parameters easily. It sounds kind of dumb, but\nwithout some sort of break out of the fixed storage scheme we'll have this\nargument forever.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Tue, 16 Apr 2002 00:36:11 -0400 (EDT)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [SQL] 16 parameter limit "
},
{
"msg_contents": "\nJosh is exactly correct with regards to large oracle installs. I personally \nhave oracle functions that have around 70 to 80 params. I saw some \ndiscussion that this is a design issue, as if to indicate design flaw. \nHowever, I think it is good design, based on the tools at hand. I have \ncomplex transactions to create, some involve 10 to 15 large tables. I also \nhave requirements of being accessed via perl, python, c, zope, ruby, \nbash/sqlplus and possibly any other legacy app language that needs to \ninterface. Furthermore, I don't have time to teach every developer the \ndetails of the data model, the order of inserts, which columns to update \nunder different conditions, etc. I also don't have time to build a \nmiddleware interface in C and write wrappers in many languages.\n\nMy stored proc interface to a large and complex system is portable across any \nprogramming language that supports calling stored procs. Furthermore, it \nshields the developers from what most don't even care about. They know in \nthe end, that if they pass the right data to my stored proc (which is usually \njust a hash of vars anyway, oracle supports pass by name) that all will be \nfine. I also know that I can change the implementation of the data model \nand as long as I keep the \"interface\" the same then perl, python, ruby, zope, \netc all still work. That is good design. No sane DBA would give \ninsert/update/delete permissions on any table to any user other than owner. \nThat is the only way to guarantee data integrity.\n\nI think some of the users here are coming from the perspective of simple \ndynamic web content or a small dev environment where all of the developers \nare multi-talented. However, try an enterprise database that may have 200 to \n300 developers working on it over a 10 year lifetime or the merging of \nmultiple very large clients into a common system. 
I worked on the database \nfor the Olympics in Atlanta and Nagano (about 200 developers in Atlanta). \nDatabase was DB/2 and all middleware in C. What a nightmare.\n\nBottomline. PL/SQL is one of the top reasons for Oracle's success. If you \nare an Oracle shop then PL/SQL makes a better middleware layer than any other \nlanguage. Simple, fast, stable, single point of entry. What could be better.\n\n\nHowever, none of the above is of any value if the performance penalty is \nlarge. And PL/pgSQL needs much more than just the param number increased. I \nam sorry if I irritated the group. My only purpose for starting this was to \nhelp point out one of the top areas that PostgreSQL will need to address if \nit wants to succeed in the enterprise. If that is not a goal, then my \nrequests are probably not all that valid.\n\n\nOn Tuesday 16 April 2002 12:06 am, Josh Berkus wrote:\n> Peter,\n>\n> > If you want to increase it, let's just increase it and not add any\n> > more\n> > configure options. If someone wants more than 32 then we really need\n> > to\n> > start talking about design issues.\n>\n> Actually, many Oracle DBAs use functions/procedures with up to 300\n> parameters. If we want them to take PostgreSQL seriously as an\n> alternative to Oracle, we need to be able to accommodate that, at the\n> very least through an accessible configure-time option.\n>\n> Also, this is a very frequent request on the SQL list. The fact that\n> currently the default is 16 and pg_config.h is not documented anywhere,\n> is rather unfriendly to developers who like to use their functions as\n> pseudo-middleware.\n>\n> John, please speak up here so the core team knows this isn't \"just me.\"\n>\n> -Josh Berkus\n",
"msg_date": "Mon, 15 Apr 2002 23:49:21 -0500",
"msg_from": "John Proctor <jproctor@prium.net>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "Tom,\n\n> Still, I do not really see the value of adding a configure argument.\n> Anyone who can't figure out how to tweak this in pg_config.h is\n> probably\n> not ready to run a non-default configuration anyhow.\n\nI disagree *very* strongly. Given that the documentation on\npg_config.h was removed from the idocs and that Peter has made noises\nabout removing pg_config.h entirely, it is not a substitute for\ncommand-line configure options.\n\n> If the consensus is to raise the default from 16 to 32, I won't\n> object.\n> Beyond that, I'd start asking questions about who's measured the\n> performance hit and what they found.\n\nIf you can suggest a reasonable test, I will test this at 32, 64, 128\nand 256 parameters to settle this issue.\n\n-Josh\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n",
"msg_date": "Mon, 15 Apr 2002 21:50:21 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> How about this: We store the first 16 parameters in some fixed array for\n> fast access like now, and when you have more than 16 then 17 and beyond\n> get stored in some variable array in pg_proc.\n\n<<itch>> What's this going to cost us in the function lookup code paths?\n\nIf we can do it with little or no performance cost (at least for the\n\"normal case\" of fewer-than-N parameters) then I'm all ears.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Tue, 16 Apr 2002 01:01:33 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [SQL] 16 parameter limit "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > How about this: We store the first 16 parameters in some fixed array for\n> > fast access like now, and when you have more than 16 then 17 and beyond\n> > get stored in some variable array in pg_proc.\n> \n> <<itch>> What's this going to cost us in the function lookup code paths?\n> \n> If we can do it with little or no performance cost (at least for the\n> \"normal case\" of fewer-than-N parameters) then I'm all ears.\n\nOK, I have an idea. Tom, didn't you just add code that allows the cache\nto return multiple rows for a lookup? I think you did it for schemas.\n\nWhat if we lookup on the first 16 params, then look at every matching\nhit if there are more than 16 params supplied? Another idea would be to\nhash the function arg types and look that up rather than looking for\nexact matches of oidvector.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Tue, 16 Apr 2002 01:06:50 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [SQL] 16 parameter limit"
},
{
"msg_contents": "On Mon, 15 Apr 2002 23:49:21 -0500\n\"John Proctor\" <jproctor@prium.net> wrote:\n> However, none of the above is of any value if the performance penalty is \n> large. And PL/pgSQL needs much more than just the param number increased.\n\nJohn,\n\nCould you elaborate on what enhancements you'd like to see in PL/pgSQL?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilconway@rogers.com>\nPGP Key ID: DB3C29FC\n",
"msg_date": "Tue, 16 Apr 2002 12:04:37 -0400",
"msg_from": "Neil Conway <nconway@klamath.dyndns.org>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "On Tue, 2002-04-16 at 07:01, Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > How about this: We store the first 16 parameters in some fixed array for\n> > fast access like now, and when you have more than 16 then 17 and beyond\n> > get stored in some variable array in pg_proc.\n> \n> <<itch>> What's this going to cost us in the function lookup code paths?\n> \n> If we can do it with little or no performance cost (at least for the\n> \"normal case\" of fewer-than-N parameters) then I'm all ears.\n\nPerhaps we could use the 16-th element as an indicator of 16-or-more\nargs. If it is 0 then there are <= 15 args; if it is something else, then\nthis something else is a hash of extra argument types that need to be\nlooked up separately. \n\nOf course we will need some way of resolving multiple hash matches.\n\n--------------\nHannu\n\n\n",
"msg_date": "16 Apr 2002 21:12:52 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: [PATCHES] [SQL] 16 parameter limit"
},
{
"msg_contents": "\n\nOK, here goes.\n\n1) More than 16 parameters. This can be parameter configurable if \nnecessary, but up to 128 would cover 99.9%.\n\n2) Better exception handling. The procedure should be able to trap any data \nrelated exception and decide what to do. No function should ever abort. It should raise a trappable exception and let me decide what to do.\n\n3) Allow transactions inside of functions. Mostly for incremental commits. \nEach transaction should be implicitly started after any CrUD statement and \ncontinue until a commit or rollback.\n\n4) Allow autonomous transactions. This is related to number 2. In Oracle, I \ncan track every single exception and log it in a central table with details, \neven if I rollback the current transaction or savepoint. This is a must for \ntracking every single database error in an application at the exact point of \nfailure.\n\n5) Find a way to get rid of the requirement to quote the entire proc. This \nis very clumsy. The PL/pgSQL interpreter should be able to do the quoting \nand escape what it needs.\n\n6) Allow function parameters to be specified by name and type during the definition. Even aliasing is cumbersome and error prone on large procs, especially during development when changes are frequent.\n\n7) Allow function parameters to be passed by name, not just positional. i.e. \nget_employee_salary(emp_id => 12345, tax_year => 2001).\n\n8) Add packages. This is a great way to group related functions, create \nreusable objects, like cursors, etc.\n\n9) Allow anonymous PL/pgSQL blocks. It should not be required to create a \nfunction for every PL/pgSQL block. Often, I just want to do something quick \nand dirty or write complex blocks that I don't even want saved in the \ndatabase. 
I can just keep them in a file and execute when necessary.\n\n\nFor those that have not seen Oracle PL/SQL, here is a complete proc that illustrates the simplicity and power of it.\n\ncreate or replace\nprocedure bp_cmd_chn (\n i_um_evt_lvl123_idn in um_evt_lvl123.um_evt_lvl123_idn%type,\n i_chn_class_group_cd in code_chn_class_group.chn_class_group_cd%type\n) \nas\n\n/* setup vars for footprinting exceptions */\nv_prc error_log.prc%type := 'bp_cmd_chn';\nv_opr error_log.opr%type := 'init';\nv_obj error_log.obj%type := 'init';\n\n/* local vars */\nv_chn_status_cd um_vendor_chn.chn_status_cd%type;\nv_dist_engine_idn dist_engine.dist_engine_idn%type;\nv_dist_format_type_cd xrf_vendor_format_io.send_dist_format_type_cd%type;\nv_io_type_cd xrf_vendor_format_io.send_io_type_cd%type;\nv_app_user_name app_default_schema.user_name%type;\nv_app_schema_name app_default_schema.app_schema_name%type;\nv_send_process_type_cd xrf_vendor_format_io.send_process_type_cd%type;\n\n/* parameterized cursor */\ncursor cur_vnd_chn(\n ci_um_evt_lvl123_idn number,\n ci_chn_class_group_cd varchar2\n) is\nselect umvnd.rdx_vendor_idn, \n umvnd.chn_class_cd\nfrom um_vendor_chn umvnd,\n xrf_chn_class_group xchng\nwhere umvnd.chn_class_cd = xchng.chn_class_cd\nand umvnd.um_evt_lvl123_idn = ci_um_evt_lvl123_idn\nand umvnd.chn_status_cd = 'PEND'\nand xchng.chn_class_group_cd = ci_chn_class_group_cd;\n\n\nbegin\n\n savepoint bp_cmd_chn;\n\n /* open cursor with parameters into row object v_vnd_chn_rec */\n for v_vnd_chn_rec in cur_vnd_chn(i_um_evt_lvl123_idn,\n i_chn_class_group_cd) loop\n /* nice clean select into syntax */\n v_opr := 'select into';\n v_obj := 'xrf_vendor_format_io';\n select send_dist_format_type_cd,\n send_io_type_cd,\n send_process_type_cd\n into v_dist_format_type_cd,\n v_io_type_cd ,\n v_send_process_type_cd\n from xrf_vendor_format_io\n where rdx_vendor_idn = v_vnd_chn_rec.rdx_vendor_idn\n and chn_class_cd = v_vnd_chn_rec.chn_class_cd;\n\n /* call procedure passing parms by 
name */ \n v_opr := 'call';\n v_obj := 'dist_engine_ins';\n dist_engine_ins(dist_engine_idn => v_dist_engine_idn,\n pending_dt => sysdate,\n source_idn => i_um_evt_lvl123_idn,\n source_type => 'EVTLVL123',\n dist_format_type_cd => v_dist_format_type_cd,\n recipient_type_cd => 'VND',\n io_type_cd => v_io_type_cd);\n \n \n end loop;\n\n/* Trap all exceptions, calling pkg_error.log_error with details.\n This will start an autonymous transaction to log the error\n then rollback the current savepoint and re-raise exception for\n the caller\n*/\nexception\n when others then\n pkg_error.log_error (get_schema_name,v_pkg, v_prc, v_opr, v_obj, sqlcode, sqlerrm);\n rollback to bp_cmd_chn;\n raise;\nend bp_cmd_chn;\n/ \n\n\n\n\nOn Tuesday 16 April 2002 12:04 pm, Neil Conway wrote:\n> On Mon, 15 Apr 2002 23:49:21 -0500\n>\n> \"John Proctor\" <jproctor@prium.net> wrote:\n> > However, none of the above is of any value if the performance penalty is\n> > large. And PL/pgSQL needs much more that just the param number\n> > increased.\n>\n> John,\n>\n> Could you elaborate on what enhancements you'd like to see in PL/pgSQL?\n>\n> Cheers,\n>\n> Neil\n",
"msg_date": "Wed, 17 Apr 2002 01:22:14 -0500",
"msg_from": "John Proctor <jproctor@prium.net>",
"msg_from_op": true,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "I think that this list should definitely be stored in the cvs somewhere -\nTODO.detail perhaps, Bruce?\n\nIt's good stuff.\n\nChris\n\n----- Original Message -----\nFrom: \"John Proctor\" <jproctor@prium.net>\nTo: \"Neil Conway\" <nconway@klamath.dyndns.org>\nCc: <josh@agliodbs.com>; <peter_e@gmx.net>; <pgman@candle.pha.pa.us>;\n<tgl@sss.pgh.pa.us>; <pgsql-patches@postgresql.org>\nSent: Wednesday, April 17, 2002 2:22 PM\nSubject: Re: [PATCHES] [SQL] 16 parameter limit\n\n\n>\n>\n> OK, here goes.\n>\n> 1) More than 16 parameters. This can be parameter configurable if\n> necessary, but up to 128 would cover 99.9%.\n>\n> 2) Better exception handling. The procedure should be able to trap any\ndata\n> related exception and decide what to do. No function should ever abort.\nIt should raise a trappable exception and let me decide what to do.\n>\n> 3) Allow transactions inside of functions. Mostly for incremental commits.\n> Each transaction should be implicitly started after any CrUD statement and\n> continue until a commit or rollback.\n>\n> 4) Allow autonomous transactions. This is related to number 2. In Oracle,\nI\n> can track every single exception and log it in a central table with\ndetails,\n> even if I rollback the current transaction or savepoint. This is a must\nfor\n> tracking every single database error in an application at the exact point\nof\n> failure.\n>\n> 5) Find a way to get rid of the requirement to quote the entire proc. This\n> is very clumsy. The PL/pgSQL interpreter should be able to do the quoting\n> and escape what it needs.\n>\n> 6) Allow function parameters to be specified by name and type during the\ndefinition. Even aliasing is cumbersome and error prone on large procs,\nespecially during development when changes are frequent.\n>\n> 7) Allow function parameters to be passed by name, not just positional.\ni.e.\n> get_employee_salary(emp_id => 12345, tax_year => 2001).\n>\n> 8) Add packages. 
This is a great way to group related functions, create\n> reusable objects, like cursors, etc.\n>\n> 9) Allow anonymous PL/pgSQL blocks. It should not be required to create a\n> function for every PL/pgSQL block. Often, I just want to do something\nquick\n> and dirty or write complex blocks that I don't even want saved in the\n> database. I can just keep then in a file and execute when necessary.\n>\n>\n> For those that have not seen Oracle PL/SQL, here is a complete proc that\nillustrates the simplicity and power of it.\n>\n> create or replace\n> procedure bp_cmd_chn (\n> i_um_evt_lvl123_idn in um_evt_lvl123.um_evt_lvl123_idn%type,\n> i_chn_class_group_cd in code_chn_class_group.chn_class_group_cd%type\n> )\n> as\n>\n> /* setup vars for footprinting exceptions */\n> v_prc error_log.prc%type := 'bp_cmd_chn';\n> v_opr error_log.opr%type := 'init';\n> v_obj error_log.obj%type := 'init';\n>\n> /* local vars */\n> v_chn_status_cd um_vendor_chn.chn_status_cd%type;\n> v_dist_engine_idn dist_engine.dist_engine_idn%type;\n> v_dist_format_type_cd\nxrf_vendor_format_io.send_dist_format_type_cd%type;\n> v_io_type_cd xrf_vendor_format_io.send_io_type_cd%type;\n> v_app_user_name app_default_schema.user_name%type;\n> v_app_schema_name app_default_schema.app_schema_name%type;\n> v_send_process_type_cd xrf_vendor_format_io.send_process_type_cd%type;\n>\n> /* parameterized cursor */\n> cursor cur_vnd_chn(\n> ci_um_evt_lvl123_idn number,\n> ci_chn_class_group_cd varchar2\n> ) is\n> select umvnd.rdx_vendor_idn,\n> umvnd.chn_class_cd\n> from um_vendor_chn umvnd,\n> xrf_chn_class_group xchng\n> where umvnd.chn_class_cd = xchng.chn_class_cd\n> and umvnd.um_evt_lvl123_idn = ci_um_evt_lvl123_idn\n> and umvnd.chn_status_cd = 'PEND'\n> and xchng.chn_class_group_cd = ci_chn_class_group_cd;\n>\n>\n> begin\n>\n> savepoint bp_cmd_chn;\n>\n> /* open cursor with parameters into row object v_vnd_chn_rec */\n> for v_vnd_chn_rec in cur_vnd_chn(i_um_evt_lvl123_idn,\n> i_chn_class_group_cd) loop\n> /* 
nice clean select into syntax */\n> v_opr := 'select into';\n> v_obj := 'xrf_vendor_format_io';\n> select send_dist_format_type_cd,\n> send_io_type_cd,\n> send_process_type_cd\n> into v_dist_format_type_cd,\n> v_io_type_cd ,\n> v_send_process_type_cd\n> from xrf_vendor_format_io\n> where rdx_vendor_idn = v_vnd_chn_rec.rdx_vendor_idn\n> and chn_class_cd = v_vnd_chn_rec.chn_class_cd;\n>\n> /* call procedure passing parms by name */\n> v_opr := 'call';\n> v_obj := 'dist_engine_ins';\n> dist_engine_ins(dist_engine_idn => v_dist_engine_idn,\n> pending_dt => sysdate,\n> source_idn => i_um_evt_lvl123_idn,\n> source_type => 'EVTLVL123',\n> dist_format_type_cd => v_dist_format_type_cd,\n> recipient_type_cd => 'VND',\n> io_type_cd => v_io_type_cd);\n>\n>\n> end loop;\n>\n> /* Trap all exceptions, calling pkg_error.log_error with details.\n> This will start an autonomous transaction to log the error\n> then rollback the current savepoint and re-raise exception for\n> the caller\n> */\n> exception\n> when others then\n> pkg_error.log_error (get_schema_name,v_pkg, v_prc, v_opr, v_obj,\nsqlcode, sqlerrm);\n> rollback to bp_cmd_chn;\n> raise;\n> end bp_cmd_chn;\n> /\n>\n>\n>\n>\n> On Tuesday 16 April 2002 12:04 pm, Neil Conway wrote:\n> > On Mon, 15 Apr 2002 23:49:21 -0500\n> >\n> > \"John Proctor\" <jproctor@prium.net> wrote:\n> > > However, none of the above is of any value if the performance penalty\nis\n> > > large. And PL/pgSQL needs much more than just the param number\n> > > increased.\n> >\n> > John,\n> >\n> > Could you elaborate on what enhancements you'd like to see in PL/pgSQL?\n> >\n> > Cheers,\n> >\n> > Neil\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n",
"msg_date": "Wed, 17 Apr 2002 22:29:12 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "And can we move the discussion to a more appropriate place (-HACKERS?)? \n\nThanks.\nLER\n\nOn Wed, 2002-04-17 at 09:29, Christopher Kings-Lynne wrote:\n> I think that this list should definitely be stored in the cvs somewhere -\n> TODO.detail perhaps, Bruce?\n> \n> It's good stuff.\n> \n> Chris\n> \n> ----- Original Message -----\n> From: \"John Proctor\" <jproctor@prium.net>\n> To: \"Neil Conway\" <nconway@klamath.dyndns.org>\n> Cc: <josh@agliodbs.com>; <peter_e@gmx.net>; <pgman@candle.pha.pa.us>;\n> <tgl@sss.pgh.pa.us>; <pgsql-patches@postgresql.org>\n> Sent: Wednesday, April 17, 2002 2:22 PM\n> Subject: Re: [PATCHES] [SQL] 16 parameter limit\n> \n> \n> >\n> >\n> > OK, here goes.\n> >\n> > 1) More than 16 parameters. This can be parameter configurable if\n> > necessary, but up to 128 would cover 99.9%.\n> >\n> > 2) Better exception handling. The procedure should be able to trap any\n> data\n> > related exception and decide what to do. No function should ever abort.\n> It should raise a trappable exception and let me decide what to do.\n> >\n> > 3) Allow transactions inside of functions. Mostly for incremental commits.\n> > Each transaction shoud be implicitely started after any CrUD statement and\n> > continue until a commit or rollback.\n> >\n> > 4) Allow autonomous transactions. This is related to number 2. In Oracle,\n> I\n> > can track every single exception and log it in a central table with\n> details,\n> > even if I rollback the current transaction or savepoint. This is a must\n> for\n> > tracking every single database error in an application at the exact point\n> of\n> > failure.\n> >\n> > 5) Find a way to get rid of the requirement to quote the entire proc. This\n> > is very clumsy. The PL/pgSQL interpreter should be able to do the quoting\n> > and escape what it needs.\n> >\n> > 6) Allow function parameters to be specified by name and type during the\n> definition. 
Even aliasing is cumbersome and error prone on large procs,\n> especially during development when changes are frequent.\n> >\n> > 7) Allow function parameters to be passed by name, not just positional.\n> i.e.\n> > get_employee_salary(emp_id => 12345, tax_year => 2001).\n> >\n> > 8) Add packages. This is a great way to group related functions, create\n> > reusable objects, like cursors, etc.\n> >\n> > 9) Allow anonymous PL/pgSQL blocks. It should not be required to create a\n> > function for every PL/pgSQL block. Often, I just want to do something\n> quick\n> > and dirty or write complex blocks that I don't even want saved in the\n> > database. I can just keep then in a file and execute when necessary.\n> >\n> >\n> > For those that have not seen Oracle PL/SQL, here is a complete proc that\n> illustrates the simplicity and power of it.\n> >\n> > create or replace\n> > procedure bp_cmd_chn (\n> > i_um_evt_lvl123_idn in um_evt_lvl123.um_evt_lvl123_idn%type,\n> > i_chn_class_group_cd in code_chn_class_group.chn_class_group_cd%type\n> > )\n> > as\n> >\n> > /* setup vars for footprinting exceptions */\n> > v_prc error_log.prc%type := 'bp_cmd_chn';\n> > v_opr error_log.opr%type := 'init';\n> > v_obj error_log.obj%type := 'init';\n> >\n> > /* local vars */\n> > v_chn_status_cd um_vendor_chn.chn_status_cd%type;\n> > v_dist_engine_idn dist_engine.dist_engine_idn%type;\n> > v_dist_format_type_cd\n> xrf_vendor_format_io.send_dist_format_type_cd%type;\n> > v_io_type_cd xrf_vendor_format_io.send_io_type_cd%type;\n> > v_app_user_name app_default_schema.user_name%type;\n> > v_app_schema_name app_default_schema.app_schema_name%type;\n> > v_send_process_type_cd xrf_vendor_format_io.send_process_type_cd%type;\n> >\n> > /* parameterized cursor */\n> > cursor cur_vnd_chn(\n> > ci_um_evt_lvl123_idn number,\n> > ci_chn_class_group_cd varchar2\n> > ) is\n> > select umvnd.rdx_vendor_idn,\n> > umvnd.chn_class_cd\n> > from um_vendor_chn umvnd,\n> > xrf_chn_class_group xchng\n> > where 
umvnd.chn_class_cd = xchng.chn_class_cd\n> > and umvnd.um_evt_lvl123_idn = ci_um_evt_lvl123_idn\n> > and umvnd.chn_status_cd = 'PEND'\n> > and xchng.chn_class_group_cd = ci_chn_class_group_cd;\n> >\n> >\n> > begin\n> >\n> > savepoint bp_cmd_chn;\n> >\n> > /* open cursor with parameters into row object v_vnd_chn_rec */\n> > for v_vnd_chn_rec in cur_vnd_chn(i_um_evt_lvl123_idn,\n> > i_chn_class_group_cd) loop\n> > /* nice clean select into syntax */\n> > v_opr := 'select into';\n> > v_obj := 'xrf_vendor_format_io';\n> > select send_dist_format_type_cd,\n> > send_io_type_cd,\n> > send_process_type_cd\n> > into v_dist_format_type_cd,\n> > v_io_type_cd ,\n> > v_send_process_type_cd\n> > from xrf_vendor_format_io\n> > where rdx_vendor_idn = v_vnd_chn_rec.rdx_vendor_idn\n> > and chn_class_cd = v_vnd_chn_rec.chn_class_cd;\n> >\n> > /* call procedure passing parms by name */\n> > v_opr := 'call';\n> > v_obj := 'dist_engine_ins';\n> > dist_engine_ins(dist_engine_idn => v_dist_engine_idn,\n> > pending_dt => sysdate,\n> > source_idn => i_um_evt_lvl123_idn,\n> > source_type => 'EVTLVL123',\n> > dist_format_type_cd => v_dist_format_type_cd,\n> > recipient_type_cd => 'VND',\n> > io_type_cd => v_io_type_cd);\n> >\n> >\n> > end loop;\n> >\n> > /* Trap all exceptions, calling pkg_error.log_error with details.\n> > This will start an autonymous transaction to log the error\n> > then rollback the current savepoint and re-raise exception for\n> > the caller\n> > */\n> > exception\n> > when others then\n> > pkg_error.log_error (get_schema_name,v_pkg, v_prc, v_opr, v_obj,\n> sqlcode, sqlerrm);\n> > rollback to bp_cmd_chn;\n> > raise;\n> > end bp_cmd_chn;\n> > /\n> >\n> >\n> >\n> >\n> > On Tuesday 16 April 2002 12:04 pm, Neil Conway wrote:\n> > > On Mon, 15 Apr 2002 23:49:21 -0500\n> > >\n> > > \"John Proctor\" <jproctor@prium.net> wrote:\n> > > > However, none of the above is of any value if the performance penalty\n> is\n> > > > large. 
And PL/pgSQL needs much more than just the param number\n> > > > increased.\n> > >\n> > > John,\n> > >\n> > > Could you elaborate on what enhancements you'd like to see in PL/pgSQL?\n> > >\n> > > Cheers,\n> > >\n> > > Neil\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n",
"msg_date": "17 Apr 2002 09:41:29 -0500",
"msg_from": "Larry Rosenman <ler@lerctr.org>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "Folks,\n\n> 1) More than 16 parameters. This can be parameter configurable if \n> necessary, but up to 128 would cover 99.9%.\n> \n> 2) Better exception handling. The procedure should be able to trap\n> any data \n> related exception and decide what to do. No function should ever\n> abort. It should raise a trappable exception and let me decide what\n> to do.\n> \n> 3) Allow transactions inside of functions. Mostly for incremental\n> commits. \n> Each transaction should be implicitly started after any CrUD\n> statement and \n> continue until a commit or rollback.\n> \n> 4) Allow autonomous transactions. This is related to number 2. In\n> Oracle, I \n> can track every single exception and log it in a central table with\n> details, \n> even if I rollback the current transaction or savepoint. This is a\n> must for \n> tracking every single database error in an application at the exact\n> point of \n> failure.\n> \n> 5) Find a way to get rid of the requirement to quote the entire proc.\n> This \n> is very clumsy. The PL/pgSQL interpreter should be able to do the\n> quoting \n> and escape what it needs.\n> \n> 6) Allow function parameters to be specified by name and type during\n> the definition. Even aliasing is cumbersome and error prone on large\n> procs, especially during development when changes are frequent.\n> \n> 7) Allow function parameters to be passed by name, not just\n> positional. i.e. \n> get_employee_salary(emp_id => 12345, tax_year => 2001).\n> \n> 8) Add packages. This is a great way to group related functions,\n> create \n> reusable objects, like cursors, etc.\n> \n> 9) Allow anonymous PL/pgSQL blocks. It should not be required to\n> create a \n> function for every PL/pgSQL block. Often, I just want to do\n> something quick \n> and dirty or write complex blocks that I don't even want saved in the\n> \n> database. 
I can just keep them in a file and execute when necessary.\n\nAlso:\n\n10) Allow declaration of all PostgreSQL data types, including custom\ndata types and domains, inside functions. Especially important are\nArrays, which are supported as parameters but not as declarations.\n\n11) PL/pgSQL has functionality 100% analogous to cursors, with a\ndifferent syntax. While the PL/pgSQL record loop is easier to use, the\nlack of support for standard cursor syntax mars the portability of\nOracle procedures to Postgres and vice-versa.\n\n12) The biggie: Allowing the easy return of query results from a\nprocedure. This is currently supported through a rather difficult\nworkaround involving either the ROWTYPE datatype or a return Cursor.\n Both approaches require the use of a procedural code loop on the\ninterface side to read the data being returned ... much clumsier than\njust dumping the data ala PL/SQL or T-SQL. If implemented, this rowset\nreturn would be the difference between a CREATE FUNCTION and a CREATE\nPROCEDURE statement.\n\n13) Allow the creation of multiple output parameters for PROCEDURES (as\nopposed to FUNCTIONS) in the parameter declaration.\n\n14) Procedures should have their own permissions, which supersede the\npermissions on the tables being affected if the procedure is created by\nthe database owner, in the same way that Views can allow users to\nSelect data they would not be entitled to from the base tables. In\nother words, if I declare \"GRANT SELECT ON fn_modify_assignment TO\nphpaccess\", the user phpaccess should be able to run\nfn_modify_assignment even if that user has no permissions on the\nassignment table itself.\n\n-Josh Berkus\n\nP.S. I haven't brought up these issues before because there is no way I\ncan contribute any significant resources to completing them. 
\n\n______AGLIO DATABASE SOLUTIONS___________________________\n Josh Berkus\n Complete information technology josh@agliodbs.com\n and data management solutions (415) 565-7293\n for law firms, small businesses fax 621-2533\n and non-profit organizations. San Francisco\n",
"msg_date": "Wed, 17 Apr 2002 09:08:58 -0700",
"msg_from": "\"Josh Berkus\" <josh@agliodbs.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "\nAdded to TODO:\n\n o Improve PL/PgSQL exception handling\n o Allow PL/PgSQL parameters to be specified by name and type during\n definition\n o Allow PL/PgSQL function parameters to be passed by name,\n get_employee_salary(emp_id => 12345, tax_year => 2001)\n o Add PL/PgSQL packages\n\n> \n> \n> OK, here goes.\n> \n> 1) More than 16 parameters. This can be parameter configurable if \n> necessary, but up to 128 would cover 99.9%.\n\nDone to 32.\n\n> \n> 2) Better exception handling. The procedure should be able to trap any data \n> related exception and decide what to do. No function should ever abort. It should raise a trappable exception and let me decide what to do.\n\nAdded.\n\n> \n> 3) Allow transactions inside of functions. Mostly for incremental commits. \n> Each transaction should be implicitly started after any CrUD statement and \n> continue until a commit or rollback.\n\nWhen we have subtransactions, we will be able to do this.\n\n> \n> 4) Allow autonomous transactions. This is related to number 2. In Oracle, I \n> can track every single exception and log it in a central table with details, \n> even if I rollback the current transaction or savepoint. This is a must for \n> tracking every single database error in an application at the exact point of \n> failure.\n\nSame.\n\n> 5) Find a way to get rid of the requirement to quote the entire proc. This \n> is very clumsy. The PL/pgSQL interpreter should be able to do the quoting \n> and escape what it needs.\n\nThis is pretty hard, especially because we have plug-in languages. I\ndon't see a way to do this.\n\n> \n> 6) Allow function parameters to be specified by name and type during the definition. Even aliasing is cumbersome and error prone on large procs, especially during development when changes are frequent.\n\nAdded.\n\n> \n> 7) Allow function parameters to be passed by name, not just positional. i.e. 
\n> get_employee_salary(emp_id => 12345, tax_year => 2001).\n\nAdded.\n \n> \n> 8) Add packages. This is a great way to group related functions, create \n> reusable objects, like cursors, etc.\n\nAdded.\n\n> \n> 9) Allow anonymous PL/pgSQL blocks. It should not be required to create a \n> function for every PL/pgSQL block. Often, I just want to do something quick \n> and dirty or write complex blocks that I don't even want saved in the \n> database. I can just keep them in a file and execute when necessary.\n\nI don't see the point here, except perhaps you want TEMP functions?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 22:43:18 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "\nAdded:\n\n o Allow array declarations and other data types in PL/PgSQL\n DECLARE\n o Add PL/PgSQL PROCEDURES that can return multiple values\n\n> Also:\n> \n> 10) Allow declaration of all PostgreSQL data types, including custom\n> data types and domains, inside functions. Especially important are\n> Arrays, which are supported as parameters but not as declarations.\n\n\nAdded.\n\n> 11) PL/pgSQL has functionality 100% analogous to cursors, with a\n> different syntax. While the PL/pgSQL record loop is easier to use, the\n> lack of support for standard cursor syntax mars the portability of\n> Oracle procedures to Postgres and vice-versa.\n\nIs this done?\n\n> \n> 12) The biggie: Allowing the easy return of query results from a\n> procedure. This is currently supported through a rather difficult\n> workaround involving either the ROWTYPE datatype or a return Cursor.\n> Both approaches require the use of a procedural code loop on the\n> interface side to read the data being returned ... much clumsier than\n> just dumping the data ala PL/SQL or T-SQL. If implemented, this rowset\n> return would be the difference between a CREATE FUNCTION and a CREATE\n> PROCEDURE statement.\n\nDone for 7.3.\n\n\n> \n> 13) Allow the creation of multiple output parameters for PROCEDURES (as\n> opposed to FUNCTIONS) in the parameter declaration.\n\nAdded.\n\n> 14) Procedures should have their own permissions, which supersede the\n> permissions on the tables being affected if the procedure is created by\n> the database owner, in the same way that Views can allow users to\n> Select data they would not be entitled to from the base tables. 
In\n> other words, if I declare \"GRANT SELECT ON fn_modify_assignment TO\n> phpaccess\", the user phpaccess should be able to run\n> fn_modify_assignment even if that user has no permissions on the\n> assignment table itself.\n\nDone, I think, for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 22:48:12 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "Bruce Momjian wrote:\n>>12) The biggie: Allowing the easy return of query results from a\n>>procedure. This is currently supported through a rather difficult\n>>workaround involving either the ROWTYPE datatype or a return Cursor.\n>> Both approaches require the use of a procedural code loop on the\n>>interface side to read the data being returned ... much clumsier than\n>>just dumping the data ala PL/SQL or T-SQL. If implemented, this rowset\n>>return would the the difference between a CREATE FUNCTION and a CREATE\n>>PROCEDURE statement.\n> \n> \n> Done for 7.3.\n\nUm, not done yet (PL/pgSQL table functions). Currently only SQL and C \nfunctions supported. I've had an off-line discussion with Neil, and I \nthink he is working this item and plans to have it ready for 7.3.\n\nCREATE PROCEDURE is not planned for 7.3 at all (I don't think; see the \nCALL foo recent discussion).\n\nIt's not clear to me which one is meant by the above. \"Dumping the data \nala PL/SQL or T-SQL\" could mean either. PL/SQL supports table functions; \nT-SQL only supports the CALL foo type capability. See:\nhttp://archives.postgresql.org/pgsql-general/2002-08/msg00602.php\nfor a description of the difference.\n\nJoe\n\n",
"msg_date": "Tue, 13 Aug 2002 20:28:52 -0700",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "Joe Conway wrote:\n> Bruce Momjian wrote:\n> >>12) The biggie: Allowing the easy return of query results from a\n> >>procedure. This is currently supported through a rather difficult\n> >>workaround involving either the ROWTYPE datatype or a return Cursor.\n> >> Both approaches require the use of a procedural code loop on the\n> >>interface side to read the data being returned ... much clumsier than\n> >>just dumping the data ala PL/SQL or T-SQL. If implemented, this rowset\n> >>return would the the difference between a CREATE FUNCTION and a CREATE\n> >>PROCEDURE statement.\n> > \n> > \n> > Done for 7.3.\n> \n> Um, not done yet (PL/pgSQL table functions). Currently only SQL and C \n> functions supported. I've had an off-line discussion with Neil, and I \n> think he is working this item and plans to have it ready for 7.3.\n\nOK, added to 7.3 open items:\n\n\tAllow PL/PgSQL functions to return sets\n\n> CREATE PROCEDURE is not planned for 7.3 at all (I don't think; see the \n> CALL foo recent discussion).\n\nRight, on TODO.\n\n> It's not clear to me which one is meant by the above. \"Dumping the data \n> ala PL/SQL or T-SQL\" could mean either. PL/SQL supports table functions; \n> T-SQL only supports the CALL foo type capability. See:\n> http://archives.postgresql.org/pgsql-general/2002-08/msg00602.php\n> for a description of the difference.\n\nNot sure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Tue, 13 Aug 2002 23:36:04 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "Bruce Momjian wrote:\n> \n> Added to TODO:\n> \n> o Improve PL/PgSQL exception handling\n\nException handling? You're talking about nested transaction support and\ncatchable errors in the first place, and then (a year later) making use\nof that functionality in the procedural languages, right?\n\n> o Allow PL/PgSQL parameters to be specified by name and type during\n> definition\n> o Allow PL/PgSQL function parameters to be passed by name,\n> get_employee_salary(emp_id => 12345, tax_year => 2001)\n\nCREATE FUNCTION is in no way PL/pgSQL specific. PL/pgSQL simply works\naround that lack with the ALIAS syntax in the DECLARE section.\n\n> o Add PL/PgSQL packages\n\nThis really is a 100% PL/PgSQL problem.\n\n\nJan\n\n-- \n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n",
"msg_date": "Wed, 14 Aug 2002 14:30:53 -0400",
"msg_from": "Jan Wieck <JanWieck@Yahoo.com>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
},
{
"msg_contents": "Jan Wieck wrote:\n> Bruce Momjian wrote:\n> > \n> > Added to TODO:\n> > \n> > o Improve PL/PgSQL exception handling\n> \n> Exception handling? You're talking about nested transaction support and\n> catchable errors in the first place, and then (a year later) making use\n> of that functionality in the procedural languages, right?\n\nUh, I guess. Not sure.\n\n> \n> > o Allow PL/PgSQL parameters to be specified by name and type during\n> > definition\n> > o Allow PL/PgSQL function parameters to be passed by name,\n> > get_employee_salary(emp_id => 12345, tax_year => 2001)\n> \n> CREATE FUNCTION is in no way PL/pgSQL specific. PL/pgSQL simply works\n> around that lack with the ALIAS syntax in the DECLARE section.\n\nText updated to:\n\n o Allow parameters to be specified by name and type during \n definition\n o Allow function parameters to be passed by name,\n get_employee_salary(emp_id => 12345, tax_year => 2001)\n\n> > o Add PL/PgSQL packages\n> \n> This really is a 100% PL/PgSQL problem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n",
"msg_date": "Wed, 14 Aug 2002 14:34:40 -0400 (EDT)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: [SQL] 16 parameter limit"
}
] |
[
{
"msg_contents": "Hi All,\n\nAs part of my ongoing quest to understand grammar files, I've been trying to\nimplement BETWEEN SYMMETRIC/ASYMMETRIC.\n\nI've attached my current work. Can someone please look and tell me if I'm\non the right track? With this patch, I get parse errors after BETWEEN if I\ngo:\n\nSELECT 2 BETWEEN ASYMMETRIC 1 and 3;\n\nor\n\nSELECT 2 BETWEEN SYMMETRIC 1 and 3;\n\nSo it doesn't seem to be working - I don't know why!!\n\nDon't look at the NOT BETWEEN stuff - I've not done it yet.\n\nI was forced to put SYMMETRIC and ASYMMETRIC as reserved words - anything\nelse seemed to give shift/reduce errors. Is there anything I can do about\nthat?\n\nChris\n\n",
"msg_date": "Wed, 3 Apr 2002 12:26:20 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "BETWEEN SYMMETRIC/ASYMMETRIC"
},
{
"msg_contents": "*sigh*\n\nI actually attached the diff this time...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Christopher\n> Kings-Lynne\n> Sent: Wednesday, 3 April 2002 12:26 PM\n> To: Hackers\n> Subject: [HACKERS] BETWEEN SYMMETRIC/ASYMMETRIC\n>\n>\n> Hi All,\n>\n> As part of my ongoing quest to understand grammar files, I've\n> been trying to\n> implement BETWEEN SYMMETRIC/ASYMMETRIC.\n>\n> I've attached my current work. Can someone please look and tell me if I'm\n> on the right track? With this patch, I get parse errors after\n> BETWEEN if I\n> go:\n>\n> SELECT 2 BETWEEN ASYMMETRIC 1 and 3;\n>\n> or\n>\n> SELECT 2 BETWEEN SYMMETRIC 1 and 3;\n>\n> So it doesn't seem to be working - I don't know why!!\n>\n> Don't look at the NOT BETWEEN stuff - I've not done it yet.\n>\n> I was forced to put SYMMETRIC and ASYMMETRIC as reserved words - anything\n> else seemed to give shift/reduce errors. Is there anything I can do about\n> that?\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>",
"msg_date": "Wed, 3 Apr 2002 12:31:30 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC"
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I was forced to put SYMMETRIC and ASYMMETRIC as reserved words - anything\n> else seemed to give shift/reduce errors. Is there anything I can do about\n> that?\n\nFirst thought is \"don't try to be cute\": forget the opt_asymmetry\nclause, and instead spell out six productions\n\n\ta_expr BETWEEN b_expr AND b_expr\n\ta_expr NOT BETWEEN b_expr AND b_expr\n\ta_expr BETWEEN SYMMETRIC b_expr AND b_expr\n\ta_expr NOT BETWEEN SYMMETRIC b_expr AND b_expr\n\ta_expr BETWEEN ASYMMETRIC b_expr AND b_expr\n\ta_expr NOT BETWEEN ASYMMETRIC b_expr AND b_expr\n\nI have not checked that this will work, but usually the cure for parse\nconflicts is to postpone the decision about which production applies.\nThe reason opt_asymmetry forces SYMMETRIC and ASYMMETRIC to become\nreserved is that it requires a premature decision. Given, say\n\n\t\ta_expr BETWEEN . SYMMETRIC\n\n(where . means \"where we are now\" and SYMMETRIC is the current lookahead\ntoken), an LR(1) parser *must* decide whether to reduce opt_asymmetry as\nnull, or to shift (implying that opt_asymmetry will be SYMMETRIC); it\nhas to make this choice before it can look beyond the SYMMETRIC token.\nIf SYMMETRIC might be a regular identifier then this is unresolvable\nwithout more lookahead. The six-production approach avoids this problem\nby not requiring any shift/reduce decisions to be made until an entire\nclause is available.\n\nOn second thought there may be no other way out. Consider\n\n\tfoo BETWEEN SYMMETRIC - bar AND baz\n\nIs SYMMETRIC a keyword (with \"-\" a prefix operator) or an identifier\n(with \"-\" infix)? This example makes me think that SYMMETRIC has to\nbecome reserved. But I wanted to point out that opt_asymmetry is\ncertainly a loser based on lookahead distance.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 00:19:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: BETWEEN SYMMETRIC/ASYMMETRIC "
}
] |
[
{
"msg_contents": "What's with this?\n\ncurrent pwd = /home/chriskl\n\nusa=# \\i ddlpack/kl_setnotnull.sql <-- tab completes properly\nDROP\nCREATE\nusa=# \\i ~/ddlpack/kl_setnotnull.sql <-- tab completes properly\n~/ddlpack/kl_setnotnull.sql: No such file or directory\nusa=#\n\nChris\n\n",
"msg_date": "Wed, 3 Apr 2002 13:45:13 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Odd psql \\i behaviour"
},
{
"msg_contents": "Christopher Kings-Lynne writes:\n\n> usa=# \\i ~/ddlpack/kl_setnotnull.sql <-- tab completes properly\n> ~/ddlpack/kl_setnotnull.sql: No such file or directory\n\nThe tilde is only meaningful in bash (or some other shell).\n\nTry putting this in your .inputrc:\n\n$if psql\nset expand-tilde on\n$endif\n\nThat will expand the tilde when you press TAB.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 11:25:39 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Odd psql \\i behaviour"
}
] |
[
{
"msg_contents": "The determination of locale is now done as follows:\n\ncollate/ctype:\n\ninitdb --lc-collate, initdb --locale, LC_ALL, LC_COLLATE, LANG\n\nmessages/monetary/numeric/time:\n\nHave GUC variables lc_messages, etc. The default is \"\", which means to\ninherit from the environment (or whatever setlocale() does with it).\nHowever, initdb will initialize postgresql.conf containing assignments to\nthese variables determined as with collate/ctype above. So the \"real\"\ndefaults are consistent with collate/ctype.\n\ninitdb --no-locale is the same as initdb --locale=C, for convenience.\n\nLet's see if these rules end up making sense to everybody.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 00:51:50 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Locale support is now on by default"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The determination of locale is now done as follows:\n\n> initdb --lc-collate, initdb --locale, LC_ALL, LC_COLLATE, LANG\n> initdb --no-locale is the same as initdb --locale=C, for convenience.\n\nI'm confused; what is the default behavior if you don't give any\nswitches to initdb?\n\nBTW, something that's been bothering me for awhile is that the notice\nwe stuck into the backend a couple versions back (about \"this locale\ndisables LIKE optimizations\") is being hidden by initdb, because you\ndecided recently that it was okay to route all the backend's commentary\nto /dev/null so as to hide xlog.c's startup chattiness. I don't object\nto getting rid of that chattiness, but 2>/dev/null is throwing the baby\nout with the bathwater (consider outright failure messages, for instance).\n\nIt might be that Bruce's recent changes to elog levels allow a graceful\ncompromise about backend messages during initdb. I haven't looked, but\nmaybe initdb could run the backend with message level one notch higher\nthan LOG to suppress all the normal-case messages without masking not-\nso-normal cases.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 01:04:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Locale support is now on by default "
},
{
"msg_contents": "Tom Lane writes:\n\n> > initdb --lc-collate, initdb --locale, LC_ALL, LC_COLLATE, LANG\n> > initdb --no-locale is the same as initdb --locale=C, for convenience.\n>\n> I'm confused; what is the default behavior if you don't give any\n> switches to initdb?\n\nWhatever is set in the environment -- which boils down to LC_ALL,\nLC_COLLATE, LANG.\n\n> It might be that Bruce's recent changes to elog levels allow a graceful\n> compromise about backend messages during initdb. I haven't looked, but\n> maybe initdb could run the backend with message level one notch higher\n> than LOG to suppress all the normal-case messages without masking not-\n> so-normal cases.\n\nI'll look.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 11:28:16 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Locale support is now on by default "
},
{
"msg_contents": "Tom Lane writes:\n\n> It might be that Bruce's recent changes to elog levels allow a graceful\n> compromise about backend messages during initdb. I haven't looked, but\n> maybe initdb could run the backend with message level one notch higher\n> than LOG to suppress all the normal-case messages without masking not-\n> so-normal cases.\n\nThere doesn't seem to be a way to turn off LOG without hiding almost\neverything:\n\n\tif (lev == LOG || lev == COMMERROR)\n\t{\n\t\tif (server_min_messages == LOG)\n\t\t\toutput_to_server = true;\n\t\telse if (server_min_messages < FATAL)\n\t\t\toutput_to_server = true;\n\t}\n\nEverything except for PANIC is less than FATAL, so this doesn't make sense\nto me.\n\nNonetheless, I don't like the way this message comes out. It destroys\nthe, er, well-formed display that initdb gives. Moreover, it's not really\na WARNING, meaning something is wrong. I was thinking about handling this\nwithin initdb, with a display like this:\n\n\"\"\"\nThe files belonging to this database system will be owned by user \"peter\".\nThis user must also own the server process.\n\nLocale settings: collate=en_US ctype=en_US [...]\n(This locale will prevent optimization of LIKE and regexp searches.)\n\ncreating directory pg-install/var/data... ok\ncreating directory pg-install/var/data/base... ok\n[...]\n\"\"\"\n\nYes, we'd need to duplicate some code within initdb, but it's not like\nthat list of LIKE-safe locales is very dynamic.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 12:30:51 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Locale support is now on by default "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I was thinking about handling this\n> within initdb, with a display like this:\n\n> \"\"\"\n> The files belonging to this database system will be owned by user \"peter\".\n> This user must also own the server process.\n\n> Locale settings: collate=en_US ctype=en_US [...]\n> (This locale will prevent optimization of LIKE and regexp searches.)\n\n> creating directory pg-install/var/data... ok\n> creating directory pg-install/var/data/base... ok\n> [...]\n> \"\"\"\n\nThat works for me.\n\n> Yes, we'd need to duplicate some code within initdb, but it's not like\n> that list of LIKE-safe locales is very dynamic.\n\nBut removing the warning from xlog.c would be a Good Thing; it does not\nbelong there either, by any stretch of the imagination. As long as both\nlocale_is_like_safe() and initdb's list are commented with cross-links\nto the other one, I don't think we're creating a huge maintenance\nproblem.\n\nBTW, I still suggest changing initdb to set message_level = FATAL rather\nthan /dev/null'ing the output. Having to use -d to learn anything at\nall about the cause of an initdb-time failure is a pain in the neck.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 12:46:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Locale support is now on by default "
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Tom Lane writes:\n> \n> > It might be that Bruce's recent changes to elog levels allow a graceful\n> > compromise about backend messages during initdb. I haven't looked, but\n> > maybe initdb could run the backend with message level one notch higher\n> > than LOG to suppress all the normal-case messages without masking not-\n> > so-normal cases.\n> \n> There doesn't seem to be a way to turn off LOG without hiding almost\n> everything:\n> \n> \tif (lev == LOG || lev == COMMERROR)\n> \t{\n> \t\tif (server_min_messages == LOG)\n> \t\t\toutput_to_server = true;\n> \t\telse if (server_min_messages < FATAL)\n> \t\t\toutput_to_server = true;\n> \t}\n> \n> Everything except for PANIC is less than FATAL, so this doesn't make sense\n> to me.\n\nActually, what this is saying is that for an elog(LOG) to show, the\nserver_min_messages, must be less than FATAL. Setting\nserver_min_messages to FATAL means only FATAL and PANIC appear:\n\nServer levels are:\n\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, fatal, panic\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Apr 2002 14:24:36 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Locale support is now on by default"
},
{
"msg_contents": "> BTW, I still suggest changing initdb to set message_level = FATAL rather\n> than /dev/null'ing the output. Having to use -d to learn anything at\n> all about the cause of an initdb-time failure is a pain in the neck.\n\nThis is a great idea. Certainly there are FATAL/PANIC messages during\ninitdb that could be helpful.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Apr 2002 14:25:07 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Locale support is now on by default"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > There doesn't seem to be a way to turn off LOG without hiding almost\n> > everything:\n> >\n> > \tif (lev == LOG || lev == COMMERROR)\n> > \t{\n> > \t\tif (server_min_messages == LOG)\n> > \t\t\toutput_to_server = true;\n> > \t\telse if (server_min_messages < FATAL)\n> > \t\t\toutput_to_server = true;\n> > \t}\n> >\n> > Everything except for PANIC is less than FATAL, so this doesn't make sense\n> > to me.\n>\n> Actually, what this is saying is that for an elog(LOG) to show, the\n> server_min_messages, must be less than FATAL.\n\nI know what this is saying, but the coding is redundant (since LOG is also\nless than FATAL).\n\n> Setting server_min_messages to FATAL means only FATAL and PANIC\n> appear:\n>\n> Server levels are:\n>\n> # debug5, debug4, debug3, debug2, debug1,\n> # info, notice, warning, error, log, fatal, panic\n\nI don't recall log being so high. Didn't it use to be after info?\nCertainly there should be a way to see only warnings, errors, and higher\nwithout seeing the \"unimportant\" log messages. Actually, I'm also\nconfused why we now have info, notice, *and* warning. Shouldn't two of\nthese be enough?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 17:16:34 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Locale support is now on by default"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > > There doesn't seem to be a way to turn off LOG without hiding almost\n> > > everything:\n> > >\n> > > \tif (lev == LOG || lev == COMMERROR)\n> > > \t{\n> > > \t\tif (server_min_messages == LOG)\n> > > \t\t\toutput_to_server = true;\n> > > \t\telse if (server_min_messages < FATAL)\n> > > \t\t\toutput_to_server = true;\n> > > \t}\n> > >\n> > > Everything except for PANIC is less than FATAL, so this doesn't make sense\n> > > to me.\n> >\n> > Actually, what this is saying is that for an elog(LOG) to show, the\n> > server_min_messages, must be less than FATAL.\n> \n> I know what this is saying, but the coding is redundant (since LOG is also\n> less than FATAL).\n\nSure, but the ordinal value of log is different for client and server:\n\n#server_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # info, notice, warning, error, log, fatal,\n # panic\n#client_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, notice, warning, error\n\nThe LOG value is ordinally correct for CLIENT, but for SERVER, it is\njust below FATAL. I can change it but for now that is what people\nwanted, meaning you probably want LOG in the log file before WARNINGS or\neven ERROR.\n\n> \n> > Setting server_min_messages to FATAL means only FATAL and PANIC\n> > appear:\n> >\n> > Server levels are:\n> >\n> > # debug5, debug4, debug3, debug2, debug1,\n> > # info, notice, warning, error, log, fatal, panic\n> \n> I don't recall log being so high. Didn't it use to be after info?\n> Certainly there should be a way to see only warnings, errors, and higher\n> without seeing the \"unimportant\" log messages. Actually, I'm also\n> confused why we now have info, notice, *and* warning. Shouldn't two of\n> these be enough?\n\nWe added NOTICE and INFO and WARNING because they were required. 
INFO\nis for SET-like information, NOTICE is for non-warnings like sequence\ncreation for SERIAL, and WARNING is for real warnings like identifier\ntruncation.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Apr 2002 17:34:02 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Locale support is now on by default"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> > > Server levels are:\n> > >\n> > > # debug5, debug4, debug3, debug2, debug1,\n> > > # info, notice, warning, error, log, fatal, panic\n> >\n> > I don't recall log being so high. Didn't it use to be after info?\n> > Certainly there should be a way to see only warnings, errors, and higher\n> > without seeing the \"unimportant\" log messages. Actually, I'm also\n> > confused why we now have info, notice, *and* warning. Shouldn't two of\n> > these be enough?\n>\n> We added NOTICE and INFO and WARNING because they were required. INFO\n> is for SET-like information, NOTICE is for non-warnings like sequence\n> creation for SERIAL, and WARNING is for real warnings like identifier\n> truncation.\n\nOK, let me phrase my question clearly: How can I turn off LOG and turn on\nall errors in the server log?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 17:55:44 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Locale support is now on by default"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> Bruce Momjian writes:\n> \n> > > > Server levels are:\n> > > >\n> > > > # debug5, debug4, debug3, debug2, debug1,\n> > > > # info, notice, warning, error, log, fatal, panic\n> > >\n> > > I don't recall log being so high. Didn't it use to be after info?\n> > > Certainly there should be a way to see only warnings, errors, and higher\n> > > without seeing the \"unimportant\" log messages. Actually, I'm also\n> > > confused why we now have info, notice, *and* warning. Shouldn't two of\n> > > these be enough?\n> >\n> > We added NOTICE and INFO and WARNING because they were required. INFO\n> > is for SET-like information, NOTICE is for non-warnings like sequence\n> > creation for SERIAL, and WARNING is for real warnings like identifier\n> > truncation.\n> \n> OK, let me phrase my question clearly: How can I turn off LOG and turn on\n> all errors in the server log?\n\nRight now, you can't. I originally had LOG next to INFO, and for server\nit was INFO, then LOG, and for client, it was LOG, then INFO, but\nsomeone suggested that LOG should be between ERROR and FATAL because\nmost people want LOG stuff before they want to see ERROR/WARNING/NOTICE\nin the server logs.\n\nIf you would prefer LOG down near INFO in the server message levels,\nplease post the idea and let's get some more comments from folks.\n\nWe thought about going with a bitwise capability where you could turn on\ndifferent messages types independently, but the use of that with SET and\nthe confusion hardly seemed worth it.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Apr 2002 18:57:46 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Locale support is now on by default"
},
{
"msg_contents": "Bruce Momjian writes:\n\n> If you would prefer LOG down near INFO in the server message levels,\n> please post the idea and let's get some more comments from folks.\n\nLOG should be below WARNING, in any case. Perhaps between NOTICE and\nWARNING, but I'm not so sure about that.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 23:21:40 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": true,
"msg_subject": "Re: Locale support is now on by default"
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Bruce Momjian writes:\n>> If you would prefer LOG down near INFO in the server message levels,\n>> please post the idea and let's get some more comments from folks.\n\n> LOG should be below WARNING, in any case. Perhaps between NOTICE and\n> WARNING, but I'm not so sure about that.\n\nI think the ordering Bruce developed is appropriate for logging.\nThere are good reasons to think that per-query ERRORs are less\ninteresting than LOG events for admin logging purposes.\n\nThe real problem here is that in the initdb context, we are really\ndealing with an *interactive* situation, where LOG events ought to\nbe treated in the client-oriented scale --- but the backend does\nnot know this, it thinks it is emitting messages to the system log.\n\nI'm thinking that the mistake is in hard-wiring one scale of message\ninterest to control the frontend output and another one to the \"log\"\n(stderr/syslog) output. Perhaps we should have a notion of \"interactive\"\nmessage priorities vs \"logging\" message priorities, and allow either\nscale to be used to control which messages are dispatched to any\nmessage destination.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 23:30:15 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Locale support is now on by default "
},
{
"msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Bruce Momjian writes:\n> >> If you would prefer LOG down near INFO in the server message levels,\n> >> please post the idea and let's get some more comments from folks.\n> \n> > LOG should be below WARNING, in any case. Perhaps between NOTICE and\n> > WARNING, but I'm not so sure about that.\n> \n> I think the ordering Bruce developed is appropriate for logging.\n> There are good reasons to think that per-query ERRORs are less\n> interesting than LOG events for admin logging purposes.\n\nOK.\n\n> The real problem here is that in the initdb context, we are really\n> dealing with an *interactive* situation, where LOG events ought to\n> be treated in the client-oriented scale --- but the backend does\n> not know this, it thinks it is emitting messages to the system log.\n> \n> I'm thinking that the mistake is in hard-wiring one scale of message\n> interest to control the frontend output and another one to the \"log\"\n> (stderr/syslog) output. Perhaps we should have a notion of \"interactive\"\n> message priorities vs \"logging\" message priorities, and allow either\n> scale to be used to control which messages are dispatched to any\n> message destination.\n\nCan't we just 'grep -v '^LOG:' to remove the log display from initdb? \nSeems pretty simple.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Apr 2002 23:45:57 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Locale support is now on by default"
}
] |
[
{
"msg_contents": "I found out, that there are some probably temporary relations in one of my \ndatabases, with names (that show in vacuum verbose output) like \npg_temp.12720.0.\n\nAre these the result of CREATE TEMP TABLE or simmilar and if so, can such \nrelations be safely dropped? Perhaps a good idea to add some vacuum \nfunctionality to do this.\n\nDaniel\n\n",
"msg_date": "Wed, 03 Apr 2002 09:09:34 +0300",
"msg_from": "Daniel Kalchev <daniel@digsys.bg>",
"msg_from_op": true,
"msg_subject": "pg_temp.XX.0"
},
{
"msg_contents": "\nYou can stop the postmaster and start the postgres binary with the -O\nflag and delete the pg_temp tables. We don't have a cleanup for these\nfailed backends but we should. Normally they are cleaned up.\n\n---------------------------------------------------------------------------\n\nDaniel Kalchev wrote:\n> I found out, that there are some probably temporary relations in one of my \n> databases, with names (that show in vacuum verbose output) like \n> pg_temp.12720.0.\n> \n> Are these the result of CREATE TEMP TABLE or simmilar and if so, can such \n> relations be safely dropped? Perhaps a good idea to add some vacuum \n> functionality to do this.\n> \n> Daniel\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n",
"msg_date": "Wed, 3 Apr 2002 01:13:01 -0500 (EST)",
"msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: pg_temp.XX.0"
}
] |
[
{
"msg_contents": "\n\n> -----Original Message-----\n> From: Christopher Kings-Lynne [mailto:chriskl@familyhealth.com.au] \n> Sent: 03 April 2002 02:50\n> To: Hackers\n> Cc: Tom Lane; peter_e@gmx.net\n> Subject: SHOW ALL as a query result\n> \n> \n> Hi All,\n> \n> Now that Tom's modified the EXPLAIN output to appear as a \n> query result, maybe SHOW and SHOW ALL should also be modified \n> in that way. The current\n> NOTICE: business is a bit messy, and it sure would assist \n> projects just as pgAccess, phpPgAdmin and pgAdmin with \n> displaying configuration!\n\nIt certainly would. Of course we've worked around it now though :-(, but\nfuture enhancements.... \n\nRegards, Dave.\n\n",
"msg_date": "Wed, 3 Apr 2002 08:24:09 +0100 ",
"msg_from": "Dave Page <dpage@vale-housing.co.uk>",
"msg_from_op": true,
"msg_subject": "Re: SHOW ALL as a query result"
}
] |
[
{
"msg_contents": "Hi All,\n\nWith regards to the proposed command.c refactoring...\n\nI've done it by removing command.c and replacing it with\n\nportal.c\nalter.c\nlock.c\nnamespace.c\n\nIs that a good idea? Will it break too many outstanding patches?\n\nBasically the portal fetch/destroy commands go in portal.c, all the Alter*\ncommands with their static helper functions go in alter.c, the single\nLockTable command goes in lock.c and the CreateSchema function goes in\nnamespace.c. I anticipate that a few more functions will eventually be\ncreated to go in namespace.c\n\nI have also broken up the command.h header file into four separate\ncorrespondingly named header files, and removed command.h itself.\n\nThe next step after this would be to move a lot of the redundant code in\nalter.c into static functions.\n\nThoughts?\n\nChris\n\n",
"msg_date": "Wed, 3 Apr 2002 16:39:39 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "command.c breakup"
},
{
"msg_contents": "On Wed, 2002-04-03 at 09:39, Christopher Kings-Lynne wrote:\n> Hi All,\n> \n> With regards to the proposed command.c refactoring...\n> \n..about which I should apologise as I stuck my head above the parapet\nand then sat on my ideas (mixing metaphors a bit).\n\n> I've done it by removing command.c and replacing it with\n> \n> portal.c\n> alter.c\n> lock.c\n> namespace.c\n> \n> Is that a good idea? Will it break too many outstanding patches?\n\nThe feedback I had was not to worry too much about that! However, my\nscheme doesn't take account of some of the more recent changes -I had\nenvisaged a more radical division by \"object manipulated\". Here's my\ncurrent working draft (doesn't include material from the last couple of\nweeks):\n\ncommand.c\n---------\n\nPortalCleanup \nPerformPortalFetch \nPerformPortalClose \n\tPortal support functions move to portal.c\n\nAlterTableAddColumn \nAlterTableAlterColumnDefault\ndrop_default\nAlterTableAlterColumnFlags\n\t\t\t\n\tThese move to table.c. They share common code for permissions\n\tand recursion. Therefore, propose to create a short helper\n\troutine (AlterTableAlterColumnSetup) which checks permissions,\n\texistence of relation (and acquirtes lock on rel?). Also\n\tprovide macros for recursion, to be used in form:\n\n\tRECURSE_OVER_CHILDREN(relid);\n\tAlterTableDoSomething(args);\n\tRECURSE_OVER_CHILDREN_END;\n\n\nfind_attribute_walker \nfind_attribute_in_node\nRemoveColumnReferences\nAlterTableDropColumn \n\n\tThese are part of the old DROP_COLUMN_HACK. Should they go in\n\tthe transfer? (There seems to be agreement that DROP COLUMN\n\twill not be implemented as it is here).\n\nAlterTableAddConstraint \nAlterTableDropConstraint\n\t\n\tMove to table.c These also use permissions and recursion code.\n\nAlterTableOwner \nAlterTableCreateToastTable \nneeds_toast_table\n\tAll move to table.c. 
(Seems a bit more drastic than necessary\n\tto split AlterTableCreateToastTable and move\n\tneeds_toast_table to access/heap/tuptoaster.c). \n\nLockTableCommand\n\tMove to lock.c\n\n\ncreatinh.c\n----------\n\nDefineRelation\nRemoveRelation\nTruncateRelation\nMergeAttributes\nchange_varattnos_walker\nchange_varattnos_of_a_node\nStoreCatalogInheritance\nfindAttrByName\nsetRelhassubclassInRelation\n\t\n\tAll move to table.c\n\n\ndefine.c\n--------\n\ncase_translate_language_name\n\t\n\tRemove this one and refer to that in proclang.c\n\ncompute_return_type\ncompute_full_attributes\ninterpret_AS_clause\nCreateFunction\n\n\tMove to function.c \n\nDefineOperator\n\n\tMove to operator.c\n\nDefineAggregate\n\n\tMove to aggregate.c\n\nDefineType\n\n\tMove to type.c\n\ndefGetString\ndefGetNumeric\ndefGetTypeLength\n\t\n\tParameter fetching support, generic to all the processing for\n\tdefine statements. Inclined to move to type.c as used most by type\n\tcreation.\n\nremove.c\n--------\n\nRemoveOperator\n\n\tTo operator.c\n\nSingleOpOperatorRemove\nAttributeAndRelationRemove\n\n\tTo operator.c (or delete altogether -NOTYET since 94!)\n\nRemoveType\n\n\tTo type.c\n\nRemoveFunction\n\n\tTo function.c\n\nRemoveAggregate\n\n\tTo aggregate.c\n\n\nrename.c\n--------\n\n\nrenameatt\nrenamerel\nri_trigger_type\nupdate_ri_trigger_args\n\n\tTo table.c\n\n\n\n\nThus, the change in the set of files:\n\nRemoved:\n\ncommand.c\ncreatinh.c\ndefine.c\nremove.c\nrename.c\n\nAdded:\naggregate.c\nfunction.c\noperator.c\ntable.c\ntype.c\n\nSorry for going slow on this - but it seems that the organisation\nhas dropped out of my life in the last few weeks :) (and I've been away\nover Easter). \n\nRegards\n\nJohn\n\n",
"msg_date": "03 Apr 2002 10:10:22 +0100",
"msg_from": "John Gray <jgray@azuli.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: command.c breakup"
},
{
"msg_contents": "John Gray <jgray@azuli.co.uk> writes:\n> Here's my current working draft (doesn't include material from the\n> last couple of weeks):\n\nPlease note that there's been pretty substantial revisions in command.c\nand creatinh.c over the past couple of weeks for schema support. While\nI think that those two files are largely done with, define.c and\nremove.c are about to get the same treatment as the schema project moves\non to schema-tizing functions and operators. So we'll need to coordinate\njust when and how to make these structural revisions; and you'll\ndefinitely need to be working against CVS tip. What are your plans,\ntime-wise? Does it make sense for the two of you to work together?\n\n> \tThese are part of the old DROP_COLUMN_HACK. Should they go in\n> \tthe transfer? (There seems to be agreement that DROP COLUMN\n> \twill not be implemented as it is here).\n\nI think Hiroshi finally removed all the DROP_COLUMN_HACK code yesterday.\n\n> \tParameter fetching support, generic to all the processing for\n> \tdefine statements. Inclined to move to type.c as used most by type\n> \tcreation.\n\nWhat about leaving define.c in existence, but have it hold only common\nsupport routines for object-definition commands? The param fetchers\nwould certainly fit in this category, and maybe some of the other\nsupport routines you've described would fit here too.\n\n> \tTo operator.c (or delete altogether -NOTYET since 94!)\n\nNOTYET probably means NEVER; whenever that functionality is implemented,\nit'll be based on some sort of generic dependency code, not\nspecial-purpose checks. Feel free to remove this stuff too.\n\n> Thus, the change in the set of files:\n\n> Removed:\n\n> command.c\n> creatinh.c\n> define.c\n> remove.c\n> rename.c\n\n> Added:\n> aggregate.c\n> function.c\n> operator.c\n> table.c\n> type.c\n\nMinor gripe here: I would suggest taking a cue from indexcmds.c and\nchoosing file names along the lines of functioncmds.c, tablecmds.c,\netc. 
The above names strike me as too generic and likely to cause\nconfusion with similarly-named files in other directories.\n\n> Sorry for going slow on this - but it seems that the organisation\n> has dropped out of my life in the last few weeks :) (and I've been away\n> over Easter). \n\nNot a problem. But we'll need a concentrated burst of work whenever\nyou are ready to prepare the final version of the patch; otherwise the\nsynchronization issues will cause problems/delays for other people.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 10:52:41 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: command.c breakup "
},
{
"msg_contents": "On Wed, 2002-04-03 at 16:52, Tom Lane wrote:\n> John Gray <jgray@azuli.co.uk> writes:\n> > Here's my current working draft (doesn't include material from the\n> > last couple of weeks):\n> \n> Please note that there's been pretty substantial revisions in command.c\n> and creatinh.c over the past couple of weeks for schema support. While\n> I think that those two files are largely done with, define.c and\n> remove.c are about to get the same treatment as the schema project moves\n> on to schema-tizing functions and operators. So we'll need to coordinate\n> just when and how to make these structural revisions; and you'll\n> definitely need to be working against CVS tip. What are your plans,\n> time-wise? Does it make sense for the two of you to work together?\n> \nI have compiled a new version against current CVS, now also including\nreferences to dependencies (See below). I accept that we'll need to work\nround the schema project -in the week since the last message I notice\nthat namespace support has arrived for function, aggregate and operator\ncreation. Is there more to come in these files?\n\nI'm unsure whether it is sensible to split the commands/defrem.h file to\nmatch the actual .c files (given that there are at present only two\nexternally referenced functions from each entity it seems reasonable to\nkeep them together -as they are all referred to from tcop/utility.c\nanyway.\n\nAs far as joint working goes, if Chris K-L would like to grab all or\npart of it he is very welcome :) My timescale is that I have time at\npresent to work on it, so maybe next week for incorporation (but do\npeople need more notice than that?)\n\nObviously, I haven't given more details of the common code elimination.\nThat is a slightly different kind of task -I'll post some specifics on\nthat in the next couple of days.\n\n> > \tParameter fetching support, generic to all the processing for\n> > \tdefine statements. 
Inclined to move to type.c as used most by type\n> > \tcreation.\n> \n> What about leaving define.c in existence, but have it hold only common\n> support routines for object-definition commands? The param fetchers\n> would certainly fit in this category, and maybe some of the other\n> support routines you've described would fit here too.\n>\nYes, this seems sensible -but as far as the other support code goes, it\nmight make sense to have a file called (say) cmdsupport.c where the\nparameter fetchers, the checking and recursion code etc. all goes? \n\n> > \tTo operator.c (or delete altogether -NOTYET since 94!)\n> \n> NOTYET probably means NEVER; whenever that functionality is implemented,\n> it'll be based on some sort of generic dependency code, not\n> special-purpose checks. Feel free to remove this stuff too.\n> \n\nOK\n\n> > Thus, the change in the set of files:\n> \n> \n> Minor gripe here: I would suggest taking a cue from indexcmds.c and\n> choosing file names along the lines of functioncmds.c, tablecmds.c,\n> etc. The above names strike me as too generic and likely to cause\n> confusion with similarly-named files in other directories.\n> \nYes, this makes sense and I've done that too.\n\n\n> > Sorry for going slow on this - but it seems that the organisation\n> > has dropped out of my life in the last few weeks :) (and I've been away\n> > over Easter). \n> \n> Not a problem. 
But we'll need a concentrated burst of work whenever\n> you are ready to prepare the final version of the patch; otherwise the\n> synchronization issues will cause problems/delays for other people.\n> \n\nThat shouldn't be too much of a problem in the next couple of weeks - if\nwe can decide on a specific day I'll book it into my diary (Any day but\nWednesday next week would be fine for me).\n\nRegards\n\nJohn\n\n\n\nsrc/backend/commands/ directory reorganisation version 2 \n(including dependencies), from CVS as of 12 noon, 2002-04-11)\n\nDependencies were determined from LXR cross-reference database. This\nwill show all *usage* -it won't catch cases where a header file is included\nredundantly. Recursive grep seems to provide the same answers though.\n\ncommand.c\n---------\n\nPortalCleanup \nPerformPortalFetch \nPerformPortalClose \n\tPortal support functions move to portalcmds.c\n\t\n\tprototype commands/command.h -> commands/portal.h\n\trefer executor/spi.c tcop/pquery.c tcop/utility.c\n\n\nAlterTableAddColumn \nAlterTableAlterColumnDropNotNull\nAlterTableAlterColumnSetNotNull\nAlterTableAlterColumnDefault\ndrop_default\nAlterTableAlterColumnFlags\nAlterTableDropColumn \t\t\t\nAlterTableAddConstraint \nAlterTableDropConstraint\nAlterTableOwner \nAlterTableCreateToastTable \nneeds_toast_table\n\n\tThese move to tablecmds.c. They share common code for permissions\n\tand recursion. Therefore, propose to create a short helper\n\troutine (AlterTableAlterColumnSetup) which checks permissions,\n\texistence of relation (and acquirtes lock on rel?). 
Also\n\tprovide macros for recursion, to be used in form:\n\n\tRECURSE_OVER_CHILDREN(relid);\n\tAlterTableDoSomething(args);\n\tRECURSE_OVER_CHILDREN_END;\n\n\tprototype commands/command.h -> commands/tablecmds.h\n\trefer tcop/utility.c commands/cluster.c executor/execMain.c\n\n\nLockTableCommand\n\tMove to lockcmds.c\n\n\tprototype commands/command.h -> commands/lockcmds.h\n\trefer tcop/utility.c\n\nCreateSchemaCommand\n\tMove to schemacmds.c\n\n\tprototype commands/command.h -> commands/schemacmds.h\n\trefer tcop/utility.c\n\ncreatinh.c\n----------\n\nDefineRelation \nRemoveRelation \nTruncateRelation \nMergeDomainAttributes\nMergeAttributes\nchange_varattnos_walker\nchange_varattnos_of_a_node\nStoreCatalogInheritance\nfindAttrByName\nsetRelhassubclassInRelation\n\t\n\tAll move to tablecmds.c\n\n\tprototype commands/creatinh.h -> commands/tablecmds.h\n\trefer commands/sequence.c commands/view.c tcop/utility.c\n\ndefine.c\n--------\n\ncase_translate_language_name\n\t\n\tRemove this one and refer to that in proclang.c. If this file\n\tbecomes a file for support functions, then the reverse should apply.\n\ncompute_return_type\ncompute_full_attributes\ninterpret_AS_clause\nCreateFunction\n\n\tMove to functioncmds.c \n\n\tprototype commands/defrem.h -> ?\n\trefer tcop/utility.c\n\nDefineOperator\n\n\tMove to operatorcmds.c\n\n\tprototype commands/defrem.h -> ?\n\trefer tcop/utility.c\n\n\nDefineAggregate\n\n\tMove to aggregatecmds.c\n\n\tprototype commands/defrem.h -> ?\n\trefer tcop/utility.c\n\n\nDefineDomain\n\n\tMove to domaincmds.c\n\n\tprototype commands/defrem.h -> ?\n\trefer tcop/utility.c\n\n\nDefineType\n\n\tMove to typecmds.c\n\n\tprototype commands/defrem.h -> ?\n\trefer tcop/utility.c\n\nfindTypeIOFunction\ndefGetString\ndefGetNumeric\ndefGetQualifiedName\ndefGetTypeName\ndefGetTypeLength\n\n\tKeep in define.c as general support code. 
If other support code is\n\tcoming here too, there might be a good case for a new file\n\t\"cmdutils.c\", say, to hold all sorts of generic code for permissions,\n\trecursion, etc.\n\nremove.c\n--------\n\nRemoveOperator\n\n\tTo operatorcmds.c\n\nSingleOpOperatorRemove\nAttributeAndRelationRemove\n\n\tPropose to delete altogether (NOTYET since 94, likely\n\tincompatible with current workings)\n\nRemoveType\n\n\tTo typecmds.c\n\nRemoveDomain\n\n\tTo domaincmds.c\n\nRemoveFunction\n\n\tTo functioncmds.c\n\nRemoveAggregate\n\n\tTo aggregatecmds.c\n\nprototypes and dependencies for these identical to Define commands in define.c\n\n\nrename.c\n--------\n\nrenameatt\nrenamerel\nri_trigger_type\nupdate_ri_trigger_args\n\n\tTo tablecmds.c\n\n\tprototype commands/rename.h -> commands/tablecmds.h\n\trefer tcop/utility.c commands/cluster.c\n\n\nThus, the change in the set of files:\n\nRemoved:\n\ncommand.c\ncreatinh.c\nremove.c\nrename.c\n\n(and include files commands/command.h, commands/creatinh.h, commands/rename.h)\n\n\nAdded:\naggregatecmds.c\nfunctioncmds.c\noperatorcmds.c\nportalcmds.c\ntablecmds.c\ntypecmds.c\nlockcmds.c\nschemacmds.c\n\n(and include files commands/portalcmds.h, commands/lockcmds.h, \ncommands/tablecmds.h, commands/schemacmds.h)\n\nPossibly \"rename\"[*] residual define.c to cmdsupport.c (and create new\nheader file commands/cmdsupport.h) which would also hold common \npermissions checking and inheritance code.\n\n\n",
"msg_date": "11 Apr 2002 13:08:02 +0100",
"msg_from": "John Gray <jgray@azuli.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: command.c breakup"
},
{
"msg_contents": "John Gray <jgray@azuli.co.uk> writes:\n> I have compiled a new version against current CVS, now also including\n> references to dependencies (See below). I accept that we'll need to work\n> round the schema project -in the week since the last message I notice\n> that namespace support has arrived for function, aggregate and operator\n> creation. Is there more to come in these files?\n\nI am hoping to commit the revisions for aggregates today. Operators are\nstill to come, and after that it's the mop-up stuff like rules ...\n\n> I'm unsure whether it is sensible to split the commands/defrem.h file to\n> match the actual .c files (given that there are at present only two\n> externally referenced functions from each entity it seems reasonable to\n> keep them together -as they are all referred to from tcop/utility.c\n> anyway.\n\nProbably can leave well enough alone there; I don't see what it would\nbuy us to split up that header file.\n\n>> What about leaving define.c in existence, but have it hold only common\n>> support routines for object-definition commands? The param fetchers\n>> would certainly fit in this category, and maybe some of the other\n>> support routines you've described would fit here too.\n>> \n> Yes, this seems sensible -but as far as the other support code goes, it\n> might make sense to have a file called (say) cmdsupport.c where the\n> parameter fetchers, the checking and recursion code etc. all goes? \n\nIf you prefer --- I haven't a strong feeling one way or the other.\n\n> That shouldn't be too much of a problem in the next couple of weeks - if\n> we can decide on a specific day I'll book it into my diary (Any day but\n> Wednesday next week would be fine for me).\n\nI will try to have no uncommitted changes over this weekend; that will\ngive you a clear field Monday morning, or you can start on the weekend\nif you like. Sound good?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 11 Apr 2002 10:33:34 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: command.c breakup "
},
{
"msg_contents": "On Thu, 2002-04-11 at 15:33, Tom Lane wrote:\n>\n> > That shouldn't be too much of a problem in the next couple of weeks - if\n> > we can decide on a specific day I'll book it into my diary (Any day but\n> > Wednesday next week would be fine for me).\n> \n> I will try to have no uncommitted changes over this weekend; that will\n> give you a clear field Monday morning, or you can start on the weekend\n> if you like. Sound good?\n> \n\nFine. I'll work on that basis. I'll prepare a full-blown patch which can\nbe applied Monday -unless anyone else is sitting on uncommitted changes\nto the directory that they want me to wait for?\n\nRegards\n\nJohn\n\n\n",
"msg_date": "12 Apr 2002 00:10:14 +0100",
"msg_from": "John Gray <jgray@azuli.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: command.c breakup"
},
{
"msg_contents": "> Fine. I'll work on that basis. I'll prepare a full-blown patch which can\n> be applied Monday -unless anyone else is sitting on uncommitted changes\n> to the directory that they want me to wait for?\n\nNothing important. Shall I suggest that you do the rearrangement first, and\nthen once everything's happy, we can work on removing redundant code?\n\nChris\n\n",
"msg_date": "Fri, 12 Apr 2002 10:33:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: command.c breakup"
},
{
"msg_contents": "On Fri, 2002-04-12 at 03:33, Christopher Kings-Lynne wrote:\n> > Fine. I'll work on that basis. I'll prepare a full-blown patch which can\n> > be applied Monday -unless anyone else is sitting on uncommitted changes\n> > to the directory that they want me to wait for?\n> \n> Nothing important. Shall I suggest that you do the rearrangement first, and\n> then once everything's happy, we can work on removing redundant code?\n> \n\nI think this is the right thing to do. Rearranging files shouldn't have\nany effect on behaviour, but the removal of redundant code (e.g. for\npermissions checks) may result in discussions about the appropriate\npermissions for different activities -ISTM that this should be open to\nnormal discussion and review.\n\nJohn\n\n\n",
"msg_date": "12 Apr 2002 09:48:34 +0100",
"msg_from": "John Gray <jgray@azuli.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: command.c breakup"
},
{
"msg_contents": "I'm not exactly sure what you're touching, but could it wait for the\nbelow pg_depend patch to be either accepted or rejected? It lightly\nfiddles with a number of files in the command and catalog directories.\n\nhttp://archives.postgresql.org/pgsql-patches/2002-04/msg00050.php\n\n\n> > That shouldn't be too much of a problem in the next couple of\nweeks - if\n> > we can decide on a specific day I'll book it into my diary (Any\nday but\n> > Wednesday next week would be fine for me).\n>\n> I will try to have no uncommitted changes over this weekend; that\nwill\n> give you a clear field Monday morning, or you can start on the\nweekend\n> if you like. Sound good?\n\n\n",
"msg_date": "Sun, 14 Apr 2002 16:30:23 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: command.c breakup "
},
{
    "msg_contents": "On Sun, 2002-04-14 at 21:30, Rod Taylor wrote:\n> I'm not exactly sure what you're touching, but could it wait for the\n> below pg_depend patch to be either accepted or rejected? It lightly\n> fiddles with a number of files in the command and catalog directories.\n> \n> http://archives.postgresql.org/pgsql-patches/2002-04/msg00050.php\n> \n\nWell, I'm working on it now and it's about 75% done. I hope to post the\npatch within the next few hours. I'm sorry that I wasn't aware of your\npatch -but commands/ is a busy place at present :). I've scanned your\npatch very briefly and the major impacts I can see are:\n\n1) The ALTER TABLE code will be in tablecmds.c (but exactly the same\ncode as at present)\n\n2) The type support will be in typecmds.c (define.c and remove.c are\nessentially gone -the define and remove commands for foo are in general\nnow together in foocmds.c)\n\nI'm not touching anything in the catalog directory. \n\nNote that as I'm only shuffling code from one file to another, your\npatch shouldn't need much modification to get it working afterwards -\nalthough there is an intention to tidy up common code in the commands/\ndirectory as a second phase, this will consist of more \"ordinary\"\npatches...\n\nRegards\n\nJohn\n\n\n",
"msg_date": "14 Apr 2002 21:43:09 +0100",
"msg_from": "John Gray <jgray@azuli.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: command.c breakup"
},
{
"msg_contents": "Sounds fair. I'd have brought it up earlier but was away last week.\n\nThe changes I made are very straight forward and easy enough to redo.\n--\nRod Taylor\n\nYour eyes are weary from staring at the CRT. You feel sleepy. Notice\nhow restful it is to watch the cursor blink. Close your eyes. The\nopinions stated above are yours. You cannot imagine why you ever felt\notherwise.\n\n----- Original Message -----\nFrom: \"John Gray\" <jgray@azuli.co.uk>\nTo: \"Rod Taylor\" <rbt@zort.ca>\nCc: \"Tom Lane\" <tgl@sss.pgh.pa.us>; \"Christopher Kings-Lynne\"\n<chriskl@familyhealth.com.au>; \"Hackers\"\n<pgsql-hackers@postgresql.org>\nSent: Sunday, April 14, 2002 4:43 PM\nSubject: Re: [HACKERS] command.c breakup\n\n\n> On Sun, 2002-04-14 at 21:30, Rod Taylor wrote:\n> > I'm not exactly sure what you're touching, but could it wait for\nthe\n> > below pg_depend patch to be either accepted or rejected? It\nlightly\n> > fiddles with a number of files in the command and catalog\ndirectories.\n> >\n> > http://archives.postgresql.org/pgsql-patches/2002-04/msg00050.php\n> >\n>\n> Well, I'm working on it now and it's about 75% done. I hope to post\nthe\n> patch within the next few hours. I'm sorry that I wasn't aware of\nyour\n> patch -but commands/ is a busy place at present :). 
I've scanned\nyour\n> patch very briefly and the major impacts I can see are:\n>\n> 1) The ALTER TABLE code will be in tablecmds.c (but exactly the same\n> code as at present)\n>\n> 2) The type support will be in typecmds.c (define.c and remove.c are\n> essentially gone -the define and remove commands for foo are in\ngeneral\n> now together in foocmds.c\n>\n> I'm not touching anything in the catalog directory.\n>\n> Note that as I'm only shuffling code from one file to another, your\n> patch shouldn't need much modification to get it working\nafterwards -\n> although there is an intention to tidy up common code in the\ncommands/\n> directory as a second phase, this will consist of more \"ordinary\"\n> patches...\n>\n> Regards\n>\n> John\n>\n>\n\n",
"msg_date": "Sun, 14 Apr 2002 16:58:40 -0400",
"msg_from": "\"Rod Taylor\" <rbt@zort.ca>",
"msg_from_op": false,
"msg_subject": "Re: command.c breakup"
},
{
"msg_contents": "On Sun, 2002-04-14 at 21:58, Rod Taylor wrote:\n> Sounds fair. I'd have brought it up earlier but was away last week.\n> \n> The changes I made are very straight forward and easy enough to redo.\n\nI've sent the patch to the -patches list -Please let me know if there\nare any queries -I will be able to deal with them after ~1700 UTC\nMonday.\n\nRegards\n\nJohn\n\n\n",
"msg_date": "15 Apr 2002 00:08:26 +0100",
"msg_from": "John Gray <jgray@azuli.co.uk>",
"msg_from_op": false,
"msg_subject": "Re: command.c breakup"
}
] |
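The thread above proposes a RECURSE_OVER_CHILDREN / RECURSE_OVER_CHILDREN_END macro pair for applying an ALTER TABLE operation to a relation and then to every table that inherits from it. As a language-neutral illustration of that control shape only — the toy catalog, table names, and function names below are invented for the sketch and are not PostgreSQL code:

```python
# Toy inheritance catalog: parent table -> list of child tables.
CHILDREN = {
    "cities": ["capitals"],
    "capitals": [],
}

ALTER_LOG = []

def recurse_over_children(rel, operation, recurse=True):
    """Apply `operation` to `rel` and, if requested, to every table that
    inherits from it -- the shape of the proposed macro pair, with the
    ALTER body passed in as a callable instead of sitting between the
    two macros."""
    operation(rel)
    if recurse:
        for child in CHILDREN.get(rel, []):
            recurse_over_children(child, operation, recurse=True)

def add_column(rel):
    # Stand-in for the real AlterTableAddColumn work on one relation.
    ALTER_LOG.append(f"ALTER TABLE {rel} ADD COLUMN population int")

recurse_over_children("cities", add_column)
for stmt in ALTER_LOG:
    print(stmt)
# ALTER TABLE cities ADD COLUMN population int
# ALTER TABLE capitals ADD COLUMN population int
```

The point of the helper (or macro pair) is that the permissions check, child lookup, and recursion live in one place, while each AlterTable* routine supplies only the per-relation body.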
[
{
    "msg_contents": "Take this update statement:\n\nupdate mytable set foo=foo+1 where bar='xxx';\n\nIf that gets executed more than once at the same time by multiple instances of\npostgresql, will foo ever lose a count? \n\nI am assuming that foo will always be correct and that the database will manage\nany contention, but when I think about transaction isolation, I'm not so sure.\nIs it possible for two or more instances of this update to run simultaneously,\neach getting the same value for foo, then each updating foo to the same\nincremented value?\n\nIs this a stupid question?\n",
"msg_date": "Wed, 03 Apr 2002 09:20:11 -0500",
"msg_from": "mlw <markw@nospam.not>",
"msg_from_op": true,
"msg_subject": "Question: update and transaction isolation"
}
] |
[
{
    "msg_contents": "\nTake this update statement:\n\nupdate mytable set foo=foo+1 where bar='xxx';\n\nIf that gets executed more than once at the same time by multiple instances of\npostgresql, will foo ever lose a count? \n\nI am assuming that foo will always be correct and that the database will manage\nany contention, but when I think about transaction isolation, I'm not so sure.\nIs it possible for two or more instances of this update to run simultaneously,\neach getting the same value for foo, then each updating foo to the same\nincremented value?\n\nIs this a stupid question?\n",
"msg_date": "Wed, 03 Apr 2002 10:00:59 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Question: update and transaction isolation"
},
{
"msg_contents": "Peter Eisentraut wrote:\n> \n> mlw writes:\n> \n> > update mytable set foo=foo+1 where bar='xxx';\n> >\n> > If that gets executed more than once at the same time by multiple instances of\n> > postgresql. Will foo ever lose a count?\n> \n> No, but if you run this in read committed isolation mode then you might\n> get into non-repeatable read type problems, i.e., you run it twice but\n> every foo was only increased once. If you use serializable mode then all\n> but one concurrent update will be aborted.\n\nI'm not sure you answered my question. Let me put it to you like this:\n\nSuppose I wanted to make a table of page counts, like this:\n\ncreate table pagecounts (counter int4, pagename varchar)\n\nFor each page hit, I do this:\n\nupdate pagecounts set counter = counter + 1 where pagename = 'testpag.php'\n\n\nDo I have to set a particular isolation level? Or does this not work in\ngeneral?\n",
"msg_date": "Wed, 03 Apr 2002 12:06:31 -0500",
"msg_from": "mlw <markw@mohawksoft.com>",
"msg_from_op": true,
"msg_subject": "Re: Question: update and transaction isolation"
},
{
"msg_contents": "mlw writes:\n\n> update mytable set foo=foo+1 where bar='xxx';\n>\n> If that gets executed more than once at the same time by multiple instances of\n> postgresql. Will foo ever lose a count?\n\nNo, but if you run this in read committed isolation mode then you might\nget into non-repeatable read type problems, i.e., you run it twice but\nevery foo was only increased once. If you use serializable mode then all\nbut one concurrent update will be aborted.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 12:12:41 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Question: update and transaction isolation"
},
{
"msg_contents": "mlw writes:\n\n> For each page hit, I do this:\n>\n> update pagecounts set counter = counter + 1 where pagename = 'testpag.php'\n>\n> Do I have to set a particular isolation level? Or does this not work in\n> general?\n\nIn read committed level, if the second update launches before the first\nupdate is finished (commits), then both of these updates will operate on\nthe old counter value. That is, you miss one page hit.\n\nIf it's possible, you might want to consider \"logging\" your page hits and\nmake a view for the page counts (with group by, etc.). That will get you\naround the concurrency issues altogether.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 12:41:09 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Question: update and transaction isolation"
},
{
"msg_contents": "mlw <markw@mohawksoft.com> writes:\n> I'm not sure you answered my question. Let me put it to you like this:\n> Suppose I wanted to make a table of page counts, like this:\n> create table pagecounts (counter int4, pagename varchar)\n> For each page hit, I do this:\n> update pagecounts set counter = counter + 1 where pagename = 'testpag.php'\n> Do I have to set a particular isolation level? Or does this not work in\n> general?\n\nThis will work; and you are best off with the default read-committed\nisolation level. (In serializable level, you would sometimes get\nserialization failures and have to repeat the transaction.) In more\ncomplex cases the answer is different, though.\n\nThe reason it works in read-committed mode is that the second guy to\narrive at the row will observe that the row has an update in progress;\nwill block waiting for the previous updater to commit or abort; and if\ncommit, will use the updated version of the row as the starting point\nfor his update. (This is what the EvalPlanQual ugliness in the executor\nis all about.)\n\nThere are some interesting properties of this solution if your\ntransaction actually tries to look at the row, and not just issue an\nUPDATE, though. Example:\n\nregression=# create table foo (key int, val int);\nCREATE\nregression=# insert into foo values(1, 0);\nINSERT 394248 1\n\nregression=# begin;\nBEGIN\nregression=# update foo set val = val + 1 where key = 1;\nUPDATE 1\nregression=# select * from foo;\n key | val\n-----+-----\n 1 | 1\n(1 row)\n\n<< leaving this transaction open, in a second window do >>\n\nregression=# begin;\nBEGIN\nregression=# select * from foo;\n key | val\n-----+-----\n 1 | 0\n(1 row)\n\nregression=# update foo set val = val + 1 where key = 1;\n\n<< blocks waiting for first xact to be committed or aborted.\n In first window, now issue END. 
Second window then completes\n its UPDATE: >>\n\nUPDATE 1\nregression=# select * from foo;\n key | val\n-----+-----\n 1 | 2\n(1 row)\n\nregression=# end;\n\n<< at this point the value \"2\" is visible in other transactions. >>\n\nNotice how xact 2 could only read val=0 in its first SELECT, even though\nit saw val=1 for purposes of the UPDATE. If your application-side logic\nis complex enough to get messed up by this inconsistency, then you\nshould either use SELECT FOR UPDATE to read the values, or use\nserializable isolation level and be prepared to retry failed transactions.\n\nIn serializable mode, you'd have gotten a failure when you tried to\nupdate the already-updated row. This tells you that you might have\ntried to update on the basis of stale information. You abort and\nrestart the transaction, taking care to re-read the info that is going\nto determine what you write. For example, suppose you wanted to do the\nincrement like this:\n\tBEGIN;\n\tSELECT val FROM foo WHERE key = 1;\n\t-- internally compute newval = val + 1\n\tUPDATE foo SET val = $newval WHERE key = 1;\n\tEND;\n(This is a tad silly here, but is not silly if the \"internal computation\"\nis too complex to write as an SQL expression.) In read-committed mode,\nconcurrent executions of this sequence would do the Wrong Thing. 
In\nserializable mode, you'd get concurrent-update failures; retrying from\nthe top of the transaction would eventually succeed with correct\nresults.\n\nAlternatively you could do\n\tBEGIN;\n\tSELECT val FROM foo WHERE key = 1 FOR UPDATE;\n\t-- internally compute newval = val + 1\n\tUPDATE foo SET val = $newval WHERE key = 1;\n\tEND;\nwhich will work reliably in read-committed mode; but if conflicts are\ninfrequent then the serializable approach will give better performance.\n(Basically, the serializable approach is like optimistic locking with\nretries; the FOR UPDATE approach is pessimistic locking.)\n\nIf you are propagating information from one row to another (or across\ntables) then serializable mode with a retry loop is probably the easiest\nway of avoiding consistency problems; especially if you are reading\nmultiple rows to derive the info you will write back. (The FOR UPDATE\napproach is prone to deadlocks with multiple source rows.) The basic\nEvalPlanQual behavior works nicely for simple updates that only read\nand write individual rows, but it does not scale to cases where you read\nsome rows and write other rows.\n\nBTW, I've promised to give a talk at the O'Reilly con on exactly these\nissues ...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 14:04:20 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question: update and transaction isolation "
},
{
"msg_contents": "Tom Lane writes:\n\n> The reason it works in read-committed mode is that the second guy to\n> arrive at the row will observe that the row has an update in progress;\n> will block waiting for the previous updater to commit or abort; and if\n> commit, will use the updated version of the row as the starting point\n> for his update. (This is what the EvalPlanQual ugliness in the executor\n> is all about.)\n\nIsn't that a violation of the principle that transactions in read\ncommitted mode will look at the data that was committed *before* the\nstatement had begun?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Wed, 3 Apr 2002 17:11:31 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: Question: update and transaction isolation "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> The reason it works in read-committed mode is that the second guy to\n>> arrive at the row will observe that the row has an update in progress;\n>> will block waiting for the previous updater to commit or abort; and if\n>> commit, will use the updated version of the row as the starting point\n>> for his update. (This is what the EvalPlanQual ugliness in the executor\n>> is all about.)\n\n> Isn't that a violation of the principle that transactions in read\n> committed mode will look at the data that was committed *before* the\n> statement had begun?\n\nHey, I didn't design it. Complain to Vadim ...\n\nBut actually, SELECT FOR UPDATE also violates the principle you allege,\nand must do so if it's to be useful at all. The results you get are\nwhatever's in the row after it's been locked, not what was in the row\nat the instant of statement start. UPDATE is essentially behaving in\nthe same way.\n\nTo my mind, full SERIALIZABLE mode is the only approach that can be\nexplained in terms of simple notions like \"you see only the data that\nexisted at time T\". Read-committed mode is conceptually much dirtier,\neven though it's often simpler to use in practice.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 17:15:02 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Question: update and transaction isolation "
},
{
    "msg_contents": "Tom Lane wrote:\n> \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Tom Lane writes:\n\n> To my mind, full SERIALIZABLE mode is the only approach that can be\n> explained in terms of simple notions like \"you see only the data that\n> existed at time T\".\n\nThere's another way. If the current value is different from\nthat at time T, we may be able to reset the time when the\nstatement began, which is equivalent to replacing the snapshot\n(this isn't allowed in serializable mode). Of course it would\nbe very difficult to implement (at least effectively).\n\nAs I've already mentioned many times SELECT and SELECT ..\nFOR UPDATE are alike in appearance but quite different in\nnature. For example, the meaning of the snapshot isn't the\nsame as you've pointed out already in this thread.\nIt's meaningless for SELECT and UPDATE (SELECT .. FOR UPDATE)\nto have a common snapshot.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 04 Apr 2002 10:08:48 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: Question: update and transaction isolation"
},
{
"msg_contents": "> > For each page hit, I do this:\n> >\n> > update pagecounts set counter = counter + 1 where pagename = \n> 'testpag.php'\n> >\n> > Do I have to set a particular isolation level? Or does this not work in\n> > general?\n> \n> In read committed level, if the second update launches before the first\n> update is finished (commits), then both of these updates will operate on\n> the old counter value. That is, you miss one page hit.\n\ncan you break it into this:\n\nbegin;\nselect counter from pagecounts where pagename='testpag.php' for update;\nupdate pagecounts set counter=counter+1 where pagename='testpag.php';\ncommit;\n\nChris\n\n",
"msg_date": "Thu, 4 Apr 2002 09:38:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Question: update and transaction isolation"
}
] |
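The hazard this thread turns on — two read-modify-write cycles that both start from the same stale value and so lose an increment, versus an update whose read and write happen under the row lock (`UPDATE ... SET val = val + 1`, or `SELECT ... FOR UPDATE` first) — can be mimicked outside the database with ordinary threads. The following Python sketch is an analogy only: a mutex stands in for PostgreSQL's row-level lock, the barrier just forces the unlucky interleaving, and none of the names are PostgreSQL APIs.

```python
import threading

counter = {"val": 0}
row_lock = threading.Lock()  # stands in for the row-level lock an UPDATE takes

def read_then_write(barrier):
    """Analogue of: SELECT val; ... UPDATE SET val = <computed value>,
    with no FOR UPDATE. Both workers read the same stale value before
    either writes, so one increment is lost."""
    stale = counter["val"]   # both threads read here ...
    barrier.wait()           # ... before either is allowed to write
    counter["val"] = stale + 1

def locked_update():
    """Analogue of: UPDATE ... SET val = val + 1 (or SELECT ... FOR UPDATE
    first). The read and write happen under the lock, so nothing is lost."""
    with row_lock:
        counter["val"] = counter["val"] + 1

# Lost update: two "hits", only one counted.
counter["val"] = 0
b = threading.Barrier(2)
workers = [threading.Thread(target=read_then_write, args=(b,)) for _ in range(2)]
for w in workers: w.start()
for w in workers: w.join()
print(counter["val"])  # 1

# Safe version: both increments survive.
counter["val"] = 0
workers = [threading.Thread(target=locked_update) for _ in range(2)]
for w in workers: w.start()
for w in workers: w.join()
print(counter["val"])  # 2
```

This is why the plain `update pagecounts set counter = counter + 1` form is safe in read committed mode (the second updater blocks, then recomputes from the committed row), while splitting it into a separate SELECT and UPDATE is not.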
[
{
    "msg_contents": "Hi,\n\nIt's a stupid thing ... but really useful ...\n\nWhen we use, like me, psql all the time with the function VACUUM, psql\nmakes the completion of table names, and of the word ANALYZE ...\n\nWhy not the same for VERBOSE ? and for the new function FULL, and FREEZE\n??\n\nMaybe in the TODO for next release ;)\n\nregards,\n-- \nHervé Piedvache\n\nElma Ingenierie Informatique\n6, rue du Faubourg Saint-Honoré\nF-75008 - Paris - France \nhttp://www.elma.fr\nTel: +33-1-44949901\nFax: +33-1-44949902 \nEmail: herve@elma.fr\n",
"msg_date": "Wed, 03 Apr 2002 20:01:43 +0200",
"msg_from": "=?iso-8859-1?Q?Herv=E9?= Piedvache <herve@elma.fr>",
"msg_from_op": true,
"msg_subject": "PSQL completion !? v7.2.1"
}
] |
[
{
"msg_contents": "Tom,\n\nI sent a list of items I would like to work on for my Master's Project\nyesterday, but didn't hear back. I don't want to be a pest, but was\nwondering when you turned on your anti-spam software -- is it possible I got\nkicked out and you didn't get my reply?\n\nThanks,\nMike Shelton\n\n-----Original Message-----\nFrom: Thomas Lockhart [mailto:thomas@fourpalms.org]\nSent: Wednesday, April 03, 2002 9:45 AM\nTo: mlw\nCc: Tom Lane; pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] Suggestions please: names for function\ncachability\n\n\n...\n> I'm not sure I'm the only one, am I?\n\nNo, I was also blocked from Tom's mail a while ago. I have a static IP,\nbut my ISP's entire block of addresses made it on to the spam list Tom\nuses, and the strategy of the list maintainers seems to be to maximize\nthe collateral damage to force me to somehow force my ISP to change\ntheir policies, whatever those are. If I researched it enough, I might\nbe able to find out what my ISP does or does not do, and what I'm\nsupposed to do or not do. What a pain...\n\nNot sure if my status has changed. I'll bet not, since the anti-spam\nfolks have high enough standards that someone like me can't make the\ngrade. I suppose they don't rely on PostgreSQL for their database... ;)\n\nThat said, I'd like to block some spam myself. I'd rather find a spam\nlist which doesn't already have me disallowed however...\n\n - Thomas\n",
"msg_date": "Wed, 3 Apr 2002 12:03:55 -0800 ",
"msg_from": "\"SHELTON,MICHAEL (Non-HP-Boise,ex1)\" <michael_shelton@non.hp.com>",
"msg_from_op": true,
"msg_subject": "FW: Suggestions please: names for function cachability"
}
] |
[
{
"msg_contents": "Hi,\n\nI'm playing with the new schema functionality.\n\nI login to a database as a user yamada.\nThere I created 2 schemas yamada and inoue.\nBy accident I made 2 tables with the same name\nvs1 in both public and yamada schemas.\n\nI can see the content of yamada.vs1 by the command\n select * from vs1\nbut there seems to be no way to see the content of\npublic.vs1.\nIf I drop the table yamada.vs1, I can see the content\nof public.vs1 by the command\n select * from vs1.\nWell there seems to be no concept of the CURRENT schema.\nIs it intended ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 04 Apr 2002 12:41:05 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "What's the CURRENT schema ?"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I can see the content of yamada.vs1 by the command\n> select * from vs1\n> but there seems to be no way to see the content of\n> public.vs1.\n\nPUBLIC is a reserved keyword, so you have to do something like\n\tselect * from \"public\".vs1;\nif there is a vs1 hiding it in an earlier namespace in the search\npath.\n\nI've been vacillating about whether to choose another name for the\npublic namespace to avoid the need for quotes here. I can't think\nof another good name :-(\n\n\t\t\tregards, tom lane\n",
"msg_date": "Wed, 03 Apr 2002 23:20:19 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Tom Lane wrote:\n> PUBLIC is a reserved keyword, so you have to do something like\n> \tselect * from \"public\".vs1;\n> if there is a vs1 hiding it in an earlier namespace in the search\n> path.\n> \n> I've been vacillating about whether to choose another name for the\n> public namespace to avoid the need for quotes here. I can't think\n> of another good name :-(\n> \n\n\nWhat about shared.vs1 or common.vs1?\n\nJoe\n\n\n",
"msg_date": "Wed, 03 Apr 2002 20:53:53 -0800",
"msg_from": "Joe Conway <mail@joeconway.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I can see the content of yamada.vs1 by the command\n> > select * from vs1\n> > but there seems to be no way to see the content of\n> > public.vs1.\n> \n> PUBLIC is a reserved keyword, so you have to do something like\n> select * from \"public\".vs1;\n> if there is a vs1 hiding it in an earlier namespace in the search\n> path.\n\nI see. However my main problem is that the schema of unqualified\nvs1 is affected by the existence of yamada.vs1. I don't think\nit's a useful behavior.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 04 Apr 2002 13:54:25 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Tom Lane writes:\n\n> PUBLIC is a reserved keyword, so you have to do something like\n> \tselect * from \"public\".vs1;\n> if there is a vs1 hiding it in an earlier namespace in the search\n> path.\n\nPUBLIC can be made less reserved easily. See patch below.\n\n> I've been vacillating about whether to choose another name for the\n> public namespace to avoid the need for quotes here. I can't think\n> of another good name :-(\n\nPUBLIC is a good name. Oracle uses it, I think.\n\n\ndiff -u -r2.299 gram.y\n--- gram.y 1 Apr 2002 04:35:38 -0000 2.299\n+++ gram.y 4 Apr 2002 05:10:23 -0000\n@@ -2558,14 +2558,14 @@\n n->groupname = NULL;\n $$ = (Node *)n;\n }\n- | GROUP ColId\n+ | GROUP UserId\n {\n PrivGrantee *n = makeNode(PrivGrantee);\n n->username = NULL;\n n->groupname = $2;\n $$ = (Node *)n;\n }\n- | ColId\n+ | UserId\n {\n PrivGrantee *n = makeNode(PrivGrantee);\n n->username = $1;\n@@ -5897,7 +5897,6 @@\n\n Iconst: ICONST { $$ = $1; };\n Sconst: SCONST { $$ = $1; };\n-UserId: ColId { $$ = $1; };\n\n /*\n * Name classification hierarchy.\n@@ -5913,6 +5912,13 @@\n /* Column identifier --- names that can be column, table, etc names.\n */\n ColId: IDENT { $$ = $1; }\n+ | unreserved_keyword { $$ = $1; }\n+ | col_name_keyword { $$ = $1; }\n+ | PUBLIC { $$ = \"public\"; }\n+ ;\n+\n+/* User identifier */\n+UserId: IDENT { $$ = $1; }\n | unreserved_keyword { $$ = $1; }\n | col_name_keyword { $$ = $1; }\n ;\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Thu, 4 Apr 2002 00:18:22 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> PUBLIC can be made less reserved easily. See patch below.\n\nWell, we could do that, but this patch seems an ugly way to do it;\nwe have too many classifications of keywords already, and I don't\nwant to introduce another one.\n\nI'd be inclined to make PUBLIC not a keyword at all, and instead have\nthe production grantee -> ColId do this in its action:\n\n\tif (strcmp($1, \"public\") == 0)\n\t\tcreate PUBLIC PrivGrantee node\n\telse\n\t\tcreate normal PrivGrantee node\n\nAn objection to this is that you couldn't make a user named \"public\"\n(with the quotes), since PUBLIC and \"public\" would look the same to\nthe action ... but that seems like a good restriction anyway. I'd\nbe quite willing to tweak CREATE USER to forbid that name.\n\nI suppose it's a judgment call which is uglier. Thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 10:11:36 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I see. However my main problem is that the schema of unqualified\n> vs1 is affected by the existence of yamada.vs1. I don't think\n> it's a useful behavior.\n\nWell, if you don't like it, you could set the search_path to be just\npublic, or public and then the user's personal namespace. But I think\npersonal namespace before public should be the default behavior.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 10:13:18 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I've been vacillating about whether to choose another name for the\n> public namespace to avoid the need for quotes here. I can't think\n> of another good name :-(\n> \n\nFor the special schemas, we have pg_catalog, (pg_temp, pg_toast ?),\nso pg_public could do the trick.\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 04 Apr 2002 15:16:07 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> I see. However my main problem is that the schema of unqualified\n> vs1 is affected by the existence of yamada.vs1. I don't think\n> it's a useful behavior.\n> \n\nThe unqualified one is there mainly for compatibility, so you can\nstill use your old database setups without schema names.\n\nOnce you redo your database to use schemas, or even while you are \nconverting it, there should not be tables with the same name in both \nplaces.\n\nAnyway, as Tom said, you can change the search order if you prefer.\n\n-- \nFernando Nasser\nRed Hat Canada Ltd.                     E-Mail:  fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario   M4P 2C9\n",
"msg_date": "Thu, 04 Apr 2002 15:19:20 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Tom Lane wrote:\n> \n> I suppose it's a judgment call which is uglier. Thoughts?\n> \n\nWell, PUBLIC is an SQL reserved keyword (pre-92). We are\nalready very liberal with keywords. I would leave PUBLIC alone.\n\nIt does not _have_ to be \"public\", so we can just avoid the issue\nby adding a pg_ prefix to public, common or something else. \nIt is a PostgreSQL concept anyway. \n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 04 Apr 2002 15:27:58 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> Tom Lane wrote:\n>> I've been vacillating about whether to choose another name for the\n>> public namespace to avoid the need for quotes here. I can't think\n>> of another good name :-(\n\n> For the special schemas, we have pg_catalog, (pg_temp, pg_toast ?),\n> so pg_public could do the trick.\n\nActually that was my initial choice of name, but I changed my mind\nlater. The reason is that the dbadmin should be able to restrict or\neven delete the public namespace if his usage plans for the database\ndon't allow any shared objects. If we call it pg_public then the system\nwill think it is a reserved namespace, and we'd have to put in a special\ncase to allow it to be deleted (not to mention recreated again, should\nthe DBA change his mind later).\nThe public namespace isn't really special and so it should not be named\nlike a system-reserved namespace. IMHO anyway...\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 15:28:50 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@redhat.com> writes:\n> > Tom Lane wrote:\n> >> I've been vacillating about whether to choose another name for the\n> >> public namespace to avoid the need for quotes here. I can't think\n> >> of another good name :-(\n> \n> > For the special schemas, we have pg_catalog, (pg_temp, pg_toast ?),\n> > so pg_public could do the trick.\n> \n> Actually that was my initial choice of name, but I changed my mind\n> later. The reason is that the dbadmin should be able to restrict or\n> even delete the public namespace if his usage plans for the database\n> don't allow any shared objects. \n\nCan't we prevent creation in there by (un)setting permissions?\n\n> If we call it pg_public then the system\n> will think it is a reserved namespace, and we'd have to put in a special\n> case to allow it to be deleted (not to mention recreated again, should\n> the DBA change his mind later).\n\nIf we can disallow creation with permissions, then we could always keep it.\nThere should be a more practical way of making it empty than having to drop\neach object individually (DROP will drop the contents but refuse to delete\nthe schema itself as it is a pg_ one?).\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 04 Apr 2002 15:35:49 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> Tom Lane wrote:\n>> Actually that was my initial choice of name, but I changed my mind\n>> later. The reason is that the dbadmin should be able to restrict or\n>> even delete the public namespace if his usage plans for the database\n>> don't allow any shared objects. \n\n> Can't we prevent creation in there by (un)setting permissions?\n\nThat was what I was referring to by \"restrict\" ... but ISTM we should\nallow dropping the namespace too. Why waste cycles searching it if\nyou don't want to use it?\n\n> There should be a more practical way of making it empty than having to\n> drop\n> each object individually (DROP will drop the contents but refuse to\n> delete\n> the schema itself as it is a pg_ one?).\n\nI'd expect DROP on a reserved namespace to error out, and thus do\nnothing at all.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 15:45:04 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Fernando Nasser <fnasser@redhat.com> writes:\n> > Tom Lane wrote:\n> >> Actually that was my initial choice of name, but I changed my mind\n> >> later. The reason is that the dbadmin should be able to restrict or\n> >> even delete the public namespace if his usage plans for the database\n> >> don't allow any shared objects.\n> \n> > Can't we prevent creation in there by (un)setting permissions?\n> \n> That was what I was referring to by \"restrict\" ... but ISTM we should\n> allow dropping the namespace too. Why waste cycles searching it if\n> you don't want to use it?\n> \n\nI don't know how the search will be implemented, but it should cost\nvery few instructions (one instruction checks that a list head is zero and\nanother gets the next pointer for the next namespace). And, as we now \ntransform things and keep them as Oids, it will be even cheaper.\n\n\n> > There should be a more practical way of making it empty than having to\n> > drop\n> > each object individually (DROP will drop the contents but refuse to\n> > delete\n> > the schema itself as it is a pg_ one?).\n> \n> I'd expect DROP on a reserved namespace to error out, and thus do\n> nothing at all.\n> \n\nBut we could have:\n\nDROP SCHEMA pg_public CONTENTS;\n\nor something of the sort (an extension, but a public schema is an\nextension).\nAnd this syntax can come in handy for DBAs in general.\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Thu, 04 Apr 2002 16:07:35 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I see. However my main problem is that the schema of unqualified\n> > vs1 is affected by the existence of yamada.vs1. I don't think\n> > it's a useful behavior.\n> \n> Well, if you don't like it, you could set the search_path to be just\n> public,\n\nYes I don't like it and probably I would do it for myself but\nI couldn't force other people to do so. Well for example, \nhow could psqlodbc driver know the CURRENT schema ?\n\n> or public and then the user's personal namespace.\n\nThe order isn't the problem at all. Would the *public*\nbe the CURRENT schema then ? If I recognize correctly,\nneither is the CURRENT schema in the current spec.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 5 Apr 2002 06:49:56 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> Well for example, \n> how could psqlodbc driver know the CURRENT schema ?\n\nWhat \"CURRENT\" schema? If you have a search path more than one entry\nlong, there is no unique notion of a CURRENT schema.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 17:34:49 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > Well for example,\n> > how could psqlodbc driver know the CURRENT schema ?\n> \n> What \"CURRENT\" schema? If you have a search path more than one entry\n> long, there is no unique notion of a CURRENT schema.\n\nOh I see but I think using the search SCHEMA path for\ntable name resolution is harmful.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 05 Apr 2002 08:40:51 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Oh I see but I think using the search SCHEMA path for\n> table name resolution is harmful.\n\nHuh? That's more or less the entire *point* of these changes, IMHO.\nWhat's harmful about having a search path?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 19:47:16 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > Oh I see but I think using the search SCHEMA path for\n> > table name resolution is harmful.\n> \n> Huh? That's more or less the entire *point* of these changes, IMHO.\n> What's harmful about having a search path?\n\nI don't object to use a search path to resolve unqualified\nfunction, type etc names. But it is very significant for\nusers to be able to be sure what tables they are handling.\nWhere's the necessity to use a common search path to resolve\ntable and other objects' name in the first place ? I don't\nknow any OS commands which use the command search path to\nresolve ordinary file name.\n\nWe (at least I)'ve been often confused and damaged even when\nusing OS's command search path. Is the flexibility worth\nthe risk ? The damage would be immeasurable if unexpected\ntables are chosen. Would PostgreSQL be a dbms unavailable\nfor careless users like me ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 05 Apr 2002 11:03:01 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I don't object to use a search path to resolve unqualified\n> function, type etc names. But it is very siginificant for\n> users to be able to be sure what tables they are handling.\n\nI really don't buy this argument; it seems exactly comparable to\narguing that the notion of current directory in Unix is evil, and\nthat users should be forced to specify absolute paths to every\nfile that they reference.\n\nThere is nothing to stop you from writing qualified names (schema.table)\nif you are concerned about being sure that you get the table you intend.\nIn practice, however, people seem to prefer relative pathnames in most\nUnix commands, and I think they'll prefer unqualified names in SQL\ncommands as well.\n\n> Where's the necessity to use a common search path to resolve\n> table and other objects' name in the first place ? I don't\n> know any OS commands which use the command search path to\n> resolve ordinary file name.\n\nI think that's because of security concerns. I would not object to\nhaving separate search paths for functions/operators and for\ntables/datatypes, though, if that would make you happier.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 21:21:07 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "> I really don't buy this argument; it seems exactly comparable to\n> arguing that the notion of current directory in Unix is evil, and\n> that users should be forced to specify absolute paths to every\n> file that they reference.\n\nYou know, I'm kinda surprised that the spec doesn't define a CURRENT_SCHEMA\nvariable you can query???\n\nChris\n\n",
"msg_date": "Fri, 5 Apr 2002 10:46:17 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I don't object to use a search path to resolve unqualified\n> > function, type etc names. But it is very siginificant for\n> > users to be able to be sure what tables they are handling.\n> \n> I really don't buy this argument; it seems exactly comparable to\n> arguing that the notion of current directory in Unix is evil, and\n> that users should be forced to specify absolute paths to every\n> file that they reference.\n> \n> There is nothing to stop you from writing qualified names (schema.table)\n> if you are concerned about being sure that you get the table you intend.\n\nProbably I can do it in many cases but I couldn't force others\nto do it. I don't object if PostgreSQL doesn't allow unqualified \ntable name other than in public/temporary/catalog schema.\nThere's no ambiguity and there's no need for the CURRENT schema.\n\nBTW where's the description in SQL standard about the use\nof SCHEMA path list to resolve unqualified table name ?\nIs it a PostgreSQL enhancement (extension) ?\nAs I already mentioned before, SQL-path isn't used to resolve\nunqualified table name.\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Fri, 05 Apr 2002 12:08:04 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Christopher Kings-Lynne wrote:\n> \n> > I really don't buy this argument; it seems exactly comparable to\n> > arguing that the notion of current directory in Unix is evil, and\n> > that users should be forced to specify absolute paths to every\n> > file that they reference.\n> \n> You know, I'm kinda surprised that the spec doesn't define a CURRENT_SCHEMA\n> variable you can query???\n> \n\nMaybe because it would be the same as CURRENT_USER.\n\nFor the standard, the schema name used (implied) to qualify objects outside\na CREATE SCHEMA statement is a schema name with the SQL-session user id.\nExcept for functions and UDTs where each schema has a SQL-path for \nsearching those (the implied schema must always be in it though).\nThere must be an implementation-defined default for this SQL-path\n(but the implied schema must also be in it).\n\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Fri, 05 Apr 2002 11:39:55 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> Tom Lane wrote:\n> >\n> > Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > > I don't object to use a search path to resolve unqualified\n> > > function, type etc names. But it is very siginificant for\n> > > users to be able to be sure what tables they are handling.\n> >\n> > I really don't buy this argument; it seems exactly comparable to\n> > arguing that the notion of current directory in Unix is evil, and\n> > that users should be forced to specify absolute paths to every\n> > file that they reference.\n> >\n> > There is nothing to stop you from writing qualified names (schema.table)\n> > if you are concerned about being sure that you get the table you intend.\n> \n> Probably I can do it in many cases but I couldn't force others\n> to do it. I don't object if PostgreSQL doesn't allow unqualified\n> table name other than in public/temporary/catalog schema.\n> There's no ambiguity and there's no need for the CURRENT schema.\n> \n\nWe can't do that. According to SQL, if you are user HIROSHI\nand write \"SELECT * FROM a;\" the table is actually \"HIROSHI.a\".\n\nThis must work for people who are using SQL-schemas in their databases\nor we would have a non-conforming implementation of SCHEMAS (would make\nthe whole exercise pointless IMO).\n\nThe path proposed by Tom (discussed in the list some time ago) actually\ndoes magic:\n\n1) It allows SQL_schema compliant code and database to work as the \nstandard expects;\n\n2) It allows backward compatibility as someone will be able to use the\nsame schema-unaware code and create their databases without schemas as\nbefore.\n\n3) If the DBA is careful enough, she/he can convert his/her database to\nuse schemas incrementally.\n\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Fri, 05 Apr 2002 11:49:29 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Fernando Nasser <fnasser@redhat.com> writes:\n> Christopher Kings-Lynne wrote:\n>> You know, I'm kinda surprised that the spec doesn't define a CURRENT_SCHEMA\n>> variable you can query???\n\n> Maybe because it would be the same as CURRENT_USER.\n\nIt'd probably make sense for us to have one, though, given that I'm\nintent on not hardwiring the two concepts together as the spec does ;-).\nAlthough you can interrogate the search path with SHOW, that's not very\naccessible at the SQL level, so an SQL function seems useful too.\n\nI'd be inclined to make CURRENT_SCHEMA return the name of the schema\nthat is the default creation target namespace (viz, the front of the\nsearch path). Thoughts?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Apr 2002 11:55:17 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "> -----Original Message-----\n> From: Fernando Nasser\n> \n> Hiroshi Inoue wrote:\n> > \n> > Tom Lane wrote:\n> > >\n> > > Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > > > I don't object to use a search path to resolve unqualified\n> > > > function, type etc names. But it is very siginificant for\n> > > > users to be able to be sure what tables they are handling.\n> > >\n> > > I really don't buy this argument; it seems exactly comparable to\n> > > arguing that the notion of current directory in Unix is evil, and\n> > > that users should be forced to specify absolute paths to every\n> > > file that they reference.\n> > >\n> > > There is nothing to stop you from writing qualified names \n> (schema.table)\n> > > if you are concerned about being sure that you get the table \n> you intend.\n> > \n> > Probably I can do it in many cases but I couldn't force others\n> > to do it. I don't object if PostgreSQL doesn't allow unqualified\n> > table name other than in public/temporary/catalog schema.\n> > There's no ambiguity and there's no need for the CURRENT schema.\n> > \n> \n> We can't do that. Accordingly to the SQL if you are user HIROSHI\n> and write \"SELECT * FROM a;\" the table is actually \"HIROSHI.a\".\n> \n> This must work for people who are using SQL-schemas in their databases\n> or we would have a non-conforming implementation of SCHEMAS (would make\n> the whole exercise pointless IMO).\n\nSchema name isn't necessarily a user id since SQL-92\nthough SQL-86 and SQL-89 had it and probably Oracle still\nhas the limitation. As far as I see PostgreSQL's schema\nsupport wouldn't have the limitation. Probably I wouldn't\ncreate the schema HIROSHI using PostgreSQL. When\nI used Oracle I really disliked the limitation.\n\n> The path proposed by Tom (discussed in the list some time ago) actually\n> does magic:\n\nThat seems a misuse of SQL-path to me.\nIf I restrict the path to temporary:CURRENT schema:catalog, I would be\nable to use the CURRENT schema and I can see no other useful way in \nunqualified table name resolution. Probably I would also be able to use the\npath as SQL-path. But how can I use the path in both styles simultaneously ?\n\nregards,\nHiroshi Inoue \n",
"msg_date": "Sat, 6 Apr 2002 07:58:00 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > We can't do that. Accordingly to the SQL if you are user HIROSHI\n> > and write \"SELECT * FROM a;\" the table is actually \"HIROSHI.a\".\n> >\n> > This must work for people who are using SQL-schemas in their databases\n> > or we would have a non-conforming implementation of SCHEMAS (would make\n> > the whole exercise pointless IMO).\n> \n> Schema name isn't necessarily a user id since SQL-92\n> though SQL-86 and SQL-89 had and probably Oracle still\n> has the limitation. As far as I see PostgreSQL's schema\n> support wouldn't have the limitation. Probably I wouldn't\n> create the schema HIROSHI using PostgreSQL. When\n> I used Oracle I really disliked the limitation.\n> \n\nYou misunderstood what I've said. You may have as many schemas\nas you please. But you will have to refer to their objects specifying\nthe schema name explicitly. The only cases where you can omit the\nschema name are (according to the SQL'99 standard):\n\n1) The statement is part of a CREATE SCHEMA statement that is\ncreating the object, so the schema being created is assumed\n(and that is what you want).\n\n2) Your schema has the same name as your user id, your statement\nis not inside a CREATE SCHEMA and it runs on a session with that \nauthorization id. A schema name equal to the session user id is\nassumed (which is what you want in this specific case). \n\nOtherwise you have to specify the schema explicitly.\n\nSo, if you name your schema \"APPLE\", and not HIROSHI, except \nfor inside the CREATE SCHEMA APPLE statement elements, you will\nhave to keep referring to tables with the \"APPLE.\" prefix.\n\n\nPostgreSQL will be smarter and try to relax 2) for you, looking\nfor the table in a public schema as well (if one exists), so old\nstyle (non-schema) databases can still be used and people who have\nschemas with names that are not their user id can save some typing. ;-)\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Fri, 05 Apr 2002 19:08:02 -0500",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Fernando Nasser writes:\n\n> I does not _have_ to be \"public\", so we can just avoid the issue\n> by adding a pg_ prefix to public, common or something else.\n> It is a PostgreSQL concept anyway.\n\nNo, it's an Oracle concept.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n",
"msg_date": "Fri, 5 Apr 2002 21:48:01 -0500 (EST)",
"msg_from": "Peter Eisentraut <peter_e@gmx.net>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "> -----Original Message-----\n> From: Fernando Nasser\n>\n> Hiroshi Inoue wrote:\n> >\n> > > We can't do that. Accordingly to the SQL if you are user HIROSHI\n> > > and write \"SELECT * FROM a;\" the table is actually \"HIROSHI.a\".\n> > >\n> > > This must work for people who are using SQL-schemas in their databases\n> > > or we would have a non-conforming implementation of SCHEMAS\n> (would make\n> > > the whole exercise pointless IMO).\n> >\n> > Schema name isn't necessarily a user id since SQL-92\n> > though SQL-86 and SQL-89 had and probably Oracle still\n> > has the limitation. As far as I see PostgreSQL's schema\n> > support wouldn't have the limitation. Probably I wouldn't\n> > create the schema HIROSHI using PostgreSQL. When\n> > I used Oracle I really disliked the limitation.\n> >\n>\n> You misunderstood what I've said. You may have how many schemas\n> you please. But you will have to refer to their objects specifying\n> the schema name explicitly. The only cases where you can omit the\n> schema name are (accordingly to the SQL'99 standard):\n\nPlease tell me where's the description in SQL99 ?\nI wasn't able to find it unfortunately.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Sat, 6 Apr 2002 18:14:45 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n>\n> Fernando Nasser <fnasser@redhat.com> writes:\n> > Christopher Kings-Lynne wrote:\n> >> You know, I'm kinda surprised that the spec doesn't define a\n> CURRENT_SCHEMA\n> >> variable you can query???\n>\n> > Maybe because it would be the same as CURRENT_USER.\n>\n> It'd probably make sense for us to have one, though, given that I'm\n> intent on not hardwiring the two concepts together as the spec does ;-).\n> Although you can interrogate the search path with SHOW, that's not very\n> accessible at the SQL level, so an SQL function seems useful too.\n>\n> I'd be inclined to make CURRENT_SCHEMA return the name of the schema\n> that is the default creation target namespace (viz, the front of the\n> search path). Thoughts?\n\nI think only one schema other than TEMP or catalog should be allowed in the\nsearch path for the resolution of table names. I can call that schema the\nCURRENT_SCHEMA.\nIf the restricted search path is inappropriate for the resolution of\nfunction, type etc. names, you have to provide another path IMHO.\n\nBTW every time I examined SQL99, I can find neither a description of\nCURRENT_SCHEMA == CURRENT_USER nor one saying that the schema\nname of an unqualified table name may vary according to the table name.\nProbably it's because of my poor English. I'd be happy if you could tell me\nwhere to find it.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Sun, 7 Apr 2002 20:44:55 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> > You misunderstood what I've said. You may have how many schemas\n> > you please. But you will have to refer to their objects specifying\n> > the schema name explicitly. The only cases where you can omit the\n> > schema name are (accordingly to the SQL'99 standard):\n> \n> Please tell me where's the description in SQL99 ?\n> I wasn't able to find it unfortunately.\n> \n\nAs most things in the SQL standard, you have to collect information\nfrom several places and add it together.\n\nLook at 4.20, 11.1 and specially at the rules for\n<schema qualified name>.\n\nThen think a little bit about scenarios, trying to apply the rules.\n\nIt is a pain, but there is no other way.\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Mon, 08 Apr 2002 12:09:21 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "> -----Original Message-----\n> From: Fernando Nasser\n>\n> Hiroshi Inoue wrote:\n> >\n> > > You misunderstood what I've said. You may have how many schemas\n> > > you please. But you will have to refer to their objects specifying\n> > > the schema name explicitly. The only cases where you can omit the\n> > > schema name are (accordingly to the SQL'99 standard):\n> >\n> > Please tell me where's the description in SQL99 ?\n> > I wasn't able to find it unfortunately.\n> >\n>\n> As most things in the SQL standard, you have to collect information\n> from several places and add it together.\n>\n> Look at 4.20, 11.1 and specially at the rules for\n> <schema qualified name>.\n\nOK I can see at 4.20.\n If a reference to a <table name> does not explicitly contain a <schema\nname>,\n then a specific <schema name> is implied. The particular <schema name>\n associated with such a <table name> depends on the context in which the\n <table name> appears and is governed by the rules for <schema qualified\nname>.\n\nUnfortunately I can't find what to see at 11.1. Please tell me where to see.\n\nHowever I can see the following at 5.4 Names and Identifiers\n11) If a <schema qualified name> does not contain a <schema name>, then\n Case:\n a) If the <schema qualified name> is contained in a <schema\ndefinition>,\n then the <schema name> that is specified or implicit in the <schema\ndefinition>\n is implicit.\n b) Otherwise, the <schema name> that is specified or implicit for the\n <SQL-client module definition> is implicit.\n\nregards,\nHiroshi Inoue\n\n",
"msg_date": "Tue, 9 Apr 2002 02:17:54 +0900",
"msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> However I can see the following at 5.4 Names and Identifiers\n> 11) If a <schema qualified name> does not contain a <schema name>, then\n> Case:\n> a) If the <schema qualified name> is contained in a <schema\n> definition>,\n> then the <schema name> that is specified or implicit in the <schema\n> definition>\n> is implicit.\n\nYes. Fernando, our existing CREATE SCHEMA command does not get this\nright for references from views to tables, does it? It seems to me that\nto get compliant behavior, we'll need to temporarily push the new schema\nonto the front of the namespace search path while parsing view\ndefinitions inside CREATE SCHEMA.\n\n(The relevance to the current discussion is that this is easy to do if\nSET variables roll back on error ... but it might be tricky if they do\nnot.)\n\n\t\t\tregards, tom lane\n",
"msg_date": "Mon, 08 Apr 2002 13:41:31 -0400",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ? "
},
{
"msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > However I can see the following at 5.4 Names and Identifiers\n> > 11) If a <schema qualified name> does not contain a <schema name>, then\n> > Case:\n> > a) If the <schema qualified name> is contained in a <schema\n> > definition>,\n> > then the <schema name> that is specified or implicit in the <schema\n> > definition>\n> > is implicit.\n> \n> Yes. Fernando, our existing CREATE SCHEMA command does not get this\n> right for references from views to tables, does it? It seems to me that\n> to get compliant behavior, we'll need to temporarily push the new schema\n> onto the front of the namespace search path while parsing view\n> definitions inside CREATE SCHEMA.\n> \n\nCorrect. It only takes care of properly setting/checking the schema name\nfor the views (as is done for tables) that are being created. Doing as\nyou suggest would be nice (similar to what we do with the authid).\n\nBTW, I think I have to properly fill/check the schema when the grant\nobjects are tables/views (I am not sure how functions will be handled).\nI will send a patch in later today or tomorrow, unless you want to do\nit differently. I prefer to do it in the parser because I can issue\nan error if a grant is for something that is not an object in the\nschema being created.\n\n\nRegards,\nFernando\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Mon, 08 Apr 2002 20:35:54 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Fernando Nasser wrote:\n> \n> As most things in the SQL standard, you have to collect information\n> from several places and add it together.\n> \n> Look at 4.20, 11.1 and specially at the rules for\n> <schema qualified name>.\n> \n> Then think a little bit about scenarios, trying to apply the rules.\n> \n> It is a pain, but there is no other way.\n\nI couldn't find the description CURRENT_SCHEMA == CURRENT_USER.\nIf I recognize SQL99 correctly, the CURRENT schema is the schema\ndefined in a <SQL-client module> not restricted to the CURRENT\nuser.\n\nWell here's my proposal.\n1) Use the different search path for table name and\n others.\n2) Allow only one schema other than temp or catalog in\n the table name search path so that we can call it\n the CURRENT schema.\n\nComments ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Wed, 10 Apr 2002 18:41:22 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Hiroshi Inoue wrote:\n> \n> Fernando Nasser wrote:\n> >\n> > As most things in the SQL standard, you have to collect information\n> > from several places and add it together.\n> >\n> > Look at 4.20, 11.1 and specially at the rules for\n> > <schema qualified name>.\n> >\n> > Then think a little bit about scenarios, trying to apply the rules.\n> >\n> > It is a pain, but there is no other way.\n> \n> I couldn't find the description CURRENT_SCHEMA == CURRENT_USER.\n> If I recognize SQL99 correctly, the CURRENT schema is the schema\n> defined in a <SQL-client module> not restricted to the CURRENT\n> user.\n> \n\nYes, but we don't have a \"module\" language. You have to look for\n\"session\".\n\n\n-- \nFernando Nasser\nRed Hat Canada Ltd. E-Mail: fnasser@redhat.com\n2323 Yonge Street, Suite #300\nToronto, Ontario M4P 2C9\n",
"msg_date": "Wed, 10 Apr 2002 09:27:05 -0400",
"msg_from": "Fernando Nasser <fnasser@redhat.com>",
"msg_from_op": false,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Fernando Nasser wrote:\n> \n> Hiroshi Inoue wrote:\n> >\n> > Fernando Nasser wrote:\n> > >\n> > > As most things in the SQL standard, you have to collect information\n> > > from several places and add it together.\n> > >\n> > > Look at 4.20, 11.1 and specially at the rules for\n> > > <schema qualified name>.\n> > >\n> > > Then think a little bit about scenarios, trying to apply the rules.\n> > >\n> > > It is a pain, but there is no other way.\n> >\n> > I couldn't find the description CURRENT_SCHEMA == CURRENT_USER.\n> > If I recognize SQL99 correctly, the CURRENT schema is the schema\n> > defined in a <SQL-client module> not restricted to the CURRENT\n> > user.\n> >\n> \n> Yes,\n\nOK I wasn't wrong at this point.\n\n> but we don't have a \"module\" language. You have to look for\n> \"session\".\n\nDo you mean PostgreSQL by the *we* ?\nWe have never been and would never be completely in\nconformity with the standard. If we don't have a \"module\"\nlanguage, does it mean we couldn't have any substitute\nfor <SQL-client module> ?\n\nregards,\nHiroshi Inoue\n",
"msg_date": "Thu, 11 Apr 2002 15:13:01 +0900",
"msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>",
"msg_from_op": true,
"msg_subject": "Re: What's the CURRENT schema ?"
},
{
"msg_contents": "Hi all,\n\nI see there's a TODO item for large object security; it's a feature I'd really like to see. I'm willing to put in the time to write a patch, but know far too little about postgres internals and history to just dive in. Has there been any discussion on this list about what this feature should be or how it might be implemented? I saw a passing reference to \"LOB LOCATORs\" in the list archives, but that was all.\n\nWhat's a LOB LOCATOR ? \n\nWhat about giving each large object its own permission flags? ex:\n\nGRANT SELECT ON LARGE OBJECT 10291 TO USER webapp;\nGRANT SELECT, DELETE, UPDATE ON LARGE OBJECT 10291 TO USER admin;\n\nDefault permission flags (and INSERT permissions) would be set at the table level. All objects without specific permissions would use the table rules. This allows for backward compatibility and convenience.\n\nI think per-object security is important. A user shouldn't be able to get at another user's data just by guessing the right OID. Ideally, users without permission would not know there were objects in the database they were not allowed to see.\n\nI can also imagine a security scheme that uses rule/trigger syntax to give the user a hook to provide her own security functions. I haven't thought that through, though.\n\nAny thoughts?\n\n\n-Damon\n",
"msg_date": "Fri, 19 Apr 2002 02:03:38 -0700",
"msg_from": "Damon Cokenias <lists@mtn-palace.com>",
"msg_from_op": false,
"msg_subject": "Large object security"
}
] |
[
{
"msg_contents": "Hi all,\n\nSome questions:\n\n1. What is the difference between abstime and timestamp - they seem to\ndisplay equally...\n\n2. Since int4 and abstime are binary compatible (ie int4::abstime works), is\nthere any serious problem with updating a pg_attribute row for an int4 and\nchanging it to an abstime? My experiments seem to work.\n\n3. Is there any way of checking pg_type to check that two types are binary\ncompatible and can be substituted in this way?\n\n4. Is there any worth in me submitting a patch that will allow rudimentary\ncolumn type changing, so long as the types are binary compatible???\n\nChris\n\n",
"msg_date": "Thu, 4 Apr 2002 12:48:44 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Changing column types..."
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> 3. Is there any way of checking pg_type to check that two types are binary\n> compatible and can be substiuted in this way?\n\nBinary compatibility is not represented in pg_type (which is a shortcoming).\nYou have to use the IsBinaryCompatible() routine provided by\nparse_coerce.h.\n\n> 4. Is there any worth in me submitting a patch that will allow rudimentary\n> column type changing, so long as the types are binary compatible???\n\nHmm. Seems like that case, and the various ones involving adjustment of\nchar/varchar length by hacking atttypmod, would be useful to support via\nALTER COLUMN even if we don't have a full implementation. Essentially\nthis would be taking the existing folklore about safe ways to hack\npg_attribute and reducing them to code --- why not do it?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 10:34:42 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Changing column types... "
},
{
"msg_contents": "> 1. What is the difference between abstime and timestamp - they seem to\n> display equally...\n\nabstime is four bytes with a range of +/- 68 years. timestamp is eight\nbytes with a range from 4212BC to way into the future.\n\n> 2. Since int4 and abstime are binary compatible (ie int4::abstime works), is\n> there any serious problem with updating a pg_attribute row for an int4 and\n> changing it to and abstime? My experiments seem to work.\n\nA few integer values are reserved values in abstime, to allow\nimplementation of infinity, -infinity, etc.\n\n - Thomas\n",
"msg_date": "Thu, 04 Apr 2002 17:29:47 -0800",
"msg_from": "Thomas Lockhart <lockhart@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Changing column types..."
},
{
"msg_contents": "> > 2. Since int4 and abstime are binary compatible (ie\n> int4::abstime works), is\n> > there any serious problem with updating a pg_attribute row for\n> an int4 and\n> > changing it to and abstime? My experiments seem to work.\n>\n> A few integer values are reserved values in abstime, to allow\n> implementation of infinity, -infinity, etc.\n\nDoes this mean that hacking the type of an int4 column to become abstime is\na bad idea?\n\nYes in theory - no in practice?\n\nChris\n\n",
"msg_date": "Fri, 5 Apr 2002 11:08:36 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Changing column types..."
},
{
"msg_contents": "> > 4. Is there any worth in me submitting a patch that will allow\n> rudimentary\n> > column type changing, so long as the types are binary compatible???\n>\n> Hmm. Seems like that case, and the various ones involving adjustment of\n> char/varchar length by hacking atttypmod, would be useful to support via\n> ALTER COLUMN even if we don't have a full implementation. Essentially\n> this would be taking the existing folklore about safe ways to hack\n> pg_attribute and reducing them to code --- why not do it?\n\nCan you only reduce the length of a varchar (say) or can you actually\nincrease them as well?\n\nChris\n\n",
"msg_date": "Fri, 5 Apr 2002 11:10:07 +0800",
"msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>",
"msg_from_op": true,
"msg_subject": "Re: Changing column types... "
},
{
"msg_contents": "> > A few integer values are reserved values in abstime, to allow\n> > implementation of infinity, -infinity, etc.\n> Does this mean that hacking the type of an int4 column to become abstime is\n> a bad idea?\n> Yes in theory - no in practice?\n\nHmm. I assume that this is in the context of an \"officially supported\"\nconversion strategy? I'm afraid I am not recalling the details of these\nthreads; my brain does not hold as much as it used to ;)\n\nAnyway, if we are thinking of allowing some types to be converted in\nplace without actually modifying the contents of tuples, then for this\ncase the risks are relatively small afaicr. The reserved values are at\nthe high and low ends of the integer range, so there are some large (in\nthe absolute sense) integer values which would take on some unexpected\ninterpretations for an abstime value.\n\nThat said, I'm not sure why we would want to bother with hacking things\nin this way (but if I recalled the details of the threads maybe I\nwould?).\n\nistm that the general strategy for changing column types would require\nmarking a column as dead and adding a new column to replace it, or\nwriting an atomic copy / modify / replace operation for tables which\nmodifies tuples as it proceeds, or ?? Just because we may allow a hack\nfor text types because they happen to have a similar/identical storage\nstructure doesn't necessarily mean that it is a good design for the\ngeneral case. But you've probably already covered that territory...\n\n - Thomas\n",
"msg_date": "Thu, 04 Apr 2002 20:40:31 -0800",
"msg_from": "Thomas Lockhart <thomas@fourpalms.org>",
"msg_from_op": false,
"msg_subject": "Re: Changing column types..."
},
{
"msg_contents": "On Fri, 5 Apr 2002, Christopher Kings-Lynne wrote:\n\n> > > 2. Since int4 and abstime are binary compatible (ie\n> > int4::abstime works), is\n> > > there any serious problem with updating a pg_attribute row for\n> > an int4 and\n> > > changing it to and abstime? My experiments seem to work.\n> >\n> > A few integer values are reserved values in abstime, to allow\n> > implementation of infinity, -infinity, etc.\n> \n> Does this mean that hacking the type of an int4 column to become abstime is\n> a bad idea?\n\nThe only problem with this would be if the int4 column contained the\nreserved values:\n\n#define INVALID_ABSTIME ((AbsoluteTime) 0x7FFFFFFE) \n#define NOEND_ABSTIME ((AbsoluteTime) 0x7FFFFFFC)\n#define NOSTART_ABSTIME ((AbsoluteTime) INT_MIN)\n\n> \n> Yes in theory - no in practice?\n\nKind of the other way around, in my opinion: No in theory, yes in\npractice.\n\nGavin\n\n",
"msg_date": "Fri, 5 Apr 2002 14:55:42 +1000 (EST)",
"msg_from": "Gavin Sherry <swm@linuxworld.com.au>",
"msg_from_op": false,
"msg_subject": "Re: Changing column types..."
},
{
"msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> Hmm. Seems like that case, and the various ones involving adjustment of\n>> char/varchar length by hacking atttypmod, would be useful to support via\n>> ALTER COLUMN even if we don't have a full implementation. Essentially\n>> this would be taking the existing folklore about safe ways to hack\n>> pg_attribute and reducing them to code --- why not do it?\n\n> Can you only reduce the length of a varchar (say) or can you actually\n> increase them as well?\n\nYou can go either way. If you're reducing then in theory you should\nscan the column and make sure that no current values exceed the new\nlimit.\n\nFor char() as opposed to varchar(), you actually need to update the\ncolumn to establish the correctly-padded new values.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Apr 2002 10:08:25 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Changing column types... "
},
{
"msg_contents": "Thomas Lockhart <thomas@fourpalms.org> writes:\n> istm that the general strategy for changing column types would require\n> marking a column as dead and adding a new column to replace it, or\n> writing an atomic copy / modify / replace operation for tables which\n> modifies tuples as it proceeds, or ?? Just because we may allow a hack\n> for text types because they happen to have a similar/identical storage\n> structure doesn't necessarily mean that it is a good design for the\n> general case.\n\nSure. This is not intended to cover the general case; if we hold Chris\nto that standard then the task will drop right back to the TODO list\nwhere it's been for years. My thought was that we've frequently\nanswered people on the mailing lists \"well, officially that's not\nsupported, but unofficially, for the case you need you can hack the\ncatalogs like this: ...\". Why not make that folklore functionality\navailable in a slightly cleaner package? It won't preclude doing a\nfull-up ALTER COLUMN implementation later.\n\n\t\t\tregards, tom lane\n",
"msg_date": "Fri, 05 Apr 2002 10:12:54 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Changing column types... "
}
] |
[
{
"msg_contents": "Tom,\n\nI attached a message from my colleague and think it'd be interesting\nto you. A short history: during development of a project on the\nWindows platform, Teodor discovered a pretty nice feature of Gigabase\n(free embedded database by Konstantin Knizhnik,\nhttp://www.geocities.com/kknizhnik/gigabase.html), which helps us a lot.\nIvan has written a proposal for implementing it in PostgreSQL.\nCould you, please, comment the proposal.\n\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n---------- Forwarded message ----------\nDate: Wed, 3 Apr 2002 01:40:05 +0400 (MSD)\nFrom: Ivan E. Panchenko <ivan@xray.sai.msu.ru>\nTo: Oleg Bartunov <oleg@sai.msu.su>, Teodor Sigaev <teodor@stack.net>\nSubject: Bidirectional hard joins\n\nHi,\n\n\nHere we propose some essential improvement of postgreSQL functionality,\nwhich may provide a great performance increase.\n\n 1. Problem\n\nThe fastest way to find and fetch a record from a table is to\nperform a SELECT ... WHERE record.id = value.\nProbably, an index scan would be performed for this SELECT.\n\nSuch an index scan seems to be fast, but there are some cases where it may\nappear too slow. The most evident case is the case of a sub-query, which\ncan arise as a result of a join or a nested select statement.\n\nIf it were possible to store direct references to database\nrecords in the tables, joins could be implemented in a much more effective\nway.\n\n 2. Possible solution\n\nCreating a datatype which stores a direct reference\nto the record (i.e., physical location of the tuple) is only a part of the\nsolution.\n\nWhen a record that is referenced is updated its physical location can be\nchanged, so the references to it should be updated. To make this\npossible, the referenced record should remember all the references to\nitself. Thus, we consider the direct tuple references as bidirectional\nlinks, or \"bidirectional hard joins\".\n\nThese \"hard joins\" are in some sense similar to hard links in a\nfilesystem (in this analogy, classic joins are like symbolic links).\n\nPhilosophically, this means a convergence between indexes and tables: a\ntable behaves like an index for another table.\n\nObviously, this is a nonrelational feature, and it requires some\nspecial SQL syntax. Below we provide some examples for clarification of\nthe use of the proposed feature.\n\n 3. Examples\n\n\nCREATE JOIN paternity FROM man.children TO child.father ;\n -- creates a field man.children containing a reference to the table\n child, and a field father in the table child with a back reference.\n\n\nINSERT INTO man VALUES ('Bob Scott');\nINSERT INTO child VALUES ('Charles Scott');\nLINK paternity WHERE (man.name = 'Bob Scott'),(child.name = 'Charles Scott');\n -- Create a link between the two records.\n\nINSERT INTO child VALUES ('Doug Scott');\nLINK paternity (man.name = 'Bob Scott'),(child.name = 'Doug Scott');\n\n\nSELECT child.name from child, man\n WHERE paternity(man,child) AND man.name = 'Bob Scott';\n -- Find all Bob's children\n> Charles Scott\n> Doug Scott\n> 2 records selected.\n\n---------------------------------------------------------------\nThis syntax was thought of just for illustration and is not proposed to\nimplement (now?).\n\n 4. Performance\n\nWhen direct joins are used in select statements, they can strongly\nincrease performance.\n\nLet us examine the query plan of the request (\"Find all Bob's\nchildren\") from the example above in the present day postgres.\n create table man (id SERIAL,name text);\n create table child (id SERIAL,name text, parent_id int4 references man(id));\n .. populate the tables ... and create indexes...\n explain select child.name from child, man\n where child.parent_id = man.id\n and man.name = 'Bob Scott';\n\n Nested Loop\n -> Index Scan using man_n on man\n -> Index Scan using child_par on child\n\n\n\nIn a hypothetical postgres with hard joins it could be:\n\n Nested Loop\n -> Index Scan using man_n on man\n -> Direct retrieval on child\n\nI.e., for each retrieved \"man\" record we retrieve the \"child\" records\ndirectly using the hard join. The real overhead for this operation should be\nnegligible in comparison with an index scan.\n\nUsing the hard joins requires some additional overhead in updates. In fact,\nafter updating a record which takes part in such a join, the references\nto this record in the other records should also be updated. This operation\nis not essentially new for postgres as similar things are done with\nindexes when an indexed record is updated. Hence, the overhead for updates\nis not greater than the overhead for updating indexes.\n\n\n 5. Implementation and conclusion\n\nEffective implementation of hard joins requires deep changes to postgres,\nmost serious of them probably in the executor, where a new method \"fetch\nrecord by reference\" should be added in addition to \"index scan\" and \"seq\nscan\". Also the optimizer should be taught to deal with this.\n\nThe update support is not so hard as it is similar to the updating of\nindexes.\n\nThough the implementation of such hard joins is really a complicated task,\nthe performance it brings should be tremendous, so we consider discussing\nthis important.\n\n\n\n\n\n",
"msg_date": "Thu, 4 Apr 2002 15:17:41 +0300 (GMT)",
"msg_from": "Oleg Bartunov <oleg@sai.msu.su>",
"msg_from_op": true,
"msg_subject": "Bidirectional hard joins (fwd)"
},
{
"msg_contents": "On Thu, 2002-04-04 at 14:17, Oleg Bartunov wrote:\n Subject: Bidirectional hard joins\n> \n> Hi,\n> \n> \n> Here we propose some essential improvement of postgreSQL functionality,\n> which may provide a great performance increase.\n> \n> 1. Problem\n> \n> The fastest way to find and fetch a record from a table is to\n> perform a SELECT ... WHERE record.id = value.\n> Probably, an index scan would be performed for this SELECT.\n> \n> Such an index scan seems to be fast, but there are some cases where it may\n> appear too slow. The most evident case is the case of a sub-query, which\n> can arise as a result of a join or a nested select statement.\n> \n> If it were possible to store direct references to database\n> records in the tables, joins could be implemented in a much more effective\n> way.\n> \n> 2. Possible solution\n> \n> Creating a datatype which stores a direct reference\n> to the record (i.e., physical location of the tuple) is only a part of the\n> solution.\n\nThe tid type does exactly what is needed.\n\n> \n> When a record that is referenced is updated its physical location can be\n> changed, so the references to it should be updated. To make this\n> possible, the referenced record should remember all the references to\n> itself. Thus, we consider the direct tuple references as bidirectional\n> links, or \"bidirectional hard joins\".\n> \n> These \"hard joins\" are in some sense similar to hard links in a\n> filesystem (in this analogy, classic joins are like symbolic links).\n> \n> Philosophically, this means a convergence between indexes and tables: a\n> table behaves like an index for another table.\n> \n> Obviously, this is a nonrelational feature, and it requires some\n> special SQL syntax. Below we provide some examples for clarification of\n> the use of the proposed feature.\n\nOr we can just use tids and ordinary joins to make it a relational\nfeature.\n\nIIRC this has been discussed on this list a few months ago. I'm not sure\nif bi-directional tid usage was discussed, but I can't see how to use\nthem efficiently in a non-overwrite storage manager. \n\n...\n\n> 4. Performance\n> \n> When direct joins are used in select statements, they can strongly\n> increase performance.\n> \n> Let us examine the query plan of the request (\"Find all Bob's\n> children\") from the example above in the present day postgres.\n> create table man (id SERIAL,name text);\n> create table child (id SERIAL,name text, parent_id int4 references man(id));\n> .. populate the tables ... and create indexes...\n> explain select child.name from child, man\n> where child.parent_id = man.id\n> and man.name = 'Bob Scott';\n> \n> Nested Loop\n> -> Index Scan using man_n on man\n> -> Index Scan using child_par on child\n> \n> \n> \n> In a hypothetical postgres with hard joins it could be:\n> \n> Nested Loop\n> -> Index Scan using man_n on man\n> -> Direct retrieval on child\n>\n> I.e., for each retrieved \"man\" record we retrieve the \"child\" records\n> directly using the hard join. The real overhead for this operation should be\n> negligible in comparison with an index scan.\n\nOTOH, if the index is in memory and the retrieved tuple is not then the\n_speed_difference_ could be negligible.\n\n> Using the hard joins requires some additional overhead in updates. In fact,\n> after updating a record which takes part in such a join, the references\n> to this record in the other records should also be updated.\n\nAnd this should be in a non-overwriting way. If we just do a standard\nUPDATE, causing a new heap record to be added, this will result in a\ncircle as then the original record's references are not valid anymore and\nso also need to be updated and so on ...\n\n> This operation\n> is not essentially new for postgres as similar things are done with\n> indexes when an indexed record is updated. Hence, the overhead for updates\n> is not greater than the overhead for updating indexes.\n\nAFAIK indexes are not \"updated\" but a new index entry is added as the\nold tuple may be still visible to some other transaction.\n\n> 5. Implementation and conclusion\n> \n> Effective implementation of hard joins requires deep changes to postgres,\n> most serious of them probably in the executor, where a new method \"fetch\n> record by reference\" should be added in addition to \"index scan\" and \"seq\n> scan\". Also the optimizer should be taught to deal with this.\n> \n> The update support is not so hard as it is similar to the updating of\n> indexes.\n> \n> Though the implementation of such hard joins is really a complicated task,\n> the performance it brings should be tremendous, so we consider discussing\n> this important.\n\nDepending on usage the performance degradation can also be tremendous,\nas a simple update can trigger an avalanche of referencing tid updates\n...\n\n--------------\nHannu\n\n\n\n\n\n\n\n\n\n",
"msg_date": "04 Apr 2002 17:31:47 +0200",
"msg_from": "Hannu Krosing <hannu@tm.ee>",
"msg_from_op": false,
"msg_subject": "Re: Bidirectional hard joins (fwd)"
},
{
"msg_contents": "Oleg Bartunov <oleg@sai.msu.su> writes:\n> Could you, please, comment the proposal.\n\nOkay: \"ugly and unimplementable\".\n\nWhere are you going to put these back-references that the description\nglosses over so quickly? They can't be in the row itself; that doesn't\nscale to large numbers of references to the same row. I think you'd end\nup building an external datastructure that would in the final analysis\noffer no better performance than standard indexes.\n\nI'd also want to see an analysis of how this interacts with MVCC before\nwe could consider whether it makes any sense in Postgres. In\nparticular, which version of a row does the reference point at, and how\nwill concurrent updates (possibly aborted) be handled?\n\n\t\t\tregards, tom lane\n",
"msg_date": "Thu, 04 Apr 2002 10:57:39 -0500",
"msg_from": "Tom Lane <tgl@sss.pgh.pa.us>",
"msg_from_op": false,
"msg_subject": "Re: Bidirectional hard joins (fwd) "
}
] |