[ { "msg_contents": "Bruce, could you update following in HISTORY:\n\nAllow automatic conversion to Unicode (Tatsuo)\n\nto:\n\nAllow automatic conversion to/from Unicode (Tatsuo, Eiji)\n\nEiji Tokuya <e-tokuya@Mail.Sankyo-Unyu.co.jp> has contributed a better\nconversion map for SJIS.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 15 Feb 2001 17:36:17 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "HISTORY" }, { "msg_contents": "Done.\n\n> Bruce, could you update following in HISTORY:\n> \n> Allow automatic conversion to Unicode (Tatsuo)\n> \n> to:\n> \n> Allow automatic conversion to/from Unicode (Tatsuo, Eiji)\n> \n> Eiji Tokuya <e-tokuya@Mail.Sankyo-Unyu.co.jp> has contributed a better\n> conversion map for SJIS.\n> --\n> Tatsuo Ishii\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Feb 2001 08:31:36 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY" } ]
[ { "msg_contents": "There is a link to sqlSQL for win32 here\nhttp://208.160.255.143/pgsql/pgsql.exe . Not tried it yet but it has been\nposted here before.\n\nRegards\n\nBen\n\n> -----Original Message-----\n> From: Peter T Mount [mailto:peter@retep.org.uk]\n> Sent: 06 February 2001 10:23\n> To: sourabh.dixit@wipro.com; sourabh dixit\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Postgre SQL for Windows\n> \n> \n> Quoting sourabh dixit <sourabh.dixit@wipro.com>:\n> \n> > Hello!\n> > Can anybody tell me the website from which I can download PostgreSQL\n> > for Windows95.\n> \n> I'm not sure if it will run under Win95, but I have it \n> running fine under NT \n> using Cygwin and WinIPC.\n> \n> While my linux box was down, I had to use it under NT to work \n> on the JDBC \n> driver.\n> \n> Peter\n> \n> -- \n> Peter Mount peter@retep.org.uk\n> PostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\n> RetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n>", "msg_date": "Thu, 15 Feb 2001 09:58:44 -0000", "msg_from": "\"Trewern, Ben\" <Ben.Trewern@mowlem.com>", "msg_from_op": true, "msg_subject": "RE: Postgre SQL for Windows" } ]
[ { "msg_contents": "\n\nHi,\n\nI really get into problems with this one.\n\nI've made an Irix nss library which connects to postgresql.\nBut somehow the backend doesn't get into active status.\n\nThe blocking PQconnectdb halts until timeout (if I ignore the\nerror message, the results return right after the timeout has expired).\n\nBy using the non-blocking function PQconnectStart() and using a callback\nfunction and the select() mainloop of the nss daemon, the status of\nPQconnectPoll() doesn't turn into PGRES_POLLING_OK.\n\nIn the callback routine I check for PGRES_POLLING_OK, but it never gets\nactive!\n\nI put this little piece of code in the callback function for testing,\nbut the syslog says this routine is called 6300 times before timeout.\n\nIt seems libpq is sending data to the socket as if it were ready, but it\ndoesn't get the PGRES_POLLING_OK status!\n\nnsd_callback_remove(PQsocket(pgc));\n\nif (pgs != PGRES_POLLING_OK) {\n nsd_callback_new(PQsocket(pgc), (void *)(ns_psql_ccb(file, pgc, s)),\nNSD_READ);\n return;\n}\nnsd_timeout_remove(file);\n\nDoes anybody have any idea about this problem?\n\nIt might be nss_daemon or even Irix related, but I don't see a way to\ncheck this out :(\n\nErik\n", "msg_date": "Thu, 15 Feb 2001 12:10:46 +0100", "msg_from": "Erik Hofman <erik@ehofman.com>", "msg_from_op": true, "msg_subject": "non blocking mode bug?" } ]
[ { "msg_contents": "Hi,\n\nWell, I created a new type. I want to index my new type but I\ncan't; PostgreSQL errors with: my type has no default operator class\nSo I used the following query in order to get a default operator class:\nINSERT INTO pg_opclass(opcname,opcdeftype) select 'ean13_ops',oid\n from pg_type where typname = 'ean13' ;\nNow the error from PostgreSQL when I try to create an index\non my new type is: opclass \"ean13\" not supported by\naccess method btree\nSo I think I am missing some definition functions in my\nC program in order to use the index.\nI've got operators for my type (>, <, == and so on)\nCan you help me?\nThanks in advance,\nBest regards,\nPEJAC pascal\n\n", "msg_date": "Thu, 15 Feb 2001 13:11:36 +0100 (CET)", "msg_from": "<pejac@altern.org>", "msg_from_op": true, "msg_subject": "Indexing new type ........" }, { "msg_contents": "<pejac@altern.org> writes:\n> So i use the following query in order to have default operator:\n> INSERT INTO pg_opclass(opcname,opcdeftype) select 'ean13_ops',oid\n> from pg_type where typname = 'ean13' ;\n> So the new error with postgresql when i try to create an index\n> on my new type is : opclass \"ean13\" not supported by\n> access method btree\n\nYou need some entries in pg_amop and pg_amproc as well. See the section\non \"interfacing extensions to indices\" in the Programmer's Guide.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 15 Feb 2001 10:17:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Indexing new type ........ " } ]
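For the archives: on the 7.x-era system catalogs, the "interfacing extensions to indices" recipe Tom points to amounts to one pg_amop row per btree strategy number plus one pg_amproc row for the comparison support function. A hedged sketch follows; the column names reflect the pre-7.3 catalog layout, and the operators and the `ean13_cmp` support function are assumed to exist already, so verify both against your version's Programmer's Guide before running anything like this.

```sql
-- btree needs one pg_amop row per strategy number:
--   1 = <, 2 = <=, 3 = =, 4 = >=, 5 = >
INSERT INTO pg_amop (amopid, amopclaid, amopopr, amopstrategy)
SELECT am.oid, opc.oid, opr.oid, 1
  FROM pg_am am, pg_opclass opc, pg_operator opr, pg_type t
 WHERE am.amname = 'btree'
   AND opc.opcname = 'ean13_ops'
   AND t.typname = 'ean13'
   AND opr.oprname = '<'
   AND opr.oprleft = t.oid AND opr.oprright = t.oid;
-- ...repeat with ('<=', 2), ('=', 3), ('>=', 4), ('>', 5)

-- btree also needs its support (3-way comparison) procedure, amprocnum 1:
INSERT INTO pg_amproc (amid, amopclaid, amproc, amprocnum)
SELECT am.oid, opc.oid, p.oid, 1
  FROM pg_am am, pg_opclass opc, pg_proc p
 WHERE am.amname = 'btree'
   AND opc.opcname = 'ean13_ops'
   AND p.proname = 'ean13_cmp';
```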
[ { "msg_contents": "Does anyone know why the MySQL web site is missing:\n\n\thttp://www.tcx.se/\n\nIt shows an empty page. Did they just close it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Feb 2001 09:06:54 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "MySQL web site" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> Does anyone know why the MySQL web site is missing:\n> \n> http://www.tcx.se/\n> \n> It shows an empty page. Did they just close it?\n\nTry http://www.mysql.com/.\n\n-Egon\n\n-- \nSIX Offene Systeme GmbH · Stuttgart - Berlin \nSielminger Straße 63 · D-70771 Leinfelden-Echterdingen\nFon +49 711 9909164 · Fax +49 711 9909199 http://www.six.de\nVisit us at CeBIT 2001, Hall 6, Stand F62/4\n", "msg_date": "Thu, 15 Feb 2001 15:09:18 +0100", "msg_from": "\"Egon Schmid (@vacation)\" <eschmid@php.net>", "msg_from_op": false, "msg_subject": "Re: MySQL web site" }, { "msg_contents": "Error 500: Internal Server Error.\n\nSomeone did bad.\n\nGavin\n\n", "msg_date": "Fri, 16 Feb 2001 01:19:34 +1100 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: MySQL web site" } ]
[ { "msg_contents": "Dear friends,\n\n\tI have been searching the mailing lists for a couple of days now hoping to\nfind a solution to my problem. Well I hope I find a solution here. \n\nThe problem is such:\n\t\tI have a Win32 application that uses ODBC to connect to the Postgres (ver\n6.5) on RedHat Linux dbase. I have been able to connect and perform a lot\nof SQL statements, etc. However I am unable to perform the backup of the\ndbase. I need to back up the database from within this application. It\ndoesn't matter where the backup file is. I know there is a shell command\n\"pg_dump\"; yes, it works fine from the shell, but I need to back up the dbase\nwhen connected to the Postgres database on an ODBC connection (I'm using\nthe 32-bit ODBC drivers for Postgres).\n\tI have also tried making a function in the dbase and including the\n\"pg_dump\" in that but to no avail. \n\nI would be grateful if there were any suggestions/advice/code to help me\nwith this task.\n\nthanx a lot guys,\nlloyd\n\n\n", "msg_date": "Thu, 15 Feb 2001 22:57:53", "msg_from": "Online -- Goa <info@opspl.com>", "msg_from_op": true, "msg_subject": "Backup from within Postgres" }, { "msg_contents": "Online -- Goa writes:\n\n> \t\tI have a Win32 application that uses ODBC to connect to the Postgres (ver\n> 6.5) on RedHat Linux dbase. I have been able to connect and perform a lot\n> of SQL statements, etc. However I am unable to perform the backup of the\n> dbase. I need to backup up the database from within this application.\n\nThen you need to ask the author of that application to add this\nfunctionality. If this is your own application, then you will have to\nduplicate a lot of pg_dump's code in it, which will probably be a rather\nlarge project.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 16 Feb 2001 21:43:34 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Backup from within Postgres" } ]
[ { "msg_contents": "While working on fixing the ODBC driver, I did several things I should\nnot have done during beta. First, I removed the feature that allowed\nODBC to talk to backends of version <=6.3. This should not have been\ndone during beta. I did this very quickly, with little warning to\nusers. Second, I pgindent'ed the ODBC code and fixed some of its\nalignment. Again, something that I should not have done without warning\npeople and waiting for comments.\n\nWhen my mistakes were pointed out to me, instead of correcting things, I\ntried to defend my actions. I have since backed out all my changes,\nexcept my original bug fix.\n\nApologies to Hiroshi, Marc, and others for causing this confusion, and\nnot initially accepting blame for my actions. \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 15 Feb 2001 20:14:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Apology for ODBC mistakes" } ]
[ { "msg_contents": "Hi all,\n\nI have a question about PostgreSQL's floating point\nrepresentation.\n\ncreate table t (fl1 float4, fl2 float4, fl3 float4);\ninsert into t values (1.234567, 1.23457, 1.23457);\nselect * from t;\n fl1 | fl2 | fl3\n---------+---------+---------\n 1.23457 | 1.23457 | 1.23457\n(1 row)\n\nselect * from t where fl1=fl2;\n fl1 | fl2 | fl3\n-----+-----+-----\n(0 rows)\n\nselect * from t where fl2=fl3;\n fl1 | fl2 | fl3\n---------+---------+---------\n 1.23457 | 1.23457 | 1.23457\n(1 row)\n\nOK, fl1 != fl2 and fl2 == fl3 but\n\ncopy t to stdout;\n1.23457 1.23457 1.23457\n\nThe output of pg_dump is the same. Then\nafter restoring from the pg_dump \noutput, we would get a tuple such\nthat fl1==fl2==fl3.\n\nIs it reasonable?\n\nIn addition this makes a client library like the ODBC\ndriver very unhappy with the handling of floating\npoint data. For example, once a floating point\nvalue like fl1(1.234567) was stored, MS-Access\ncouldn't update the tuple any more.\n\nIs there a way to change the precision of floating\npoint representation from clients?\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Fri, 16 Feb 2001 16:56:43 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "floating point representation" }, { "msg_contents": "Hiroshi Inoue writes:\n\n> Is there a way to change the precision of floating\n> point representation from clients ?\n\nNot currently, but I imagine it couldn't be too hard to introduce a\nparameter that changes the format string used by float*out to something\nelse.\n\nThe GNU C library now offers a %a (and %A) format that prints floating\npoint numbers in a semi-internal form that is meant to be portable. (I\nimagine this was done because of C99, but I'm speculating.) 
It might be\nuseful to offer this to preserve accurate data across dumps.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 16 Feb 2001 17:57:08 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: floating point representation" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> The GNU C library now offers a %a (and %A) format that prints floating\n> point numbers in a semi-internal form that is meant to be portable. (I\n> imagine this was done because of C99, but I'm speculating.) It might be\n> useful to offer this to preserve accurate data across dumps.\n\nHere's what I find in the C99 draft:\n\n a,A A double argument representing a (finite) floating-\n point number is converted in the style\n [-]0xh.hhhhp±d, where there is one hexadecimal digit\n ^ ± == \"+/-\" ... tgl\n (which is nonzero if the argument is a normalized\n floating-point number and is otherwise unspecified)\n before the decimal-point character (219) and the\n number of hexadecimal digits after it is equal to\n the precision; if the precision is missing and\n FLT_RADIX is a power of 2, then the precision is\n sufficient for an exact representation of the value;\n if the precision is missing and FLT_RADIX is not a\n power of 2, then the precision is sufficient to\n distinguish (220) values of type double, except that\n trailing zeros may be omitted; if the precision is\n zero and the # flag is not specified, no decimal-\n point character appears. The letters abcdef are\n used for a conversion and the letters ABCDEF for A\n conversion. The A conversion specifier produces a\n number with X and P instead of x and p. The\n exponent always contains at least one digit, and\n only as many more digits as necessary to represent\n the decimal exponent of 2. 
If the value is zero,\n the exponent is zero.\n\n A double argument representing an infinity or NaN is\n converted in the style of an f or F conversion\n specifier.\n\n ____________________\n\n 219 Binary implementations can choose the hexadecimal digit\n to the left of the decimal-point character so that\n subsequent digits align to nibble (4-bit) boundaries.\n\n 220 The precision p is sufficient to distinguish values of\n the source type if 16^(p-1) > b^n where b is FLT_RADIX and n is\n the number of base-b digits in the significand of the\n source type. A smaller p might suffice depending on the\n implementation's scheme for determining the digit to the\n left of the decimal-point character.\n\n 7.19.6.1 Library 7.19.6.1\n\n 314 Committee Draft -- August 3, 1998 WG14/N843\n\n\nSo, it looks like C99-compliant libc implementations will have this,\nbut I'd hesitate to rely on it for pg_dump purposes; it would certainly\nnot be very portable for awhile yet.\n\nPeter's idea of a SET variable to control float display format might\nnot be a bad idea, but what if anything should pg_dump do with it?\nMaybe just crank the precision up a couple digits from the current\ndefaults?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Feb 2001 14:08:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > The GNU C library now offers a %a (and %A) format that prints floating\n> > point numbers in a semi-internal form that is meant to be portable. (I\n> > imagine this was done because of C99, but I'm speculating.) 
It might be\n> > useful to offer this to preserve accurate data across dumps.\n> \n\n[snip]\n> \n> So, it looks like C99-compliant libc implementations will have this,\n> but I'd hesitate to rely on it for pg_dump purposes; it would certainly\n> not be very portable for awhile yet.\n> \n\nAgreed.\n\n> Peter's idea of a SET variable to control float display format might\n> not be a bad idea, but what if anything should pg_dump do with it?\n> Maybe just crank the precision up a couple digits from the current\n> defaults?\n>\n\nCurrently the precision of float display format is FLT_DIG(DBL_DIG).\nIt's not sufficient to distinguish float values. As Peter already suggested,\nthe quickest solution would be to change XXX_DIG constants to variables\nand provide a routine to SET the variables. Strictly speaking the precision\nneeded to distinguish float values seems OS-dependent. It seems preferable\nto have a symbol to specify the precision. \n\nRegards,\nHiroshi Inoue\n", "msg_date": "Sat, 17 Feb 2001 08:42:26 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: floating point representation " }, { "msg_contents": "I wrote:\n> \n> > -----Original Message-----\n> > From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> >\n> > Peter Eisentraut <peter_e@gmx.net> writes:\n\n[snip]\n\n> \n> > Peter's idea of a SET variable to control float display format might\n> > not be a bad idea, but what if anything should pg_dump do with it?\n> > Maybe just crank the precision up a couple digits from the current\n> > defaults?\n> >\n> \n> Currently the precision of float display format is FLT_DIG(DBL_DIG).\n> It's not sufficient to distinguish float values. As Peter already suggested,\n> the quickest solution would be to change XXX_DIG constants to variables\n> and provide a routine to SET the variables. Strictly speaking the precision\n> needed to distinguish float values seems OS-dependent. 
It seems preferable\n> to have a symbol to specify the precision.\n> \n\nThe 7.1-release seems near.\nMay I provide the followings ?\n\tSET FLOAT4_PRECISION TO ..\n\tSET FLOAT8_PRECISION TO ..\n\nOr must we postpone to fix it ?\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Mon, 19 Feb 2001 14:56:25 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: floating point representation" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> The 7.1-release seems near.\n> May I provide the followings ?\n> \tSET FLOAT4_PRECISION TO ..\n> \tSET FLOAT8_PRECISION TO ..\n\n> Or must we postpone to fix it ?\n\nThis seems a small enough change that I do not fear fixing it at this\nlate date. However, I do not like the idea of making the SET variables\nbe just number of digits precision. As long as we're going to have SET\nvariables, let's go for the full flexibility offered by sprintf: define\nthe SET variables as the sprintf format strings to use. The defaults\nwould be \"%.7g\" and \"%.17g\" (or thereabouts, not sure what number of\ndigits we are currently using). This way, someone could select the C99\n%a format if he knew that his libc supported it. Or he could force a\nparticular format like %7.3f if that's what he needed in a specific\napplication.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 01:03:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > The 7.1-release seems near.\n> > May I provide the followings ?\n> > SET FLOAT4_PRECISION TO ..\n> > SET FLOAT8_PRECISION TO ..\n> \n> > Or must we postpone to fix it ?\n> \n> This seems a small enough change that I do not fear fixing it at this\n> late date. However, I do not like the idea of making the SET variables\n> be just number of digits precision. 
As long as we're going to have SET\n> variables, let's go for the full flexibility offered by sprintf: define\n> the SET variables as the sprintf format strings to use. \n\nAgreed.\n\n> The defaults\n> would be \"%.7g\" and \"%.17g\" (or thereabouts, not sure what number of\n> digits we are currently using).\n\nWouldn't changing current '%.6g','%.15g'(on many platforms)\ncause the regression test failure ? \n\n> This way, someone could select the C99\n> %a format if he knew that his libc supported it. Or he could force a\n> particular format like %7.3f if that's what he needed in a specific\n> application.\n> \n\nRegards,\nHiroshi Inoue\n", "msg_date": "Mon, 19 Feb 2001 15:17:55 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: floating point representation" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Tom Lane wrote:\n>> The defaults\n>> would be \"%.7g\" and \"%.17g\" (or thereabouts, not sure what number of\n>> digits we are currently using).\n\n> Wouldn't changing current '%.6g','%.15g'(on many platforms)\n> cause the regression test failure ? \n\nI didn't check my numbers. If the current behavior is '%.6g','%.15g'\nthen we should stay with that as the default.\n\nHmm, on looking at the code, this might mean we need some configure\npushups to extract FLT_DIG and DBL_DIG and put those into the default\nstrings. Do we support any platforms where these are not 6 & 15?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 01:22:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > The 7.1-release seems near.\n> > May I provide the followings ?\n> > \tSET FLOAT4_PRECISION TO ..\n> > \tSET FLOAT8_PRECISION TO ..\n> \n> > Or must we postpone to fix it ?\n> \n> This seems a small enough change that I do not fear fixing it at this\n> late date. 
However, I do not like the idea of making the SET variables\n> be just number of digits precision. As long as we're going to have SET\n> variables, let's go for the full flexibility offered by sprintf: define\n> the SET variables as the sprintf format strings to use. The defaults\n> would be \"%.7g\" and \"%.17g\" (or thereabouts, not sure what number of\n> digits we are currently using). This way, someone could select the C99\n> %a format if he knew that his libc supported it. Or he could force a\n> particular format like %7.3f if that's what he needed in a specific\n> application.\n\nAdded to TODO:\n\n\t* Add SET FLOAT4_PRECISION and SET FLOAT8_PRECISION using printf args\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 10:22:57 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation" }, { "msg_contents": "Hiroshi Inoue writes:\n\n> The 7.1-release seems near.\n> May I provide the followings ?\n> \tSET FLOAT4_PRECISION TO ..\n> \tSET FLOAT8_PRECISION TO ..\n\nI'd prefer names that go with the SQL type names:\n\nREAL_FORMAT\nDOUBLE_PRECISION_FORMAT\n\nSeems a bit tacky, but a lot of work has been put in to make these names\nmore prominent.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Mon, 19 Feb 2001 16:40:38 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: floating point representation" }, { "msg_contents": "Hiroshi Inoue writes:\n\n> The 7.1-release seems near.\n> May I provide the followings ?\n> \tSET FLOAT4_PRECISION TO ..\n> \tSET FLOAT8_PRECISION TO ..\n>\n> Or must we postpone to fix it ?\n\nActually, you're going to have to recode the float*in() functions, using\nscanf, and scanf's 
formats are not always equivalent to printf's.\n\nAnd what about the geometry types that are based on floats?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Mon, 19 Feb 2001 16:51:53 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: floating point representation" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Added to TODO:\n> \t* Add SET FLOAT4_PRECISION and SET FLOAT8_PRECISION using printf args\n\nfoo_PRECISION is not the right name if these variables will contain\nprintf format strings. Perhaps foo_FORMAT? Anyone have a better idea?\n\nAfter further thought I think that we ought to standardize on %.6g and\n%.15g even if the local <float.h> offers slightly different values of\nFLT_DIG and DBL_DIG. IEEE or near-IEEE float math is so close to\nuniversal that I don't think it's worth worrying about the possibility\nthat different precisions would be more appropriate for some platforms.\nFurthermore, having cross-platform consistency of display format seems\nmore useful than not.\n\nSomething else we should perhaps think about, though we are very late\nin beta: once these variables exist, we could have the geometry regress\ntest set them to suppress a couple of digits, and eliminate most if not\nall of the need for platform-specific geometry results. 
Doing this\nwould be a no-brainer at any other time in the development cycle, but\nright now I am worried about whether we'd be able to reconfirm regress\nresults on all the currently-supported platforms before release.\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 10:58:32 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n>> Or must we postpone to fix it ?\n\n> Actually, you're going to have to recode the float*in() functions, using\n> scanf, and scanf's formats are not always equivalent to printf's.\n\nHmm... that wouldn't matter, except for this %a format. Maybe we'd\nbetter not try to make this happen in the waning days of the 7.1 cycle.\n\n> And what about the geometry types that are based on floats?\n\nThey should track the float8 format, certainly.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 11:24:33 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "> Hiroshi Inoue writes:\n> \n> > The 7.1-release seems near.\n> > May I provide the followings ?\n> > \tSET FLOAT4_PRECISION TO ..\n> > \tSET FLOAT8_PRECISION TO ..\n> \n> I'd prefer names that go with the SQL type names:\n> \n> REAL_FORMAT\n> DOUBLE_PRECISION_FORMAT\n> \n> Seems a bit tacky, but a lot of work has been put in to make these names\n> more prominent.\n\nTODO updated:\n\n\t* Add SET REAL_FORMAT and SET DOUBLE_PRECISION_FORMAT using printf args\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 11:54:48 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Actually, you're going to have to recode the float*in() functions, using\n> scanf, and scanf's formats are not always equivalent to printf's.\n\nFurther thought: one answer to this is to institute four SET variables,\ntwo for output and two for input; perhaps FLOAT8_FORMAT, FLOAT8_IN_FORMAT,\nand similarly for FLOAT4. The input formats would normally just be\n\"%lg\" and \"%g\" but could be changed for special cases (like reading\ntable dumps prepared with %a output format).\n\nHowever, it's becoming quite clear to me that this feature needs more\nthought than first appeared. Accordingly, I now vote that we not try\nto fit it into 7.1, but do it in a more considered fashion for 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 11:57:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "Tom Lane wrote:\n> \n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Actually, you're going to have to recode the float*in() functions, using\n> > scanf, and scanf's formats are not always equivalent to printf's.\n> \n> Further thought: one answer to this is to institute four SET variables,\n> two for output and two for input; perhaps FLOAT8_FORMAT, FLOAT8_IN_FORMAT,\n> and similarly for FLOAT4. 
The input formats would normally just be\n> \"%lg\" and \"%g\" but could be changed for special cases (like reading\n> table dumps prepared with %a output format).\n> \n\n From the first I don't want to change the current default\noutput format\n\t\"%.\" #FLT_DIG \"g\" (REAL)\n\t\"%.\" #DBL_DIG \"g\" (DOUBLE PRECISION)\nfor 7.1 because their changes would cause a regress\ntest failure.\n\n> However, it's becoming quite clear to me that this feature needs more\n> thought than first appeared. Accordingly, I now vote that we not try\n> to fit it into 7.1, but do it in a more considered fashion for 7.2.\n> \n\nThe simplest way to fix it quickly would be to not provide\nXXXX_IN_FORMAT and restrict XXXX_FORMAT to \"%.*g\" at present.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Tue, 20 Feb 2001 10:41:52 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: floating point representation" }, { "msg_contents": "> \n> Tom Lane wrote:\n> > \n> > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > Actually, you're going to have to recode the float*in() functions, using\n> > > scanf, and scanf's formats are not always equivalent to printf's.\n> > \n> > Further thought: one answer to this is to institute four SET variables,\n> > two for output and two for input; perhaps FLOAT8_FORMAT, FLOAT8_IN_FORMAT,\n> > and similarly for FLOAT4. The input formats would normally just be\n> > \"%lg\" and \"%g\" but could be changed for special cases (like reading\n> > table dumps prepared with %a output format).\n> > \n> \n> >From the first I don't want to change the current default\n> output format\n> \t\"%.\" #FLT_DIG \"g\" (REAL)\n> \t\"%.\" #DBL_DIG \"g\" (DOUBLE PRECISION)\n> for 7.1 because their changes would cause a regress\n> test failure.\n\nBut we run regress with the proper setting, right? 
How does giving\npeople the ability to change the defaults affect the regression tests?\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 21:02:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representationu" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> >\n> > Tom Lane wrote:\n> > >\n> > > Peter Eisentraut <peter_e@gmx.net> writes:\n> > > > Actually, you're going to have to recode the float*in() functions, using\n> > > > scanf, and scanf's formats are not always equivalent to printf's.\n> > >\n> > > Further thought: one answer to this is to institute four SET variables,\n> > > two for output and two for input; perhaps FLOAT8_FORMAT, FLOAT8_IN_FORMAT,\n> > > and similarly for FLOAT4. The input formats would normally just be\n> > > \"%lg\" and \"%g\" but could be changed for special cases (like reading\n> > > table dumps prepared with %a output format).\n> > >\n> >\n> > >From the first I don't want to change the current default\n> > output format\n> > \"%.\" #FLT_DIG \"g\" (REAL)\n> > \"%.\" #DBL_DIG \"g\" (DOUBLE PRECISION)\n> > for 7.1 because their changes would cause a regress\n> > test failure.\n> \n> But we run regress with the proper setting, right?> How does giving\n> people the ability to change the defaults affect the regression tests?\n> \n\nHmm I'm afraid I'm misunderstanding your point.\nIf the default float4(8) output format would be the\nsame as current output format then we would have no\nproblem with the current regress test. 
But there\ncould be a choice to change default output format\nto have a large enough precision to distinguish\nfloat4(8).\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Tue, 20 Feb 2001 11:47:58 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: floating point representationu" }, { "msg_contents": "> > > >From the first I don't want to change the current default\n> > > output format\n> > > \"%.\" #FLT_DIG \"g\" (REAL)\n> > > \"%.\" #DBL_DIG \"g\" (DOUBLE PRECISION)\n> > > for 7.1 because their changes would cause a regress\n> > > test failure.\n> > \n> > But we run regress with the proper setting, right?> How does giving\n> > people the ability to change the defaults affect the regression tests?\n> > \n> \n> Hmm I'm afraid I'm misunderstanding your point.\n> If the default float4(8) output format would be the\n> same as current output format then we would have no\n> problem with the current regress test. But there\n> could be a choice to change default output format\n> to have a large enough precision to distinguish\n> float4(8).\n\nBut are they going to change the default to run the regression tests? \nHow do they change it? in ~/.psqlrc?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 23:07:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representationu" }, { "msg_contents": "> Hmm, on looking at the code, this might mean we need some configure\n> pushups to extract FLT_DIG and DBL_DIG and put those into the default\n> strings. Do we support any platforms where these are not 6 & 15?\n\nIn principle, yes. VAX does not use IEEE math (by default anyway) and\nhas less range and more precision. 
Most machines nowadays use the IEEE\ndefinitions, but having at least one counterexample will help keep us\nhonest ;)\n\n - Thomas\n", "msg_date": "Tue, 20 Feb 2001 04:19:10 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: floating point representation" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > > > >From the first I don't want to change the current default\n> > > > output format\n> > > > \"%.\" #FLT_DIG \"g\" (REAL)\n> > > > \"%.\" #DBL_DIG \"g\" (DOUBLE PRECISION)\n> > > > for 7.1 because their changes would cause a regress\n> > > > test failure.\n> > >\n> > > But we run regress with the proper setting, right?> How does giving\n> > > people the ability to change the defaults affect the regression tests?\n> > >\n> >\n> > Hmm I'm afraid I'm misunderstanding your point.\n> > If the default float4(8) output format would be the\n> > same as current output format then we would have no\n> > problem with the current regress test. But there\n> > could be a choice to change default output format\n> > to have a large enough precision to distinguish\n> > float4(8).\n> \n> But are they going to change the default to run the regression tests?\n> How do they change it? in ~/.psqlrc?\n> \n\nProbably there's a misunderstanding between you and I\nbut unfortunately I don't understand what it is in my\npoor English.\nAnyway in my plan (current format as default) there would\nbe no problem with regress test at least for 7.1.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Tue, 20 Feb 2001 13:27:56 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": true, "msg_subject": "Re: floating point representationu" }, { "msg_contents": "> > > same as current output format then we would have no\n> > > problem with the current regress test. 
But there\n> > > could be a choice to change default output format\n> > > to have a large enough precision to distinguish\n> > > float4(8).\n> > \n> > But are they going to change the default to run the regression tests?\n> > How do they change it? in ~/.psqlrc?\n> > \n> \n> Probably there's a misunderstanding between you and I\n> but unfortunately I don't understand what it is in my\n> poor English.\n> Anyway in my plan (current format as default) there would\n> be no problem with regress test at least for 7.1.\n\nOh, I see. I can't see any way we can make this change in 7.1. It has\nto be done in 7.2. You are right, changing it at this late date would\nbe a regression disaster.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 23:28:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representationu" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> But are they going to change the default to run the regression tests? \n\nYou're barking up the wrong tree, Bruce. 
Hiroshi specifically said\nthat he does *not* want to change the default behavior.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 23:36:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representationu " }, { "msg_contents": "> ...right now I am worried about whether we'd be able to reconfirm regress\n> results on all the currently-supported platforms before release.\n\nThis would be an excellent topic for a full development cycle ;)\n\n - Thomas\n", "msg_date": "Tue, 20 Feb 2001 04:38:44 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: floating point representation" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > But are they going to change the default to run the regression tests? \n> \n> You're barking up the wrong tree, Bruce. Hiroshi specifically said\n> that he does *not* want to change the default behavior.\n\nOK, I am confused. Can someone straighten me out?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 23:39:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representationu" }, { "msg_contents": "> > ...right now I am worried about whether we'd be able to reconfirm regress\n> > results on all the currently-supported platforms before release.\n> \n> This would be an excellent topic for a full development cycle ;)\n\nOh, I see. Never mind. I was lost.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 23:40:08 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation" }, { "msg_contents": "Tom Lane writes:\n > Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n > > Tom Lane wrote:\n > >> The defaults\n > >> would be \"%.7g\" and \"%.17g\" (or thereabouts, not sure what number of\n > >> digits we are currently using).\n > \n > > Wouldn't changing current '%.6g','%.15g'(on many platforms)\n > > cause the regression test failure ? \n > \n > I didn't check my numbers. If the current behavior is '%.6g','%.15g'\n > then we should stay with that as the default.\n > \n > Hmm, on looking at the code, this might mean we need some configure\n > pushups to extract FLT_DIG and DBL_DIG and put those into the default\n > strings. Do we support any platforms where these are not 6 & 15?\n\nPlease remind me what we are trying to do. 6 & 15 are values to\nsuppress trailing digits at the end of a number in a standard printf.\nFor example, 0.1 prints as 0.10000000000000001 at %.17g but as 0.1 at\n%.16g. However those shorter formats are less precise. There are\nseveral other doubles that will also print the same result. A round\ntrip of printf/scanf will not generally preserve the number.\n\nPrinting for display purposes may not be adequate for dumping with a\nview to restoring. Are we talking about display or dump?\n\nThe ideal is to print just enough digits to be able to read the number\nback. There should be no redundant digits at the end. Printf is\nunable to do this by itself. The reason is that the correct number of\ndecimal digits for a %.*g is a function of the number being printed.\n\nThere are algorithms to do the right thing but they can be expensive.\nI play with some in a program at the URI below. There is a minor typo\nin the usage and a missing (optional) file. I'll correct those when\nthe site allows uploads again. 
The files' contents are currently\navailable at http://petef.8k.com/.\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\npete.forman@westerngeco.com -./\\.- opinion of Schlumberger, Baker\nhttp://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n", "msg_date": "Tue, 20 Feb 2001 09:40:30 +0000", "msg_from": "Pete Forman <pete.forman@westerngeco.com>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "Pete Forman <pete.forman@westerngeco.com> writes:\n> Please remind me what we are trying to do.\n\nThe real point is that we need to serve several different purposes\nthat aren't necessarily fully compatible.\n\nThe existing default of FLT_DIG or DBL_DIG digits seems like a good\ngeneral-purpose policy, but it doesn't meet all needs. For pg_dump,\nwe clearly would like to promise exact dump and restore. On the\nother side, the geometry regress tests would like to suppress a few\nof the noisier low-order digits. And we frequently see questions from\nusers about how they can display fewer digits than the system wants to\ngive them --- or, more generally, format the output in some special\nform.\n\nI think the idea of making a user-settable format string is a good one.\nI'm just afraid of the idea of trying to shoehorn in a solution at the\nlast minute; if we do, we may find it's not quite right and then have\na backwards-compatibility problem with fixing it. Besides, we are in\n\"no new features\" mode during beta. 
I think it should wait for 7.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 15:48:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "> I think the idea of making a user-settable format string is a good one.\n> I'm just afraid of the idea of trying to shoehorn in a solution at the\n> last minute; if we do, we may find it's not quite right and then have\n> a backwards-compatibility problem with fixing it. Besides, we are in\n> \"no new features\" mode during beta. I think it should wait for 7.2.\n\nAgreed. I have the items on the TODO list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Feb 2001 15:58:29 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tom Lane writes:\n>> And we frequently see questions from users about how they can display\n>> fewer digits than the system wants to give them --- or, more\n>> generally, format the output in some special form.\n\n> to_char() should serve those people.\n\nOnly if they're willing to go through and change all their queries.\nThe geometry regress tests seem a good counterexample: one SET at the\ntop versus a lot of rewriting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 16:26:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "Tom Lane writes:\n\n> And we frequently see questions from users about how they can display\n> fewer digits than the system wants to give them --- or, more\n> generally, format the output in some special form.\n\nto_char() should 
serve those people.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 20 Feb 2001 22:29:19 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "At 15:48 20/02/01 -0500, Tom Lane wrote:\n>\n>The existing default of FLT_DIG or DBL_DIG digits seems like a good\n>general-purpose policy, but it doesn't meet all needs. For pg_dump,\n>we clearly would like to promise exact dump and restore. On the\n>other side, the geometry regress tests would like to suppress a few\n>of the noisier low-order digits. And we frequently see questions from\n>users about how they can display fewer digits than the system wants to\n>give them --- or, more generally, format the output in some special\n>form.\n>\n\nIf I could add another 'nice-to-have' in here: the ability on a\nper-attribute basis to specify the preferred output format. This could\napply to real, date, integer etc etc. Clearly not a 7.1 feature.\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 21 Feb 2001 10:02:06 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "At 22:29 20/02/01 +0100, Peter Eisentraut wrote:\n>Tom Lane writes:\n>> And we frequently see questions from users about how they can display\n>> fewer digits than the system wants to give them --- or, more\n>> generally, format the output in some special form.\n>\n>to_char() should serve those people.\n>\n\nThis is not a good solution if what you want (as a user) is consistency of\noutput no matter who retrieves the data; people should not have to wrap\nevery SELECT field in to_char to get the precision/format they want.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Wed, 21 Feb 2001 10:07:59 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: floating point representation " }, { "msg_contents": "Philip Warner writes:\n\n> At 22:29 20/02/01 +0100, Peter Eisentraut wrote:\n> >Tom Lane writes:\n> >> And we frequently see questions from users about how they can display\n> >> fewer digits than the system wants to give them --- or, more\n> >> generally, format the output in some special form.\n> >\n> >to_char() should serve those people.\n> >\n>\n> This is not a good solution if what you want (as a user) is consistency of\n> output no matter who retrieves the data; people should not have to wrap\n> every SELECT field in to_char to get the precision/format they want.\n\nViews should serve those people.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 21 Feb 2001 16:59:39 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: floating point representation " } ]
[ { "msg_contents": "\nHi,\n\nI'm using the Logical Volume Manager with the 2.4.1 (sorry for forgetting\nto specify the exact version). What I am now suspecting is memory. Once I\ndowngraded to 128MB (from 384MB), this night the error did not show up.\n\nWhat showed up was the \"Error index_formtuple: data takes 21268 bytes: too\nbig\". If anyone has any ideas on this, please share them.\n\nThanks for the interest\nRobert\n\n\n\n\n \n Denis Pugnere \n <Denis.Pugnere@ig To: Robert.Farrugia@go.com.mt \n h.cnrs.fr> cc: \n Subject: Re: Kernel panic error \n 16/02/2001 09:50 \n \n \n\n\n\n\nYesterday, the 15 February 2001 at 10:12, Robert.Farrugia@go.com.mt wrote :\n\n | Hi,\n |\n | I have been using Postgres 7.0.3 for the last few weeks. I also use LVM\non\n | the 2.4 kernel. I have very large tables (one of which is over 2GB).\n\nare you using RAID ?\nthe 2.4 kernel seems to have some panics in specific cases.\ntry 2.4.1\n\n |\n | My problem is that lately (the last week), when doing maintenance\n(normally\n | at night using a cronjob), the follwoing error has been repeatly given:\n |\n | Incorrect number of segments after building list\n | nr_segments is 8\n | counted segments is 2\n | Flag 1 0\n | Segement 0xc5fa74a0, blocks 8, addr 0x4007ffff\n | Kernel panic: Thats all folks. Too dangerous to continue.\n |\n | I have noticed yesterday that postgres gave the error below before the\n | kernel panic. 
This error was given when vacuuming one of the smallest\n | tables in the database.\n | Error index_formtuple: data takes 21268 bytes: too big\n |\n | Anyone has any ideas what is happening ?\n |\n | Thanks\n | Robert\n |\n |\n\nDenis Pugnère\n---\nDenis.Pugnere@igh.cnrs.fr | IGH/CNRS UPR 1142, 141 Rue de la Cardonille\nTel : +33 (0)4 9961.9909 | 34396 Montpellier Cedex 5, France\nFax : +33 (0)4 9961.9901 | http://www.igh.cnrs.fr\n\n\n\n\n\n", "msg_date": "Fri, 16 Feb 2001 09:00:49 +0100", "msg_from": "Robert.Farrugia@go.com.mt", "msg_from_op": true, "msg_subject": "Re: Kernel panic error" }, { "msg_contents": "Robert.Farrugia@go.com.mt writes:\n> What showed up was the \"Error index_formtuple: data takes 21268 bytes: too\n> big\". If anyone has any ideas on this, please share them.\n\nThat says that you have a value too wide to fit in an index entry. If\nit was from data that fit before, then I think this must indicate that\ndata on-disk has gotten corrupted, causing some datum to appear longer\nthan it was --- and then when vacuum tries to rebuild the index entry\nfor that row, you get a failure.\n\nIn any case I'd say this is a consequence of your kernel-level problem.\nIt cannot be the cause.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Feb 2001 11:07:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Kernel panic error " }, { "msg_contents": "Robert.Farrugia@go.com.mt writes:\n\n> What showed up was the \"Error index_formtuple: data takes 21268 bytes: too\n> big\". 
If anyone has any ideas on this, please share them.\n\nIt means your data is too big to fit into an index.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 16 Feb 2001 17:12:00 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: Kernel panic error" }, { "msg_contents": "> Robert.Farrugia@go.com.mt writes:\n> \n> > What showed up was the \"Error index_formtuple: data takes 21268 bytes: too\n> > big\". If anyone has any ideas on this, please share them.\n> \n> It means your data is too big to fit into an index.\n\nGood case in point. Here is a typical email. Here is a difficult/rare\nproblem that should be appearing on the mailing lists. Those easy\nquestions are pretty much gone, as far as I can tell.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Feb 2001 11:17:40 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] Re: Kernel panic error" } ]
[ { "msg_contents": "Hello,\n\nThere is a bug in the binary distribution for S.u.S.E. 7.0; in the script \n\"/etc/rc.d/postgres\", in the \"start\" clause.\n\nThe -D option of the postmaster daemon is used to declare where is the data \ndirectory.\n\nYou do it like this:\n\n\tpostgres -D$datadir\n\nbut you must do it like this:\n\n\tpostgres -D $datadir\n\nThere must be a space among \"-D\" and \"$datadir\".\n\n\nDavid Lizano\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nDavid Lizano - Director �rea t�cnica\ncorreo-e: david.lizano@izanet.com\n\nI Z A N E T - Servicios integrales de internet.\nweb: http://www.izanet.com/\nDirecci�n: C/ Checa, 57-59, 3� D - 50.007 Zaragoza (Espa�a)\nTel�fono: +34 976 25 80 23 Fax: +34 976 25 80 24\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\n", "msg_date": "Fri, 16 Feb 2001 10:59:01 +0100", "msg_from": "David Lizano <david.lizano@izanet.com>", "msg_from_op": true, "msg_subject": "A bug in binary distribution for S.u.S.E. 7.0" } ]
[ { "msg_contents": "Hello,\n\nThere is a bug in the binary distribution for S.u.S.E. 7.0; in the script \n\"/etc/rc.d/postgres\", in the \"start\" clause.\n\nThe -D option of the postmaster daemon is used to declare where the data \ndirectory is.\n\nYou do it like this:\n\n\tpostgres -D$datadir\n\nbut you must do it like this:\n\n\tpostgres -D $datadir\n\nThere must be a space between \"-D\" and \"$datadir\".\n\n\nDavid Lizano\n\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nDavid Lizano - Director Área técnica\ncorreo-e: david.lizano@izanet.com\n\nI Z A N E T - Servicios integrales de internet.\nweb: http://www.izanet.com/\nDirección: C/ Checa, 57-59, 3º D - 50.007 Zaragoza (España)\nTeléfono: +34 976 25 80 23 Fax: +34 976 25 80 24\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\n\n", "msg_date": "Fri, 16 Feb 2001 10:59:01 +0100", "msg_from": "David Lizano <david.lizano@izanet.com>", "msg_from_op": true, "msg_subject": "A bug in binary distribution for S.u.S.E. 7.0" } ]
[ { "msg_contents": "Just a quick question, but how much of SQL92 Entry Level does 7.1 support, and \nwhat parts haven't we got (yet)?\n\nI need to know for a couple of internal bits in the JDBC driver...\n\nPeter\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n", "msg_date": "Fri, 16 Feb 2001 11:41:35 -0500 (EST)", "msg_from": "Peter T Mount <peter@retep.org.uk>", "msg_from_op": true, "msg_subject": "Quick question about 7.1 & SQL92 Entry Level" }, { "msg_contents": "Peter T Mount <peter@retep.org.uk> writes:\n> Just a quick question, but how much of SQL92 Entry Level does 7.1\n> support, and what parts haven't we got (yet)?\n\nI don't think anyone's made a careful list --- making one is on my\npersonal to-do list for the near future, but not yet at the top.\n\nSchemas are one big item I know we are missing, and the privileges\nmechanism needs a revamp as well.\n\nPeter Eisentraut made a list a year ago (see attached) but that was\nas of 6.5, and I'm not sure how careful he was.\n\n\t\t\tregards, tom lane\n\n\n------- Forwarded Message\n\nDate: Sat, 19 Feb 2000 15:12:24 +0100 (CET)\nFrom: Peter Eisentraut <peter_e@gmx.net>\nTo: Thomas Lockhart <lockhart@alumni.caltech.edu>\ncc: PostgreSQL Development <pgsql-hackers@postgreSQL.org>\nSubject: [HACKERS] Re: SQL compliance\n\nOn 2000-02-17, Thomas Lockhart mentioned:\n\n> I've since seen the article in the latest issue of PCWeek. The article\n> was not at all clear on the *specific* features which would disqualify\n> Postgres from having SQL92 entry level compliance\n\nI dug through the standard to come up with a list. I probably missed some\nthings, but they would be more of a lexical nature. I think I covered all\nlanguage constructs (which is what people look at anyway). 
Some of these\nthings I never used, so I merely tested them by looking at the current\ndocumentation and/or entering a simple example query. Also, this list\ndoesn't care whether an implemented feature contains bugs that would\nactually disqualify it from complete compliance.\n\n\n* TIME and TIMESTAMP WITH TIMEZONE missing [6.1]\n\n* Things such as SELECT MAX(ALL x) FROM y; don't work. [6.5]\n{This seems to be an easy grammar fix.}\n\n* LIKE with ESCAPE clause missing [8.5]\n{Is on TODO.}\n\n* SOME / ANY doesn't seem to exist [8.7]\n\n* Grant privileges have several deficiencies [10.3, 11.36]\n\n* Schemas [11.1, 11.2]\n\n* CREATE VIEW name (x, y, z) doesn't work [11.19]\n\n* There's a WITH CHECK OPTION clause for CREATE VIEW [11.19]\n\n* no OPEN statement [13.2]\n\n* FETCH syntax has a few issues [13.3]\n\n* SELECT x INTO a, b, c table [13.5]\n\n* DELETE WHERE CURRENT OF [13.6]\n\n* INSERT INTO table DEFAULT VALUES [13.8]\n{Looks like a grammar fix as well.}\n\n* UPDATE WHERE CURRENT OF [13.9]\n\n* no SQLSTATE, SQLCODE [22.1, 22.2]\n{Not sure about that one, since the sections don't contain leveling\ninformation.}\n\n* default transaction isolation level is SERIALIZABLE\n{Why isn't ours?}\n\n* no autocommit in SQL\n\n* modules? [12]\n\n* Some type conversion problems. For example a DECIMAL field should not\ndump out as NUMERIC, and a FLOAT(x) field should be stored as such.\n\n[* Haven't looked at Embedded SQL.]\n\n\nThat's it. :)\n\n-- \nPeter Eisentraut Sernanders väg 10:115\npeter_e@gmx.net 75262 Uppsala\nhttp://yi.org/peter-e/ Sweden\n\n\n\n************\n\n------- End of Forwarded Message\n\n", "msg_date": "Fri, 16 Feb 2001 12:47:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Quick question about 7.1 & SQL92 Entry Level " } ]
[ { "msg_contents": "Hello!!\n\nI need demostrate that PostgreSQL is a great RDBMS for my undergraduate\nproject, because this, Does somebody has a bechmark (or similar\ndocument) between Postgres and others DB (commercial DB's, principally)?\n\nThanks in advance!!\n\n\n\n", "msg_date": "Fri, 16 Feb 2001 12:14:24 -0500", "msg_from": "Jreniz <jreniz@tutopia.com>", "msg_from_op": true, "msg_subject": "Postgres Benchmark" }, { "msg_contents": "Hi,\n\nThey're a little dated, but Great Bridge funded some benchmarks last \nsummer putting Postgres 7.0 against Unnamed Proprietary Database 1 \n(version 8i) and Unnamed Proprietary Database 2 (version 7.0, available \nfor NT platform only). See \nhttp://www.greatbridge.com/about/press.php?content_id=4\n\nThey ran the AS3AP and TPC-C benchmarks using an off-the-shelf \ncommercial product called Benchmark Factory, now part of Quest \nSoftware. See http://www.quest.com/benchmark_factory/\n\nBest regards,\nNed Lilly\n\n\nJreniz wrote:\n\n> Hello!!\n> \n> I need demostrate that PostgreSQL is a great RDBMS for my undergraduate\n> project, because this, Does somebody has a bechmark (or similar\n> document) between Postgres and others DB (commercial DB's, principally)?\n> \n> Thanks in advance!!\n> \n> \n> \n> \n\n-- \n----------------------------------------------------\nNed Lilly e: ned@greatbridge.com\nVice President w: www.greatbridge.com\nEvangelism / Hacker Relations v: 757.233.5523\nGreat Bridge, LLC f: 757.233.5555\n\n", "msg_date": "Fri, 16 Feb 2001 15:47:48 -0500", "msg_from": "Ned Lilly <ned@greatbridge.com>", "msg_from_op": false, "msg_subject": "Re: Postgres Benchmark" } ]
[ { "msg_contents": "\tI had postgres start blocking all it's UPDATEs on a production\ndatabase today, when an engineer added the following two tables,\namong other things. We've had to restore from backup, and the\ninteresting thing is that when we re-add these tables, things\nbreak again.\n\t\nVersion: PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.95.2\n\n(we were planning on going to beta4 in another week, and have done\nsome testing. This problem doesn't seem to occur on the engineer's\nmachine, which is already at beta4)\n\n\tMy first thought was the index on the boolean field in the time_cards,\nwhich I could have sworn has caused me problems before. Anyone else see\nanything wrong?\n\n\n-- time_tasks.schema\ncreate table time_tasks (\n name char(2) primary key,\n title text,\n description text\n);\n\ninsert into time_tasks (name, title) values ('CO', 'Communication');\ninsert into time_tasks (name, title) values ('DB', 'Debug');\n.\n.\n.\n\n\n\n-- time_cards.schema\ncreate table time_cards (\n id serial,\n open bool not null default 't',\n accounted bool not null default 'f',\n\n uid int4 not null,\n task char(2) not null,\n project int4,\n component text,\n\n time_start int4,\n time_stop int4,\n total_minutes int4,\n\n notes text\n);\ncreate index time_cards_open_pkey on time_cards (open);\ncreate index time_cards_uid_pkey on time_cards (uid);\n\n-- \nAdam Haberlach |A cat spends her life conflicted between a\nadam@newsnipple.com |deep, passionate, and profound desire for\nhttp://www.newsnipple.com |fish and an equally deep, passionate, and\n'88 EX500 '00 >^< |profound desire to avoid getting wet.\n", "msg_date": "Fri, 16 Feb 2001 09:34:13 -0800", "msg_from": "Adam Haberlach <adam@newsnipple.com>", "msg_from_op": true, "msg_subject": "Something smells in this schema..." 
}, { "msg_contents": "Adam Haberlach <adam@newsnipple.com> writes:\n> \tI had postgres start blocking all it's UPDATEs on a production\n> database today, when an engineer added the following two tables,\n> among other things. We've had to restore from backup, and the\n> interesting thing is that when we re-add these tables, things\n> break again.\n\n\"blocking\"? Please define symptoms more precisely.\n\n> \tMy first thought was the index on the boolean field in the time_cards,\n> which I could have sworn has caused me problems before. Anyone else see\n> anything wrong?\n\nPre-7.1 versions do have problems with large numbers of equal keys in\na btree index, which is more or less the definition of an index on\nboolean. I'm dubious that such an index is of any value anyway ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Feb 2001 13:02:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Something smells in this schema... " }, { "msg_contents": "On Fri, Feb 16, 2001 at 01:02:24PM -0500, Tom Lane wrote:\n> Adam Haberlach <adam@newsnipple.com> writes:\n> > \tI had postgres start blocking all it's UPDATEs on a production\n> > database today, when an engineer added the following two tables,\n> > among other things. We've had to restore from backup, and the\n> > interesting thing is that when we re-add these tables, things\n> > break again.\n> \n> \"blocking\"? Please define symptoms more precisely.\n\n\tThe postgres process stalls. According to ps, it's it is attempting\nan UPDATE. I think it times out eventually (I was in disaster-recovery\nmode this morning, and not always waiting around for these things. :)\n\n> > \tMy first thought was the index on the boolean field in the time_cards,\n> > which I could have sworn has caused me problems before. 
Anyone else see\n> > anything wrong?\n> \n> Pre-7.1 versions do have problems with large numbers of equal keys in\n> a btree index, which is more or less the definition of an index on\n> boolean. I'm dubious that such an index is of any value anyway ...\n\n\tOk--I'll check this. Thanks for the incredibly fast response--my\nfavorite thing about PostgreSQL is the fact that I can post to a mailing\nlist and get clued answers from real developers, usually within hours\nif not minutes.\n\n-- \nAdam Haberlach |A cat spends her life conflicted between a\nadam@newsnipple.com |deep, passionate, and profound desire for\nhttp://www.newsnipple.com |fish and an equally deep, passionate, and\n'88 EX500 '00 >^< |profound desire to avoid getting wet.\n", "msg_date": "Fri, 16 Feb 2001 10:14:34 -0800", "msg_from": "Adam Haberlach <adam@newsnipple.com>", "msg_from_op": true, "msg_subject": "Re: Something smells in this schema..." } ]
[ { "msg_contents": "\nthings appear to have quieted off nicely ... so would like to put out a\nBeta5 for testing ...\n\nTom, I saw/read your proposal about the JOIN syntax, but haven't seen any\ncommit on it yet, nor any arguments against the changes ... so just\nwondering where those stand right now?\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n\n\n", "msg_date": "Fri, 16 Feb 2001 13:39:08 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "beta5 ... " }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> things appear to have quieted off nicely ... so would like to put out a\n> Beta5 for testing ...\n\n> Tom, I saw/read your proposal about the JOIN syntax, but haven't seen any\n> commit on it yet, nor any arguments against the changes ... so just\n> wondering where those stand right now?\n\nYou must have been looking the other way ;-) ... it's committed.\n\nWhat I'm currently thinking about is the discussion from last week where\nVadim reported that he could get \"stuck spinlock\" errors during btree\nindex crash recovery, because the backend fixing the index might hold\ndisk-buffer locks longer than the ~70 second timeout for spinlocks\n(see \"Btree runtime recovery. Stuck spins\" thread on 2/8 and 2/9).\n\nVadim says (and I agree) that we really ought to implement a new\nlightweight lock manager that would fall between spinlocks and regular\nlocks in terms of overhead and functionality. But it's not reasonable\nto try to do that for 7.1 at this late date. So I was trying to pick a\nstopgap solution for 7.1. Unfortunately Vadim's off to Siberia and I\ncan't consult with him...\n\nI'm currently thinking of modifying the buffer manager so that disk\nbuffer spinlocks use an alternate version of s_lock() with no timeout,\nand perhaps longer sleeps (no zero-delay selects anyway). 
This was one\nof the ideas we kicked around last week, and I think it's about the best\nwe can do for now. Comments anyone?\n\nOther than that, I have nothing to hold up a beta5. Anyone else?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Feb 2001 16:17:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "I am GO. SET DIAGNOSTICS is my only open item left.\n\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > things appear to have quieted off nicely ... so would like to put out a\n> > Beta5 for testing ...\n> \n> > Tom, I saw/read your proposal about the JOIN syntax, but haven't seen any\n> > commit on it yet, nor any arguments against the changes ... so just\n> > wondering where those stand right now?\n> \n> You must have been looking the other way ;-) ... it's committed.\n> \n> What I'm currently thinking about is the discussion from last week where\n> Vadim reported that he could get \"stuck spinlock\" errors during btree\n> index crash recovery, because the backend fixing the index might hold\n> disk-buffer locks longer than the ~70 second timeout for spinlocks\n> (see \"Btree runtime recovery. Stuck spins\" thread on 2/8 and 2/9).\n> \n> Vadim says (and I agree) that we really ought to implement a new\n> lightweight lock manager that would fall between spinlocks and regular\n> locks in terms of overhead and functionality. But it's not reasonable\n> to try to do that for 7.1 at this late date. So I was trying to pick a\n> stopgap solution for 7.1. Unfortunately Vadim's off to Siberia and I\n> can't consult with him...\n> \n> I'm currently thinking of modifying the buffer manager so that disk\n> buffer spinlocks use an alternate version of s_lock() with no timeout,\n> and perhaps longer sleeps (no zero-delay selects anyway). This was one\n> of the ideas we kicked around last week, and I think it's about the best\n> we can do for now. 
Comments anyone?\n> \n> Other than that, I have nothing to hold up a beta5. Anyone else?\n> \n> \t\t\tregards, tom lane\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Feb 2001 21:18:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "At 04:17 PM 2/16/01 -0500, Tom Lane wrote:\n>\n>Vadim says (and I agree) that we really ought to implement a new\n>lightweight lock manager that would fall between spinlocks and regular\n>locks in terms of overhead and functionality. But it's not reasonable\n\nWill there be an arbitrary user locking feature? E.g. lock on arbitrary\ntext string. That would be great :). \n\nBTW, is 7.1 going to be a bit slower than 7.0? Or just Beta 5? Just\ncurious. Don't mind waiting for 7.2 for the speed-up if necessary.\n\nCheerio,\nLink.\n\n\n", "msg_date": "Sat, 17 Feb 2001 11:43:31 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "> BTW, is 7.1 going to be a bit slower than 7.0? Or just Beta 5? Just\n> curious. Don't mind waiting for 7.2 for the speed-up if necessary.\n> \n\nWe expect 7.1 to be faster than 7.0.X. We may have a small problem that\nwe may have to address. Not sure yet.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Feb 2001 22:47:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: beta5 ..." 
}, { "msg_contents": "On Sat, 17 Feb 2001, Lincoln Yeoh wrote:\n\n> At 04:17 PM 2/16/01 -0500, Tom Lane wrote:\n> >\n> >Vadim says (and I agree) that we really ought to implement a new\n> >lightweight lock manager that would fall between spinlocks and regular\n> >locks in terms of overhead and functionality. But it's not reasonable\n>\n> Will there be an arbitrary user locking feature? E.g. lock on arbitrary\n> text string. That would be great :).\n>\n> BTW, is 7.1 going to be a bit slower than 7.0? Or just Beta 5? Just\n> curious. Don't mind waiting for 7.2 for the speed-up if necessary.\n\nIt is possible that it will be ... the question is whether the slow down\nis unbearable or not, as to whether we'll let it hold things up or not ...\n\n From reading one of Tom's email's, it looks like the changes to 'fix' the\nslowdown are drastic/large enough that it might not be safe (or desirable)\nto fix it at this late of a stage in beta ...\n\nDepending on what is involved, we might put out a v7.1 for March 1st, so\nthat ppl can feel confident about using the various features, but have a\nv7.1.1 that follows relatively closely on its heels that addresses the\nperformance problem ...\n\n\n\n\n", "msg_date": "Sat, 17 Feb 2001 00:56:24 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "> Other than that, I have nothing to hold up a beta5. Anyone else?\n> \n> \t\t\tregards, tom lane\n\nI see a small problem with the regression test. If PL/pgSQL has been\nalready to template1, the regression scripts will fail because\ncreatelang fails. Probably we should create the regression database\nusing template0?\n--\nTatsuo Ishii\n", "msg_date": "Sat, 17 Feb 2001 14:18:42 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: beta5 ... 
" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> Probably we should create the regression database\n> using template0?\n\nSeems like a good idea.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 00:27:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "> >\n> > BTW, is 7.1 going to be a bit slower than 7.0? Or just Beta 5? Just\n> > curious. Don't mind waiting for 7.2 for the speed-up if necessary.\n> \n> It is possible that it will be ... the question is whether the slow down\n> is unbearable or not, as to whether we'll let it hold things up or not ...\n> \n> >From reading one of Tom's email's, it looks like the changes to 'fix' the\n> slowdown are drastic/large enough that it might not be safe (or desirable)\n> to fix it at this late of a stage in beta ...\n> \n> Depending on what is involved, we might put out a v7.1 for March 1st, so\n> that ppl can feel confident about using the various features, but have a\n> v7.1.1 that follows relatively closely on its heels that addresses the\n> performance problem ...\n\nThe easy fix is to just set the delay to zero. Looks like that will fix\nmost of the problem. The near-committers thing may indeed be overkill,\nand certainly is not worth holding beta.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 00:37:58 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: beta5 ..." }, { "msg_contents": "On Sat, 17 Feb 2001, Bruce Momjian wrote:\n\n> > >\n> > > BTW, is 7.1 going to be a bit slower than 7.0? Or just Beta 5? Just\n> > > curious. Don't mind waiting for 7.2 for the speed-up if necessary.\n> >\n> > It is possible that it will be ... 
the question is whether the slow down\n> > is unbearable or not, as to whether we'll let it hold things up or not ...\n> >\n> > >From reading one of Tom's email's, it looks like the changes to 'fix' the\n> > slowdown are drastic/large enough that it might not be safe (or desirable)\n> > to fix it at this late of a stage in beta ...\n> >\n> > Depending on what is involved, we might put out a v7.1 for March 1st, so\n> > that ppl can feel confident about using the various features, but have a\n> > v7.1.1 that follows relatively closely on its heels that addresses the\n> > performance problem ...\n>\n> The easy fix is to just set the delay to zero. Looks like that will fix\n> most of the problem.\n\nExcept that Vadim had a reason for setting it to 5, and I'm loath to see\nthat changed unless someone actaully understands the ramifications other\nthen increasing performance ...\n\n> The near-committers thing may indeed be overkill, and certainly is not\n> worth holding beta.\n\nWhat is this 'near-committers thing'??\n\n\n", "msg_date": "Sat, 17 Feb 2001 13:23:49 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Re: beta5 ..." }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n>> The easy fix is to just set the delay to zero. Looks like that will fix\n>> most of the problem.\n\n> Except that Vadim had a reason for setting it to 5,\n\nHe claimed to have seen better performance with a nonzero delay.\nSo far none of the rest of us have been able to duplicate that.\nPerhaps he was using a machine where a 5-microsecond select() delay\nactually is 5 microseconds? If so, he's the outlier, not the\nrest of us ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 12:30:02 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: beta5 ... " }, { "msg_contents": "> > The easy fix is to just set the delay to zero. 
Looks like that will fix\n> > most of the problem.\n> \n> Except that Vadim had a reason for setting it to 5, and I'm loath to see\n> that changed unless someone actaully understands the ramifications other\n> then increasing performance ...\n\nSee post from a few minutes ago with analysis of purpose and actual\naffect of Vadim's parameter. I objected to the delay when it was\nintroduced because of my analysis, but Vadim's argument is that 5\nmicroseconds is very small delay, just enough to yield the CPU. We now\nsee that is much longer than that.\n\n> \n> > The near-committers thing may indeed be overkill, and certainly is not\n> > worth holding beta.\n> \n> What is this 'near-committers thing'??\n\nOther backends about to commit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 13:14:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: beta5 ..." }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I see a small problem with the regression test. If PL/pgSQL has been\n> already to template1, the regression scripts will fail because\n> createlang fails. Probably we should create the regression database\n> using template0?\n\nDone ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Feb 2001 13:04:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> things appear to have quieted off nicely ... so would like to put out a\n> Beta5 for testing ...\n\nUnless Peter E. 
has some more commits up his sleeve, I think we're\ngood to go.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Feb 2001 13:15:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "Tom Lane writes:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > things appear to have quieted off nicely ... so would like to put out a\n> > Beta5 for testing ...\n>\n> Unless Peter E. has some more commits up his sleeve, I think we're\n> good to go.\n\nJust uploaded freshly baked man pages, including your createdb change.\nThat'd be it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Sun, 18 Feb 2001 19:45:35 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "On Sun, 18 Feb 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > things appear to have quieted off nicely ... so would like to put out a\n> > Beta5 for testing ...\n>\n> Unless Peter E. has some more commits up his sleeve, I think we're\n> good to go.\n\nokay, I'll put one out Mon aft, just in case of any strays that come up\ntonight, or any final commits from our overseas committers ...\n\nthanks ;:)\n\n", "msg_date": "Sun, 18 Feb 2001 15:00:51 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "Quoting The Hermit Hacker <scrappy@hub.org>:\n\n> On Sun, 18 Feb 2001, Tom Lane wrote:\n> \n> > The Hermit Hacker <scrappy@hub.org> writes:\n> > > things appear to have quieted off nicely ... so would like to put\n> out a\n> > > Beta5 for testing ...\n> >\n> > Unless Peter E. 
has some more commits up his sleeve, I think we're\n> > good to go.\n> \n> okay, I'll put one out Mon aft, just in case of any strays that come up\n> tonight, or any final commits from our overseas committers ...\n\nI'm not planning on doing any until after Beta5 is out ;-)\n\nPeter\n\n> \n> thanks ;:)\n> \n> \n\n\n\n-- \nPeter Mount peter@retep.org.uk\nPostgreSQL JDBC Driver: http://www.retep.org.uk/postgres/\nRetepPDF PDF library for Java: http://www.retep.org.uk/pdf/\n", "msg_date": "Mon, 19 Feb 2001 04:05:58 -0500 (EST)", "msg_from": "Peter T Mount <peter@retep.org.uk>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." } ]
[ { "msg_contents": "Hi\nI'd like to know if someone is working on the ALTER TABLE specs of Postgres. \nI think that on this field there is much to do, especially comparing with \nother DB servers, which let you change lots of more things that Postgres \ndoesn't.\nHope this can make a better product for the moving people... ;-)\n\nSaludos... :-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMart�n Marqu�s\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Fri, 16 Feb 2001 16:07:56 -0300", "msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>", "msg_from_op": true, "msg_subject": "wish list for 7.2 (ALTER TABLE)" }, { "msg_contents": "On Fri, 16 Feb 2001, Martin A. Marques wrote:\n\n> Hi\n> I'd like to know if someone is working on the ALTER TABLE specs of Postgres. \n> I think that on this field there is much to do, especially comparing with \n> other DB servers, which let you change lots of more things that Postgres \n> doesn't.\n> Hope this can make a better product for the moving people... ;-)\n\nYes, there are various people looking at various pieces of ALTER TABLE.\nMostly adding constraints and dropping columns as far as I know, but\nthere's always room for people to work on stuff :)\n\n\n", "msg_date": "Fri, 16 Feb 2001 14:11:58 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: wish list for 7.2 (ALTER TABLE)" }, { "msg_contents": "El Vie 16 Feb 2001 19:11, escribiste:\n> On Fri, 16 Feb 2001, Martin A. Marques wrote:\n> > Hi\n> > I'd like to know if someone is working on the ALTER TABLE specs of\n> > Postgres. 
I think that on this field there is much to do, especially\n> > comparing with other DB servers, which let you change lots of more things\n> > that Postgres doesn't.\n> > Hope this can make a better product for the moving people... ;-)\n>\n> Yes, there are various people looking at various pieces of ALTER TABLE.\n> Mostly adding constraints and dropping columns as far as I know, but\n> there's always room for people to work on stuff :)\n\nWell, show me the place!\nNow, talking serious, where should I look? Download the CVS? Then, in which \nbranch should I start looking?\n\nThanks for the reply. ;-)\n\nSaludos... ;-)\n\n-- \nSystem Administration: It's a dirty job, \nbut someone told I had to do it.\n-----------------------------------------------------------------\nMartín Marqués\t\t\temail: \tmartin@math.unl.edu.ar\nSanta Fe - Argentina\t\thttp://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Fri, 16 Feb 2001 19:15:17 -0300", "msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>", "msg_from_op": true, "msg_subject": "Re: wish list for 7.2 (ALTER TABLE)" } ]
[ { "msg_contents": "ISTM that it is mighty confusing that extract() and date_part() don't\naccept the same set of \"field\" arguments.\n\n-> SELECT EXTRACT(decade FROM TIMESTAMP '2001-02-16 20:38:40');\nERROR: parser: parse error at or near \"decade\"\n=> SELECT EXTRACT(\"decade\" FROM TIMESTAMP '2001-02-16 20:38:40');\nERROR: parser: parse error at or near \"\"\"\n=> SELECT date_part('decade', TIMESTAMP '2001-02-16 20:38:40');\n date_part\n-----------\n 200\n\nThis can be an easy grammar fix:\n\ndiff -c -r2.220 gram.y\n*** gram.y 2001/02/09 03:26:28 2.220\n--- gram.y 2001/02/16 19:42:42\n***************\n*** 4987,4992 ****\n--- 4987,4993 ----\n ;\n\n extract_arg: datetime { $$ = $1; }\n+ | IDENT { $$ = $1; }\n | TIMEZONE_HOUR { $$ = \"tz_hour\"; }\n | TIMEZONE_MINUTE { $$ = \"tz_minute\"; }\n ;\n\n(Using ColId instead of datetime + IDENT gives reduce/reduce conflicts\nthat I don't want to mess with now.)\n\nThe date_part implementation is prepared for unknown field selectors, so\nthis should be all safe. Comments?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 16 Feb 2001 20:56:43 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "extract vs date_part" }, { "msg_contents": "> (Using ColId instead of datetime + IDENT gives reduce/reduce conflicts\n> that I don't want to mess with now.)\n> The date_part implementation is prepared for unknown field selectors, so\n> this should be all safe. Comments?\n\nWorks for me. Since extract required explicit reserved words, I had just\nimplemented the ones specified in the SQL9x standard. Your extension\npatch is a great idea, as long as others agree it can go into the beta\n(afaict this is an extremely low risk fix).\n\n - Thomas\n", "msg_date": "Fri, 16 Feb 2001 20:18:58 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: extract vs date_part" } ]
[ { "msg_contents": "The age() function is documented as \"Calculate time difference while\nretaining year/month fields\", but it doesn't seem to do anything different\nfrom a plain date subtraction with a few time zone problems added in:\n\nselect age(date '1999-05-17', date '1957-06-13');\n age\n-------------------------------\n 41 years 11 mons 3 days 23:00\n(1 row)\n\npeter=# select age(date '1999-05-17', date '1999-06-13');\n age\n----------\n -27 days\n(1 row)\n\nBut then again, date subtraction has seen better days, too:\n\npeter=# select date '1999-08-13' - date '1989-06-13';\n ?column?\n----------\n 3713\n(1 row)\n\npeter=# select date '1999-08-13' - date '1999-06-13';\n ?column?\n----------\n 61\n(1 row)\n\nAs opposed to:\n\npeter=# select timestamp '1999-08-13' - timestamp '1999-06-13';\n ?column?\n----------\n 61 days\n(1 row)\n\nSQL sez date - date returns interval, btw.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 16 Feb 2001 21:19:22 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "age() function not to spec, date subtraction?" } ]
[ { "msg_contents": "Perhaps someone can explain what's wrong with this. Excuse the mess,\nit was cut out of a much larger function but reliably creates the\nerror on my Postgres 7.1 beta 4 machine.\n\nCompile C function, restart postgres (for the heck of it), create a\nnew db (I used 'stuff), import sql. The insert it runs at the end\nfails, and:\n\npqReadData() -- backend closed the channel unexpectedly.....\n\nQuite confused, if I remove the SPI_Finish() it works fine, but\ncomplains every time an SPI_Connect is issued after the first run of\nthe function.\n\nAs you can see, the closure is SPI_Finish as the notice before\nappears, and the notice after doesn't.\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.", "msg_date": "Fri, 16 Feb 2001 16:08:09 -0500", "msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>", "msg_from_op": true, "msg_subject": "SPI_Free() causes backend to close." } ]
[ { "msg_contents": "While poking at Peter Schmidt's comments about pgbench showing worse\nperformance than for 7.0 (using -F in both cases), I noticed that given\nenough buffer space, FileWrite never seemed to get called at all. A\nlittle bit of sleuthing revealed the following:\n\n1. Under WAL, we don't write dirty buffers out of the shared memory at\nevery transaction commit. Instead, as long as a dirty buffer's slot\nisn't needed for something else, it just sits there until the next\ncheckpoint or shutdown. CreateCheckpoint calls FlushBufferPool which\nwrites out all the dirty buffers in one go. This is a Good Thing; it\nlets us consolidate multiple updates of a single datafile page by\nsuccessive transactions into one disk write. We need this to buy back\nsome of the extra I/O required to write the WAL logfile.\n\n2. However, this means that a lot of the dirty-buffer writes get done by\nthe periodic checkpoint process, not by the backends that originally\ndirtied the buffers. And that means that every last one gets done by\nblind write, because the checkpoint process isn't going to have opened\nany relation cache entries --- maybe a couple of system catalog\nrelations, but for sure it won't have any for user relations. If you\nlook at BufferSync, any page that the current process doesn't have an\nalready-open relcache entry for is sent to smgrblindwrt not smgrwrite.\n\n3. Blind write is gratuitously inefficient: it does separate open,\nseek, write, close kernel calls for every request. This was the right\nthing in 7.0.*, because backends relatively seldom did blind writes and\neven less often needed to blindwrite multiple pages of a single relation\nin succession. 
But the typical usage has changed a lot.\n\n\nI am thinking it'd be a good idea if blind write went through fd.c and\nthus was able to re-use open file descriptors, just like normal writes.\nThis should improve the efficiency of dumping dirty buffers during\ncheckpoint by a noticeable amount.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Feb 2001 21:31:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Performance lossage in checkpoint dumping" }, { "msg_contents": "> 3. Blind write is gratuitously inefficient: it does separate open,\n> seek, write, close kernel calls for every request. This was the right\n> thing in 7.0.*, because backends relatively seldom did blind writes and\n> even less often needed to blindwrite multiple pages of a single relation\n> in succession. But the typical usage has changed a lot.\n> \n> \n> I am thinking it'd be a good idea if blind write went through fd.c and\n> thus was able to re-use open file descriptors, just like normal writes.\n> This should improve the efficiency of dumping dirty buffers during\n> checkpoint by a noticeable amount.\n\nI totally agree the current code is broken. I am reading what you say\nand am thinking, \"Oh well, we lose there, but at least we only open a\nrelation once and do them in one shot.\" Now I am hearing that is not\ntrue, and it is a performance problem.\n\nThis is not a total surprise. We have that stuff pretty well\nstreamlined for the old behavour. Now that things have changed, I can\nsee the need to reevaluate stuff.\n\nNot sure how to handle the beta issue though.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Feb 2001 21:47:25 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance lossage in checkpoint dumping" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I am thinking it'd be a good idea if blind write went through fd.c and\n>> thus was able to re-use open file descriptors, just like normal writes.\n>> This should improve the efficiency of dumping dirty buffers during\n>> checkpoint by a noticeable amount.\n\n> Not sure how to handle the beta issue though.\n\nAfter looking a little more, I think this is too big a change to risk\nmaking for beta. I was thinking it might be an easy change, but it's\nnot; there's noplace to store the open-relation reference if we don't\nhave a relcache entry. But we don't want to pay the price of opening a\nrelcache entry just to dump some buffers.\n\nI recall Vadim speculating about decoupling the storage manager's notion\nof open files from the relcache, and having a much more lightweight\nopen-relation mechanism at the smgr level. That might be a good way\nto tackle this. But I'm not going to touch it for 7.1...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Feb 2001 22:06:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance lossage in checkpoint dumping " }, { "msg_contents": "> After looking a little more, I think this is too big a change to risk\n> making for beta. I was thinking it might be an easy change, but it's\n> not; there's noplace to store the open-relation reference if we don't\n> have a relcache entry. But we don't want to pay the price of opening a\n> relcache entry just to dump some buffers.\n> \n> I recall Vadim speculating about decoupling the storage manager's notion\n> of open files from the relcache, and having a much more lightweight\n> open-relation mechanism at the smgr level. 
That might be a good way\n> to tackle this. But I'm not going to touch it for 7.1...\n\nNo way to group the writes to you can keep the most recent one open?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Feb 2001 22:33:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance lossage in checkpoint dumping" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> But I'm not going to touch it for 7.1...\n\n> No way to group the writes to you can keep the most recent one open?\n\nDon't see an easy way, do you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 16 Feb 2001 22:42:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance lossage in checkpoint dumping " }, { "msg_contents": "\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> But I'm not going to touch it for 7.1...\n> \n> > No way to group the writes to you can keep the most recent one open?\n> \n> Don't see an easy way, do you?\n> \n\nNo, but I haven't looked at it. I am now much more concerned with the\ndelay, and am wondering if I should start thinking about trying my idea\nof looking for near-committers and post the patch to the list to see if\nanyone likes it for 7.1 final. Vadim will not be back in enough time to\nwrite any new code in this area, I am afraid.\n\nWe could look to fix this in 7.1.1. Let's see what the pgbench tester\ncomes back with when he sets the delay to zero.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Feb 2001 22:46:05 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance lossage in checkpoint dumping" }, { "msg_contents": "On Fri, 16 Feb 2001, Bruce Momjian wrote:\n\n>\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > >> But I'm not going to touch it for 7.1...\n> >\n> > > No way to group the writes to you can keep the most recent one open?\n> >\n> > Don't see an easy way, do you?\n> >\n>\n> No, but I haven't looked at it. I am now much more concerned with the\n> delay, and am wondering if I should start thinking about trying my idea\n> of looking for near-committers and post the patch to the list to see if\n> anyone likes it for 7.1 final. Vadim will not be back in enough time to\n> write any new code in this area, I am afraid.\n\nNear committers? *puzzled look*\n\n\n", "msg_date": "Sat, 17 Feb 2001 00:58:05 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Performance lossage in checkpoint dumping" }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> No way to group the writes to you can keep the most recent one open?\n> Don't see an easy way, do you?\n>> \n>> No, but I haven't looked at it. I am now much more concerned with the\n>> delay,\n\nI concur. The blind write business is not important enough to hold up\nthe release for --- for one thing, it has nothing to do with the pgbench\nresults we're seeing, because these tests don't run long enough to\ninclude any checkpoint cycles. The commit delay, on the other hand,\nis a big problem.\n\n>> and am wondering if I should start thinking about trying my idea\n>> of looking for near-committers and post the patch to the list to see if\n>> anyone likes it for 7.1 final. Vadim will not be back in enough time to\n>> write any new code in this area, I am afraid.\n\n> Near committers? 
*puzzled look*\n\nProcesses nearly ready to commit. I'm thinking that any mechanism for\ndetecting that might be overkill, however, especially compared to just\nsetting commit_delay to zero by default.\n\nI've been sitting here running pgbench under various scenarios, and so\nfar I can't find any condition where commit_delay>0 is materially better\nthan commit_delay=0, even under heavy load. It's either the same or\nmuch worse. Numbers to follow...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 00:24:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance lossage in checkpoint dumping " }, { "msg_contents": "> > No, but I haven't looked at it. I am now much more concerned with the\n> > delay, and am wondering if I should start thinking about trying my idea\n> > of looking for near-committers and post the patch to the list to see if\n> > anyone likes it for 7.1 final. Vadim will not be back in enough time to\n> > write any new code in this area, I am afraid.\n> \n> Near committers? *puzzled look*\n\nUmm, uh, it means backends that have entered COMMIT and will be issuing\nan fsync() of their own very soon. It took me a while to remember what\nI mean too because I was thinking of CVS committers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 00:34:43 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance lossage in checkpoint dumping" }, { "msg_contents": "On Sat, 17 Feb 2001, Bruce Momjian wrote:\n\n> > > No, but I haven't looked at it. 
I am now much more concerned with the\n> > > delay, and am wondering if I should start thinking about trying my idea\n> > > of looking for near-committers and post the patch to the list to see if\n> > > anyone likes it for 7.1 final. Vadim will not be back in enough time to\n> > > write any new code in this area, I am afraid.\n> >\n> > Near committers? *puzzled look*\n>\n> Umm, uh, it means backends that have entered COMMIT and will be issuing\n> an fsync() of their own very soon. It took me a while to remember what\n> I mean too because I was thinking of CVS committers.\n\nThat's what I was thinking too, which was what was confusing the hell out\nof me ... like, a near committer ... is that the guy sitting beside you\nwhile you commit? :)\n\n\n", "msg_date": "Sat, 17 Feb 2001 16:01:45 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Performance lossage in checkpoint dumping" }, { "msg_contents": "On Sat, 17 Feb 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > No way to group the writes so you can keep the most recent one open?\n> > Don't see an easy way, do you?\n> >>\n> >> No, but I haven't looked at it. I am now much more concerned with the\n> >> delay,\n>\n> I concur. The blind write business is not important enough to hold up\n> the release for --- for one thing, it has nothing to do with the pgbench\n> results we're seeing, because these tests don't run long enough to\n> include any checkpoint cycles. The commit delay, on the other hand,\n> is a big problem.\n>\n> >> and am wondering if I should start thinking about trying my idea\n> >> of looking for near-committers and post the patch to the list to see if\n> >> anyone likes it for 7.1 final. Vadim will not be back in enough time to\n> >> write any new code in this area, I am afraid.\n>\n> > Near committers? *puzzled look*\n>\n> Processes nearly ready to commit. 
I'm thinking that any mechanism for\n> detecting that might be overkill, however, especially compared to just\n> setting commit_delay to zero by default.\n>\n> I've been sitting here running pgbench under various scenarios, and so\n> far I can't find any condition where commit_delay>0 is materially better\n> than commit_delay=0, even under heavy load. It's either the same or\n> much worse. Numbers to follow...\n\nOkay, if the whole commit_delay is purely meant as a performance thing,\nI'd say go with lowering the default to zero for v7.1, and once Vadim gets\nback, we can properly determine why it appears to improve performance in\nhis case ... since I believe his OS of choice is FreeBSD, and you\nmentioned doing tests on it, I can't see how he'd have a more fine\ngrain'd select() than you have for testing ...\n\n\n", "msg_date": "Sat, 17 Feb 2001 16:06:00 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Performance lossage in checkpoint dumping " } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, February 16, 2001 7:13 PM\n> To: Schmidt, Peter\n> Cc: 'Bruce Momjian'; 'Michael Ansley'; 'pgsql-admin@postgresql.org'\n> Subject: Re: [ADMIN] v7.1b4 bad performance \n> \n> \n> \"Schmidt, Peter\" <peter.schmidt@prismedia.com> writes:\n> > I tried -B 1024 and got roughly the same results (~50 tps).\n> \n> What were you using before?\n> \n> > However, when I change WAL option commit_delay from the default of 5\n> > to 0, I get ~200 tps (which is double what I get with 7.03). I'm not\n> > sure I want to do this, do I?\n> \n> Hmm. There have been several discussions about whether CommitDelay is\n> a good idea or not. What happens if you vary it --- try 1 \n> microsecond,\n> and then various multiples of 1000. I suspect you may find that there\n> is no difference in the range 1..10000, then a step, then no change up\n> to 20000. In other words, your kernel may be rounding the delay up to\n> the next multiple of a clock tick, which might be 10 milliseconds.\n> That would explain a 50-tps limit real well...\n> \n> BTW, have you tried pgbench with multiple clients (-c) rather \n> than just\n> one?\n> \n> \t\t\tregards, tom lane\n> \n\n\nI get ~50 tps for any commit_delay value > 0. I've tried many values in the\nrange 0 - 999, and always get ~50 tps. commit_delay=0 always gets me ~200+\ntps.\n\nYes, I have tried multiple clients but got stuck on the glaring difference\nbetween versions with a single client. The tests that I ran showed the same\nkind of results you got earlier today i.e. 
1 client/1000 transactions = 10\nclients/100 transactions.\n\nSo, is it OK to use commit_delay=0?\n\nPeter\n", "msg_date": "Fri, 16 Feb 2001 19:54:45 -0800", "msg_from": "\"Schmidt, Peter\" <peter.schmidt@prismedia.com>", "msg_from_op": true, "msg_subject": "RE: v7.1b4 bad performance " }, { "msg_contents": "> I get ~50 tps for any commit_delay value > 0. I've tried many values in the\n> range 0 - 999, and always get ~50 tps. commit_delay=0 always gets me ~200+\n> tps.\n> \n> Yes, I have tried multiple clients but got stuck on the glaring difference\n> between versions with a single client. The tests that I ran showed the same\n> kind of results you got earlier today i.e. 1 client/1000 transactions = 10\n> clients/100 transactions.\n> \n> So, is it OK to use commit_delay=0?\n\ncommit_delay was designed to provide better performance in multi-user\nworkloads. If you are going to use it with only a single backend, you\ncertainly should set it to zero. If you will have multiple backends\ncommitting at the same time, we are not sure if 5 or 0 is the right\nvalue. If multi-user benchmark shows 0 is faster, we may change the\ndefault.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 16 Feb 2001 23:11:37 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.1b4 bad performance" }, { "msg_contents": "\"Schmidt, Peter\" <peter.schmidt@prismedia.com> writes:\n> So, is it OK to use commit_delay=0?\n\nCertainly. In fact, I think that's about to become the default ;-)\n\nI have now experimented with several different platforms --- HPUX,\nFreeBSD, and two considerably different strains of Linux --- and I find\nthat the minimum delay supported by select(2) is 10 or more milliseconds\non all of them, as much as 20 msec on some popular platforms. 
Try it\nyourself (my test program is attached).\n\nThus, our past arguments about whether a few microseconds of delay\nbefore commit are a good idea seem moot; we do not have any portable way\nof implementing that, and a ten millisecond delay for commit is clearly\nNot Good.\n\n\t\t\tregards, tom lane\n\n\n/* To use: gcc test.c, then\n\n\ttime ./a.out N\n\nN=0 should return almost instantly, if your select(2) does not block as\nper spec. N=1 shows the minimum achievable delay, * 1000 --- for\nexample, if time reports the elapsed time as 10 seconds, then select\nhas rounded your 1-microsecond delay request up to 10 milliseconds.\n\nSome Unixen seem to throw in an extra ten millisec of delay just for\ngood measure, eg, on FreeBSD 4.2 N=1 takes 20 sec, N=20000 takes 30.\n*/\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <sys/stat.h>\n#include <sys/time.h>\n#include <sys/types.h>\n\nint main(int argc, char** argv)\n{\n\tstruct timeval\tdelay;\n\tint i, del;\n\n\tdel = atoi(argv[1]);\n\n\tfor (i = 0; i < 1000; i++) {\n\t\tdelay.tv_sec = 0;\n\t\tdelay.tv_usec = del;\n\t\t(void) select(0, NULL, NULL, NULL, &delay);\n\t}\n\treturn 0;\n}\n", "msg_date": "Fri, 16 Feb 2001 23:43:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.1b4 bad performance " }, { "msg_contents": "I wrote:\n> Thus, our past arguments about whether a few microseconds of delay\n> before commit are a good idea seem moot; we do not have any portable way\n> of implementing that, and a ten millisecond delay for commit is clearly\n> Not Good.\n\nI've now finished running a spectrum of pgbench scenarios, and I find\nno case in which commit_delay = 0 is worse than commit_delay > 0.\nNow this is just one benchmark on just one platform, but it's pretty\ndamning...\n\nPlatform: HPUX 10.20 on HPPA C180, fast wide SCSI discs, 7200rpm (I think).\nMinimum select(2) delay is 10 msec on this platform.\n\nPOSTMASTER OPTIONS: -i -B 1024 -N 100\n\n$ PGOPTIONS='-c 
commit_delay=1' pgbench -c 1 -t 1000 bench\ntps = 13.304624(including connections establishing)\ntps = 13.323967(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=0' pgbench -c 1 -t 1000 bench\ntps = 16.614691(including connections establishing)\ntps = 16.645832(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=1' pgbench -c 10 -t 100 bench\ntps = 13.612502(including connections establishing)\ntps = 13.712996(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=0' pgbench -c 10 -t 100 bench\ntps = 14.674477(including connections establishing)\ntps = 14.787715(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=1' pgbench -c 30 -t 100 bench\ntps = 10.875912(including connections establishing)\ntps = 10.932836(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=0' pgbench -c 30 -t 100 bench\ntps = 12.853009(including connections establishing)\ntps = 12.934365(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=1' pgbench -c 50 -t 100 bench\ntps = 9.476856(including connections establishing)\ntps = 9.520800(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=0' pgbench -c 50 -t 100 bench\ntps = 9.807925(including connections establishing)\ntps = 9.854161(excluding connections establishing)\n\nWith -F (no fsync), it's the same story:\n\nPOSTMASTER OPTIONS: -i -o -F -B 1024 -N 100\n\n$ PGOPTIONS='-c commit_delay=1' pgbench -c 1 -t 1000 bench\ntps = 40.584300(including connections establishing)\ntps = 40.708855(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=0' pgbench -c 1 -t 1000 bench\ntps = 51.585629(including connections establishing)\ntps = 51.797280(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=1' pgbench -c 10 -t 100 bench\ntps = 35.811729(including connections establishing)\ntps = 36.448439(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=0' pgbench -c 10 -t 100 bench\ntps = 
43.878827(including connections establishing)\ntps = 44.856029(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=1' pgbench -c 30 -t 100 bench\ntps = 23.490464(including connections establishing)\ntps = 23.749558(excluding connections establishing)\n\n$ PGOPTIONS='-c commit_delay=0' pgbench -c 30 -t 100 bench\ntps = 23.452935(including connections establishing)\ntps = 23.716181(excluding connections establishing)\n\n\nI vote for commit_delay = 0, unless someone can show cases where\npositive delay is significantly better than zero delay.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 01:10:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.1b4 bad performance " }, { "msg_contents": "> \"Schmidt, Peter\" <peter.schmidt@prismedia.com> writes:\n> > So, is it OK to use commit_delay=0?\n> \n> Certainly. In fact, I think that's about to become the default ;-)\n\nI agree with Tom. I did some benchmarking tests using pgbench for a\ncomputer magazine in Japan. I got a almost equal or better result for\n7.1 than 7.0.3 if commit_delay=0. See included png file.\n--\nTatsuo Ishii", "msg_date": "Sat, 17 Feb 2001 15:46:35 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I agree with Tom. I did some benchmarking tests using pgbench for a\n> computer magazine in Japan. I got a almost equal or better result for\n> 7.1 than 7.0.3 if commit_delay=0. See included png file.\n\nInteresting curves. One thing you might like to know is that while\npoking around with a profiler this afternoon, I found that the vast\nmajority of the work done for this benchmark is in the uniqueness\nchecks driven by the unique indexes. Declare those as plain (non\nunique) and the TPS figures would probably go up noticeably. 
That\ndoesn't make the test invalid, but it does suggest that pgbench is\nemphasizing one aspect of system performance to the exclusion of\nothers ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 01:59:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "> ... See included png file.\n\nWhat kind of machine was this run on?\n\n - Thomas\n", "msg_date": "Sat, 17 Feb 2001 07:20:48 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: v7.1b4 bad performance" }, { "msg_contents": "lockhart> > ... See included png file.\nlockhart> \nlockhart> What kind of machine was this run on?\nlockhart> \nlockhart> - Thomas\n\nSorry to forget to mention about that.\n\nSONY VAIO Z505CR/K (note PC)\nPentium III 750MHz/256MB memory/20GB IDE HDD\nLinux (kernel 2.2.17)\nconfigure --enable-multibyte=EUC_JP\npostgresql.conf:\n\t\tfsync = on\n\t\tmax_connections = 128\n\t\tshared_buffers = 1024\n\t\tsilent_mode = on\n\t\tcommit_delay = 0\npostmaster opts for 7.0.3:\n\t\t-B 1024 -N 128 -S\npgbench settings:\n\t\tscaling factor = 1\n\t\tdata excludes connection establishing time\n\t\tnumber of total transactions are always 640\n\t\t\t (see included scripts I ran for the testing)\n------------------------------------------------------\n#! 
/bin/sh\npgbench -i test\nfor i in 1 2 4 8 16 32 64 128\ndo\n\tt=`expr 640 / $i`\n\tpgbench -t $t -c $i test\n\techo \"===== sync ======\"\n\tsync;sync;sync;sleep 10\n\techo \"===== sync done ======\"\ndone\n------------------------------------------------------\n--\nTatsuo Ishii\n", "msg_date": "Sat, 17 Feb 2001 17:13:50 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: v7.1b4 bad performance" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010216 22:49]:\n> \"Schmidt, Peter\" <peter.schmidt@prismedia.com> writes:\n> > So, is it OK to use commit_delay=0?\n> \n> Certainly. In fact, I think that's about to become the default ;-)\n> \n> I have now experimented with several different platforms --- HPUX,\n> FreeBSD, and two considerably different strains of Linux --- and I find\n> that the minimum delay supported by select(2) is 10 or more milliseconds\n> on all of them, as much as 20 msec on some popular platforms. Try it\n> yourself (my test program is attached).\n> \n> Thus, our past arguments about whether a few microseconds of delay\n> before commit are a good idea seem moot; we do not have any portable way\n> of implementing that, and a ten millisecond delay for commit is clearly\n> Not Good.\n> \n> \t\t\tregards, tom lane\nHere is another one. 
UnixWare 7.1.1 on a P-III 500 256 Meg Ram:\n\n$ cc -o tgl.test -O tgl.test.c\n$ time ./tgl.test 0\n\nreal 0m0.01s\nuser 0m0.01s\nsys 0m0.00s\n$ time ./tgl.test 1\n\nreal 0m10.01s\nuser 0m0.00s\nsys 0m0.01s\n$ time ./tgl.test 2\n\nreal 0m10.01s\nuser 0m0.00s\nsys 0m0.00s\n$ time ./tgl.test 3\n\nreal 0m10.11s\nuser 0m0.00s\nsys 0m0.01s\n$ uname -a\nUnixWare lerami 5 7.1.1 i386 x86at SCO UNIX_SVR5\n$ \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 17 Feb 2001 09:52:33 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance" }, { "msg_contents": "On Sat, 17 Feb 2001, Tom Lane wrote:\n\n[skip]\n\nTL> Platform: HPUX 10.20 on HPPA C180, fast wide SCSI discs, 7200rpm (I think).\nTL> Minimum select(2) delay is 10 msec on this platform.\n\n[skip]\n\nTL> I vote for commit_delay = 0, unless someone can show cases where\nTL> positive delay is significantly better than zero delay.\n\nBTW, for modern versions of FreeBSD kernels, there is HZ kernel option\nwhich describes maximum timeslice granularity (actually, HZ value is\nnumber of timeslice periods per second, with default of 100 = 10 ms). 
On\nmodern CPUs HZ may be increased to at least 1000, and sometimes even to\n5000 (unfortunately, I haven't test platform by hand).\n\nSo, maybe you can test select granularity at ./configure phase and then\ndefine default commit_delay accordingly.\n\nYour thoughts?\n\nSincerely,\nD.Marck [DM5020, DM268-RIPE, DM3-RIPN]\n------------------------------------------------------------------------\n*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***\n------------------------------------------------------------------------\n\n", "msg_date": "Sun, 18 Feb 2001 22:36:03 +0300 (MSK)", "msg_from": "Dmitry Morozovsky <marck@rinet.ru>", "msg_from_op": false, "msg_subject": "Re: v7.1b4 bad performance " }, { "msg_contents": "> TL> I vote for commit_delay = 0, unless someone can show cases where\n> TL> positive delay is significantly better than zero delay.\n> \n> BTW, for modern versions of FreeBSD kernels, there is HZ kernel option\n> which describes maximum timeslice granularity (actually, HZ value is\n> number of timeslice periods per second, with default of 100 = 10 ms). On\n> modern CPUs HZ may be increased to at least 1000, and sometimes even to\n> 5000 (unfortunately, I haven't test platform by hand).\n> \n> So, maybe you can test select granularity at ./configure phase and then\n> define default commit_delay accordingly.\n\nAccording to the BSD4.4 book by Karels/McKusick, even though computers\nare faster now, increasing the Hz doesn't seem to improve performance. \nThis is probably because of cache misses from context switches.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Feb 2001 14:59:40 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: v7.1b4 bad performance" }, { "msg_contents": "On Sun, 18 Feb 2001, Dmitry Morozovsky wrote:\n\nI just done the experiment with increasing HZ to 1000 on my own machine\n(PII 374). Your test program reports 2 ms instead of 20. The other side\nof increasing HZ is surely more overhead to scheduler system. Anyway, it's\na bit of data to dig into, I suppose ;-)\n\nResults for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM\nDTLA IDE in ATA66 mode with tag queueing and soft updates turned on)\n\n>> default delay (5 us)\n\nnumber of clients: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 1000/1000\ntps = 96.678008(including connections establishing)\ntps = 96.982619(excluding connections establishing)\n\nnumber of clients: 10\nnumber of transactions per client: 100\nnumber of transactions actually processed: 1000/1000\ntps = 77.538398(including connections establishing)\ntps = 79.126914(excluding connections establishing)\n\nnumber of clients: 20\nnumber of transactions per client: 50\nnumber of transactions actually processed: 1000/1000\ntps = 68.448429(including connections establishing)\ntps = 70.957500(excluding connections establishing)\n\n>> delay of 0\n\nnumber of clients: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 1000/1000\ntps = 111.939751(including connections establishing)\ntps = 112.335089(excluding connections establishing)\n\nnumber of clients: 10\nnumber of transactions per client: 100\nnumber of transactions actually processed: 1000/1000\ntps = 84.262936(including connections establishing)\ntps = 86.152702(excluding connections establishing)\n\nnumber of clients: 20\nnumber of transactions per client: 50\nnumber of transactions actually processed: 1000/1000\ntps = 
79.678831(including connections establishing)\ntps = 83.106418(excluding connections establishing)\n\n\nResults are very close... Another thing to dig into.\n\nBTW, postgres parameters were: -B 256 -F -i -S\n\n\n\n\nDM> BTW, for modern versions of FreeBSD kernels, there is HZ kernel option\nDM> which describes maximum timeslice granularity (actually, HZ value is\nDM> number of timeslice periods per second, with default of 100 = 10 ms). On\nDM> modern CPUs HZ may be increased to at least 1000, and sometimes even to\nDM> 5000 (unfortunately, I haven't test platform by hand).\nDM> \nDM> So, maybe you can test select granularity at ./configure phase and then\nDM> define default commit_delay accordingly.\nDM> \nDM> Your thoughts?\nDM> \nDM> Sincerely,\nDM> D.Marck [DM5020, DM268-RIPE, DM3-RIPN]\nDM> ------------------------------------------------------------------------\nDM> *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***\nDM> ------------------------------------------------------------------------\nDM> \n\nSincerely,\nD.Marck [DM5020, DM268-RIPE, DM3-RIPN]\n------------------------------------------------------------------------\n*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***\n------------------------------------------------------------------------\n\n\n", "msg_date": "Sun, 18 Feb 2001 23:32:11 +0300 (MSK)", "msg_from": "Dmitry Morozovsky <marck@rinet.ru>", "msg_from_op": false, "msg_subject": "Re: v7.1b4 bad performance " }, { "msg_contents": "On Sun, 18 Feb 2001, Dmitry Morozovsky wrote:\n\nDM> I just done the experiment with increasing HZ to 1000 on my own machine\nDM> (PII 374). Your test program reports 2 ms instead of 20. The other side\nDM> of increasing HZ is surely more overhead to scheduler system. 
Anyway, it's\nDM> a bit of data to dig into, I suppose ;-)\nDM> \nDM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM\nDM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)\n\nOh, I forgot to paste the results from original system (with HZ=100). Here\nthey are:\n\n>> delay = 5\n\nnumber of clients: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 1000/1000\ntps = 47.422866(including connections establishing)\ntps = 47.493439(excluding connections establishing)\n\nnumber of clients: 10\nnumber of transactions per client: 100\nnumber of transactions actually processed: 1000/1000\ntps = 37.930605(including connections establishing)\ntps = 38.308613(excluding connections establishing)\n\nnumber of clients: 20\nnumber of transactions per client: 50\nnumber of transactions actually processed: 1000/1000\ntps = 35.757531(including connections establishing)\ntps = 36.420532(excluding connections establishing)\n\n>> delay = 0\n\nnumber of clients: 1\nnumber of transactions per client: 1000\nnumber of transactions actually processed: 1000/1000\ntps = 111.521859(including connections establishing)\ntps = 111.904026(excluding connections establishing)\n\nnumber of clients: 10\nnumber of transactions per client: 100\nnumber of transactions actually processed: 1000/1000\ntps = 62.808216(including connections establishing)\ntps = 63.819590(excluding connections establishing)\n\nnumber of clients: 20\nnumber of transactions per client: 50\nnumber of transactions actually processed: 1000/1000\ntps = 64.250431(including connections establishing)\ntps = 66.438067(excluding connections establishing)\n\n\nSo, I suppose (very preliminary, of course ;):\n\n1 - at least for dedicated PostgreSQL servers it _may_ be\nreasonable to increase HZ\n2 - there is still no advantages of using delay != 0.\n\nYour ideas?\n\n\n\nDM> \nDM> >> default delay (5 us)\nDM> \nDM> number of clients: 1\nDM> number of transactions 
per client: 1000\nDM> number of transactions actually processed: 1000/1000\nDM> tps = 96.678008(including connections establishing)\nDM> tps = 96.982619(excluding connections establishing)\nDM> \nDM> number of clients: 10\nDM> number of transactions per client: 100\nDM> number of transactions actually processed: 1000/1000\nDM> tps = 77.538398(including connections establishing)\nDM> tps = 79.126914(excluding connections establishing)\nDM> \nDM> number of clients: 20\nDM> number of transactions per client: 50\nDM> number of transactions actually processed: 1000/1000\nDM> tps = 68.448429(including connections establishing)\nDM> tps = 70.957500(excluding connections establishing)\nDM> \nDM> >> delay of 0\nDM> \nDM> number of clients: 1\nDM> number of transactions per client: 1000\nDM> number of transactions actually processed: 1000/1000\nDM> tps = 111.939751(including connections establishing)\nDM> tps = 112.335089(excluding connections establishing)\nDM> \nDM> number of clients: 10\nDM> number of transactions per client: 100\nDM> number of transactions actually processed: 1000/1000\nDM> tps = 84.262936(including connections establishing)\nDM> tps = 86.152702(excluding connections establishing)\nDM> \nDM> number of clients: 20\nDM> number of transactions per client: 50\nDM> number of transactions actually processed: 1000/1000\nDM> tps = 79.678831(including connections establishing)\nDM> tps = 83.106418(excluding connections establishing)\nDM> \nDM> \nDM> Results are very close... Another thing to dig into.\nDM> \nDM> BTW, postgres parameters were: -B 256 -F -i -S\nDM> \nDM> \nDM> \nDM> \nDM> DM> BTW, for modern versions of FreeBSD kernels, there is HZ kernel option\nDM> DM> which describes maximum timeslice granularity (actually, HZ value is\nDM> DM> number of timeslice periods per second, with default of 100 = 10 ms). 
On\nDM> DM> modern CPUs HZ may be increased to at least 1000, and sometimes even to\nDM> DM> 5000 (unfortunately, I haven't test platform by hand).\nDM> DM> \nDM> DM> So, maybe you can test select granularity at ./configure phase and then\nDM> DM> define default commit_delay accordingly.\nDM> DM> \nDM> DM> Your thoughts?\nDM> DM> \nDM> DM> Sincerely,\nDM> DM> D.Marck [DM5020, DM268-RIPE, DM3-RIPN]\nDM> DM> ------------------------------------------------------------------------\nDM> DM> *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***\nDM> DM> ------------------------------------------------------------------------\nDM> DM> \nDM> \nDM> Sincerely,\nDM> D.Marck [DM5020, DM268-RIPE, DM3-RIPN]\nDM> ------------------------------------------------------------------------\nDM> *** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***\nDM> ------------------------------------------------------------------------\nDM> \nDM> \n\nSincerely,\nD.Marck [DM5020, DM268-RIPE, DM3-RIPN]\n------------------------------------------------------------------------\n*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***\n------------------------------------------------------------------------\n\n", "msg_date": "Sun, 18 Feb 2001 23:54:33 +0300 (MSK)", "msg_from": "Dmitry Morozovsky <marck@rinet.ru>", "msg_from_op": false, "msg_subject": "Re: v7.1b4 bad performance " }, { "msg_contents": "Tom Lane wrote:\n> \n> I wrote:\n> > Thus, our past arguments about whether a few microseconds of delay\n> > before commit are a good idea seem moot; we do not have any portable way\n> > of implementing that, and a ten millisecond delay for commit is clearly\n> > Not Good.\n> \n> I've now finished running a spectrum of pgbench scenarios, and I find\n> no case in which commit_delay = 0 is worse than commit_delay > 0.\n> Now this is just one benchmark on just one platform, but it's pretty\n> damning...\n> \n\nIn your test cases I always see \"where bid 
= 1\" at \"update branches\"\ni.e.\n update branches set bbalance = bbalance + ... where bid = 1\n\nISTM there's no multiple COMMIT in your scenarios due to\ntheir lock conflicts. \n\nRegards,\nHiroshi Inoue\n", "msg_date": "Mon, 19 Feb 2001 17:15:03 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance" }, { "msg_contents": "I did not realize how much WAL improved performance when using fsync.\n\n> > \"Schmidt, Peter\" <peter.schmidt@prismedia.com> writes:\n> > > So, is it OK to use commit_delay=0?\n> > \n> > Certainly. In fact, I think that's about to become the default ;-)\n> \n> I agree with Tom. I did some benchmarking tests using pgbench for a\n> computer magazine in Japan. I got a almost equal or better result for\n> 7.1 than 7.0.3 if commit_delay=0. See included png file.\n> --\n> Tatsuo Ishii\n\n[ Attachment, skipping... ]\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 11:50:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> In your test cases I always see \"where bid = 1\" at \"update branches\"\n> i.e.\n> update branches set bbalance = bbalance + ... where bid = 1\n\n> ISTM there's no multiple COMMIT in your scenarios due to\n> their lock conflicts. \n\nHmm. It looks like using a 'scaling factor' larger than 1 is necessary\nto spread out the updates of \"branches\". AFAIR, the people who reported
AFAIR, the people who reported\nruns with scaling factors > 1 got pretty much the same results though.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 12:15:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > In your test cases I always see \"where bid = 1\" at \"update branches\"\n> > i.e.\n> > update branches set bbalance = bbalance + ... where bid = 1\n> \n> > ISTM there's no multiple COMMIT in your scenarios due to\n> > their lock conflicts.\n> \n> Hmm. It looks like using a 'scaling factor' larger than 1 is necessary\n> to spread out the updates of \"branches\". AFAIR, the people who reported\n> runs with scaling factors > 1 got pretty much the same results though.\n> \n\nPeople seem to believe your results are decisive\nand would cite your results if evidence is\nrequired.\nAll clients of pgbench execute the same sequence\nof queries. There could be various conflicts e.g.\nordinary lock, buffer lock, IO spinlock ...\nI've been suspicious whether pgbench is a (uniquely)\nappropriate test case for evaluating commit_delay.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Tue, 20 Feb 2001 08:28:47 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I've been suspicious whether pgbench is a (uniquely)\n> appropriate test case for evaluating commit_delay.\n\nOf course it isn't. Never trust only one benchmark.\n\nI've asked the Great Bridge folks to run their TPC-C benchmark with both\nzero and small nonzero commit_delay. It will be a couple of days before\nwe have the results, however. 
Can anyone else offer any comparisons\nbased on other multiuser benchmarks?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 18:40:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "Tom Lane wrote:\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I've been suspicious whether pgbench is a (uniquely)\n> > appropriate test case for evaluating commit_delay.\n> \n> Of course it isn't. Never trust only one benchmark.\n> \n> I've asked the Great Bridge folks to run their TPC-C benchmark with both\n> zero and small nonzero commit_delay. It will be a couple of days before\n> we have the results, however. Can anyone else offer any comparisons\n> based on other multiuser benchmarks?\n> \n\nI changed pgbench so that different connection connects\nto the different database and got the following results.\n\nThe results of \n pgbench -c 10 -t 100\n\n[CommitDelay=0]\n1st)tps = 18.484611(including connections establishing)\n tps = 19.827988(excluding connections establishing)\n2nd)tps = 18.754826(including connections establishing)\n tps = 19.352268(excluding connections establishing)\n3rd)tps = 18.771225(including connections establishing)\n tps = 19.261843(excluding connections establishing)\n[CommitDelay=1]\n1st)tps = 20.317649(including connections establishing)\n tps = 20.975151(excluding connections establishing)\n2nd)tps = 24.208025(including connections establishing)\n tps = 24.663665(excluding connections establishing)\n3rd)tps = 25.821156(including connections establishing)\n tps = 26.842741(excluding connections establishing)\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Tue, 20 Feb 2001 19:45:25 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> I changed pgbench so that different connection connects\n> to the 
different database and got the following results.\n\nHmm, you mean you set up a separate test database for each pgbench\n\"client\", but all under the same postmaster?\n\n> The results of \n> pgbench -c 10 -t 100\n\n> [CommitDelay=0]\n> 1st)tps = 18.484611(including connections establishing)\n> tps = 19.827988(excluding connections establishing)\n> 2nd)tps = 18.754826(including connections establishing)\n> tps = 19.352268(excluding connections establishing)\n> 3rd)tps = 18.771225(including connections establishing)\n> tps = 19.261843(excluding connections establishing)\n> [CommitDelay=1]\n> 1st)tps = 20.317649(including connections establishing)\n> tps = 20.975151(excluding connections establishing)\n> 2nd)tps = 24.208025(including connections establishing)\n> tps = 24.663665(excluding connections establishing)\n> 3rd)tps = 25.821156(including connections establishing)\n> tps = 26.842741(excluding connections establishing)\n\nWhat platform is this on --- in particular, how long a delay\nis CommitDelay=1 in reality? What -B did you use?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 11:19:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> \n> Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> > I changed pgbench so that different connection connects\n> > to the different database\n> \n> Hmm, you mean you set up a separate test database for each pgbench\n> \"client\", but all under the same postmaster?\n>\n\nYes. 
Different database is to make the conflict as small as possible.\nThe conflict among backends is the greatest enemy of CommitDelay.\n \n> > The results of \n> > pgbench -c 10 -t 100\n> \n> > [CommitDelay=0]\n> > 1st)tps = 18.484611(including connections establishing)\n> > tps = 19.827988(excluding connections establishing)\n> > 2nd)tps = 18.754826(including connections establishing)\n> > tps = 19.352268(excluding connections establishing)\n> > 3rd)tps = 18.771225(including connections establishing)\n> > tps = 19.261843(excluding connections establishing)\n> > [CommitDelay=1]\n> > 1st)tps = 20.317649(including connections establishing)\n> > tps = 20.975151(excluding connections establishing)\n> > 2nd)tps = 24.208025(including connections establishing)\n> > tps = 24.663665(excluding connections establishing)\n> > 3rd)tps = 25.821156(including connections establishing)\n> > tps = 26.842741(excluding connections establishing)\n> \n> What platform is this on --- in particular, how long a delay\n> is CommitDelay=1 in reality? What -B did you use?\n> \n\nplatform) i686-pc-linux-gnu, compiled by GCC egcs-2.91.60(turbolinux 4.2)\nmin delay) 10msec according to your test program.\n-B) 64 (all other settings are default)\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Wed, 21 Feb 2001 06:48:19 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n>> Hmm, you mean you set up a separate test database for each pgbench\n>> \"client\", but all under the same postmaster?\n\n> Yes. Different database is to make the conflict as small as possible.\n> The conflict among backends is the greatest enemy of CommitDelay.\n\nOkay, so this errs in the opposite direction from the original form of\nthe benchmark: there will be *no* cross-backend locking delays, except\nfor accesses to the common WAL log. 
That's good as a comparison point,\nbut we shouldn't trust it absolutely either.\n\n>> What platform is this on --- in particular, how long a delay\n>> is CommitDelay=1 in reality? What -B did you use?\n\n> platform) i686-pc-linux-gnu, compiled by GCC egcs-2.91.60(turbolinux 4.2)\n> min delay) 10msec according to your test program.\n> -B) 64 (all other settings are default)\n\nThanks. Could I trouble you to run it again with a larger -B, say\n1024 or 2048? What I've found is that at -B 64, the benchmark is\nso constrained by limited buffer space that it doesn't reflect\nperformance at a more realistic production setting.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 16:52:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "Tom Lane wrote:\n> \n> \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> >> Hmm, you mean you set up a separate test database for each pgbench\n> >> \"client\", but all under the same postmaster?\n> \n> > Yes. Different database is to make the conflict as small as possible.\n> > The conflict among backends is the greatest enemy of CommitDelay.\n> \n> Okay, so this errs in the opposite direction from the original form of\n> the benchmark: there will be *no* cross-backend locking delays, except\n> for accesses to the common WAL log. That's good as a comparison point,\n> but we shouldn't trust it absolutely either.\n> \n\nOf course it's only one of the test cases.\nBecause I've only ever seen one-sided test cases, I had to\nprovide this test case unwillingly.\nThere are some obvious cases that CommitDelay is harmful\nand I've seen no test case other than such cases, i.e.\n1) There's only one session.\n2) The backends always conflict (e.g. pgbench with scaling factor 1).\n\n> >> What platform is this on --- in particular, how long a delay\n> >> is CommitDelay=1 in reality? 
What -B did you use?\n> \n> > platform) i686-pc-linux-gnu, compiled by GCC egcs-2.91.60(turbolinux 4.2)\n> > min delay) 10msec according to your test program.\n> > -B) 64 (all other settings are default)\n> \n> Thanks. Could I trouble you to run it again with a larger -B, say\n> 1024 or 2048? What I've found is that at -B 64, the benchmark is\n> so constrained by limited buffer space that it doesn't reflect\n> performance at a more realistic production setting.\n> \n\nOK I would try it later though I'm not sure I could\nincrease -B that large in my current environment.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Wed, 21 Feb 2001 08:30:41 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance" }, { "msg_contents": "Tom Lane wrote:\n> \n> > platform) i686-pc-linux-gnu, compiled by GCC egcs-2.91.60(turbolinux 4.2)\n> > min delay) 10msec according to your test program.\n> > -B) 64 (all other settings are default)\n> \n> Thanks. Could I trouble you to run it again with a larger -B, say\n> 1024 or 2048? 
What I've found is that at -B 64, the benchmark is\n> so constrained by limited buffer space that it doesn't reflect\n> performance at a more realistic production setting.\n> \n\nHmm the result doesn't seem that obvious.\n\nFirst I got the following result.\n[CommitDelay=0]\n1)tps = 23.024648(including connections establishing)\n tps = 23.856420(excluding connections establishing)\n2)tps = 30.276270(including connections establishing)\n tps = 30.996459(excluding connections establishing)\n[CommitDelay=1]\n1)tps = 23.065921(including connections establishing)\n tps = 23.866029(excluding connections establishing)\n2)tps = 34.024632(including connections establishing)\n tps = 35.671566(excluding connections establishing)\n\nThe result seems inconsistent and after disabling \ncheckpoint process I got the following.\n\n[CommitDelay=0]\n1)tps = 24.060970(including connections establishing)\n tps = 24.416851(excluding connections establishing)\n2)tps = 21.361134(including connections establishing)\n tps = 21.605583(excluding connections establishing)\n3)tps = 20.377635(including connections establishing)\n tps = 20.646523(excluding connections establishing)\n[CommitDelay=1]\n1)tps = 22.164379(including connections establishing)\n tps = 22.790772(excluding connections establishing)\n2)tps = 22.719068(including connections establishing)\n tps = 23.040485(excluding connections establishing)\n3)tps = 24.341675(including connections establishing)\n tps = 25.869479(excluding connections establishing)\n\nUnfortunately I have no more time to check today.\nPlease check the similar test case.\n\n[My test case]\nI created and initialized 10 databases as follows.\n1) create databases.\n createdb inoue1\n createdb inoue2\n .\n createdb inoue10\n2) pgbench -i inoue1\n pgbench -i inoue2\n .\n pgbench -i inoue10\n3) invoke a modified pgbench\n pgbench -c 10 -t 100 inoue\n\nI've attached a patch to change pgbench so that\neach connection connects to different database\nwhose name is 
'xxxx%d'(xxxx is the specified\ndatabase name).\n\nRegards,\nHiroshi Inoue", "msg_date": "Wed, 21 Feb 2001 10:53:45 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance" }, { "msg_contents": "I Inoue wrote:\n> \n> Tom Lane wrote:\n> >\n> > > platform) i686-pc-linux-gnu, compiled by GCC egcs-2.91.60(turbolinux 4.2)\n> > > min delay) 10msec according to your test program.\n> > > -B) 64 (all other settings are default)\n> >\n> > Thanks. Could I trouble you to run it again with a larger -B, say\n> > 1024 or 2048? What I've found is that at -B 64, the benchmark is\n> > so constrained by limited buffer space that it doesn't reflect\n> > performance at a more realistic production setting.\n> >\n> \n> Hmm the result doesn't seem that obvious.\n> \n> First I got the following result.\n\nSorry I forgot to mention the -B setting of my previous\nposting. All results are with -B 1024.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Wed, 21 Feb 2001 12:35:53 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Re: [ADMIN] v7.1b4 bad performance" }, { "msg_contents": "> Tom Lane wrote:\n> > \n> > > platform) i686-pc-linux-gnu, compiled by GCC \n> egcs-2.91.60(turbolinux 4.2)\n> > > min delay) 10msec according to your test program.\n> > > -B) 64 (all other settings are default)\n> > \n> > Thanks. Could I trouble you to run it again with a larger -B, say\n> > 1024 or 2048? 
What I've found is that at -B 64, the benchmark is\n> > so constrained by limited buffer space that it doesn't reflect\n> > performance at a more realistic production setting.\n> > \n> \n> Hmm the result doesn't seem that obvious.\n>\n\nI tried with -B 1024 10 times for commit_delay=0 and 1 respectively.\nThe average result of 'pgbench -c 10 -t 100' is as follows.\n\n[commit_delay=0]\n 26.462817(including connections establishing)\n 26.788047(excluding connections establishing)\n[commit_delay=1]\n 27.630405(including connections establishing)\n 28.042666(excluding connections establishing)\n\nHiroshi Inoue\n", "msg_date": "Thu, 22 Feb 2001 00:21:46 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: Re: [ADMIN] v7.1b4 bad performance" }, { "msg_contents": "Just another data point.\n\nI downloaded a snapshot yesterday - Changelogs dated Feb 20 17:02\n\nIt's significantly slower than \"7.0.3 with fsync off\" for one of my webapps.\n\n7.0.3 with fsync off gets me about 55 hits per sec max (however it's\ninteresting that the speed keeps dropping with continued tests).\n( PostgreSQL 7.0.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66)\n\nFor 7.1b4 snapshot I get about 23 hits per second (drops gradually too).\nI'm using Pg::DBD compiled using the 7.1 libraries for both tests.\n(PostgreSQL 7.1beta4 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66)\n\nFor a simple \"select only\" webapp I'm getting 112 hits per sec for 7.0.3.\nand 109 hits a sec for the 7.1 beta4 snapshot. These results remain quite\nstable over many repeated tests.\n\nThe first webapp does a rollback, begin, select, update, commit, begin, a\nbunch of selects in sequence and rollback. \n\nSo my guess is that the 7.1 updates (with default fsync) are significantly\nslower than 7.0.3 fsync=off now. \n\nBut it's interesting that the updates slow things down significantly. 
Going\nfrom 50 to 30 hits per second after a few thousand hits for 7.0.3, and 23\nto 17 after about a thousand hits for 7.1beta4.\n\n\nFor postgresql 7.0.3 to speed things back up from 30 to 60 hits per sec I\nhad to do:\n\nlylyeoh=# delete from session;\nDELETE 1\nlylyeoh=# vacuum; vacuum analyze;\nVACUUM\nNOTICE: RegisterSharedInvalid: SI buffer overflow\nNOTICE: InvalidateSharedInvalid: cache state reset\nVACUUM\n(Not sure why the above happened, but I repeated the vacuum again for good\nmeasure)\n\nlylyeoh=# vacuum; vacuum analyze;\nVACUUM\nVACUUM\n\nThen I ran the apachebench again (after visiting the webpage once to create\nthe session).\n\nNote that even with only one row in the session table it kept getting\nslower and slower as it kept getting updated, even when I kept trying to\nvacuum and vacuum analyze it. I had to delete the row and vacuum only then\nwas there a difference.\n\nI didn't try this on 7.1beta4.\n\nCheerio,\nLink.\n\n\n", "msg_date": "Thu, 22 Feb 2001 15:13:14 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "RE: Re: [ADMIN] v7.1b4 bad performance" }, { "msg_contents": "I wrote:\n> \n> I tried with -B 1024 10 times for commit_delay=0 and 1 respectively.\n> The average result of 'pgbench -c 10 -t 100' is as follows.\n> \n> [commit_delay=0]\n> 26.462817(including connections establishing)\n> 26.788047(excluding connections establishing)\n> [commit_delay=1]\n> 27.630405(including connections establishing)\n> 28.042666(excluding connections establishing)\n> \n\nI got another clear result by simplifying pgbench.\n\n[commit_delay = 0]\n1)tps = 52.682295(including connections establishing)\n tps = 53.574140(excluding connections establishing)\n2)tps = 54.580892(including connections establishing)\n tps = 55.672988(excluding connections establishing)\n3)tps = 60.409452(including connections establishing)\n tps = 61.740995(excluding connections establishing)\n4)tps = 60.787502(including connections 
establishing)\n tps = 62.131317(excluding connections establishing)\n5)tps = 60.968409(including connections establishing)\n tps = 62.328142(excluding connections establishing)\n6)tps = 62.396566(including connections establishing)\n tps = 63.614357(excluding connections establishing)\n7)tps = 52.720152(including connections establishing)\n tps = 54.811739(excluding connections establishing)\n8)tps = 53.417274(including connections establishing)\n tps = 54.454355(excluding connections establishing)\n9)tps = 54.862412(including connections establishing)\n tps = 55.953512(excluding connections establishing)\n10)tps = 60.616255(including connections establishing)\n tps = 63.423590(excluding connections establishing)\n\n[commit_delay = 1]\n1)tps = 68.458715(including connections establishing)\n tps = 71.147012(excluding connections establishing)\n2)tps = 71.059064(including connections establishing)\n tps = 72.685829(excluding connections establishing)\n3)tps = 67.625556(including connections establishing)\n tps = 69.288699(excluding connections establishing)\n4)tps = 84.749505(including connections establishing)\n tps = 87.430563(excluding connections establishing)\n5)tps = 83.001418(including connections establishing)\n tps = 85.525377(excluding connections establishing)\n6)tps = 66.235768(including connections establishing)\n tps = 67.830999(excluding connections establishing)\n7)tps = 80.993308(including connections establishing)\n tps = 87.333491(excluding connections establishing)\n8)tps = 69.844893(including connections establishing)\n tps = 71.640972(excluding connections establishing)\n9)tps = 71.135311(including connections establishing)\n tps = 72.979021(excluding connections establishing)\n10)tps = 68.091439(including connections establishing)\n tps = 69.539728(excluding connections establishing)\n\nThe patch to let pgbench execute 1 query/trans is the following.\n\nIndex: pgbench.c\n===================================================================\nRCS 
file: /home/cvs/pgcurrent/contrib/pgbench/pgbench.c,v\nretrieving revision 1.1\ndiff -c -r1.1 pgbench.c\n*** pgbench.c\t2001/02/20 07:55:21\t1.1\n--- pgbench.c\t2001/02/22 10:03:52\n***************\n*** 217,222 ****\n--- 217,224 ----\n \t\t\tst->state = 0;\n \t}\n \n+ if (st->state > 1)\n+ st->state=6;\n \tswitch (st->state)\n \t{\n \t\tcase 0:\t\t\t/* about to start */\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Thu, 22 Feb 2001 19:51:43 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Re: [ADMIN] v7.1b4 bad performance" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> Just another data point.\n> \n> I downloaded a snapshot yesterday - Changelogs dated Feb 20 17:02\n> \n> It's significantly slower than \"7.0.3 with fsync off\" for one of my webapps.\n> \n> 7.0.3 with fsync off gets me about 55 hits per sec max (however it's\n> interesting that the speed keeps dropping with continued tests).\n> ( PostgreSQL 7.0.3 on i686-pc-linux-gnu, compiled by gcc egcs-2.91.66)\n> \n> For 7.1b4 snapshot I get about 23 hits per second (drops gradually too).\n> I'm using Pg::DBD compiled using the 7.1 libraries for both tests.\n> (PostgreSQL 7.1beta4 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66)\n> \n> For a simple \"select only\" webapp I'm getting 112 hits per sec for 7.0.3.\n> and 109 hits a sec for the 7.1 beta4 snapshot. 
These results remain quite\n> stable over many repeated tests.\n> \n> The first webapp does a rollback, begin, select, update, commit, begin, a\n> bunch of selects in sequence and rollback.\n\nIt may be that WAL has changed the rollback time-characteristics to\nworse \nthan pre-WAL ?\n\nIf that is the case then routinely rolling back transactions is no longer \na good programming practice.\n\nIt may have used to be, as I think that before WAL both rollback and\ncommit \nhad more or less the same cost.\n\n> So my guess is that the 7.1 updates (with default fsync) are significantly\n> slower than 7.0.3 fsync=off now.\n\nThe consensus seems to be that they are only \"a little\" slower.\n\n> But it's interesting that the updates slow things down significantly. Going\n> from 50 to 30 hits per second after a few thousand hits for 7.0.3, and 23\n> to 17 after about a thousand hits for 7.1beta4.\n> \n> For postgresql 7.0.3 to speed things back up from 30 to 60 hits per sec I\n> had to do:\n\n-------------\nHannu\n", "msg_date": "Thu, 22 Feb 2001 13:49:04 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: RE: Re: [ADMIN] v7.1b4 bad performance" }, { "msg_contents": "Dmitry Morozovsky wrote:\n\n> On Sun, 18 Feb 2001, Dmitry Morozovsky wrote:\n> \n> DM> I just done the experiment with increasing HZ to 1000 on my own machine\n> DM> (PII 374). Your test program reports 2 ms instead of 20. The other side\n> DM> of increasing HZ is surely more overhead to scheduler system. Anyway, it's\n> DM> a bit of data to dig into, I suppose ;-)\n> DM> \n> DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM\n> DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)\n\nIs this unmodified pgbench or has it Hiroshi tweaked behaviour of \nconnecting each client to its own database, so that locking and such \ndoes not shade the possible benefits (was it about 15% ?) 
of delay>1\n\nalso, IIRC Tom suggested running with at least -B 1024 if you can.\n\n-----------------\nHannu\n\n", "msg_date": "Fri, 23 Feb 2001 13:09:37 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance" }, { "msg_contents": "On Fri, 23 Feb 2001, Hannu Krosing wrote:\n\nHK> > DM> I just done the experiment with increasing HZ to 1000 on my own machine\nHK> > DM> (PII 374). Your test program reports 2 ms instead of 20. The other side\nHK> > DM> of increasing HZ is surely more overhead to scheduler system. Anyway, it's\nHK> > DM> a bit of data to dig into, I suppose ;-)\nHK> > DM> \nHK> > DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM\nHK> > DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)\nHK> \nHK> Is this unmodified pgbench or has it Hiroshi tweaked behaviour of \nHK> connecting each client to its own database, so that locking and such \nHK> does not shade the possible benefits (was it about 15% ?) of delay>1\n\nHK> also, IIRC Tom suggested running with at least -B 1024 if you can.\n\nIt was the original pgbench. Maybe, during this weekend I'll make a new kernel\nwith big SHM table and try to test with larger -B (for now, -B 256 is the\nmost I can set)\n\nSincerely,\nD.Marck [DM5020, DM268-RIPE, DM3-RIPN]\n------------------------------------------------------------------------\n*** Dmitry Morozovsky --- D.Marck --- Wild Woozle --- marck@rinet.ru ***\n------------------------------------------------------------------------\n\n", "msg_date": "Fri, 23 Feb 2001 14:56:41 +0300 (MSK)", "msg_from": "Dmitry Morozovsky <marck@rinet.ru>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance" }, { "msg_contents": "On Fri, Feb 23, 2001 at 01:09:37PM +0200, Hannu Krosing wrote:\n> Dmitry Morozovsky wrote:\n> \n> > DM> I just done the experiment with increasing HZ to 1000 on my own machine\n> > DM> (PII 374). 
Your test program reports 2 ms instead of 20. The other side\n> > DM> of increasing HZ is surely more overhead to scheduler system. Anyway, it's\n> > DM> a bit of data to dig into, I suppose ;-)\n> > DM> \n> > DM> Results for pgbench with 7.1b4: (BTW, machine is FreeBSD 4-stable on IBM\n> > DM> DTLA IDE in ATA66 mode with tag queueing and soft updates turned on)\n> \n> Is this unmodified pgbench or has it Hiroshi tweaked behaviour of \n> connecting each client to its own database, so that locking and such \n> does not shade the possible benefits (was it about 15% ?) of delay>1\n> \n> also, IIRC Tom suggested running with at least -B 1024 if you can.\n\nJust try this:\nexplain select * from <tablename> where <fieldname>=<any_value>\n(Use an indexed field for fieldname).\n\nIf postgres is using a sequential scan instead of an index scan, you have\nto vacuum your database. This will REALLY remove deleted data from your indexes.\n\nHope it will work,\n\nDave Mertens\nSystem Administrator ISM, Netherlands\n", "msg_date": "Fri, 23 Feb 2001 12:21:32 +0000", "msg_from": "Dave Mertens <dave@redbull.zyprexia.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance" }, { "msg_contents": "Hannu Krosing <hannu@tm.ee> writes:\n> Is this unmodified pgbench or has it Hiroshi tweaked behaviour of \n> connecting each client to its own database, so that locking and such \n> does not shade the possible benefits (was it about 15% ?) of delay>1\n\nI didn't much like that approach to altering the test, since it also\nmeans that all the clients are working with separate tables and hence\nnot able to share read I/O; that doesn't seem like it's the same\nbenchmark at all. 
What would make more sense to me is to increase the\nnumber of rows in the branches table.\n\nRight now, at the default \"scale factor\" of 1, pgbench makes tables of\nthese sizes:\n\naccounts\t100000\nbranches\t1\nhistory\t\t0\t\t(filled during test)\ntellers\t\t10\n\nIt seems to me that the branches table should have at least 10 to 100\nentries, and tellers about 10 times whatever branches is. 100000\naccounts rows seems enough though.\n\nMaking such a change would render results not comparable with the prior\npgbench, but that would be true with Hiroshi's change too.\n\nAlternatively we could just say that we won't believe any numbers taken\nat scale factors less than, say, 10, but I doubt we really need\nmillion-row accounts tables in order to learn anything...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 10:53:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "> I didn't much like that approach to altering the test, since it also\n> means that all the clients are working with separate tables and hence\n> not able to share read I/O; that doesn't seem like it's the same\n> benchmark at all. What would make more sense to me is to increase the\n> number of rows in the branches table.\n> \n> Right now, at the default \"scale factor\" of 1, pgbench makes tables of\n> these sizes:\n> \n> accounts\t100000\n> branches\t1\n> history\t\t0\t\t(filled during test)\n> tellers\t\t10\n> \n> It seems to me that the branches table should have at least 10 to 100\n> entries, and tellers about 10 times whatever branches is. 100000\n> accounts rows seems enough though.\n\nThose numbers are defined in the TPC-B spec. 
But pgbench is not an\nofficial test tool anyway, so you could modify it if you like.\nThat is the benefit of the open source:-)\n--\nTatsuo Ishii\n", "msg_date": "Sat, 24 Feb 2001 01:13:32 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n>> It seems to me that the branches table should have at least 10 to 100\n>> entries, and tellers about 10 times whatever branches is. 100000\n>> accounts rows seems enough though.\n\n> Those numbers are defined in the TPC-B spec.\n\nAh. And of course, the TPC bunch never thought anyone would be\ninterested in the results with scale factors so tiny as one ;-),\nso they didn't see any problem with it.\n\nOkay, plan B then: let's ask people to redo their benchmarks with\n-s bigger than one. Now, how much bigger?\n\nTo the extent that you think this is a model of a real bank, it should\nbe obvious that the number of concurrent transactions cannot exceed the\nnumber of tellers; there should never be any write contention on a\nteller's table row, because only that teller (client) should be issuing\ntransactions against it. 
Contention on a branch's row is realistic,\nbut not from more clients than there are tellers in the branch.\n\nAs a rule of thumb, then, we could say that the benchmark's results are\nnot to be believed for numbers of clients exceeding perhaps 5 times the\nscale factor, ie, half the number of teller rows (so that it's not too\nlikely we will have contention on a teller row).\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 11:42:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> Hannu Krosing <hannu@tm.ee> writes:\n> > Is this unmodified pgbench or has it Hiroshi tweaked behaviour of \n> > connecting each client to its own database, so that locking and such \n> > does not shade the possible benefits (was it about 15% ?) of delay>1\n> \n> I didn't much like that approach to altering the test, since it also\n> means that all the clients are working with separate tables and hence\n> not able to share read I/O; that doesn't seem like it's the same\n> benchmark at all.\n\nI agree with you at this point. Generally speaking the benchmark\nhas little meaning if it has no conflicts in the test case. I only\nborrowed pgbench's source code to implement my test cases.\nNote that there's only one database in my last test case. My\nmodified \"pgbench\" isn't a pgbench any more and I didn't intend\nto change pgbench's spec like that. Probably it was my mistake\nthat I had posted my test cases using the form of patch. 
My\nintention was to clarify the difference of my test cases.\nHowever heavy conflicts with scaling factor 1 don't seem\npreferable at least as the default of pgbench.\n\nRegards,\nHiroshi Inoue \n", "msg_date": "Sat, 24 Feb 2001 06:38:27 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: [HACKERS] Re: v7.1b4 bad performance " }, { "msg_contents": "> Okay, plan B then: let's ask people to redo their benchmarks with\n> -s bigger than one. Now, how much bigger?\n> \n> To the extent that you think this is a model of a real bank, it should\n> be obvious that the number of concurrent transactions cannot exceed the\n> number of tellers; there should never be any write contention on a\n> teller's table row, because only that teller (client) should be issuing\n> transactions against it. Contention on a branch's row is realistic,\n> but not from more clients than there are tellers in the branch.\n> \n> As a rule of thumb, then, we could say that the benchmark's results are\n> not to be believed for numbers of clients exceeding perhaps 5 times the\n> scale factor, ie, half the number of teller rows (so that it's not too\n> likely we will have contention on a teller row).\n\nAt least -s 5 seems reasonable for me too. Maybe we should make it\nthe default setting for pgbench?\n--\nTatsuo Ishii\n", "msg_date": "Sat, 24 Feb 2001 11:38:13 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: v7.1b4 bad performance " } ]
[ { "msg_contents": "The html docs should once again be generated automatically on\npostgresql.org on a twice-daily basis. Thanks to Peter E for working me\nthrough the toolset and configuration changes...\n\n - Thomas\n", "msg_date": "Sat, 17 Feb 2001 14:58:12 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": true, "msg_subject": "Docs generation fixed" } ]
[ { "msg_contents": "Ok, after Tatsuo and Peter have both said that building without locale\nsupport should not use the locale support in the OS, and remembering my\n6.5.3 experience of a year back, I decided to test it out completely. \nAnd I am wrong with respect to 7.1beta4.\n\nFor 7.1beta4 disabling locale will indeed work properly, at least on\nRedHat 6.2.\n\nTesting methodology:\n1.)\tBlow out entire PGDATA tree;\n2.)\tInitdb with locale-enabled backend;\n3.)\tRun regression with locale-enable binaries (locale=en_US);\n4.)\tRebuild without --enable-locale;\n5.)\tBlow out entire PGDATA tree;\n6.)\tInitdb with non-locale backend;\n7.)\tRun regression with non-locale binaries.\n\nResults:\nFor --enable-locale RPM's, pg_regress --schedule=parallel_schedule\nproduces:\nparallel group (13 tests): boolean char name varchar int4 int2 oid\nfloat4 float\n8 text bit int8 numeric\n boolean ... ok\n char ... ok\n name ... ok\n varchar ... ok\n text ... ok\n int2 ... ok\n int4 ... ok\n int8 ... FAILED\n oid ... ok\n float4 ... ok\n float8 ... ok\n bit ... ok\n numeric ... FAILED\ntest strings ... ok\ntest numerology ... ok\nparallel group (18 tests): point lseg box path polygon circle comments\nreltime\ndate abstime interval time inet type_sanity tinterval timestamp oidjoins\nopr_san\nity\n point ... ok\n lseg ... ok\n box ... ok\n path ... ok\n polygon ... ok\n circle ... ok\n date ... ok\n time ... ok\n timestamp ... ok\n interval ... ok\n abstime ... ok\n reltime ... ok\n tinterval ... ok\n inet ... ok\n comments ... ok\n oidjoins ... ok\n type_sanity ... ok\n opr_sanity ... ok\ntest geometry ... ok\ntest horology ... ok\ntest create_function_1 ... ok\ntest create_type ... ok\ntest create_table ... ok\ntest create_function_2 ... ok\ntest copy ... ok\nparallel group (7 tests): create_aggregate create_operator triggers\ninherit con\nstraints create_misc create_index\n constraints ... ok\n triggers ... ok\n create_misc ... ok\n create_aggregate ... ok\n create_operator ... 
ok\n create_index ... ok\n inherit ... ok\ntest create_view ... ok\ntest sanity_check ... ok\ntest errors ... ok\ntest select ... ok\nparallel group (16 tests): select_into select_distinct_on\nselect_distinct selec\nt_having select_implicit subselect transactions union case random arrays\naggrega\ntes join portals hash_index btree_index\n select_into ... ok\n select_distinct ... ok\n select_distinct_on ... ok\n select_implicit ... FAILED\n select_having ... FAILED\n subselect ... ok\n union ... ok\n case ... ok\n join ... ok\n aggregates ... ok\n transactions ... ok\n random ... failed (ignored)\n portals ... ok\n arrays ... ok\n btree_index ... ok\n hash_index ... ok\ntest misc ... ok\nparallel group (5 tests): portals_p2 alter_table rules foreign_key\nselect_views\n select_views ... FAILED\n alter_table ... ok\n portals_p2 ... ok\n rules ... ok\n foreign_key ... ok\nparallel group (3 tests): limit temp plpgsql\n limit ... ok\n plpgsql ... ok\n temp ... ok\n----\n\nWith locale disabled:\nAll 76 tests passed.\n\nSo, there's the data. This is different behavior from the 6.5.3\nnon-locale set I produced a year ago. Is there interest in a non-locale\nRPM distribution, or? The locale enabled regression results fail due to\ncurrency format and collation errors. Diffs attached. I'm not sure I\nunderstand the select_views failure, either. Locale used was en_US.\n\nComments?\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11", "msg_date": "Sat, 17 Feb 2001 11:15:23 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Non-locale 7.1beta4 binaries on RedHat 6.2 test results." }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> The locale enabled regression results fail due to\n> currency format and collation errors. Diffs attached. I'm not sure I\n> understand the select_views failure, either. 
Locale used was en_US.\n\nThe select_views delta looks like a sort-order issue as well; nothing\nto worry about.\n\nThese deltas would go away if you allowed pg_regress to build a temp\ninstallation in which it could force the locale to C. Of course,\nthat doesn't presently work without a built source tree to install\nfrom. I wonder if it is worth adding a third operating mode to\npg_regress that would build a temp PGDATA directory but use the\nalready-installed bin/lib/share directories ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 11:40:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Non-locale 7.1beta4 binaries on RedHat 6.2 test results." }, { "msg_contents": "Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > The locale enabled regression results fail due to\n> > currency format and collation errors. Diffs attached. I'm not sure I\n> > understand the select_views failure, either. Locale used was en_US.\n \n> The select_views delta looks like a sort-order issue as well; nothing\n> to worry about.\n\nGood. I didn't see any difference -- but maybe that's because I went\ncross-eyed.... :-)\n \n> These deltas would go away if you allowed pg_regress to build a temp\n> installation in which it could force the locale to C. Of course,\n> that doesn't presently work without a built source tree to install\n> from. I wonder if it is worth adding a third operating mode to\n\nPossibly. If pg_regress uses a different port for postmaster, AND a\ndifferent PGDATA, you could run regression on a sandbox while a\nproduction system was running, FWIW. 
Since that's more of an RPM issue\nthan a core issue, I can do that third mode work, as I would be the\ndirect benefactor (unless someone else does it first, of course).\n\nBoth the locale and non-locale installation were from RPM, BTW, as I\nwanted the least number of variables possible.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 17 Feb 2001 11:53:29 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": true, "msg_subject": "Re: Non-locale 7.1beta4 binaries on RedHat 6.2 test results." } ]
[ { "msg_contents": "A comment on microsecond delays using select(). Most Unix kernels run\nat 100hz, meaning that they have a programmable timer that interrupts\nthe CPU every 10 milliseconds. The kernel only gets to control the cpu\nduring those tick interrupts or if a user application makes a kernel\ncall.\n\nTherefore, it is no surprise that most Unix kernels can't do 5\nmicrosecond sleeps. The only way they could do it would be to reprogram\nthe timer interrupt if a wakeup was going to occur in less than 10\nmilliseconds. I doubt many kernels do that because I don't think timer\ninterrupt programming is a very quick or accurate operation. Also,\nreprogramming it would make the actual 100hz timer unreliable.\n\nNow, kernels could check on return from kernel to user code to see if\nsomeone is ready to be woken up, but I doubt they do that either. \nLooking at the BSDI kernel, all timeouts are expressed in ticks, which\nare 10 milliseconds. Obviously there is no checking during kernel call\nreturns because they don't even store the sleeps with enough resolution\nto perform a check. In fact, the kernel doesn't even have a way\nto measure microsecond timings.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 11:41:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Microsecond sleeps with select()" }, { "msg_contents": "Bruce Momjian wrote:\n> In fact, the kernel doesn't even contain have a way\n> to measure microsecond timings.\n\nLinux has patches available to do microsecond timings, but they're\nnonportable, of course.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Sat, 17 Feb 2001 11:55:03 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: Microsecond sleeps with select()" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> A comment on microsecond delays using select(). Most Unix kernels run\n> at 100hz, meaning that they have a programmable timer that interrupts\n> the CPU every 10 milliseconds.\n\nRight --- this probably also explains my observation that some kernels\nseem to add an extra 10msec to the requested sleep time. Actually\nthey're interpreting a one-clock-tick select() delay as \"wait till\nthe next clock tick, plus one tick\". The actual delay will be between\none and two ticks depending on just when you went to sleep.\n\nI have been thinking some more about the s_lock() delay loop in\nconnection with this. We currently have\n\n/*\n * Each time we busy spin we select the next element of this array as the\n * number of microseconds to wait. 
This accomplishes pseudo random back-off.\n * Values are not critical but 10 milliseconds is a common platform\n * granularity.\n *\n * Total time to cycle through all 20 entries might be about .07 sec,\n * so the given value of S_MAX_BUSY results in timeout after ~70 sec.\n */\n#define S_NSPINCYCLE\t20\n#define S_MAX_BUSY\t1000 * S_NSPINCYCLE\n\nint\ts_spincycle[S_NSPINCYCLE] =\n{\t0, 0, 0, 0, 10000, 0, 0, 0, 10000, 0,\n\t0, 10000, 0, 0, 10000, 0, 10000, 0, 10000, 10000\n};\n\nHaving read the select(2) man page more closely, I now realize that\nit is *defined* not to yield the processor when the requested delay\nis zero: it just checks the file ready status and returns immediately.\n\nTherefore, on a single-CPU machine, the zero entries in this array\nare a gold-plated, guaranteed waste of time: we'll cycle through the\nkernel call, go back and retest the spinlock, and find it still locked.\n\nOn a multi-CPU machine, the time wasted in the kernel call might\npossibly be enough to allow the lock holder (if running on another\nCPU) to finish its work and release the lock. But it's more likely\nthat we're just wasting CPU.\n\nOn either single or multi CPUs, there is no \"pseudo random backoff\"\nbehavior here whatsoever. Spinning through the zero entries will take\nsome small number of microseconds, and then we hit the one-clock-tick\nselect() delay. In reality, every backend waiting for a spinlock will\nbe awoken on every clock tick.\n\nIf your kernel is one of the ones that interprets a one-tick delay\nrequest as \"rest of the current tick plus a tick\", then all the\ndelays are actually two ticks. In this case, on average half the\npopulation of waiting backends will be awoken on each alternate tick.\nA little better but not much.\n\nIn short: s_spincycle in its current form does not do anything anywhere\nnear what the author thought it would. It's wasted complexity.\n\nI am thinking about simplifying s_lock_sleep down to simple\nwait-one-tick-on-every-call logic. 
An alternative is to keep\ns_spincycle, but populate it with, say, 10000, 20000 and larger entries,\nwhich would offer some hope of actual random-backoff behavior.\nEither change would clearly be a win on single-CPU machines, and I doubt\nit would hurt on multi-CPU machines.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 12:26:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Microsecond sleeps with select() " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > A comment on microsecond delays using select(). Most Unix kernels run\n> > at 100hz, meaning that they have a programmable timer that interrupts\n> > the CPU every 10 milliseconds.\n> \n> Right --- this probably also explains my observation that some kernels\n> seem to add an extra 10msec to the requested sleep time. Actually\n> they're interpreting a one-clock-tick select() delay as \"wait till\n> the next clock tick, plus one tick\". The actual delay will be between\n> one and two ticks depending on just when you went to sleep.\n> \n\nThe BSDI code would be pselect():\n\n\t\t/*\n\t\t * If poll wait was tiny, this could be zero; we will\n\t\t * have to round it up to avoid sleeping forever. If\n\t\t * we retry below, the timercmp above will get us out.\n\t\t * Note that if wait was 0, the timercmp will prevent\n\t\t * us from getting here the first time.\n\t\t */\n\t\ttimo = hzto(&atv);\n\t\tif (timo == 0)\n\t\t\ttimo = 1;\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 13:08:53 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Microsecond sleeps with select()" }, { "msg_contents": "> I have been thinking some more about the s_lock() delay loop in\n> connection with this. We currently have\n> \n> /*\n> * Each time we busy spin we select the next element of this array as the\n> * number of microseconds to wait. This accomplishes pseudo random back-off.\n> * Values are not critical but 10 milliseconds is a common platform\n> * granularity.\n> *\n> * Total time to cycle through all 20 entries might be about .07 sec,\n> * so the given value of S_MAX_BUSY results in timeout after ~70 sec.\n> */\n> #define S_NSPINCYCLE\t20\n> #define S_MAX_BUSY\t1000 * S_NSPINCYCLE\n> \n> int\ts_spincycle[S_NSPINCYCLE] =\n> {\t0, 0, 0, 0, 10000, 0, 0, 0, 10000, 0,\n> \t0, 10000, 0, 0, 10000, 0, 10000, 0, 10000, 10000\n> };\n> \n> Having read the select(2) man page more closely, I now realize that\n> it is *defined* not to yield the processor when the requested delay\n> is zero: it just checks the file ready status and returns immediately.\n\nActually, a kernel call is something. On kernel call return, process\npriorities are checked and the CPU may be yielded to a higher-priority\nbackend that perhaps just had its I/O completed.\n\nI think the 0 and 10000 are correct. They would be zero ticks and one\ntick. You think 5000 and 10000 would be better? I can see that.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 13:12:34 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Microsecond sleeps with select()" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> Having read the select(2) man page more closely, I now realize that\n>> it is *defined* not to yield the processor when the requested delay\n>> is zero: it just checks the file ready status and returns immediately.\n\n> Actually, a kernel call is something. On kernel call return, process\n> priorities are checked and the CPU may be yielded to a higher-priority\n> backend that perhaps just had its I/O completed.\n\nSo *if* some I/O just completed, the call *might* do what we need,\nwhich is yield the CPU. Otherwise we're just wasting cycles, and\nwill continue to waste them until we do a select with a nonzero\ndelay. I propose we cut out the spinning and just do a nonzero delay\nimmediately.\n\n> I think the 0 and 10000 are correct. They would be zero ticks and one\n> tick. You think 5000 and 10000 would be better? I can see that.\n\nNo, I am not suggesting that, because there is no difference between\n5000 and 10000.\n\nAll of this stuff probably ought to be replaced with a less-bogus\nmechanism (POSIX semaphores maybe?), but not in late beta.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 13:30:31 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Microsecond sleeps with select() " }, { "msg_contents": "> So *if* some I/O just completed, the call *might* do what we need,\n> which is yield the CPU. Otherwise we're just wasting cycles, and\n> will continue to waste them until we do a select with a nonzero\n> delay. I propose we cut out the spinning and just do a nonzero delay\n> immediately.\n\nWell, any backend with a higher piority would get run over the current\nprocess. The question is how would that happen. 
I will say that\nbecause of CPU cache issues, the system tries _not_ to change processes\nif the current one still needs the CPU, so the zero may be bogus.\n\n> \n> > I think the 0 and 10000 are correct. They would be zero ticks and one\n> > tick. You think 5000 and 10000 would be better? I can see that.\n> \n> No, I am not suggesting that, because there is no difference between\n> 5000 and 10000.\n> \n> All of this stuff probably ought to be replaced with a less-bogus\n> mechanism (POSIX semaphores maybe?), but not in late beta.\n\nGood question. We have sched_yield, that is a threads function, or at\nleast only in the pthreads library.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 13:36:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Microsecond sleeps with select()" }, { "msg_contents": "On Sat, Feb 17, 2001 at 12:26:31PM -0500, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > A comment on microsecond delays using select(). Most Unix kernels run\n> > at 100hz, meaning that they have a programmable timer that interrupts\n> > the CPU every 10 milliseconds.\n> \n> Right --- this probably also explains my observation that some kernels\n> seem to add an extra 10msec to the requested sleep time. Actually\n> they're interpreting a one-clock-tick select() delay as \"wait till\n> the next clock tick, plus one tick\". The actual delay will be between\n> one and two ticks depending on just when you went to sleep.\n> ...\n> In short: s_spincycle in its current form does not do anything anywhere\n> near what the author thought it would. It's wasted complexity.\n> \n> I am thinking about simplifying s_lock_sleep down to simple\n> wait-one-tick-on-every-call logic. 
An alternative is to keep\n> s_spincycle, but populate it with, say, 10000, 20000 and larger entries,\n> which would offer some hope of actual random-backoff behavior.\n> Either change would clearly be a win on single-CPU machines, and I doubt\n> it would hurt on multi-CPU machines.\n> \n> Comments?\n\nI don't believe that most kernels schedule only on clock ticks.\nThey schedule on a clock tick *or* whenever the process yields, \nwhich on a loaded system may be much more frequently.\n\nThe question is whether, scheduling, the kernel considers processes\nthat have requested to sleep less than a clock tick as \"ready\" once\ntheir actual request time expires. On V7 Unix, the answer was no, \nbecause the kernel had no way to measure any time shorter than a\ntick, so it rounded up all sleeps to \"the next tick\".\n\nCertainly there are machines and kernels that count time more precisely \n(isn't PG ported to QNX?). We do users of such kernels no favors by \npretending they only count clock ticks. Furthermore, a 1ms clock\ntick is pretty common, e.g. on Alpha boxes. A 10ms initial delay is \nten clock ticks, far longer than seems appropriate.\n\nThis argues for yielding the minimum discernable amount of time (1us)\nand then backing off to a less-minimal time (1ms). On systems that \nchug at 10ms, this is equivalent to a sleep of up-to-10ms (i.e. until \nthe next tick), then a sequence of 10ms sleeps; on dumbOS Alphas, it's \nequivalent to a sequence of 1ms sleeps; and on a smartOS on an Alpha it's \nequivalent to a short, variable time (long enough for other runnable \nprocesses to run and yield) followed by a sequence of 1ms sleeps. 
\n(Some of the numbers above are doubled on really dumb kernels, as\nTom noted.)\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Sat, 17 Feb 2001 15:35:15 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Microsecond sleeps with select()" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> Certainly there are machines and kernels that count time more precisely \n> (isn't PG ported to QNX?). We do users of such kernels no favors by \n> pretending they only count clock ticks. Furthermore, a 1ms clock\n> tick is pretty common, e.g. on Alpha boxes.\n\nOkay, I didn't know there were any popular systems that did that.\n\n> This argues for yielding the minimum discernable amount of time (1us)\n> and then backing off to a less-minimal time (1ms).\n\nFair enough. As you say, it's the same result on machines with coarse\ntime resolution, and it should help on smarter boxes. The main thing\nis that I want to change the zero entries in s_spincycle, which\nclearly aren't doing what the author intended.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 19:28:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Microsecond sleeps with select() " } ]
[ { "msg_contents": "I want to give some background on commit_delay, its initial purpose, and\npossible options.\n\nFirst, looking at the process that happens during a commit:\n\n\twrite() - copy WAL dirty page to kernel disk buffer\n\tfsync() - force WAL kernel disk buffer to disk platter\n\nfsync() takes much longer than write().\n\nWhat Vadim doesn't want is:\n\ntime\tbackend 1\tbackend 2\n----\t---------\t---------\n0\twrite()\t\t\n1\tfsync()\t\twrite()\n2\t\t\tfsync()\n\nThis would be better as:\n\ntime\tbackend 1\tbackend 2\n----\t---------\t---------\n0\twrite()\t\t\n1\t\t\twrite()\n2\tfsync()\t\tfsync()\n\nThis was the purpose of the commit_delay. Having two fsync()'s is not a\nproblem because only one will see there are dirty buffers. The other\nwill probably either return right away, or wait for the other's fsync()\nto complete.\n\nWith the delay, it looks like:\n\ntime\tbackend 1\tbackend 2\n----\t---------\t---------\n0\twrite()\t\t\n1\tsleep()\t\twrite()\n2\tfsync()\t\tsleep()\n3\t\t\tfsync()\n\nWhich shows the second fsync() doing nothing, which is good, because\nthere are no dirty buffers at that time. However, a very possible\ncircumstance is:\n\ntime\tbackend 1\tbackend 2\tbackend 3\n----\t---------\t---------\t---------\n0\twrite()\t\t\n1\tsleep()\t\twrite()\t\t\n2\tfsync()\t\tsleep()\t\twrite()\n3\t\t\tfsync()\t\tsleep()\n4\t\t\t\t\tfsync()\n\nIn this case, the fsync() by backend 2 does indeed do some work because\nfsync's backend 3's write(). Frankly, I don't see how the sleep does\nmuch except delay things because it doesn't have any smarts about when\nthe delay is useful, and when it is useless. Without that feedback, I\nrecommend removing the entire setting. 
For single backends, the sleep\nis clearly a loser.\n\nAnother situation it can not deal with is:\n\ntime\tbackend 1\tbackend 2\n----\t---------\t---------\n0\twrite()\t\t\n1\tsleep()\t\t\n2\tfsync()\t\twrite()\n3\t\t\tsleep()\n4\t\t\tfsync()\n\nMy solution can't deal with this either.\n\n---------------------------------------------------------------------------\n\nThe quick fix is to remove the commit_delay code. A more elaborate\nperformance boost would be to have each backend get feedback from\nother backends, so they can block and wait for other about-to-fsync\nbackends before fsync(). This allows the write() to bunch up before \nthe fsync().\n\nHere is the single backend case, which experiences no delays:\n\ntime\tbackend 1\tbackend 2\n----\t---------\t---------\n0\tget_shlock()\n1\twrite()\t\t\n2\trel_shlock()\n3\tget_exlock()\n4\trel_exlock()\n5\tfsync()\n\nHere is the two-backend case, which shows both write()'s completing\nbefore the fsync()'s:\n\ntime\tbackend 1\tbackend 2\n----\t---------\t---------\n0\tget_shlock()\n1\twrite()\t\t\n2\trel_shlock()\tget_shlock()\n3\tget_exlock()\twrite()\n4\t\t\trel_shlock()\n5\trel_exlock()\t\n6\tfsync()\t\tget_exlock()\n7\t\t\trel_exlock()\n8\t\t\tfsync()\n\nContrast that with the first 2 backend case presented above:\n\ntime\tbackend 1\tbackend 2\n----\t---------\t---------\n0\twrite()\t\t\n1\tfsync()\t\twrite()\n2\t\t\tfsync()\n\nNow, it is my understanding that instead of just shared locking around\nthe write()'s, we could block the entire commit code, so the backend can\nsignal to other about-to-fsync backends to wait.\n\nI believe our existing lock code can be used for the locking/unlocking. \nWe can just lock a random, unused table like pg_log or something.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 13:05:53 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "WAL and commit_delay" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> With the delay, it looks like:\n\n> time\tbackend 1\tbackend 2\n> ----\t---------\t---------\n> 0\twrite()\t\t\n> 1\tsleep()\t\twrite()\n> 2\tfsync()\t\tsleep()\n> 3\t\t\tfsync()\n\nActually ... take a close look at the code. The delay is done in\nxact.c between XLogInsert(commitrecord) and XLogFlush(). As near\nas I can tell, both the write() and the fsync() will happen in\nXLogFlush(). This means the delay is just plain broken: placed\nthere, it cannot do anything except waste time.\n\nAnother thing I am wondering about is why we're not using fdatasync(),\nwhere available, instead of fsync(). The whole point of preallocating\nthe WAL files is to make fdatasync safe, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 13:46:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay " }, { "msg_contents": "I wrote:\n> Actually ... take a close look at the code. The delay is done in\n> xact.c between XLogInsert(commitrecord) and XLogFlush(). As near\n> as I can tell, both the write() and the fsync() will happen in\n> XLogFlush(). This means the delay is just plain broken: placed\n> there, it cannot do anything except waste time.\n\nUh ... scratch that ... nevermind. The point is that we've inserted\nour commit record into the WAL output buffer. Now we are sleeping\nin the hope that some other backend will do both the write and the\nfsync for us, and that when we eventually call XLogFlush() it will find\nnothing to do. So the delay is not in the wrong place.\n\n> Another thing I am wondering about is why we're not using fdatasync(),\n> where available, instead of fsync(). 
The whole point of preallocating\n> the WAL files is to make fdatasync safe, no?\n\nThis still looks like it'd be a win, by reducing the number of seeks\nneeded to complete a WAL logfile flush. Right now, each XLogFlush\nrequires writing both the file's data area and its inode.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 13:55:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay " }, { "msg_contents": "> Actually ... take a close look at the code. The delay is done in\n> xact.c between XLogInsert(commitrecord) and XLogFlush(). As near\n> as I can tell, both the write() and the fsync() will happen in\n> XLogFlush(). This means the delay is just plain broken: placed\n> there, it cannot do anything except waste time.\n\nI see. :-(\n\n> Another thing I am wondering about is why we're not using fdatasync(),\n> where available, instead of fsync(). The whole point of preallocating\n> the WAL files is to make fdatasync safe, no?\n\nI don't have fdatasync() here. How does it compare to fsync().\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 14:05:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL and commit_delay" }, { "msg_contents": "> > Another thing I am wondering about is why we're not using fdatasync(),\n> > where available, instead of fsync(). The whole point of preallocating\n> > the WAL files is to make fdatasync safe, no?\n> \n> This still looks like it'd be a win, by reducing the number of seeks\n> needed to complete a WAL logfile flush. Right now, each XLogFlush\n> requires writing both the file's data area and its inode.\n\nDon't we have to fsync the inode too? 
Actually, I was hoping sequential\nfsync's could sit on the WAL disk track, but I can imagine it has to\nseek around to hit both areas.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 14:07:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL and commit_delay" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Another thing I am wondering about is why we're not using fdatasync(),\n> where available, instead of fsync(). The whole point of preallocating\n> the WAL files is to make fdatasync safe, no?\n\n> Don't we have to fsync the inode too? Actually, I was hoping sequential\n> fsync's could sit on the WAL disk track, but I can imagine it has to\n> seek around to hit both areas.\n\nThat's the point: we're trying to get things set up so that successive\nwrites/fsyncs in the WAL file do the minimum amount of seeking. The WAL\ncode tries to preallocate the whole log file (incorrectly, but that's\neasily fixed, see below) so that we should not need to update the file\nmetadata when we write into the file.\n\n> I don't have fdatasync() here. How does it compare to fsync().\n\nHPUX's man page says\n\n: fdatasync() causes all modified data and file attributes of fildes\n: required to retrieve the data to be written to disk.\n\n: fsync() causes all modified data and all file attributes of fildes\n: (including access time, modification time and status change time) to\n: be written to disk.\n\nThe implication is that the only thing you can lose after fdatasync is\nthe highly-inessential file mod time. However, I have been told that\non some implementations, fdatasync only flushes data blocks, and never\nwrites the inode or indirect blocks. 
That would mean that if you had\nallocated new disk space to the file, fdatasync would not guarantee\nthat that allocation was reflected on disk. This is the reason for\npreallocating the WAL log file (and doing a full fsync *at that time*).\nThen you know the inode block pointers and indirect blocks are down\non disk, and so fdatasync is sufficient even if you have the cheesy\nversion of fdatasync.\n\nRight now the WAL preallocation code (XLogFileInit) is not good enough\nbecause it does lseek to the 16MB position and then writes 1 byte there.\nOn an implementation that supports holes in files (which is most Unixen)\nthat doesn't cause physical allocation of the intervening space. We'd\nhave to actually write zeroes into all 16MB to ensure the space is\nallocated ... but that's just a couple more lines of code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 14:44:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay " }, { "msg_contents": "> Right now the WAL preallocation code (XLogFileInit) is not good enough\n> because it does lseek to the 16MB position and then writes 1 byte there.\n> On an implementation that supports holes in files (which is most Unixen)\n> that doesn't cause physical allocation of the intervening space. We'd\n> have to actually write zeroes into all 16MB to ensure the space is\n> allocated ... but that's just a couple more lines of code.\n\nAre OS's smart enough to not allocate zero-written blocks? Do we need\nto write non-zeros?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 15:45:30 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL and commit_delay" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010217 14:46]:\n> > Right now the WAL preallocation code (XLogFileInit) is not good enough\n> > because it does lseek to the 16MB position and then writes 1 byte there.\n> > On an implementation that supports holes in files (which is most Unixen)\n> > that doesn't cause physical allocation of the intervening space. We'd\n> > have to actually write zeroes into all 16MB to ensure the space is\n> > allocated ... but that's just a couple more lines of code.\n> \n> Are OS's smart enough to not allocate zero-written blocks? Do we need\n> to write non-zeros?\nI don't believe so. writing Zeros is valid. \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 17 Feb 2001 14:48:13 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay" }, { "msg_contents": "> * Bruce Momjian <pgman@candle.pha.pa.us> [010217 14:46]:\n> > > Right now the WAL preallocation code (XLogFileInit) is not good enough\n> > > because it does lseek to the 16MB position and then writes 1 byte there.\n> > > On an implementation that supports holes in files (which is most Unixen)\n> > > that doesn't cause physical allocation of the intervening space. We'd\n> > > have to actually write zeroes into all 16MB to ensure the space is\n> > > allocated ... 
but that's just a couple more lines of code.\n> > \n> > Are OS's smart enough to not allocate zero-written blocks? Do we need\n> > to write non-zeros?\n> I don't believe so. writing Zeros is valid. \n\nThe reason I ask is because I know you get zeros when trying to read\ndata from a file with holes, so it seems some OS could actually drop\nthose blocks from storage.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 17 Feb 2001 15:50:49 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL and commit_delay" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010217 14:50]:\n> > * Bruce Momjian <pgman@candle.pha.pa.us> [010217 14:46]:\n> > > > Right now the WAL preallocation code (XLogFileInit) is not good enough\n> > > > because it does lseek to the 16MB position and then writes 1 byte there.\n> > > > On an implementation that supports holes in files (which is most Unixen)\n> > > > that doesn't cause physical allocation of the intervening space. We'd\n> > > > have to actually write zeroes into all 16MB to ensure the space is\n> > > > allocated ... but that's just a couple more lines of code.\n> > > \n> > > Are OS's smart enough to not allocate zero-written blocks? Do we need\n> > > to write non-zeros?\n> > I don't believe so. writing Zeros is valid. \n> \n> The reason I ask is because I know you get zeros when trying to read\n> data from a file with holes, so it seems some OS could actually drop\n> those blocks from storage.\nI've written swap files and such with:\n\ndd if=/dev/zero of=SWAPFILE bs=512 count=204800\n\nand all the blocks are allocated. 
\n\nLER\n\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sat, 17 Feb 2001 14:52:20 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> I've written swap files and such with:\n> dd if=/dev/zero of=SWAPFILE bs=512 count=204800\n> and all the blocks are allocated. \n\nI've also confirmed that writing zeroes is sufficient on HPUX (du\nshows that the correct amount of space is allocated, unlike the\ncurrent seek-to-the-end method).\n\nSome poking around the net shows that pre-2.4 Linux kernels implement\nfdatasync() as fsync(), and we already knew that BSD hasn't got it\nat all. So distinguishing fdatasync from fsync won't be helpful for\nvery many people yet --- but I still think we should do it. I'm\nplaying with a test setup in which I just changed pg_fsync to call\nfdatasync instead of fsync, and on HPUX I'm seeing pgbench tps values\naround 17, as opposed to 13 yesterday. (The HPUX man page warns that\nthese calls are inefficient for large files, and I wouldn't be surprised\nif a lot of the run time is now being spent in the kernel scanning\nthrough all the buffers that belong to the logfile. 2.4 Linux is\napparently reasonably smart about this case, and only looks at the\nactually dirty buffers.)\n\nIs anyone out there running a 2.4 Linux kernel? Would you try pgbench\nwith current sources, commit_delay=0, -B at least 1024, no -F, and see\nhow the results change when pg_fsync is made to call fdatasync instead\nof fsync? 
(It's in src/backend/storage/file/fd.c)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 17:56:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay " }, { "msg_contents": "On Sat, Feb 17, 2001 at 03:45:30PM -0500, Bruce Momjian wrote:\n> > Right now the WAL preallocation code (XLogFileInit) is not good enough\n> > because it does lseek to the 16MB position and then writes 1 byte there.\n> > On an implementation that supports holes in files (which is most Unixen)\n> > that doesn't cause physical allocation of the intervening space. We'd\n> > have to actually write zeroes into all 16MB to ensure the space is\n> > allocated ... but that's just a couple more lines of code.\n> \n> Are OS's smart enough to not allocate zero-written blocks? \n\nNo, but some disks are. Writing zeroes is a bit faster on smart disks.\nThis has no real implications for PG, but it is one of the reasons that \nwriting zeroes doesn't really wipe a disk, for forensic purposes.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Sat, 17 Feb 2001 15:04:13 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay" }, { "msg_contents": "On Sat, 17 Feb 2001, Tom Lane wrote:\n\n> Another thing I am wondering about is why we're not using fdatasync(),\n> where available, instead of fsync(). The whole point of preallocating\n> the WAL files is to make fdatasync safe, no?\n\nLinux/x86 fdatasync(2) manpage:\n\nBUGS\n Currently (Linux 2.0.23) fdatasync is equivalent to fsync.\n\n\n-- \nDominic J. Eidson\n \"Baruk Khazad! Khazad ai-menu!\" - Gimli\n-------------------------------------------------------------------------------\nhttp://www.the-infinite.org/ http://www.the-infinite.org/~dominic/\n\n", "msg_date": "Sat, 17 Feb 2001 17:05:31 -0600 (CST)", "msg_from": "\"Dominic J. 
Eidson\" <sauron@the-infinite.org>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay " }, { "msg_contents": "On 17 Feb 2001 at 17:56 (-0500), Tom Lane wrote:\n\n[snipped]\n\n| Is anyone out there running a 2.4 Linux kernel? Would you try pgbench\n| with current sources, commit_delay=0, -B at least 1024, no -F, and see\n| how the results change when pg_fsync is made to call fdatasync instead\n| of fsync? (It's in src/backend/storage/file/fd.c)\n\nI've not run this requested test, but glibc-2.2 provides this bit\nof code for fdatasync, so it /appears/ to me that kernel version\nwill not affect the test case.\n\n[glibc-2.2/sysdeps/generic/fdatasync.c]\n\n int\n fdatasync (int fildes)\n {\n return fsync (fildes);\n }\n\n\nhth.\n brent\n\n-- \n\"We want to help, but we wouldn't want to deprive you of a valuable \nlearning experience.\"\n http://openbsd.org/mail.html\n", "msg_date": "Sat, 17 Feb 2001 18:30:12 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay" }, { "msg_contents": "On Sat, Feb 17, 2001 at 06:30:12PM -0500, Brent Verner wrote:\n> On 17 Feb 2001 at 17:56 (-0500), Tom Lane wrote:\n> \n> [snipped]\n> \n> | Is anyone out there running a 2.4 Linux kernel? Would you try pgbench\n> | with current sources, commit_delay=0, -B at least 1024, no -F, and see\n> | how the results change when pg_fsync is made to call fdatasync instead\n> | of fsync? 
(It's in src/backend/storage/file/fd.c)\n> \n> I've not run this requested test, but glibc-2.2 provides this bit\n> of code for fdatasync, so it /appears/ to me that kernel version\n> will not affect the test case.\n> \n> [glibc-2.2/sysdeps/generic/fdatasync.c]\n> \n> int\n> fdatasync (int fildes)\n> {\n> return fsync (fildes);\n> }\n\nIn the 2.4 kernel it says (fs/buffer.c)\n\n /* this needs further work, at the moment it is identical to fsync() */\n down(&inode->i_sem);\n err = file->f_op->fsync(file, dentry);\n up(&inode->i_sem);\n\nWe can probably expect this to be fixed in an upcoming 2.4.x, i.e.\nwell before 2.6.\n\nThis is moot, though, if you're writing to a raw volume, which\nyou will be if you are really serious. Then, fsync really is \nequivalent to fdatasync.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Sat, 17 Feb 2001 15:53:14 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: WAL and commit_delay" }, { "msg_contents": "On 17 Feb 2001 at 15:53 (-0800), Nathan Myers wrote:\n| On Sat, Feb 17, 2001 at 06:30:12PM -0500, Brent Verner wrote:\n| > On 17 Feb 2001 at 17:56 (-0500), Tom Lane wrote:\n| > \n| > [snipped]\n| > \n| > | Is anyone out there running a 2.4 Linux kernel? Would you try pgbench\n| > | with current sources, commit_delay=0, -B at least 1024, no -F, and see\n| > | how the results change when pg_fsync is made to call fdatasync instead\n| > | of fsync? 
(It's in src/backend/storage/file/fd.c)\n| > \n| > I've not run this requested test, but glibc-2.2 provides this bit\n| > of code for fdatasync, so it /appears/ to me that kernel version\n| > will not affect the test case.\n| > \n| > [glibc-2.2/sysdeps/generic/fdatasync.c]\n| > \n| > int\n| > fdatasync (int fildes)\n| > {\n| > return fsync (fildes);\n| > }\n| \n| In the 2.4 kernel it says (fs/buffer.c)\n| \n| /* this needs further work, at the moment it is identical to fsync() */\n| down(&inode->i_sem);\n| err = file->f_op->fsync(file, dentry);\n| up(&inode->i_sem);\n|\n| We can probably expect this to be fixed in an upcoming 2.4.x, i.e.\n| well before 2.6.\n\n2.4.0-ac11 already has provisions for fdatasync \n\n[fs/buffer.c]\n\n 352 asmlinkage long sys_fsync(unsigned int fd)\n 353 {\n ...\n 372 down(&inode->i_sem);\n 373 filemap_fdatasync(inode->i_mapping);\n 374 err = file->f_op->fsync(file, dentry, 0);\n 375 filemap_fdatawait(inode->i_mapping);\n 376 up(&inode->i_sem);\n\n 384 asmlinkage long sys_fdatasync(unsigned int fd)\n 385 {\n ...\n 403 down(&inode->i_sem);\n 404 filemap_fdatasync(inode->i_mapping);\n 405 err = file->f_op->fsync(file, dentry, 1);\n 406 filemap_fdatawait(inode->i_mapping);\n 407 up(&inode->i_sem);\n\next2 does use this third param of its fsync() operation to (potentially)\nbypass a call to ext2_sync_inode(inode)\n\n b\n\n", "msg_date": "Sat, 17 Feb 2001 19:10:09 -0500", "msg_from": "Brent Verner <brent@rcfile.org>", "msg_from_op": false, "msg_subject": "Re: Re: WAL and commit_delay" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> In the 2.4 kernel it says (fs/buffer.c)\n\n> /* this needs further work, at the moment it is identical to fsync() */\n> down(&inode->i_sem);\n> err = file->f_op->fsync(file, dentry);\n> up(&inode->i_sem);\n\nHmm, that's the same code that's been there since 2.0 or before.\nI had trawled the Linux kernel mail lists and found patch submissions\nfrom several different people to make fdatasync really work, 
and what\nI thought was an indication that at least one had been applied.\nEvidently not. Oh well...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 19:34:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: WAL and commit_delay " }, { "msg_contents": "On Sat, Feb 17, 2001 at 07:34:22PM -0500, Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> > In the 2.4 kernel it says (fs/buffer.c)\n> \n> > /* this needs further work, at the moment it is identical to fsync() */\n> > down(&inode->i_sem);\n> > err = file->f_op->fsync(file, dentry);\n> > up(&inode->i_sem);\n> \n> Hmm, that's the same code that's been there since 2.0 or before.\n\nIndeed. All xterms look alike, and I used one connected to the wrong box.\nHere's what's in 2.4.0:\n\nFor fsync:\n\n filemap_fdatasync(inode->i_mapping);\n err = file->f_op->fsync(file, dentry, 0);\n filemap_fdatawait(inode->i_mapping);\n\nand for fdatasync:\n\n filemap_fdatasync(inode->i_mapping);\n err = file->f_op->fsync(file, dentry, 1);\n filemap_fdatawait(inode->i_mapping);\n\n(Notice the \"1\" vs. \"0\" difference?) So the actual file system \n(ext2fs, reiserfs, etc.) has the option of equating the two, or not. \nIn fs/ext2/fsync.c, we have\n\n int ext2_fsync_inode(struct inode *inode, int datasync)\n {\n int err;\n err = fsync_inode_buffers(inode);\n if (!(inode->i_state & I_DIRTY))\n return err;\n if (datasync && !(inode->i_state & I_DIRTY_DATASYNC))\n return err;\n err |= ext2_sync_inode(inode);\n return err ? -EIO : 0;\n }\n\nI.e. yes, Linux 2.4.0 and ext2 do implement the distinction.\nSorry for the misinformation.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Sat, 17 Feb 2001 18:13:19 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: WAL and commit_delay" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> I.e. 
yes, Linux 2.4.0 and ext2 do implement the distinction.\n> Sorry for the misinformation.\n\nOkay ... meanwhile I've got to report the reverse: I've just confirmed\nthat on HPUX 10.20, there is *not* a distinction between fsync and\nfdatasync. I was misled by what was apparently an outlier result on my\nfirst try with fdatasync plugged in ... but when I couldn't reproduce\nthat, some digging led to the fact that the fsync and fdatasync symbols\nin libc are at the same place :-(.\n\nStill, using fdatasync for the WAL file seems like a forward-looking\nthing to do, and it'll just take another couple of lines of configure\ncode, so I'll go ahead and plug it in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 22:45:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: WAL and commit_delay " }, { "msg_contents": "fdatasync() is available on Tru64 and according to the man-page behaves\nas Tom expects. So it should be a win for us. What do other commercial\nunixes say?\n\nAdriaan\n", "msg_date": "Sun, 18 Feb 2001 11:50:14 +0200", "msg_from": "Adriaan Joubert <a.joubert@albourne.com>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay" }, { "msg_contents": "Adriaan Joubert <a.joubert@albourne.com> writes:\n> fdatasync() is available on Tru64 and according to the man-page behaves\n> as Tom expects. So it should be a win for us.\n\nCareful ... HPUX's man page also claims that fdatasync does something\nuseful, but it doesn't. I'd recommend an experiment. 
Does today's\nsnapshot run any faster for you (without -F) than before?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Feb 2001 11:51:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: WAL and commit_delay " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010218 10:53]:\n> Adriaan Joubert <a.joubert@albourne.com> writes:\n> > fdatasync() is available on Tru64 and according to the man-page behaves\n> > as Tom expects. So it should be a win for us.\n> \n> Careful ... HPUX's man page also claims that fdatasync does something\n> useful, but it doesn't. I'd recommend an experiment. Does today's\n> snapshot run any faster for you (without -F) than before?\nBTW, UnixWare 7.1.1 does *NOT* have fdatasync. What standard created\nthis one? \n\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Feb 2001 10:56:10 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Re: WAL and commit_delay" }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> The implication is that the only thing you can lose after fdatasync is\n> the highly-inessential file mod time. However, I have been told that\n> on some implementations, fdatasync only flushes data blocks, and never\n> writes the inode or indirect blocks. That would mean that if you had\n> allocated new disk space to the file, fdatasync would not guarantee\n> that that allocation was reflected on disk. This is the reason for\n> preallocating the WAL log file (and doing a full fsync *at that time*).\n> Then you know the inode block pointers and indirect blocks are down\n> on disk, and so fdatasync is sufficient even if you have the cheesy\n> version of fdatasync.\n\nActually, there is also a performance reason. 
Indeed, fdatasync would\nnot perform any better than fsync if the log file was not\npreallocated: the file length would change each time a record is\nappended, and therefore the inode would have to be updated.\n\n-- Jerome\n", "msg_date": "18 Feb 2001 11:59:24 -0500", "msg_from": "Jerome Vouillon <vouillon@saul.cis.upenn.edu>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> BTW, UnixWare 7.1.1 does *NOT* have fdatasync. What standard created\n> this one? \n\nHP's manpage quoth:\n\nSTANDARDS CONFORMANCE\n fsync(): AES, SVID3, XPG3, XPG4, POSIX.4\n fdatasync(): POSIX.4\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Feb 2001 12:01:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: WAL and commit_delay " }, { "msg_contents": "Jerome Vouillon <vouillon@saul.cis.upenn.edu> writes:\n> Actually, there is also a performance reason. Indeed, fdatasync would\n> not perform any better than fsync if the log file was not\n> preallocated: the file length would change each time a record is\n> appended, and therefore the inode would have to be updated.\n\nGood point, but seeking to the 16-meg position and writing one byte was\nalready sufficient to take care of that issue.\n\nI think that there may be a performance advantage to pre-filling the\nlogfile even so, assuming that file allocation info is stored in a\nBerkeley/McKusik-like fashion (note: I have no idea what ext2 or\nreiserfs actually do). Namely, we'll only sync the file's indirect\nblocks once, in the fsync() at the end of XLogFileInit. A correct\nfdatasync implementation would have to sync the last indirect block each\ntime a new filesystem block is added to the logfile, so it would end up\ndoing a lot of seeks for that purpose even if it rarely touches the\ninode itself. 
Another point is that if the logfile is pre-filled over a\nshort interval, its blocks are more likely to be allocated close to each\nother than if it grows to full size over a longer interval. Not much\npoint in avoiding seeks outside the file data if the file data itself\nis scattered all over the place :-(.\n\nBasically we're trading more work in XLogFileInit (which we hope is not\ntime-critical) for less work in typical transaction commits.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Feb 2001 12:12:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay " }, { "msg_contents": "On Sun, Feb 18, 2001 at 11:51:50AM -0500, Tom Lane wrote:\n> Adriaan Joubert <a.joubert@albourne.com> writes:\n> > fdatasync() is available on Tru64 and according to the man-page behaves\n> > as Tom expects. So it should be a win for us.\n> \n> Careful ... HPUX's man page also claims that fdatasync does something\n> useful, but it doesn't. I'd recommend an experiment. Does today's\n> snapshot run any faster for you (without -F) than before?\n\nIt's worth noting in documentation that systems that don't have \nfdatasync(), or that have the phony implementation, can get the same \nbenefit by using a raw volume (partition) for the log file. This \napplies even on Linux 2.0 and 2.2 without the \"raw-i/o\" patch. 
Using \nraw volumes would have other performance benefits, even on systems \nthat do fully support fdatasync, through bypassing the buffer cache.\n\n(The above assumes I understood correctly Vadim's postings about\nchanges he made to support putting logs on raw volumes.)\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Sun, 18 Feb 2001 12:08:02 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: WAL and commit_delay" }, { "msg_contents": "On Sun, 18 Feb 2001, Tom Lane wrote:\n\n> I think that there may be a performance advantage to pre-filling the\n> logfile even so, assuming that file allocation info is stored in a\n> Berkeley/McKusik-like fashion (note: I have no idea what ext2 or\n> reiserfs actually do).\n\next2 is a lot like [UF]FS. reiserfs is very different, but does\nhave similar hole semantics.\n\nBTW, I have attached two patches which streamline log initialisation\na little. The first (xlog-sendfile.diff) adds support for Linux's\nsendfile system call. FreeBSD and HP/UX have sendfile() too, but the\nprototype is different. If it's interesting, someone will have to\ncome up with a configure test, as autoconf scares me.\n\nThe second removes a further three syscalls from the log init path.\nThere are a couple of things to note here:\n * I don't know why link/unlink is currently preferred over\n rename. POSIX offers strong guarantees on the semantics\n of the latter.\n * I have assumed that the close/rename/reopen stuff is only\n there for the benefit of Windows users, and ifdeffed it\n for everyone else.\n\nMatthew.", "msg_date": "Mon, 19 Feb 2001 13:29:00 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay " }, { "msg_contents": "On Mon, 19 Feb 2001, Matthew Kirkwood wrote:\n\n> BTW, I have attached two patches which streamline log initialisation\n> a little. 
The first (xlog-sendfile.diff) adds support for Linux's\n> sendfile system call.\n\nWhoops, don't use this. It looks like Linux won't sendfile()\nfrom /dev/zero. I'll endeavour to get this fixed, but it\nlooks like it'll be rather harder to use sendfile for this.\n\nBah.\n\nMatthew.\n\n", "msg_date": "Mon, 19 Feb 2001 14:06:42 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay " }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> BTW, I have attached two patches which streamline log initialisation\n> a little. The first (xlog-sendfile.diff) adds support for Linux's\n> sendfile system call. FreeBSD and HP/UX have sendfile() too, but the\n> prototype is different. If it's interesting, someone will have to\n> come up with a configure test, as autoconf scares me.\n\nI think we don't want to mess with something as unportable as that\nat this late stage of the release cycle (quite aside from your later\nnote that it doesn't work ;-)).\n\n> The second removes a further three syscalls from the log init path.\n> There are a couple of things to note here:\n> * I don't know why link/unlink is currently preferred over\n> rename. POSIX offers strong guarantees on the semantics\n> of the latter.\n> * I have assumed that the close/rename/reopen stuff is only\n> there for the benefit of Windows users, and ifdeffed it\n> for everyone else.\n\nThe reason for avoiding rename() is that the POSIX guarantees are\nthe wrong ones: specifically, rename promises to overwrite an existing\ndestination, which is exactly what we *don't* want. In theory two\nbackends cannot be executing this code in parallel, but if they were,\nwe would not want to destroy a logfile that perhaps already contains\nWAL entries by the time we finish preparing our own logfile. 
link()\nwill fail if the destination name exists, which is a lot safer.\n\nI'm not sure about the close/reopen stuff; I agree it looks unnecessary.\nBut this function is going to be so I/O bound (particularly now that\nit fills the file) that two more kernel calls are insignificant.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 10:48:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: WAL and commit_delay " }, { "msg_contents": "Tom Lane wrote:\n> Adriaan Joubert <a.joubert@albourne.com> writes:\n> > fdatasync() is available on Tru64 and according to the man-page behaves\n> > as Tom expects. So it should be a win for us.\n>\n> Careful ... HPUX's man page also claims that fdatasync does something\n> useful, but it doesn't. I'd recommend an experiment. Does today's\n> snapshot run any faster for you (without -F) than before?\n\n IIRC your HPUX manpage states that fdatasync() updates only\n required information to find back the data. It sounded to me\n that HPUX distinguishes between irrelevant inode info (like\n modtime) and important things (like blocks).\n\n But maybe I'm confused by HP and they can still tell me an X\n for an U.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 19 Feb 2001 14:47:39 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Re: WAL and commit_delay" } ]
[ { "msg_contents": "Hi,\n\nNot sure if anyone will find this of interest, but I ran\npgbench on my main Linux box to see what sort of performance\ndifference might be visible between 2.2 and 2.4 kernels.\n\nHardware: A dual P3-450 with 384Mb of RAM and 3 SCSI disks.\nThe pg datafiles live in a half-gig partition on the first\none.\n\nSoftware: Red Hat 6.1 plus all sort of bits and pieces.\nPostgreSQL 7.1beta4 RPMs. pgbench hand-compiled from source\nfor same. No options changed from defaults. (I'll look at\nthat tomorrow -- is there anything worth changing other than\ncommit_delay and fsync?)\n\nKernels: 2.2.15 + software RAID patches, 2.4.2-pre2\n\nWith 2.2.15:\n\tpgbench -s5 -i: 1:27.78 elapsed\n\tpgbench -s5 -t100:\n\tclients: TPS / TPS (excluding connection establishment)\n\t1: 39.66 / 40.08 TPS\n\t2: 60.77 / 61.64 TPS\n\t4: 76.15 / 77.42\n\t8: 90.99 / 92.73\n\t16: 71.10 / 72.15\n\t32: 49.20 / 49.70\n\t1: 27.76 / 28.00\n\t1: 27.82 / 28.03\n\n\tpgbench -v -s5 -t100:\n\t1: 30.73 / 30.98\n\n\nAnd with 2.4.2-pre2:\n\tpgbench -s5 -i: 1:17.46 elapsed\n\tpgbench -s5 -t100\n\t1: 43.57 / 44.11 TPS\n\t2: 62.85 / 63.86 TPS\n\t4: 87.24 / 89.08 TPS\n\t8: 86.60 / 88.38 TPS\n\t16: 53.22 / 53.88 TPS\n\t32: 60.28 / 61.10 TPS\n\t1: 35.93 / 36.33\n\t1: 34.82 / 35.18\n\n\tpgbench -v -s5 -t100:\n\t1: 35.70 / 36.01\n\n\nOverall, two things jump out at me.\n\nFirstly, it looks like 2.4 is mixed news for heavy pgbench users\n:) Low-utilisation numbers are better, but the sweet spot seems\nlower and narrower.\n\nSecondly, in both occasions after a run, performance has been\nmore than 20% lower. Restarting or performing a full vacuum does\nnot seem to help. 
Is there some sort of fragmentation issue\nhere?\n\nMatthew.\n\n", "msg_date": "Sat, 17 Feb 2001 23:18:37 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Linux 2.2 vs 2.4" }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> No options changed from defaults. (I'll look at\n> that tomorrow -- is there anything worth changing other than\n> commit_delay and fsync?)\n\n-B for sure ... the default -B is way too small for WAL.\n\n> Firstly, it looks like 2.4 is mixed news for heavy pgbench users\n> :) Low-utilisation numbers are better, but the sweet spot seems\n> lower and narrower.\n\nHuh? With the exception of the 16-user case (possibly measurement\nnoise), 2.4 looks better across the board, AFAICS. But see below.\n\n> Secondly, in both occasions after a run, performance has been\n> more than 20% lower.\n\nI find that pgbench's reported performance can vary quite a bit from run\nto run, at least with smaller values of total transactions. I think\nthis is because it's a bit of a crapshoot how many WAL logfile\ninitializations occur during the run and get charged against the total\ntime. Not to mention whatever else the machine might be doing. With\nlonger runs (say at least 10000 total transactions) the numbers should\nstabilize. I wouldn't put any faith at all in tests involving less\nthan about 1000 total transactions...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 17 Feb 2001 19:21:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Linux 2.2 vs 2.4 " }, { "msg_contents": "On Sat, 17 Feb 2001, Tom Lane wrote:\n\n> the default -B is way too small for WAL.\n\nOK, here are some 2.4 numbers with 1K transactions/client\nand -B10240.\n\n> Huh? With the exception of the 16-user case (possibly measurement\n> noise), 2.4 looks better across the board, AFAICS. 
But see below.\n\nOK.\n\nRough methodology:\n# service postgresql stop\n# rpm -e postgresql-server\n# rm -fr /var/lib/pgsql\n# service postgresql start\n# reboot\n# sysctl -w kernel.shmmax=186048768\npg$ createuser matthew\npg$ createdb matthew\nme$ ./pgbench -i -s5 -t$T -c$N\n\nDoes this look fairly immune to troubles?\n\n> > Secondly, in both occasions after a run, performance has been\n> > more than 20% lower.\n>\n> I find that pgbench's reported performance can vary quite a bit from\n> run to run, at least with smaller values of total transactions. I\n> think this is because it's a bit of a crapshoot how many WAL logfile\n> initializations occur during the run and get charged against the total\n> time. Not to mention whatever else the machine might be doing. With\n> longer runs (say at least 10000 total transactions) the numbers should\n> stabilize. I wouldn't put any faith at all in tests involving less\n> than about 1000 total transactions...\n\nAh, good point. Here are some with 2.4.2pre2 and 1000 transactions.\n\nI'll try to find time tomorrow to do some batch benching with 10K\ntransactions on various kernels.\n\nI hear allegations that the 2.4.1 disk elevator and VM are subject\nto investigation so I'll try to keep some up-to-date numbers if any-\none is interested.\n\nMatthew.\n\n-- \nNumbers:\n2.4.2-pre2 (-B10240):\n\n\tpgbench -s5 -i: 1:13.02 elapsed\n\tpgbench -s5 -t1000\n\t1: 40.06 / 40.10 TPS\n\t2: 53.01 / 53.08\n\t4: 57.14 / 57.23\n\t8: 62.82 / 62.92\n\t16: 62.46 / 62.56\n\t32: 43.15 / 43.20\n\t1: 23.48 / 26.05\n\t1: 30.85 / 30.88\n\n\tpgbench -v -s5 -t1000\n\t1: 26.37 / 26.39\n\n", "msg_date": "Sun, 18 Feb 2001 13:01:03 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Re: Linux 2.2 vs 2.4 " } ]
[ { "msg_contents": "\tHi all,\n\n\tI finished the beta version of my PL/SQL-to-PL/PgSQL-HOWTO last night\nand put it in http://www.brasileiro.net/roberto/howto .\n\tIt explains basic differences between Oracle's PL/SQL and PG's\nPL/PgSQL and how to port apps from one to the other. It also includes my\ninstr functions that mimick Oracle's counterpart (they are handy).\n\tPlease take a look and send me (rmello@cc.usu.edu) any suggestions,\ncriticism, etc. I am almost done writing my PL/PgSQL documentation that\nhopefully will make into the PG doc tree.\n\n\t-Roberto\n\n-- \nComputer Science\t\t\tUtah State University\nSpace Dynamics Laboratory\t\tWeb Developer\nUSU Free Software & GNU/Linux Club \thttp://fslc.usu.edu\nMy home page - http://www.brasileiro.net/roberto\n", "msg_date": "Sat, 17 Feb 2001 17:34:22 -0700", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": true, "msg_subject": "PL/SQL-to-PL/PgSQL-HOWTO beta Available" }, { "msg_contents": "I found below is very valuable. I hope this would be included in the\n7.1 docs.\n--\nTatsuo Ishii\n\nFrom: Roberto Mello <rmello@cc.usu.edu>\nSubject: [SQL] PL/SQL-to-PL/PgSQL-HOWTO beta Available\nDate: Sat, 17 Feb 2001 17:34:22 -0700\nMessage-ID: <20010217173422.A27682@cc.usu.edu>\n\n> \tHi all,\n> \n> \tI finished the beta version of my PL/SQL-to-PL/PgSQL-HOWTO last night\n> and put it in http://www.brasileiro.net/roberto/howto .\n> \tIt explains basic differences between Oracle's PL/SQL and PG's\n> PL/PgSQL and how to port apps from one to the other. It also includes my\n> instr functions that mimick Oracle's counterpart (they are handy).\n> \tPlease take a look and send me (rmello@cc.usu.edu) any suggestions,\n> criticism, etc. 
I am almost done writing my PL/PgSQL documentation that\n> hopefully will make into the PG doc tree.\n> \n> \t-Roberto\n> \n> -- \n> Computer Science\t\t\tUtah State University\n> Space Dynamics Laboratory\t\tWeb Developer\n> USU Free Software & GNU/Linux Club \thttp://fslc.usu.edu\n> My home page - http://www.brasileiro.net/roberto\n", "msg_date": "Mon, 19 Feb 2001 10:44:43 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [SQL] PL/SQL-to-PL/PgSQL-HOWTO beta Available" }, { "msg_contents": "Can someone comment on this? Is it being merged into the main docs?\n\n\n> \tHi all,\n> \n> \tI finished the beta version of my PL/SQL-to-PL/PgSQL-HOWTO last night\n> and put it in http://www.brasileiro.net/roberto/howto .\n> \tIt explains basic differences between Oracle's PL/SQL and PG's\n> PL/PgSQL and how to port apps from one to the other. It also includes my\n> instr functions that mimick Oracle's counterpart (they are handy).\n> \tPlease take a look and send me (rmello@cc.usu.edu) any suggestions,\n> criticism, etc. I am almost done writing my PL/PgSQL documentation that\n> hopefully will make into the PG doc tree.\n> \n> \t-Roberto\n> \n> -- \n> Computer Science\t\t\tUtah State University\n> Space Dynamics Laboratory\t\tWeb Developer\n> USU Free Software & GNU/Linux Club \thttp://fslc.usu.edu\n> My home page - http://www.brasileiro.net/roberto\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 5 Mar 2001 16:05:31 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [SQL] PL/SQL-to-PL/PgSQL-HOWTO beta Available" } ]
[ { "msg_contents": "Hi,\n\nThere seems to be a teeny-tiny bug in the beta4 RPMS.\n\n/etc/rc.d/init.d/postgresql contains:\n\n# PGVERSION is:\nPGVERSION=7.1beta3\n\nMatthew.\n\n", "msg_date": "Sun, 18 Feb 2001 11:18:39 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "beta4 RPM bug" } ]
[ { "msg_contents": "Say Bruce, I notice that a lot of the files under src/bin still have\n\n# Copyright (c) 1994, Regents of the University of California\n\nand have never had a Postgres group copyright added to them. I updated\ncreatedb just now to\n\n# Portions Copyright (c) 1996-2001, PostgreSQL Global Development Group\n# Portions Copyright (c) 1994, Regents of the University of California\n\nbut shouldn't they all look like this?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Feb 2001 12:59:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Copyright notices" }, { "msg_contents": "> Say Bruce, I notice that a lot of the files under src/bin still have\n> \n> # Copyright (c) 1994, Regents of the University of California\n> \n> and have never had a Postgres group copyright added to them. I updated\n> createdb just now to\n> \n> # Portions Copyright (c) 1996-2001, PostgreSQL Global Development Group\n> # Portions Copyright (c) 1994, Regents of the University of California\n> \n> but shouldn't they all look like this?\n\nThey should all look like this. I will do it now.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Feb 2001 13:17:31 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Copyright notices" } ]
[ { "msg_contents": "PHP 4.0.4pl1 Build dies with current CVS:\nMaking all in pgsql\ngmake[2]: Entering directory `/home/ler/php/ext/pgsql'\ngmake[3]: Entering directory `/home/ler/php/ext/pgsql'\n/bin/sh /home/ler/php/libtool --silent --mode=compile cc -Xb -I.\n-I/home/ler/php/ext/pgsql -I/home/ler/php/main -I/home/ler/php\n-I/usr/internet/apache/include -I/home/ler/php/Zend\n-I/usr/local/ssl/include -I/usr/local/include\n-I/home/ler/php/ext/xml/expat/xmltok\n-I/home/ler/php/ext/xml/expat/xmlparse -I/home/ler/php/TSRM\n-I/usr/local/pgsql/include -DNDEBUG -DUW=700 -DUSE_HSREGEX -DUSE_EXPAT\n-DXML_BYTE_ORDER=12 -O -c pgsql.c\nUX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 53:\ntypedef redeclared: regproc\nUX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 54:\ntypedef redeclared: RegProcedure\nUX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 69:\n(struct) tag redeclared: varlena\nUX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 87:\nidentifier redeclared: bytea\nUX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 88:\nidentifier redeclared: text\nUX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 89:\nidentifier redeclared: BpChar\nUX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 90:\nidentifier redeclared: VarChar\nUX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 171:\ntypedef redeclared: int2vector\nUX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 172:\ntypedef redeclared: oidvector\nUX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 179:\n(union) tag redeclared: nameData\nUX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 182:\nidentifier redeclared: NameData\nUX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 183:\nidentifier redeclared: Name\nUX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 192:\ntypedef redeclared: TransactionId\nUX:acomp: WARNING: 
\"/usr/local/pgsql/include/postgres.h\", line 196:\ntypedef redeclared: CommandId\ngmake[3]: *** [pgsql.lo] Error 1\ngmake[3]: Leaving directory `/home/ler/php/ext/pgsql'\ngmake[2]: *** [all-recursive] Error 1\ngmake[2]: Leaving directory `/home/ler/php/ext/pgsql'\ngmake[1]: *** [all-recursive] Error 1\ngmake[1]: Leaving directory `/home/ler/php/ext'\ngmake: *** [all-recursive] Error 1\n$ \n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Feb 2001 14:18:19 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS" }, { "msg_contents": "Just an FYI... PHP compiles with up to and including PG-Beta4.. \n\n-Mitch\n\n----- Original Message ----- \nFrom: \"Larry Rosenman\" <ler@lerctr.org>\nTo: \"PostgreSQL Hackers List\" <pgsql-hackers@postgresql.org>\nSent: Sunday, February 18, 2001 3:18 PM\nSubject: PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS\n\n\n> PHP 4.0.4pl1 Build dies with current CVS:\n> Making all in pgsql\n> gmake[2]: Entering directory `/home/ler/php/ext/pgsql'\n> gmake[3]: Entering directory `/home/ler/php/ext/pgsql'\n> /bin/sh /home/ler/php/libtool --silent --mode=compile cc -Xb -I.\n> -I/home/ler/php/ext/pgsql -I/home/ler/php/main -I/home/ler/php\n> -I/usr/internet/apache/include -I/home/ler/php/Zend\n> -I/usr/local/ssl/include -I/usr/local/include\n> -I/home/ler/php/ext/xml/expat/xmltok\n> -I/home/ler/php/ext/xml/expat/xmlparse -I/home/ler/php/TSRM\n> -I/usr/local/pgsql/include -DNDEBUG -DUW=700 -DUSE_HSREGEX -DUSE_EXPAT\n> -DXML_BYTE_ORDER=12 -O -c pgsql.c\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 53:\n> typedef redeclared: regproc\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 54:\n> typedef redeclared: RegProcedure\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 69:\n> (struct) 
tag redeclared: varlena\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 87:\n> identifier redeclared: bytea\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 88:\n> identifier redeclared: text\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 89:\n> identifier redeclared: BpChar\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 90:\n> identifier redeclared: VarChar\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 171:\n> typedef redeclared: int2vector\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 172:\n> typedef redeclared: oidvector\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 179:\n> (union) tag redeclared: nameData\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 182:\n> identifier redeclared: NameData\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 183:\n> identifier redeclared: Name\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 192:\n> typedef redeclared: TransactionId\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 196:\n> typedef redeclared: CommandId\n> gmake[3]: *** [pgsql.lo] Error 1\n> gmake[3]: Leaving directory `/home/ler/php/ext/pgsql'\n> gmake[2]: *** [all-recursive] Error 1\n> gmake[2]: Leaving directory `/home/ler/php/ext/pgsql'\n> gmake[1]: *** [all-recursive] Error 1\n> gmake[1]: Leaving directory `/home/ler/php/ext'\n> gmake: *** [all-recursive] Error 1\n> $ \n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n", "msg_date": "Sun, 18 Feb 2001 15:40:49 -0500", "msg_from": "\"Mitch Vincent\" <mitch@venux.net>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS" } ]
[ { "msg_contents": "Re-Sent due to bounce from ftp.postgresql.org\n\n----- Forwarded message from Larry Rosenman <ler@lerctr.org> -----\n\nFrom: Larry Rosenman <ler@lerctr.org>\nSubject: Re: [HACKERS] PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS\nDate: Sun, 18 Feb 2001 14:41:33 -0600\nMessage-ID: <20010218144133.A5745@lerami.lerctr.org>\nUser-Agent: Mutt/1.3.15i\nX-Mailer: Mutt http://www.mutt.org/\nTo: PostgreSQL Hackers List <pgsql-hackers@postgresql.org>\n\n* Larry Rosenman <ler@lerctr.org> [010218 14:19]:\n> PHP 4.0.4pl1 Build dies with current CVS:\n> Making all in pgsql\n> gmake[2]: Entering directory `/home/ler/php/ext/pgsql'\n> gmake[3]: Entering directory `/home/ler/php/ext/pgsql'\n> /bin/sh /home/ler/php/libtool --silent --mode=compile cc -Xb -I.\n> -I/home/ler/php/ext/pgsql -I/home/ler/php/main -I/home/ler/php\n> -I/usr/internet/apache/include -I/home/ler/php/Zend\n> -I/usr/local/ssl/include -I/usr/local/include\n> -I/home/ler/php/ext/xml/expat/xmltok\n> -I/home/ler/php/ext/xml/expat/xmlparse -I/home/ler/php/TSRM\n> -I/usr/local/pgsql/include -DNDEBUG -DUW=700 -DUSE_HSREGEX -DUSE_EXPAT\n> -DXML_BYTE_ORDER=12 -O -c pgsql.c\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 53:\n> typedef redeclared: regproc\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 54:\n> typedef redeclared: RegProcedure\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 69:\n> (struct) tag redeclared: varlena\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 87:\n> identifier redeclared: bytea\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 88:\n> identifier redeclared: text\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 89:\n> identifier redeclared: BpChar\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 90:\n> identifier redeclared: VarChar\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 171:\n> typedef redeclared: int2vector\n> UX:acomp: 
WARNING: \"/usr/local/pgsql/include/postgres.h\", line 172:\n> typedef redeclared: oidvector\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 179:\n> (union) tag redeclared: nameData\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 182:\n> identifier redeclared: NameData\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 183:\n> identifier redeclared: Name\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 192:\n> typedef redeclared: TransactionId\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 196:\n> typedef redeclared: CommandId\n> gmake[3]: *** [pgsql.lo] Error 1\n> gmake[3]: Leaving directory `/home/ler/php/ext/pgsql'\n> gmake[2]: *** [all-recursive] Error 1\n> gmake[2]: Leaving directory `/home/ler/php/ext/pgsql'\n> gmake[1]: *** [all-recursive] Error 1\n> gmake[1]: Leaving directory `/home/ler/php/ext'\n> gmake: *** [all-recursive] Error 1\n> $ \nmore info, courtesy gcc:\n$ sh x\ngcc: unrecognized option `-KPIC'\nIn file included from php_pgsql.h:32,\n from pgsql.c:29:\n/usr/local/pgsql/include/postgres.h:53: redefinition of `regproc'\n/usr/local/pgsql/include/c.h:312: `regproc' previously declared here\n/usr/local/pgsql/include/postgres.h:54: redefinition of `RegProcedure'\n/usr/local/pgsql/include/c.h:313: `RegProcedure' previously declared\nhere\n/usr/local/pgsql/include/postgres.h:69: redefinition of `struct\nvarlena'\n/usr/local/pgsql/include/postgres.h:87: redefinition of `bytea'\n/usr/local/pgsql/include/c.h:354: `bytea' previously declared here\n/usr/local/pgsql/include/postgres.h:88: redefinition of `text'\n/usr/local/pgsql/include/c.h:355: `text' previously declared here\n/usr/local/pgsql/include/postgres.h:89: redefinition of `BpChar'\n/usr/local/pgsql/include/c.h:356: `BpChar' previously declared here\n/usr/local/pgsql/include/postgres.h:90: redefinition of `VarChar'\n/usr/local/pgsql/include/c.h:357: `VarChar' previously declared 
here\n/usr/local/pgsql/include/postgres.h:171: redefinition of `int2vector'\n/usr/local/pgsql/include/c.h:363: `int2vector' previously declared\nhere\n/usr/local/pgsql/include/postgres.h:172: redefinition of `oidvector'\n/usr/local/pgsql/include/c.h:364: `oidvector' previously declared here\n/usr/local/pgsql/include/postgres.h:179: redefinition of `union\nnameData'\n/usr/local/pgsql/include/postgres.h:182: redefinition of `NameData'\n/usr/local/pgsql/include/c.h:375: `NameData' previously declared here\n/usr/local/pgsql/include/postgres.h:183: redefinition of `Name'\n/usr/local/pgsql/include/c.h:376: `Name' previously declared here\n/usr/local/pgsql/include/postgres.h:192: redefinition of\n`TransactionId'\n/usr/local/pgsql/include/c.h:315: `TransactionId' previously declared\nhere\n/usr/local/pgsql/include/postgres.h:196: redefinition of `CommandId'\n/usr/local/pgsql/include/c.h:319: `CommandId' previously declared here\n$ \n\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n----- End forwarded message -----\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Feb 2001 14:50:32 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "(forw) Re: PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS" } ]
[ { "msg_contents": "OK, I found it. PHP was including postgres.h (which we no longer\ninstall...., so we were picking up a Feb 7 version). \n\nChanging php's ext/pgsql/php_pgsql.h to #include <postgres_fe.h>\nfixes it. \n\nThis is a gotcha for people following CVS or not cleaning out \nthe $(DESTDIR)/include directory\n\nI'll submit a patch to the PHP folk. \n\nLER\n\n----- Forwarded message from Larry Rosenman <ler@lerctr.org> -----\n\nFrom: Larry Rosenman <ler@lerctr.org>\nSubject: (forw) Re: [HACKERS] PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS\nDate: Sun, 18 Feb 2001 14:50:32 -0600\nMessage-ID: <20010218145032.A6190@lerami.lerctr.org>\nUser-Agent: Mutt/1.3.15i\nX-Mailer: Mutt http://www.mutt.org/\nTo: PostgreSQL Hackers List <pgsql-hackers@postgresql.org>,\n\ttgl@sss.pgh.pa.us, Peter Eisentraut <peter_e@gmx.net>\n\nRe-Sent due to bounce from ftp.postgresql.org\n\n----- Forwarded message from Larry Rosenman <ler@lerctr.org> -----\n\nFrom: Larry Rosenman <ler@lerctr.org>\nSubject: Re: [HACKERS] PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS\nDate: Sun, 18 Feb 2001 14:41:33 -0600\nMessage-ID: <20010218144133.A5745@lerami.lerctr.org>\nUser-Agent: Mutt/1.3.15i\nX-Mailer: Mutt http://www.mutt.org/\nTo: PostgreSQL Hackers List <pgsql-hackers@postgresql.org>\n\n* Larry Rosenman <ler@lerctr.org> [010218 14:19]:\n> PHP 4.0.4pl1 Build dies with current CVS:\n> Making all in pgsql\n> gmake[2]: Entering directory `/home/ler/php/ext/pgsql'\n> gmake[3]: Entering directory `/home/ler/php/ext/pgsql'\n> /bin/sh /home/ler/php/libtool --silent --mode=compile cc -Xb -I.\n> -I/home/ler/php/ext/pgsql -I/home/ler/php/main -I/home/ler/php\n> -I/usr/internet/apache/include -I/home/ler/php/Zend\n> -I/usr/local/ssl/include -I/usr/local/include\n> -I/home/ler/php/ext/xml/expat/xmltok\n> -I/home/ler/php/ext/xml/expat/xmlparse -I/home/ler/php/TSRM\n> -I/usr/local/pgsql/include -DNDEBUG -DUW=700 -DUSE_HSREGEX -DUSE_EXPAT\n> -DXML_BYTE_ORDER=12 -O -c pgsql.c\n> UX:acomp: WARNING: 
\"/usr/local/pgsql/include/postgres.h\", line 53:\n> typedef redeclared: regproc\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 54:\n> typedef redeclared: RegProcedure\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 69:\n> (struct) tag redeclared: varlena\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 87:\n> identifier redeclared: bytea\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 88:\n> identifier redeclared: text\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 89:\n> identifier redeclared: BpChar\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 90:\n> identifier redeclared: VarChar\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 171:\n> typedef redeclared: int2vector\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 172:\n> typedef redeclared: oidvector\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 179:\n> (union) tag redeclared: nameData\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 182:\n> identifier redeclared: NameData\n> UX:acomp: ERROR: \"/usr/local/pgsql/include/postgres.h\", line 183:\n> identifier redeclared: Name\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 192:\n> typedef redeclared: TransactionId\n> UX:acomp: WARNING: \"/usr/local/pgsql/include/postgres.h\", line 196:\n> typedef redeclared: CommandId\n> gmake[3]: *** [pgsql.lo] Error 1\n> gmake[3]: Leaving directory `/home/ler/php/ext/pgsql'\n> gmake[2]: *** [all-recursive] Error 1\n> gmake[2]: Leaving directory `/home/ler/php/ext/pgsql'\n> gmake[1]: *** [all-recursive] Error 1\n> gmake[1]: Leaving directory `/home/ler/php/ext'\n> gmake: *** [all-recursive] Error 1\n> $ \nmore info, courtesy gcc:\n$ sh x\ngcc: unrecognized option `-KPIC'\nIn file included from php_pgsql.h:32,\n from pgsql.c:29:\n/usr/local/pgsql/include/postgres.h:53: redefinition of 
`regproc'\n/usr/local/pgsql/include/c.h:312: `regproc' previously declared here\n/usr/local/pgsql/include/postgres.h:54: redefinition of `RegProcedure'\n/usr/local/pgsql/include/c.h:313: `RegProcedure' previously declared\nhere\n/usr/local/pgsql/include/postgres.h:69: redefinition of `struct\nvarlena'\n/usr/local/pgsql/include/postgres.h:87: redefinition of `bytea'\n/usr/local/pgsql/include/c.h:354: `bytea' previously declared here\n/usr/local/pgsql/include/postgres.h:88: redefinition of `text'\n/usr/local/pgsql/include/c.h:355: `text' previously declared here\n/usr/local/pgsql/include/postgres.h:89: redefinition of `BpChar'\n/usr/local/pgsql/include/c.h:356: `BpChar' previously declared here\n/usr/local/pgsql/include/postgres.h:90: redefinition of `VarChar'\n/usr/local/pgsql/include/c.h:357: `VarChar' previously declared here\n/usr/local/pgsql/include/postgres.h:171: redefinition of `int2vector'\n/usr/local/pgsql/include/c.h:363: `int2vector' previously declared\nhere\n/usr/local/pgsql/include/postgres.h:172: redefinition of `oidvector'\n/usr/local/pgsql/include/c.h:364: `oidvector' previously declared here\n/usr/local/pgsql/include/postgres.h:179: redefinition of `union\nnameData'\n/usr/local/pgsql/include/postgres.h:182: redefinition of `NameData'\n/usr/local/pgsql/include/c.h:375: `NameData' previously declared here\n/usr/local/pgsql/include/postgres.h:183: redefinition of `Name'\n/usr/local/pgsql/include/c.h:376: `Name' previously declared here\n/usr/local/pgsql/include/postgres.h:192: redefinition of\n`TransactionId'\n/usr/local/pgsql/include/c.h:315: `TransactionId' previously declared\nhere\n/usr/local/pgsql/include/postgres.h:196: redefinition of `CommandId'\n/usr/local/pgsql/include/c.h:319: `CommandId' previously declared here\n$ \n\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: 
+1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n----- End forwarded message -----\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n----- End forwarded message -----\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Feb 2001 15:19:17 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "(forw) (forw) Re: PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> OK, I found it. PHP was including postgres.h (which we no longer\n> install...., so we were picking up a Feb 7 version). \n> Changing php's ext/pgsql/php_pgsql.h to #include <postgres_fe.h>\n> fixes it. \n\nHm. Should php be including either one? I would have been in less\nhurry to invent a new file if I had thought that client apps were\nincluding postgres.h ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Feb 2001 17:54:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: (forw) (forw) Re: PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010218 16:54]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > OK, I found it. PHP was including postgres.h (which we no longer\n> > install...., so we were picking up a Feb 7 version). \n> > Changing php's ext/pgsql/php_pgsql.h to #include <postgres_fe.h>\n> > fixes it. \n> \n> Hm. Should php be including either one? I would have been in less\n> hurry to invent a new file if I had thought that client apps were\n> including postgres.h ...\nhmm. It include libpq-fe.h as well. 
\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Feb 2001 16:55:16 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: (forw) (forw) Re: PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010218 16:54]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > OK, I found it. PHP was including postgres.h (which we no longer\n> > install...., so we were picking up a Feb 7 version). \n> > Changing php's ext/pgsql/php_pgsql.h to #include <postgres_fe.h>\n> > fixes it. \n> \n> Hm. Should php be including either one? I would have been in less\n> hurry to invent a new file if I had thought that client apps were\n> including postgres.h ...\n> \nInterestingly, leaving out postgres_fe.h works as well. \n\nI'll update my bug report w/php to delete that line altogether. \n\n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Feb 2001 16:57:00 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: (forw) (forw) Re: PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS" }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010218 16:54]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > OK, I found it. PHP was including postgres.h (which we no longer\n> > install...., so we were picking up a Feb 7 version). \n> > Changing php's ext/pgsql/php_pgsql.h to #include <postgres_fe.h>\n> > fixes it. \n> \n> Hm. Should php be including either one? 
I would have been in less\n> hurry to invent a new file if I had thought that client apps were\n> including postgres.h ...\nUpdated to not include either postgres.h or postgres_fe.h....\n\nFYI, bug # in PHP's DB is 9328.\n\nLER\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Feb 2001 17:02:57 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: (forw) (forw) Re: PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS" }, { "msg_contents": "I sure hope it gets more attention than some of the other PHP PostgreSQL\nbugs.. I don't mean to trash anyone here but the pg_connect problem has been\naround since 4.0.1 and has yet to be addressed. One of our programmers is\ntaking a look at that one but he's not been able to fix it yet.\n\n*crosses fingers*\n\nIs there anything stupendously broken in PG beta 4? I have it on my devel\nserver and don't want to have to recompile (right now at least, deadlines\nare growing close) unless I stand a large chance of pulling the pin on a\ngrenade somewhere.\n\nThanks!!\n\n-Mitch\n\n----- Original Message -----\nFrom: \"Larry Rosenman\" <ler@lerctr.org>\nTo: \"Tom Lane\" <tgl@sss.pgh.pa.us>\nCc: \"PostgreSQL Hackers List\" <pgsql-hackers@postgresql.org>; \"Peter\nEisentraut\" <peter_e@gmx.net>; \"Bruce Momjian\" <pgman@candle.pha.pa.us>\nSent: Sunday, February 18, 2001 6:02 PM\nSubject: Re: (forw) (forw) Re: PHP 4.0.4pl1 BUILD: BUSTED WITH CURRENT CVS\n\n\n> * Tom Lane <tgl@sss.pgh.pa.us> [010218 16:54]:\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > OK, I found it. PHP was including postgres.h (which we no longer\n> > > install...., so we were picking up a Feb 7 version).\n> > > Changing php's ext/pgsql/php_pgsql.h to #include <postgres_fe.h>\n> > > fixes it.\n> >\n> > Hm. Should php be including either one? 
I would have been in less\n> > hurry to invent a new file if I had thought that client apps were\n> > including postgres.h ...\n> Updated to not include either postgres.h or postgres_fe.h....\n>\n> FYI, bug # in PHP's DB is 9328.\n>\n> LER\n>\n> >\n> > regards, tom lane\n> --\n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n>\n\n", "msg_date": "Sun, 18 Feb 2001 19:00:28 -0500", "msg_from": "\"Mitch Vincent\" <mitch@venux.net>", "msg_from_op": false, "msg_subject": "PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "\"Mitch Vincent\" <mitch@venux.net> writes:\n> Is there anything stupendously broken in PG beta 4?\n\nOther than the bug I introduced into b4 for views containing UNION, a\nquick scan of the CVS logs doesn't show any showstoppers fixed in the\nbackend (dunno what all Peter Mount has been doing in JDBC though).\nYou could probably hold off updating for a little while.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Feb 2001 20:19:15 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5 " }, { "msg_contents": "[ Charset ISO-8859-1 unsupported, converting... ]\n> I sure hope it gets more attention than some of the other PHP PostgreSQL\n> bugs.. I don't mean to trash anyone here but the pg_connect problem has been\n> around since 4.0.1 and has yet to be addressed. One of our programmers is\n> taking a look at that one but he's not been able to fix it yet.\n\nI have worked with Thies on getting persistent connections to work\nbetter. If there are any PostgreSQL problems with PHP, I recommend\nsending something to him as he is focused on PostgreSQL recently.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Feb 2001 22:08:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010218 21:23]:\n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > I sure hope it gets more attention than some of the other PHP PostgreSQL\n> > bugs.. I don't mean to trash anyone here but the pg_connect problem has been\n> > around since 4.0.1 and has yet to be addressed. One of our programmers is\n> > taking a look at that one but he's not been able to fix it yet.\n> \n> I have worked with Thies on getting persistent connections to work\n> better. If there are any PostgreSQL problems with PHP, I recommend\n> sending something to him as he is focused on PostgreSQL recently.\nCan you point him at today's fun? \n\nBug#9328 in PHP's bug DB.\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Feb 2001 21:24:58 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "FWIW, I emailed Thies about the pg_connect problems, and whis is what he\nresponded with (yesterday would be Feb 13):\n\n----\n\n i've commited a fix for this to PHP 4 CVS yesterday.\n\n if you don't want to live on the \"bleeding edge\" (use PHP\n from CVS) just replace the php_pgsql_set_default_link\n function in pgsql.c against this one and you're all-set!\n\n regards,\n tc\n\nstatic void php_pgsql_set_default_link(int id)\n{\n PGLS_FETCH();\n \n if ((PGG(default_link) != -1) && (PGG(default_link) != id)) {\n zend_list_delete(PGG(default_link));\n }\n \n if (PGG(default_link) != id) {\n PGG(default_link) = id;\n zend_list_addref(id);\n }\n}\n\n-----\n\nMichael Fork - CCNA - MCP - A+\nNetwork Support - Toledo 
Internet Access - Toledo Ohio\n\nOn Sun, 18 Feb 2001, Bruce Momjian wrote:\n\n> [ Charset ISO-8859-1 unsupported, converting... ]\n> > I sure hope it gets more attention than some of the other PHP PostgreSQL\n> > bugs.. I don't mean to trash anyone here but the pg_connect problem has been\n> > around since 4.0.1 and has yet to be addressed. One of our programmers is\n> > taking a look at that one but he's not been able to fix it yet.\n> \n> I have worked with Thies on getting persistent connections to work\n> better. If there are any PostgreSQL problems with PHP, I recommend\n> sending something to him as he is focused on PostgreSQL recently.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> \n\n\n", "msg_date": "Sun, 18 Feb 2001 23:23:48 -0500 (EST)", "msg_from": "Michael Fork <mfork@toledolink.com>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "Great! 
Glad to see our PHP interface improving.\n\n> FWIW, I emailed Thies about the pg_connect problems, and whis is what he\n> responded with (yesterday would be Feb 13):\n> \n> ----\n> \n> i've commited a fix for this to PHP 4 CVS yesterday.\n> \n> if you don't want to live on the \"bleeding edge\" (use PHP\n> from CVS) just replace the php_pgsql_set_default_link\n> function in pgsql.c against this one and you're all-set!\n> \n> regards,\n> tc\n> \n> static void php_pgsql_set_default_link(int id)\n> {\n> PGLS_FETCH();\n> \n> if ((PGG(default_link) != -1) && (PGG(default_link) != id)) {\n> zend_list_delete(PGG(default_link));\n> }\n> \n> if (PGG(default_link) != id) {\n> PGG(default_link) = id;\n> zend_list_addref(id);\n> }\n> }\n> \n> -----\n> \n> Michael Fork - CCNA - MCP - A+\n> Network Support - Toledo Internet Access - Toledo Ohio\n> \n> On Sun, 18 Feb 2001, Bruce Momjian wrote:\n> \n> > [ Charset ISO-8859-1 unsupported, converting... ]\n> > > I sure hope it gets more attention than some of the other PHP PostgreSQL\n> > > bugs.. I don't mean to trash anyone here but the pg_connect problem has been\n> > > around since 4.0.1 and has yet to be addressed. One of our programmers is\n> > > taking a look at that one but he's not been able to fix it yet.\n> > \n> > I have worked with Thies on getting persistent connections to work\n> > better. If there are any PostgreSQL problems with PHP, I recommend\n> > sending something to him as he is focused on PostgreSQL recently.\n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 853-3000\n> > + If your life is a hard drive, | 830 Blythe Avenue\n> > + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n> > \n> \n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Feb 2001 23:24:51 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "Just shoot it over to the PHP folks. Seems they are already on top if\nit. I don't want to work around their normal system unless necessary.\n\n\n> * Bruce Momjian <pgman@candle.pha.pa.us> [010218 21:23]:\n> > [ Charset ISO-8859-1 unsupported, converting... ]\n> > > I sure hope it gets more attention than some of the other PHP PostgreSQL\n> > > bugs.. I don't mean to trash anyone here but the pg_connect problem has been\n> > > around since 4.0.1 and has yet to be addressed. One of our programmers is\n> > > taking a look at that one but he's not been able to fix it yet.\n> > \n> > I have worked with Thies on getting persistent connections to work\n> > better. If there are any PostgreSQL problems with PHP, I recommend\n> > sending something to him as he is focused on PostgreSQL recently.\n> Can you point him at today's fun? \n> \n> Bug#9328 in PHP's bug DB.\n> \n> LER\n> \n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Feb 2001 23:25:49 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "* Bruce Momjian <pgman@candle.pha.pa.us> [010218 22:25]:\n> Just shoot it over to the PHP folks. Seems they are already on top if\n> it. I don't want to work around their normal system unless necessary.\nTheir stuff seems to sit forever. I put it in the BugDB. 
\n\nI have a couple of other UnixWare issues that have sort of\nlanguished...\n\nYour call Though...\n> \n> \n> > * Bruce Momjian <pgman@candle.pha.pa.us> [010218 21:23]:\n> > > [ Charset ISO-8859-1 unsupported, converting... ]\n> > > > I sure hope it gets more attention than some of the other PHP PostgreSQL\n> > > > bugs.. I don't mean to trash anyone here but the pg_connect problem has been\n> > > > around since 4.0.1 and has yet to be addressed. One of our programmers is\n> > > > taking a look at that one but he's not been able to fix it yet.\n> > > \n> > > I have worked with Thies on getting persistent connections to work\n> > > better. If there are any PostgreSQL problems with PHP, I recommend\n> > > sending something to him as he is focused on PostgreSQL recently.\n> > Can you point him at today's fun? \n> > \n> > Bug#9328 in PHP's bug DB.\n> > \n> > LER\n> > \n> > -- \n> > Larry Rosenman http://www.lerctr.org/~ler\n> > Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> > US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n> > \n> \n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 853-3000\n> + If your life is a hard drive, | 830 Blythe Avenue\n> + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Feb 2001 22:32:37 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "On Sun, 18 Feb 2001, Larry Rosenman wrote:\n\n> * Bruce Momjian <pgman@candle.pha.pa.us> [010218 22:25]:\n> > Just shoot it over to the PHP folks. Seems they are already on top if\n> > it. I don't want to work around their normal system unless necessary.\n> Their stuff seems to sit forever. 
I put it in the BugDB.\n\n The problem here is that we don't plan to release 4.0.5\n during the next month. I don't know the exact timeframe for\n the release of PostgreSQL 7.0.1, but regular releases of\n PostgreSQL/PHP won't compile together for at least some time.\n That is rather frustating for the end-user and will delay the\n adoption of the new PostgreSQL release.\n\n> I have a couple of other UnixWare issues that have sort of\n> languished...\n\n I found one report which is related to UnixWare's broken\n system libraries (#8441). I'll look into working around that\n later. If there are others, please point me into their\n direction.\n\n Thanks,\n - Sascha\n\n", "msg_date": "Mon, 19 Feb 2001 08:02:37 +0100 (CET)", "msg_from": "Sascha Schumann <sascha@schumann.cx>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "On Sun, 18 Feb 2001, Bruce Momjian wrote:\n\n> Just shoot it over to the PHP folks. Seems they are already on top if\n> it. I don't want to work around their normal system unless necessary.\n\n I've committed an autoconf check, so PHP 4.0.5 and upwards\n will be compatible with existing and future PostgreSQL\n versions. Additionally, discussions about starting the\n release process for 4.0.5 have commenced.\n\n It'd be cool, if PostgreSQL and/or the C front-end would have\n a numeric version indicator which we could use to check for\n features, etc.\n\n #include <libpq-fe.h>\n #if defined(PGSQL_FE_VERSION) && PGSQL_FE_VERSION < 20010210\n # include <postgres.h>\n #endif\n\n - Sascha\n\n", "msg_date": "Mon, 19 Feb 2001 11:13:20 +0100 (CET)", "msg_from": "Sascha Schumann <sascha@schumann.cx>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "* Sascha Schumann <sascha@schumann.cx> [010219 01:37]:\n> On Sun, 18 Feb 2001, Larry Rosenman wrote:\n> \n> > * Bruce Momjian <pgman@candle.pha.pa.us> [010218 22:25]:\n> > > Just shoot it over to the PHP folks. 
Seems they are already on top if\n> > > it. I don't want to work around their normal system unless necessary.\n> > Their stuff seems to sit forever. I put it in the BugDB.\n> \n> The problem here is that we don't plan to release 4.0.5\n> during the next month. I don't know the exact timeframe for\n> the release of PostgreSQL 7.0.1, but regular releases of\n> PostgreSQL/PHP won't compile together for at least some time.\n> That is rather frustating for the end-user and will delay the\n> adoption of the new PostgreSQL release.\nI don't believe you will break if that patch is applied now.\n\nI don't have a 7.0 handy to compile against, but I can pull one\nif necessary. \n\nI believe it was an error for PHP to #include <postgres.h> at all.\n\nComments from other -hackers?\n> \n> > I have a couple of other UnixWare issues that have sort of\n> > languished...\n> \n> I found one report which is related to UnixWare's broken\n> system libraries (#8441). I'll look into working around that\n> later. If there are others, please point me into their\n> direction.\nThat's the one I was refering to. I submitted it when 4.0.4 came out,\nand it didn't even draw a comment till now.... Thanks!\n\n(The other one was the libtool patch which Rasmus did commit). 
\n\nLER\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Mon, 19 Feb 2001 07:23:46 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "> I don't believe you will break if that patch is applied now.\n>\n\n InvalidOid is not defined otherwise.\n\n - Sascha\n\n", "msg_date": "Mon, 19 Feb 2001 14:42:09 +0100 (CET)", "msg_from": "Sascha Schumann <sascha@schumann.cx>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "* Sascha Schumann <sascha@schumann.cx> [010219 07:42]:\n> > I don't believe you will break if that patch is applied now.\n> >\n> \n> InvalidOid is not defined otherwise.\naha. Ok. PG-Hackers: Can we include a Dummy or #warning postgres.h\nin 7.1? \n\nI.E.:\n#ifndef _POSTGRES_H\n#define _POSTGRES_H\n#warning Client Code should not include postgres.h\n#endif\n\n> \n> - Sascha\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Mon, 19 Feb 2001 08:02:08 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "On Mon, 19 Feb 2001, Larry Rosenman wrote:\n\n> * Sascha Schumann <sascha@schumann.cx> [010219 07:42]:\n> > > I don't believe you will break if that patch is applied now.\n> > >\n> >\n> > InvalidOid is not defined otherwise.\n> aha. Ok. PG-Hackers: Can we include a Dummy or #warning postgres.h\n> in 7.1?\n\n #warning is not portable.\n\n As I've mentioned earlier, we already have addressed this\n issue. 
If you want to give it a test, please check out\n\n http://snaps.php.net/\n\n - Sascha\n\n", "msg_date": "Mon, 19 Feb 2001 15:04:31 +0100 (CET)", "msg_from": "Sascha Schumann <sascha@schumann.cx>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "> * Bruce Momjian <pgman@candle.pha.pa.us> [010218 22:25]:\n> > Just shoot it over to the PHP folks. Seems they are already on top if\n> > it. I don't want to work around their normal system unless necessary.\n> Their stuff seems to sit forever. I put it in the BugDB. \n> \n> I have a couple of other UnixWare issues that have sort of\n> languished...\n\nI am sorry to hear that. Thies is supposedly working on PostgreSQL\nitems. Can you contact him directly? He is \"Thies C. Arntzen\"\n<thies@thieso.net>.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 09:54:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "Sascha Schumann <sascha@schumann.cx> writes:\n> It'd be cool, if PostgreSQL and/or the C front-end would have\n> a numeric version indicator which we could use to check for\n> features, etc.\n\n> #include <libpq-fe.h>\n> #if defined(PGSQL_FE_VERSION) && PGSQL_FE_VERSION < 20010210\n> # include <postgres.h>\n> #endif\n\nAFAIK there is no need for you to be including <postgres.h> in *any*\nPostgres release --- it's supposed to be an internal header file,\nnot something that client applications need. 
Try it with just\n\t#include <libpq-fe.h>\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 11:02:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5 " }, { "msg_contents": "Sascha Schumann <sascha@schumann.cx> writes:\n>> I don't believe you will break if that patch is applied now.\n\n> InvalidOid is not defined otherwise.\n\nOh, is that the problem? Okay, do this:\n\n\t#include <libpq-fe.h>\n\t#ifndef InvalidOid\n\t#define InvalidOid ((Oid) 0)\n\t#endif\n\nI knew there was a reason I'd moved InvalidOid into postgres_ext.h ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 11:07:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5 " }, { "msg_contents": "> AFAIK there is no need for you to be including <postgres.h> in *any*\n> Postgres release --- it's supposed to be an internal header file,\n> not something that client applications need. 
Try it with just\n\n/home/sas/src/php4/ext/pgsql/pgsql.c: In function `php_if_pg_getlastoid':\n/home/sas/src/php4/ext/pgsql/pgsql.c:1260: `InvalidOid' undeclared (first use in this function)\n/home/sas/src/php4/ext/pgsql/pgsql.c:1260: (Each undeclared identifier is reported only once\n/home/sas/src/php4/ext/pgsql/pgsql.c:1260: for each function it appears in.)\n\n InvalidOid is used to check the return value of PQoidValue().\n\n src/interfaces/libpq/fe-exec.c:PQoidValue() can return\n InvalidOid, so this appears like a legitimate use to me.\n Feel free to correct me though, I have not used the C fe\n before.\n\n - Sascha\n\n", "msg_date": "Mon, 19 Feb 2001 17:08:32 +0100 (CET)", "msg_from": "Sascha Schumann <sascha@schumann.cx>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5 " }, { "msg_contents": "* Sascha Schumann <sascha@schumann.cx> [010219 10:57]:\n> > AFAIK there is no need for you to be including <postgres.h> in *any*\n> > Postgres release --- it's supposed to be an internal header file,\n> > not something that client applications need. Try it with just\n> \n> /home/sas/src/php4/ext/pgsql/pgsql.c: In function `php_if_pg_getlastoid':\n> /home/sas/src/php4/ext/pgsql/pgsql.c:1260: `InvalidOid' undeclared (first use in this function)\n> /home/sas/src/php4/ext/pgsql/pgsql.c:1260: (Each undeclared identifier is reported only once\n> /home/sas/src/php4/ext/pgsql/pgsql.c:1260: for each function it appears in.)\n> \n> InvalidOid is used to check the return value of PQoidValue().\n> \n> src/interfaces/libpq/fe-exec.c:PQoidValue() can return\n> InvalidOid, so this appears like a legitimate use to me.\n> Feel free to correct me though, I have not used the C fe\n> before.\nI still think we need a dummy postgres.h in $(destdir)/include to\ncatch others using it this release. PHP 4.0.4pl1 and earlier will\n*BREAK* unless we do. 
\n\nThis is a PROBLEM.\n\nLER\n\n> \n> - Sascha\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Mon, 19 Feb 2001 15:37:53 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> I still think we need a dummy postgres.h in $(destdir)/include to\n> catch others using it this release. PHP 4.0.4pl1 and earlier will\n> *BREAK* unless we do. \n\nIf we do that, no one will ever fix their code. Moreover, such an\napproach would conflict with the install-all-headers option...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 16:43:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5 " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010219 15:43]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > I still think we need a dummy postgres.h in $(destdir)/include to\n> > catch others using it this release. PHP 4.0.4pl1 and earlier will\n> > *BREAK* unless we do. \n> \n> If we do that, no one will ever fix their code. Moreover, such an\n> approach would conflict with the install-all-headers option...\nHow about a BIG warning in the INSTALL doc, then? 
\n\nLER\n\n> \n> \t\t\tregards, tom lane\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Mon, 19 Feb 2001 15:44:50 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "* Larry Rosenman <ler@lerctr.org> [010219 15:45]:\n> * Tom Lane <tgl@sss.pgh.pa.us> [010219 15:43]:\n> > Larry Rosenman <ler@lerctr.org> writes:\n> > > I still think we need a dummy postgres.h in $(destdir)/include to\n> > > catch others using it this release. PHP 4.0.4pl1 and earlier will\n> > > *BREAK* unless we do. \n> > \n> > If we do that, no one will ever fix their code. Moreover, such an\n> > approach would conflict with the install-all-headers option...\n> How about a BIG warning in the INSTALL doc, then? \nAND make sure we nuke any OLD version in $(destdir)/include... Which\nwill cause a file not found vs. compile errors based on redeclares...?\n\n\nLER\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Mon, 19 Feb 2001 15:53:03 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> AND make sure we nuke any OLD version in $(destdir)/include... Which\n> will cause a file not found vs. compile errors based on redeclares...?\n\nHm. Good point, not only for postgres.h but also for the other include\nfiles we no longer install by default. 
OTOH, what of people who have\nmanually added the various spi.h sub-includes to their install directory?\nI do not think we should take it on ourselves to clean those out, but\nthey could still cause cross-version errors.\n\nFor the RPM installation this doesn't matter anyway (I think), but it\nwould for non-RPM installs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 17:07:11 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "old include files (was Re: PHP 4.0.4pl1 / Beta 5)" }, { "msg_contents": "Tom Lane wrote:\n> For the RPM installation this doesn't matter anyway (I think), but it\n> would for non-RPM installs.\n\nYou would be correct, as the old version will be either overwritten\nduring the new version's install or removed during the previous\nversion's RPM uninstall. RPM is pretty good at cleaning the old out. \nSometimes a little too good :-/.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Mon, 19 Feb 2001 18:07:05 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: old include files (was Re: PHP 4.0.4pl1 / Beta 5)" }, { "msg_contents": "* Larry Rosenman <ler@lerctr.org> [010219 15:55]:\n> * Larry Rosenman <ler@lerctr.org> [010219 15:45]:\n> > * Tom Lane <tgl@sss.pgh.pa.us> [010219 15:43]:\n> > > Larry Rosenman <ler@lerctr.org> writes:\n> > > > I still think we need a dummy postgres.h in $(destdir)/include to\n> > > > catch others using it this release. PHP 4.0.4pl1 and earlier will\n> > > > *BREAK* unless we do. \n> > > \n> > > If we do that, no one will ever fix their code. Moreover, such an\n> > > approach would conflict with the install-all-headers option...\n> > How about a BIG warning in the INSTALL doc, then? \n> AND make sure we nuke any OLD version in $(destdir)/include... Which\n> will cause a file not found vs. compile errors based on redeclares...?\nThanks for killing the old versions. 
Now what do we do re PHP \nwith releases 4.0.4pl1 and earlier which now won't compile against\n7.1beta5 and later? \n\nI think we need to do SOMETHING....\n\nLER\n\n> \n> \n> LER\n> -- \n> Larry Rosenman http://www.lerctr.org/~ler\n> Phone: +1 972-414-9812 E-Mail: ler@lerctr.org\n> US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 20 Feb 2001 15:13:40 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> Thanks for killing the old versions. Now what do we do re PHP \n> with releases 4.0.4pl1 and earlier which now won't compile against\n> 7.1beta5 and later? \n\n> I think we need to do SOMETHING....\n\nFrankly, if that's the biggest 7.0-to-7.1 compatibility problem that\nwe see, I'll be surprised (and pleased). This isn't a problem for\nprecompiled PHP distributions, and it's a trivial fix for those working\nfrom source. So I don't feel a need to go through any major pushups to\ndeal with it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 16:22:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5 " }, { "msg_contents": "> Larry Rosenman <ler@lerctr.org> writes:\n> > Thanks for killing the old versions. Now what do we do re PHP \n> > with releases 4.0.4pl1 and earlier which now won't compile against\n> > 7.1beta5 and later? \n> \n> > I think we need to do SOMETHING....\n> \n> Frankly, if that's the biggest 7.0-to-7.1 compatibility problem that\n> we see, I'll be surprised (and pleased). This isn't a problem for\n> precompiled PHP distributions, and it's a trivial fix for those working\n> from source. 
So I don't feel a need to go through any major pushups to\n> deal with it.\n\nSure, let's wait for people to report a problem and we can deal with it\nin a minor release.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Feb 2001 16:23:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "but the changes in the include structure force us to.\n\nIf someone includes the old ones that aren't supposed to be there, we cause\nnon-obvious compile errors.\n\nLER\n\n\n-----Original Message-----\nFrom: Peter Eisentraut [mailto:peter_e@gmx.net]\nSent: Wednesday, February 21, 2001 10:56 AM\nTo: Larry Rosenman\nCc: Tom Lane; Sascha Schumann; PostgreSQL Hackers List; Bruce Momjian\nSubject: Re: [HACKERS] PHP 4.0.4pl1 / Beta 5\n\n\nLarry Rosenman writes:\n\n> AND make sure we nuke any OLD version in $(destdir)/include... Which\n> will cause a file not found vs. compile errors based on redeclares...?\n\nDeleting files in the install directory during installation is very\ninappropriate. At least let's try to get rid of it for 7.2.\n\n--\nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 21 Feb 2001 10:50:29 -0600", "msg_from": "\"Larry Rosenman\" <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "RE: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "Larry Rosenman writes:\n\n> AND make sure we nuke any OLD version in $(destdir)/include... Which\n> will cause a file not found vs. compile errors based on redeclares...?\n\nDeleting files in the install directory during installation is very\ninappropriate. 
At least let's try to get rid of it for 7.2.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 21 Feb 2001 17:56:04 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Larry Rosenman writes:\n>> AND make sure we nuke any OLD version in $(destdir)/include... Which\n>> will cause a file not found vs. compile errors based on redeclares...?\n\n> Deleting files in the install directory during installation is very\n> inappropriate. At least let's try to get rid of it for 7.2.\n\nI don't like it much either, but I agree with Larry that it's an\nessential transition step for now. Perhaps we can remove it again\nin 7.2 or 7.3 or so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 12:34:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5 " }, { "msg_contents": "Tom Lane writes:\n\n> > Deleting files in the install directory during installation is very\n> > inappropriate. At least let's try to get rid of it for 7.2.\n>\n> I don't like it much either, but I agree with Larry that it's an\n> essential transition step for now. Perhaps we can remove it again\n> in 7.2 or 7.3 or so.\n\nI doubt that it ever really worked, or could work, to install a new\nversion over an old one without deleting the old one first. This here is\njust one problem. We can't be making these funny workarounds every time\nthe set of installed user visible files changes. 
For example, if an older\nversion had a header file that the new version doesn't have, then user\ncode that includes this file will still be silently broken.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 21 Feb 2001 23:16:40 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5 " }, { "msg_contents": "* Peter Eisentraut <peter_e@gmx.net> [010221 16:09]:\n> Tom Lane writes:\n> \n> > > Deleting files in the install directory during installation is very\n> > > inappropriate. At least let's try to get rid of it for 7.2.\n> >\n> > I don't like it much either, but I agree with Larry that it's an\n> > essential transition step for now. Perhaps we can remove it again\n> > in 7.2 or 7.3 or so.\n> \n> I doubt that it ever really worked, or could work, to install a new\n> version over an old one without deleting the old one first. This here is\n> just one problem. We can't be making these funny workarounds every time\n> the set of installed user visible files changes. For example, if an older\n> version had a header file that the new version doesn't have, then user\n> code that includes this file will still be silently broken.\nTHIS CHANGED WITHIN A BETA CYCLE. THAT SHOULD HAVE WORKED. \n\nLER\n\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Wed, 21 Feb 2001 16:27:36 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I doubt that it ever really worked, or could work, to install a new\n> version over an old one without deleting the old one first. This here is\n> just one problem. 
We can't be making these funny workarounds every time\n> the set of installed user visible files changes. For example, if an older\n> version had a header file that the new version doesn't have, then user\n> code that includes this file will still be silently broken.\n\nWell, the idea is to make the breakage be not so silent ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 18:01:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PHP 4.0.4pl1 / Beta 5 " } ]
[ { "msg_contents": "\nStarting with PostgreSQL 7.1beta5 (or current CVS), PHP's pgsql\nextension needs to only include <postgres_fe.h> to compile. \n\nHere is a patch:\n\nIndex: php_pgsql.h\n===================================================================\nRCS file: /cvsroot/php/ext/pgsql/php_pgsql.h,v\nretrieving revision 1.1.1.2\ndiff -c -r1.1.1.2 php_pgsql.h\n*** php_pgsql.h\t2000/12/23 23:05:41\t1.1.1.2\n--- php_pgsql.h\t2001/02/18 21:15:45\n***************\n*** 29,35 ****\n \n #ifdef PHP_PGSQL_PRIVATE\n #undef SOCKET_SIZE_TYPE\n! #include <postgres.h>\n #include <libpq-fe.h>\n \n #ifdef PHP_WIN32\n--- 29,35 ----\n \n #ifdef PHP_PGSQL_PRIVATE\n #undef SOCKET_SIZE_TYPE\n! #include <postgres_fe.h>\n #include <libpq-fe.h>\n \n #ifdef PHP_WIN32\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Sun, 18 Feb 2001 15:21:46 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": true, "msg_subject": "PHP needs to only include <postgres_fe.h> now..." } ]
[ { "msg_contents": "\nWhat works:\n\n# select o.id from op o order by o.id;\n# select o.id from op o union all SELECT -1 order by id;\n\nDoes not work:\n\n# select o.id from op o union all SELECT -1 order by o.id;\nERROR: Relation 'o' does not exist\n# select o.id from op o union all SELECT -1 from op o order by o.id;\nERROR: Relation 'o' does not exist\n\n\nRunning today's CVS. (I finally converted my main workstation\nto 7.1...)\n\n-- \nmarko\n\n", "msg_date": "Mon, 19 Feb 2001 02:53:57 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Bug: aliasing in ORDER BY when UNIONing" }, { "msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n> What works:\n> # select o.id from op o union all SELECT -1 order by id;\n\nThis is valid SQL.\n\n> # select o.id from op o union all SELECT -1 order by o.id;\n> ERROR: Relation 'o' does not exist\n\nThis is not valid SQL. For one thing, the table alias \"o\" is not\nvisible outside the first component SELECT.\n\nYes, I know 7.0 took it... but its handling of ORDER BY on UNION\nwas pretty darn broken.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 18 Feb 2001 20:24:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug: aliasing in ORDER BY when UNIONing " }, { "msg_contents": "On Sun, Feb 18, 2001 at 08:24:20PM -0500, Tom Lane wrote:\n> Marko Kreen <marko@l-t.ee> writes:\n> > What works:\n> > # select o.id from op o union all SELECT -1 order by id;\n> \n> This is valid SQL.\n> \n> > # select o.id from op o union all SELECT -1 order by o.id;\n> > ERROR: Relation 'o' does not exist\n> \n> This is not valid SQL. For one thing, the table alias \"o\" is not\n> visible outside the first component SELECT.\n> \n> Yes, I know 7.0 took it... but its handling of ORDER BY on UNION\n> was pretty darn broken.\n\nDoh. But if I have several tables with a field 'id'? Then only\nway is to use the column number? 
But the query is big and composed\nof several sources, fields and other stuff is separated - oh\nwell... Thankfully the field is not 'id' so maybe its not that\nbad.\n\nAnyway such stuff should be documented I guess. From current\ndocs I read that it should work. I would have expected that one\nof the select's aliases would be transferred to ORDER BY but its\nnot possible?\n\n-- \nmarko\n\n", "msg_date": "Mon, 19 Feb 2001 04:02:37 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Re: Bug: aliasing in ORDER BY when UNIONing" }, { "msg_contents": "Marko Kreen <marko@l-t.ee> writes:\n> # select o.id from op o union all SELECT -1 order by o.id;\n> ERROR: Relation 'o' does not exist\n>> \n>> This is not valid SQL. For one thing, the table alias \"o\" is not\n>> visible outside the first component SELECT.\n>> \n>> Yes, I know 7.0 took it... but its handling of ORDER BY on UNION\n>> was pretty darn broken.\n\n> Doh. But if I have several tables with a field 'id'? Then only\n> way is to use the column number?\n\nYou could assign column names:\n\nSELECT o.id as id1, p.id as id2, ... UNION ... ORDER BY id1, id2;\n\n> Anyway such stuff should be documented I guess. From current\n> docs I read that it should work.\n\nWhere?\n\n> I would have expected that one\n> of the select's aliases would be transferred to ORDER BY but its\n> not possible?\n\nThe first subselect's column names are transferred to ORDER BY.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 00:26:44 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug: aliasing in ORDER BY when UNIONing " }, { "msg_contents": "On Mon, Feb 19, 2001 at 12:26:44AM -0500, Tom Lane wrote:\n> Marko Kreen <marko@l-t.ee> writes:\n> > Anyway such stuff should be documented I guess. From current\n> > docs I read that it should work.\n> \n> Where?\n\nAnd ofcourse, you are right :) I was confused of result columns\nvs. 
table columns, but the ORDER BY description even explicitly says result columns.\n\nThanks.\n\n-- \nmarko\n\n", "msg_date": "Mon, 19 Feb 2001 10:51:32 +0200", "msg_from": "Marko Kreen <marko@l-t.ee>", "msg_from_op": true, "msg_subject": "Re: Bug: aliasing in ORDER BY when UNIONing" } ]
[ { "msg_contents": "At 20:40 18/02/01 -0500, Tom Lane wrote:\n>Philip Warner <pjw@rhyme.com.au> writes:\n>>> Hmm, that's definitely what SQL99 uses for the syntax. I wonder where\n>>> Jan got the SELECT INTO syntax --- did he borrow it from Oracle?\n>\n>> Sadly, we made it up.\n>\n>Ah so. Well, friendliness aside, I'd go with the spec's syntax.\n\nProbably a reasonably defensible position, too.\n\n\n>> We *do* need to support ROW_COUNT, but allowing\n>\n>> GET DIAGNOSTICS Select ROW_COUNT, SQLCODE, OID Into :a,:b:,:c;\n>\n>> is a lot friendlier than the standard:\n>\n>> GET DIAGNOSTICS :a = ROW_COUNT;\n>> GET DIAGNOSTICS EXCEPTION 1 :b = SQLCODE;\n>> GET DIAGNOSTICS :c = OID;\n>\n>It looks to me like SQL99 allows\n>\n>\tGET DIAGNOSTICS :a = ROW_COUNT, :b = OID, ...;\n\nYes, but condition information (eg. SPI RESULT or SQLCODE), needs a\nseparate statement to row information (eg. ROW_COUNT). ie.\n\n GET DIAGNOSTICS :a = ROW_COUNT, :c = OID;\n GET DIAGNOSTICS EXCEPTION 1 :b = SQLCODE;\n\nbut it's not much of a problem, really. And I agree the 'x = y' syntax is\nbetter.\n\nUnfortunately, I don't have an awful lot of free time at the moment, so I\nwon't be able to look at this for at *least* as week.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Mon, 19 Feb 2001 14:42:06 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": true, "msg_subject": "Re: GET DIAGNOSTICS (was Re: Open 7.1 items) " }, { "msg_contents": "> >> GET DIAGNOSTICS :c = OID;\n> >\n> >It looks to me like SQL99 allows\n> >\n> >\tGET DIAGNOSTICS :a = ROW_COUNT, :b = OID, ...;\n> \n> Yes, but condition information (eg. 
SPI RESULT or SQLCODE), needs a\n> separate statement to row information (eg. ROW_COUNT). ie.\n> \n> GET DIAGNOSTICS :a = ROW_COUNT, :c = OID;\n> GET DIAGNOSTICS EXCEPTION 1 :b = SQLCODE;\n> \n> but it's not much of a problem, really. And I agree the 'x = y' syntax is\n> better.\n> \n> Unfortunately, I don't have an awful lot of free time at the moment, so I\n> won't be able to look at this for at *least* as week.\n\nWell, this clearly is a release-stopper because we don't want to release\na non-standard GET DIAGNOSTICS. It will be fixed before 7.1 final by\nsomeone. I have added it to the open items list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 18 Feb 2001 23:29:10 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: GET DIAGNOSTICS (was Re: Open 7.1 items)" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Unfortunately, I don't have an awful lot of free time at the moment, so I\n> won't be able to look at this for at *least* as week.\n\nIt looks like a pretty straightforward change; I'll try to get it done\ntoday.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 12:36:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GET DIAGNOSTICS (was Re: Open 7.1 items) " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Well, this clearly is a release-stopper because we don't want to release\n> a non-standard GET DIAGNOSTICS. It will be fixed before 7.1 final by\n> someone. 
I have added it to the open items list.\n\nDone.\n\nI ended up using RESULT_OID for the keyword that wasn't specified by\nSQL99, after I realized that it actually *is* a keyword in the plpgsql\ngrammar, and therefore had better not conflict with any plain\nidentifiers that a user might want to use. Both RESULT and OID look\nmighty dangerous from that perspective.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 15:03:36 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GET DIAGNOSTICS (was Re: Open 7.1 items) " }, { "msg_contents": "Philip Warner wrote:\n>\n> Unfortunately, I don't have an awful lot of free time at the moment, so I\n> won't be able to look at this for at *least* as week.\n\n I'll do it as soon as we decided about the final syntax and\n keywords.\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 19 Feb 2001 15:14:19 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: GET DIAGNOSTICS (was Re: Open 7.1 items)" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Well, this clearly is a release-stopper because we don't want to release\n> > a non-standard GET DIAGNOSTICS. It will be fixed before 7.1 final by\n> > someone. I have added it to the open items list.\n> \n> Done.\n> \n> I ended up using RESULT_OID for the keyword that wasn't specified by\n> SQL99, after I realized that it actually *is* a keyword in the plpgsql\n> grammar, and therefore had better not conflict with any plain\n> identifiers that a user might want to use. 
Both RESULT and OID look\n> mighty dangerous from that perspective.\n\nOpen list updated. Looks like the list is done. Can I move \"Stuck\nbtree spinlocks\" to the TODO list. Is \"visibility of joined columns in JOIN\nclauses\" done?\n\n---------------------------------------------------------------------------\n\n P O S T G R E S Q L\n\n 7 . 1 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\n\nSource Code Changes\n-------------------\nLAZY VACUUM (Vadim)\nvisibility of joined columns in JOIN clauses\nStuck btree spinlocks\n\nDocumentation Changes\n---------------------\nODBC cleanups/improvements (Nick Gorham, Stephan Szabo, Zoltan Kovacs, \n Michael Fork)\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 15:14:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: GET DIAGNOSTICS (was Re: Open 7.1 items)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Open list updated. Looks like the list is done. Can I move \"Stuck\n> btree spinlocks\" to the TODO list. Is \"visibility of joined columns in JOIN\n> clauses\" done?\n\nI think both of those are actually done. Vadim might want to tweak\nthe timeouts I selected for buffer spinlocks, but that's easily done\nif he does.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 15:20:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: GET DIAGNOSTICS (was Re: Open 7.1 items) " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Open list updated. Looks like the list is done. Can I move \"Stuck\n> > btree spinlocks\" to the TODO list. 
Is \"visibility of joined columns in JOIN\n> > clauses\" done?\n> \n> I think both of those are actually done. Vadim might want to tweak\n> the timeouts I selected for buffer spinlocks, but that's easily done\n> if he does.\n\n\nOK, I have removed these items. Doesn't look like much left. Let me\nmove Lazy Vacuum to TODO, and remove ODBC. I will keep the web page in\ncase we need to add some later.\n\nThanks folks for clearing these items.\n\n---------------------------------------------------------------------------\n\n P O S T G R E S Q L\n\n 7 . 1 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\n\nSource Code Changes\n-------------------\nLAZY VACUUM (Vadim)\n\nDocumentation Changes\n---------------------\nODBC cleanups/improvements (Nick Gorham, Stephan Szabo, Zoltan Kovacs, \n Michael Fork)\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 15:44:03 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: GET DIAGNOSTICS (was Re: Open 7.1 items)" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Open list updated. Looks like the list is done. Can I move \"Stuck\n> > btree spinlocks\" to the TODO list. Is \"visibility of joined columns in JOIN\n> > clauses\" done?\n> \n> I think both of those are actually done. Vadim might want to tweak\n> the timeouts I selected for buffer spinlocks, but that's easily done\n> if he does.\n\nGreat, so you already have it using spinlocks, but using timeouts, and\nit will not die under heavy load.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 15:51:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: GET DIAGNOSTICS (was Re: Open 7.1 items)" } ]
[ { "msg_contents": "Dear All PostgreSQL Expert,\n\n May I know where can I download the postgreSQl for window98 and how to\ninstall it?\n\nThanks!\n\nJasper\n", "msg_date": "Mon, 19 Feb 2001 12:40:02 +0800", "msg_from": "\"Chua King Hua\" <chuakh@hhb.com.my>", "msg_from_op": true, "msg_subject": "postgreSQL on window98" } ]
[ { "msg_contents": "\n> > The easy fix is to just set the delay to zero. Looks like that will fix\n> > most of the problem.\n> \n> Except that Vadim had a reason for setting it to 5, and I'm loath to see\n> that changed unless someone actaully understands the ramifications other\n> then increasing performance ...\n\nVadim originally intended 5 milliseconds, he only read the parameter wrong.\nThen noticing, that the parameter is actually microseconds, he iirc decided to\nleave it as is because the discussion at that time seemed to lead to the conclusion, \nthat a simple yield would be optimal in lack of some sort of detection, and\nthe select with 5 us seemed closest to that.\n\nAt least on AIX it looks like the select with 0 timeout is a noop, and does not\nyield the processor. There was discussion, that other OS's (BSD) also does an \nimmediate return in case of 0 timeout.\n\nMinimum select(2) delay is 1 msec on AIX (tested with Tom's test.c).\n\nSo, what was the case against using yield (2) ?\n\nAndreas\n", "msg_date": "Mon, 19 Feb 2001 10:04:43 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: beta5 ..." }, { "msg_contents": "> At least on AIX it looks like the select with 0 timeout is a noop, and does not\n> yield the processor. There was discussion, that other OS's (BSD) also does an \n> immediate return in case of 0 timeout.\n> \n> Minimum select(2) delay is 1 msec on AIX (tested with Tom's test.c).\n> \n> So, what was the case against using yield (2) ?\n\nBSDi doesn't have yield(). It does have sched_yield(), but that\ncontrols threads:\n\n\tforce the current pthread to be rescheduled\n\nso there doesn't seem to be any portable way to do this. 
Sleeps of zero\ndo no kernel call, and sleeps > 0 sleep for a minimum of one tick.\n\nIf you really want a near-zero sleep, you need to do a no-op kernel\ncall, like umask(), but doing a simple kernel call usually is not enough\nbecause kernels usually favor the last-running process because of the\nCPU cache. We need a \"try to schedule someone else if they are ready to\nrun, if not, return right away\" call.\n\nI think ultimately, we need the type of near-committers feedback, but\nnot for 7.1.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 10:08:29 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Re: beta5 ..." }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> So, what was the case against using yield (2) ?\n\n$ man 2 yield\nNo entry for yield in section 2 of the manual.\n\nLack of portability :-(\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 10:29:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Re: beta5 ... " } ]
[ { "msg_contents": "\n> We *do* need to support ROW_COUNT, but allowing\n> \n> GET DIAGNOSTICS Select ROW_COUNT, SQLCODE, OID Into :a,:b:,:c;\n> \n> is a lot friendlier than the standard:\n> \n> GET DIAGNOSTICS :a = ROW_COUNT;\n> GET DIAGNOSTICS EXCEPTION 1 :b = SQLCODE;\n> GET DIAGNOSTICS :c = OID;\n> \n> (not that we even support SQLCODE at this stage).\n\nInformix does:\n\tGET DIAGNOSTICS :a = ROW_COUNT, EXCEPTION 1 :b = SQLCODE;\n\nseparated with comma, don't know if that is standard, but it sure looks more \nlike the standard.\n\nAndreas\n\t\n", "msg_date": "Mon, 19 Feb 2001 11:23:57 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: GET DIAGNOSTICS (was Re: Open 7.1 items)" } ]
[ { "msg_contents": "Your name : Sezai YILMAZ \nYour email address : sezaiy@ata.cs.hun.edu.tr\n\n\nSystem Configuration\n---------------------\n Architecture (example: Intel Pentium) : AMD Duron\n\n Operating System (example: Linux 2.0.26 ELF) : Linux 2.2.17 ELF\n\n PostgreSQL version (example: PostgreSQL-7.0): PostgreSQL-7.0.3\n\n Compiler used (example: gcc 2.8.0) : gcc 2.95.3\n\n\nPlease enter a FULL description of your problem:\n------------------------------------------------\n\nLocale support for Turkish causes a problem. The problem is with \ncharacter 'I' (capital of 9.th character of English alphabet). \nWhen character 'I' is given to tolower() function and locale is \nset to \"tr_TR\", it downgrades to special Turkish character 'ı' \n(it is called \"dotless i\"), not 'i'. This causes the following \nproblem:\n\nWith Turkish locale it is not possible to write SQL queries in \nCAPITAL letters. SQL identifiers like \"INSERT\" and \"UNION\" first \nare downgraded to \"ınsert\" and \"unıon\". Then \"ınsert\" and \"unıon\" \ndoes not match as SQL identifier.\n\n\n\nPlease describe a way to repeat the problem. 
Please try to provide a\nconcise reproducible example, if at all possible: \n----------------------------------------------------------------------\n\nWhen you set \"LC_ALL\" environment variable to \"tr_TR\" this \nproblem happens.\n\n\n\nIf you know how this problem might be fixed, list the solution below:\n---------------------------------------------------------------------\n\nIn file:\n\n[postgresqlsourcepath]/src/backend/parser/scan.l\n\nThis block uses function tolower() which is affected by locale \nsettings of the shell which runs postmaster.\n\n================================================================\n{identifier} {\n int i;\n ScanKeyword *keyword;\n\n for(i = 0; yytext[i]; i++)\n if (isascii((unsigned char)yytext[i]) &&\n isupper(yytext[i]))\n yytext[i] = tolower(yytext[i]);\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n================================================================\n\nI think it should be better to use another thing which does what \nfunction tolower() does but only in English language. This should\nstay in English locale. I think this will solve the problem.\n\n'a' - 'A' = 32\n\nSo we can use the following line instead of the last line marked \nin above block.\n\nyytext[i] += 32;\n", "msg_date": "Mon, 19 Feb 2001 13:50:05 +0200", "msg_from": "Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr>", "msg_from_op": true, "msg_subject": "Turkish locale bug" }, { "msg_contents": "Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr> writes:\n> With Turkish locale it is not possible to write SQL queries in \n> CAPITAL letters. SQL identifiers like \"INSERT\" and \"UNION\" first \n> are downgraded to \"�nsert\" and \"un�on\". Then \"�nsert\" and \"un�on\"\n> does not match as SQL identifier.\n\nUgh.\n\n> for(i = 0; yytext[i]; i++)\n> if (isascii((unsigned char)yytext[i]) &&\n> isupper(yytext[i]))\n> yytext[i] = tolower(yytext[i]);\n\n> I think it should be better to use another thing which does what \n> function tolower() does but only in English language. 
This should\n> stay in English locale. I think this will solve the problem.\n\n> yytext[i] += 32;\n\nHm. Several problems here:\n\n(1) This solution would break in other locales where isupper() may\nreturn TRUE for characters other than 'A'..'Z'.\n\n(2) We could fix that by gutting the isascii/isupper test as well,\nreducing it to \"yytext[i] >= 'A' && yytext[i] <= 'Z'\", but I'd prefer to\nstill be able to say that \"identifiers fold to lower case\" works for\nwhatever the local locale thinks is upper and lower case. It would be\nstrange if identifier folding did not agree with the SQL lower()\nfunction.\n\n(3) I do not like the idea of hard-wiring knowledge of ASCII encoding\nhere, even if it's unlikely that anyone would ever try to run Postgres\non a non-ASCII-based system.\n\nI see your problem, but I'm not sure of a solution that doesn't have bad\nside-effects elsewhere. Ideas anyone?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 21:30:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Turkish locale bug " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010219 20:31]:\n> \n> Hm. Several problems here:\n> \n> (1) This solution would break in other locales where isupper() may\n> return TRUE for characters other than 'A'..'Z'.\n> \n> (2) We could fix that by gutting the isascii/isupper test as well,\n> reducing it to \"yytext[i] >= 'A' && yytext[i] <= 'Z'\", but I'd prefer to\n> still be able to say that \"identifiers fold to lower case\" works for\n> whatever the local locale thinks is upper and lower case. It would be\n> strange if identifier folding did not agree with the SQL lower()\n> function.\nWhat about EBCDIC (IBM MainFrame, I.E. Linux on S/390, Z/390). \n\nEBCDIC has 3 different ranges that contain letters.\n\nX'C1'-X'C9' (A-I)\nX'D1'-X'D9' (J-R)\nX'E2'-X'E9' (S-Z)\n\nand the *LOWER* case ones subtract X'40' (SPACE) to get there.\n\nPlus Numbers are X'F0'- X'F9'. 
\n\nThis is from 5 year ago mainframe assembler memory....\n> \n> (3) I do not like the idea of hard-wiring knowledge of ASCII encoding\n> here, even if it's unlikely that anyone would ever try to run Postgres\n> on a non-ASCII-based system.\nNot unlikely now. See APACHE and other ports to now handle EBCDIC.\n> \n> I see your problem, but I'm not sure of a solution that doesn't have bad\n> side-effects elsewhere. Ideas anyone?\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Mon, 19 Feb 2001 20:39:15 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Turkish locale bug" }, { "msg_contents": "Larry Rosenman <ler@lerctr.org> writes:\n> What about EBCDIC (IBM MainFrame, I.E. Linux on S/390, Z/390). \n\nRight, that was what I meant about not wanting to hardwire assumptions\nabout ASCII.\n\nWe could instead code it as\n\n\tif (isupper(ch))\n\t ch = ch + ('a' - 'A');\n\nwhich I believe will work on EBCDIC as well as ASCII. However, it still\nbreaks down if isupper() claims that anything besides 'A'..'Z' is\nuppercase --- and the simple 'A' to 'Z' range check does *not* work in\nEBCDIC.\n\nIt would be an interesting timewaster to try to get Postgres working on\nan EBCDIC platform ;-). I'm sure there are a lot of ASCII dependencies\nlurking in the code that would need to be snuffed out. However, that\ndoesn't mean that I'm eager to add another one here ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 22:00:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Turkish locale bug " }, { "msg_contents": "* Tom Lane <tgl@sss.pgh.pa.us> [010219 21:02]:\n> Larry Rosenman <ler@lerctr.org> writes:\n> > What about EBCDIC (IBM MainFrame, I.E. Linux on S/390, Z/390). 
\n> \n> Right, that was what I meant about not wanting to hardwire assumptions\n> about ASCII.\n> \n> We could instead code it as\n> \n> \tif (isupper(ch))\n> \t ch = ch + ('a' - 'A');\nwhat about:\n if (isupper(ch) && isalpha(ch)) \n ch = ch + ('a' - 'A'); \n\n? \n\nor does that break somewhere? \n\n\n\nLER\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Mon, 19 Feb 2001 21:15:23 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: Turkish locale bug" }, { "msg_contents": "Tom Lane wrote:\n> \n> Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr> writes:\n> > With Turkish locale it is not possible to write SQL queries in\n> > CAPITAL letters. SQL identifiers like \"INSERT\" and \"UNION\" first\n> > are downgraded to \"�nsert\" and Then \"�nsert\" and \"un�on\"\n> > does not match as SQL identifier.\n> \n> Ugh.\n<snip>\n\nHow about thinking in the other direction.... is it possible for\nPostgreSQL\nto be able to recognised localised versions of SQL queries?\n\n i.e. For a Turkish locale it associates \"�nsert\" INSERT and \"un�on\"\nwith UNION.\n\nPerhaps including this in the compilation stage (checking which locates\nare installed on a system, or maybe which locales are specified\nsomewhere)?\n\nNot sure what this would do to performance though, as having to do extra\nSQL identifier matching might be a bit slow.\n\nThis would have the advantage of the present SQL queries out there\nworking.\n\nRegards and best wishes,\n\nJustin Clift\nDatabase Administrator\n", "msg_date": "Tue, 20 Feb 2001 14:30:26 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: Turkish locale bug" }, { "msg_contents": "Justin Clift <aa2@bigpond.net.au> writes:\n> How about thinking in the other direction.... 
is it possible for\n> PostgreSQL to be able to recognised localised versions of SQL queries?\n\n> i.e. For a Turkish locale it associates \"�nsert\" INSERT and \"un�on\"\n> with UNION.\n\nHmm. Wouldn't that mean that if someone actually wrote �nsert,\nit would be taken as matching the INSERT keyword, not as an identifier?\nIf I understood Sezai correctly, that would surprise a Turkish user.\nBut if this behavior is OK then you might have a good answer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 22:37:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Turkish locale bug " }, { "msg_contents": "\n\nJustin Clift wrote:\n> \n> Tom Lane wrote:\n> >\n> > Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr> writes:\n> > > With Turkish locale it is not possible to write SQL queries in\n> > > CAPITAL letters. SQL identifiers like \"INSERT\" and \"UNION\" first\n> > > are downgraded to \"�nsert\" and Then \"�nsert\" and \"un�on\"\n> > > does not match as SQL identifier.\n> >\n> > Ugh.\n> <snip>\n> \n> How about thinking in the other direction.... is it possible for\n> PostgreSQL\n> to be able to recognised localised versions of SQL queries?\n> \n> i.e. For a Turkish locale it associates \"�nsert\" INSERT and \"un�on\"\n> with UNION.\n\nI don't have any opinion how can solve this problem. But,\nI don't agree with this solution. SQL is naturally English. I am \nagainst SQL to be localized.\n\nregards\n-sezai\n", "msg_date": "Tue, 20 Feb 2001 10:44:55 +0200", "msg_from": "Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr>", "msg_from_op": true, "msg_subject": "Re: Turkish locale bug" }, { "msg_contents": "\n\nTom Lane wrote:\n> \n> Justin Clift <aa2@bigpond.net.au> writes:\n> > How about thinking in the other direction.... is it possible for\n> > PostgreSQL to be able to recognised localised versions of SQL queries?\n> \n> > i.e. For a Turkish locale it associates \"�nsert\" INSERT and \"un�on\"\n> > with UNION.\n> \n> Hmm. 
Wouldn't that mean that if someone actually wrote �nsert,\n> it would be taken as matching the INSERT keyword, not as an identifier?\n> If I understood Sezai correctly, that would surprise a Turkish user.\n> But if this behavior is OK then you might have a good answer.\n\nThis solution is simple and clear. But it is not a good solution, \nI think. I don't prefer \"�nsert\" to be understood as \"INSERT\" and \n\"un�on\" as \"UNION\" in SQL keywords. I think this behaviour is not\nOK.\n\nIt should be better to write functions isalpha_en(), isupper_en() \nand tolower_en() which actually behave with English locale. Then\nuse these function in that block.\n\nregards\n-sezai\n\n> \n> regards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 11:00:02 +0200", "msg_from": "Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr>", "msg_from_op": true, "msg_subject": "Re: Re: Turkish locale bug" }, { "msg_contents": "\n\nTom Lane wrote:\n> \n> Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr> writes:\n> > With Turkish locale it is not possible to write SQL queries in\n> > CAPITAL letters. SQL identifiers like \"INSERT\" and \"UNION\" first\n> > are downgraded to \"�nsert\" and \"un�on\". Then \"�nsert\" and \"un�on\"\n> > does not match as SQL identifier.\n> \n> Ugh.\n> \n> > for(i = 0; yytext[i]; i++)\n> > if (isascii((unsigned char)yytext[i]) &&\n> > isupper(yytext[i]))\n> > yytext[i] = tolower(yytext[i]);\n> \n> > I think it should be better to use another thing which does what\n> > function tolower() does but only in English language. This should\n> > stay in English locale. I think this will solve the problem.\n> \n> > yytext[i] += 32;\n> \n> Hm. 
Several problems here:\n> \n> (1) This solution would break in other locales where isupper() may\n> return TRUE for characters other than 'A'..'Z'.\n> \n> (2) We could fix that by gutting the isascii/isupper test as well,\n> reducing it to \"yytext[i] >= 'A' && yytext[i] <= 'Z'\", but I'd prefer to\n> still be able to say that \"identifiers fold to lower case\" works for\n> whatever the local locale thinks is upper and lower case. It would be\n> strange if identifier folding did not agree with the SQL lower()\n> function.\n> \n> (3) I do not like the idea of hard-wiring knowledge of ASCII encoding\n> here, even if it's unlikely that anyone would ever try to run Postgres\n> on a non-ASCII-based system.\n> \n> I see your problem, but I'm not sure of a solution that doesn't have bad\n> side-effects elsewhere. Ideas anyone?\n> \n> regards, tom lane\n\nYou are right. What about this one?\n\n================================================================\n{identifier} {\n int i;\n ScanKeyword *keyword;\n\n /* I think many platforms understands the \n following and sets locale to 7-bit ASCII \n character set (English) */\n\n\t\t setlocale(LC_ALL, \"C\"); \n\n for(i = 0; yytext[i]; i++)\n if (isascii((unsigned char)yytext[i]) &&\n isupper(yytext[i]))\n yytext[i] = tolower(yytext[i]);\n\n /* This sets locale to default locale which \n user prefer to use */\n\n\t\t setlocale(LC_ALL, \"\"); \n================================================================\n\nThis works on my Linux box. But, I am not sure with other \nplatforms. What do you think about performance?\n\nregards\n-sezai\n", "msg_date": "Tue, 20 Feb 2001 11:24:59 +0200", "msg_from": "Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr>", "msg_from_op": true, "msg_subject": "Re: Turkish locale bug" }, { "msg_contents": "Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr> writes:\n> You are right. 
What about this one?\n\n> \t\t setlocale(LC_ALL, \"C\"); \n\n> for(i = 0; yytext[i]; i++)\n> if (isascii((unsigned char)yytext[i]) &&\n> isupper(yytext[i]))\n> yytext[i] = tolower(yytext[i]);\n\n> /* This sets locale to default locale which \n> user prefer to use */\n\n> \t\t setlocale(LC_ALL, \"\"); \n\nThis isn't really better than \"if (isupper(ch)) ch = ch + ('a' - 'A')\".\nIt still breaks the existing locale-aware handling of identifier case,\nwhich I believe is considered a good thing in all locales except C\nand Turkish. Another small problem is that setlocale() is moderately\nexpensive in most implementations, and we don't want to call it twice\nfor every identifier scanned.\n\nI am starting to think that the only real solution is a special case\nfor Turkish users. Perhaps use tolower() normally but have a compile-\ntime option to use a non-locale-aware method:\n\n#ifdef LOCALE_AWARE_IDENTIFIER_FOLDING\n if (isupper(yytext[i]))\n yytext[i] = tolower(yytext[i]);\n#else\n /* this assumes ASCII encoding... */\n if (yytext[i] >= 'A' && yytext[i] <= 'Z')\n yytext[i] += 'a' - 'A';\n#endif\n\nand then document that you have to disable\nLOCALE_AWARE_IDENTIFIER_FOLDING to use Turkish locale.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 11:00:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Turkish locale bug " }, { "msg_contents": "Merhaba Sezai!\n\n> I am starting to think that the only real solution is a special case\n> for Turkish users. Perhaps use tolower() normally but have a compile-\n> time option to use a non-locale-aware method:\n\nistm that this illustrates the tip of the locale iceberg as we think\nabout moving to a more \"locale independent\" strategy. 
Applying\nlocale-specific munging when scanning tokens prohibits a\ncontext-sensitive interpretation of tokens, which we will need to fully\nimplement a reasonable set of (or reasonable interpretation of) SQL9x\ncharacter set and collation features.\n\nAnyway, your proposal is just fine since we haven't decoupled these\nthings farther back in the server. But eventually we should hope to have\nSQL_ASCII and other character sets enforced in context.\n\n - Thomas\n", "msg_date": "Tue, 20 Feb 2001 16:36:19 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Turkish locale bug" }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> Anyway, your proposal is just fine since we haven't decoupled these\n> things farther back in the server. But eventually we should hope to have\n> SQL_ASCII and other character sets enforced in context.\n\nNow I'm confused. Are you saying that we *should* treat identifier case\nunder ASCII rules only? That seems like a step backwards to me, but\nthen I don't use any non-US locale myself...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 11:47:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Turkish locale bug " }, { "msg_contents": "Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr> writes:\n> With Turkish locale it is not possible to write SQL queries in CAPITAL\n> letters. SQL identifiers like \"INSERT\" and \"UNION\" first are\n> downgraded to \"ınsert\" and \"unıon\". Then \"ınsert\" and\n> \"unıon\" does not match as SQL identifier.\n\nI believe this should now work correctly with the changes I just\ncommitted. 
If you have the time, please try it out --- you can get\ncurrent sources from our CVS server, or use a nightly snapshot dated\ntomorrow or later, or use 7.1beta5 when it comes out (which should be\nshortly).\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 14:11:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Turkish locale bug " }, { "msg_contents": "\n\nTom Lane wrote:\n> \n> Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr> writes:\n> > With Turkish locale it is not possible to write SQL queries in CAPITAL\n> > letters. SQL identifiers like \"INSERT\" and \"UNION\" first are\n> > downgraded to \"ınsert\" and \"unıon\". Then \"ınsert\" and\n> > \"unıon\" does not match as SQL identifier.\n> \n> I believe this should now work correctly with the changes I just\n> committed. If you have the time, please try it out --- you can get\n> current sources from our CVS server, or use a nightly snapshot dated\n> tomorrow or later, or use 7.1beta5 when it comes out (which should be\n> shortly).\n> \n> regards, tom lane\n\nI have tested it with nightly snapshot dated 22 Feb 2001 and it is \nworking. Thanks a lot.\n\nregards\n-sezai\n", "msg_date": "Fri, 23 Feb 2001 09:30:55 +0200", "msg_from": "Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr>", "msg_from_op": true, "msg_subject": "Re: Turkish locale bug" }, { "msg_contents": "> > Anyway, your proposal is just fine since we haven't decoupled these\n> > things farther back in the server. But eventually we should hope to have\n> > SQL_ASCII and other character sets enforced in context.\n> Now I'm confused. Are you saying that we *should* treat identifier case\n> under ASCII rules only? That seems like a step backwards to me, but\n> then I don't use any non-US locale myself...\n\n(Just a follow up...)\n\nI haven't had time to review the spec on this, but my recollection is\nthat the entire SQL language can be described using the SQL_ASCII\ncharacter set. 
I would assume that this might include unquoted\nidentifiers. I'd looked at much of this some time ago, but not recently\nso my memory might be faultly (for, um, not the first time :/\n\n - Thomas\n", "msg_date": "Fri, 23 Feb 2001 17:53:23 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Turkish locale bug" }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> (Just a follow up...)\n\n> I haven't had time to review the spec on this, but my recollection is\n> that the entire SQL language can be described using the SQL_ASCII\n> character set. I would assume that this might include unquoted\n> identifiers.\n\nThe keywords are all ASCII, but SQL99 appears to contemplate allowing\nmost of Unicode for unquoted identifiers. See my later message.\n(I've already committed the changes described therein, btw...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 12:58:50 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Turkish locale bug " }, { "msg_contents": "Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr> writes:\n\n> Justin Clift wrote:\n> > \n> > Tom Lane wrote:\n> > >\n> > > Sezai YILMAZ <sezaiy@ata.cs.hun.edu.tr> writes:\n> > > > With Turkish locale it is not possible to write SQL queries in\n> > > > CAPITAL letters. SQL identifiers like \"INSERT\" and \"UNION\" first\n> > > > are downgraded to \"ınsert\" and Then \"ınsert\" and \"unıon\"\n> > > > does not match as SQL identifier.\n> > >\n> > > Ugh.\n> > <snip>\n> > \n> > How about thinking in the other direction.... is it possible for\n> > PostgreSQL\n> > to be able to recognised localised versions of SQL queries?\n> > \n> > i.e. For a Turkish locale it associates \"ınsert\" INSERT and \"unıon\"\n> > with UNION.\n> \n> I don't have any opinion how can solve this problem. But,\n> I don't agree with this solution. SQL is naturally English. 
I am \n> against SQL to be localized.\n\n\nHas anyone come up with a good solution? The last one I saw from Tom\nLane required compile-time options which isn't an option for us.\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.", "msg_date": "02 Mar 2001 18:46:10 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: Re: Turkish locale bug" }, { "msg_contents": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=) writes:\n> Has anyone come up with a good solution? The last one I saw from Tom\n> Lane required compile-time options which isn't an option for us.\n\nAs far as I know it's fixed in the currently-committed sources. The\nkey is to do case normalization for keyword-testing separately from\ncase normalization of an identifier (after it's been determined not\nto be a keyword). Amazingly enough, SQL99 actually requires this...\n\nIn Turkish this means that either INSERT or insert will be seen as\na keyword, while either XINSERT or xinsert will become \"xınsert\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Mar 2001 19:11:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Turkish locale bug " }, { "msg_contents": "I said:\n> In Turkish this means that either INSERT or insert will be seen as\n> a keyword, while either XINSERT or xinsert will become \"xınsert\".\n\nSheesh. Gotta think twice before pressing SEND. That should be\n\n\tINSERT -> keyword\n\tinsert -> keyword\n\tXINSERT -> \"xınsert\"\n\txinsert -> \"xinsert\"\n\nsince of course the issue is the lowercase transform of \"I\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 02 Mar 2001 19:13:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: Turkish locale bug " } ]
[ { "msg_contents": "Hi all!\n\nIRCCS E. Medea is an Italian no-profit scientific and clinical research institute. It spreads over 5 different geographical locations connected through a private data network. \n\nWe are using PostgreSQL as DBMS because its object-oriented features qualify it for managing biomedical data and it is freely available to community via the open source philosophy.\n\nBioengineering Lab. staff developed a PostgreSQL Replicator tool (pgReplicator) in order to extend original potentials of PostgreSQL ORDBMS and make it able to manage distributed databases by means of asynchronous data replication.\n\npgReplicator is available under GNU public license.\n\nIf you are interested in replicating PostgreSQL databases please visit\n\t http://pgreplicator.sourceforge.net \nfor more information.\n\nSuggestions are welcomed.\n\nThanks all.\n", "msg_date": "Mon, 19 Feb 2001 14:57:02 +0100", "msg_from": "bioengineering.lab@bp.lnf.it", "msg_from_op": true, "msg_subject": "None" } ]
[ { "msg_contents": "\n> > So, what was the case against using yield (2) ?\n> \n> $ man 2 yield\n> No entry for yield in section 2 of the manual.\n> \n> Lack of portability :-(\n\nI can't beleive that AIX finally has a convenience function that \nis missing in mainstream unix :-)\n\n$man 2 yield\nPurpose\n\tYields the processor to processes with higher priorities.\nDescription\nThe yield subroutine forces the current running process or thread to relinquish\nuse of the processor. If the run queue is empty when the yield subroutine is\ncalled, the calling process or kernel thread is immediately rescheduled. If the\ncalling process has multiple threads, only the calling thread is affected. The\nprocess or thread resumes execution after all threads of equal or greater\npriority are scheduled to run.\n\nAndreas\n", "msg_date": "Mon, 19 Feb 2001 16:45:12 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: AW: Re: beta5 ... " }, { "msg_contents": "> I can't beleive that AIX finally has a convenience function that\n> is missing in mainstream unix :-)\n\nBetter not report it; they'll take it out ;)\n\n - Thomas\n", "msg_date": "Tue, 20 Feb 2001 04:40:24 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: AW: AW: Re: beta5 ..." }, { "msg_contents": "Hi all,\n\nAs a matter of curiosity, is each beta compiled and then regression\ntested against *every* one of the known \"supported\" platforms before\nrelease?\n\nLike, as an official \"checklist\" type step?\n\nRegards and best wishes,\n\nJustin Clift\nDatabase Administrator\n", "msg_date": "Tue, 20 Feb 2001 19:44:40 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." 
}, { "msg_contents": "> As a matter of curiosity, is each beta compiled and then regression\n> tested against *every* one of the known \"supported\" platforms before\n> release?\n\nNo. But the changes from beta to beta are usually done with platform\ncompatibility in mind, and we try to stay away from introducing\nplatform-specific breakage.\n\nWe *do* make a great effort to solicit regression tests on all supported\nplatforms between first beta and final release, with an explicit push\nduring the last beta cycle, just before docs freeze for the release.\nThis is easy for the common platforms, and we have been fortunate that\nthe more exotic platforms have usually had an interested supporter to\nrun the tests and report results.\n\nLack of reported regression tests for a release or two is sufficient\ncause to drop a platform to the \"unsupported\" list. We don't remove the\nplatform from all lists to help remind us that the platform *could* be\nsupported, and once was, in case someone wants to rehabilitate its\nstatus.\n\n - Thomas\n", "msg_date": "Tue, 20 Feb 2001 14:43:44 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": ">> As a matter of curiosity, is each beta compiled and then regression\n>> tested against *every* one of the known \"supported\" platforms before\n>> release?\n\nWho are you expecting to do that, exactly?\n\nOne of the differences between Postgres and a proprietary commercial \ndatabase is that there is no vast support machinery behind the scenes.\nWhat you see going on on this list is what you get: beta testing\nconsists of the activities performed and reported by list members.\n\nNormally, if we are about to push out a beta then two or three people\nwill double-check that the current CVS tip builds and passes regression\non their personal machines. 
But the \"supported platforms\" coverage\ndepicted in the docs consists of all the platforms that are reported to\nus as working during the entire beta test period, including many that\nthe key developers have no direct access to. There's no way that we\ncould reverse the process and cause that to happen before a beta release\ninstead of after; certainly no way that we could cause all that effort\nto be repeated for each beta version.\n\nIf you are using a beta version then you are part of that testing\nprocess, not a beneficiary of something that's happened behind closed\ndoors.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 10:50:57 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "I was just thinking that perhaps as part of the \"beta\" release process\nit would be worthwhile saying \"New beta about to be released\" (or\nsimilar) and then have the appropriate people for each platform/OS do a\ncompile against the up-to-the-minute CVS and give a yes/no for each of\ntheir platforms.\n\nIn a way, this seems to be done presently, but without the addition of a\nformalised checklist of \"platforms this beta compiles/regress-tests on\nas of release time\".\n\nI'm thinking about the release process.\n\n+ Justin\n\n\nTom Lane wrote:\n> \n> >> As a matter of curiosity, is each beta compiled and then regression\n> >> tested against *every* one of the known \"supported\" platforms before\n> >> release?\n> \n> Who are you expecting to do that, exactly?\n> \n> One of the differences between Postgres and a proprietary commercial\n> database is that there is no vast support machinery behind the scenes.\n> What you see going on on this list is what you get: beta testing\n> consists of the activities performed and reported by list members.\n> \n> Normally, if we are about to push out a beta then two or three people\n> will double-check that the current CVS tip builds and passes 
regression\n> on their personal machines. But the \"supported platforms\" coverage\n> depicted in the docs consists of all the platforms that are reported to\n> us as working during the entire beta test period, including many that\n> the key developers have no direct access to. There's no way that we\n> could reverse the process and cause that to happen before a beta release\n> instead of after; certainly no way that we could cause all that effort\n> to be repeated for each beta version.\n> \n> If you are using a beta version then you are part of that testing\n> process, not a beneficiary of something that's happened behind closed\n> doors.\n> \n> regards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 11:24:45 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "Justin Clift wrote:\n> \n> I was just thinking that perhaps as part of the \"beta\" release process\n> it would be worthwhile saying \"New beta about to be released\" (or\n> similar) and then have the appropriate people for each platform/OS do a\n> compile against the up-to-the-minute CVS and give a yes/no for each of\n> their platforms.\n\nWhat would the big advantage to releasing the beta and testing _that_ be\n?\n\nApart from delaying the beta, that is ;) ?\n\nIt would be nice if someone (pgsql inc., great bridge, etc.) provided a \ncentral web page for registering the results so that you won't need to \nscan athe whole list to find out if your platform is already tested.\n\n----------\nHannu\n", "msg_date": "Wed, 21 Feb 2001 08:29:57 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." 
}, { "msg_contents": "\nVince, is this something that PostgreSQL.Org can have on the web page\nrelatively quickly?\n\nOn Wed, 21 Feb 2001, Hannu Krosing wrote:\n\n> Justin Clift wrote:\n> >\n> > I was just thinking that perhaps as part of the \"beta\" release process\n> > it would be worthwhile saying \"New beta about to be released\" (or\n> > similar) and then have the appropriate people for each platform/OS do a\n> > compile against the up-to-the-minute CVS and give a yes/no for each of\n> > their platforms.\n>\n> What would the big advantage to releasing the beta and testing _that_ be\n> ?\n>\n> Apart from delaying the beta, that is ;) ?\n>\n> It would be nice if someone (pgsql inc., great bridge, etc.) provided a\n> central web page for registering the results so that you won't need to\n> scan athe whole list to find out if your platform is already tested.\n>\n> ----------\n> Hannu\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Wed, 21 Feb 2001 09:45:12 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "On Wed, 21 Feb 2001, Tom Lane wrote:\n\n> The Hermit Hacker <scrappy@hub.org> writes:\n> > Vince, is this something that PostgreSQL.Org can have on the web page\n> > relatively quickly?\n>\n> > On Wed, 21 Feb 2001, Hannu Krosing wrote:\n> >> It would be nice if someone (pgsql inc., great bridge, etc.) provided a\n> >> central web page for registering the results so that you won't need to\n> >> scan athe whole list to find out if your platform is already tested.\n>\n> Sounded like a great idea to me too. 
If Vince doesn't want to mess with\n> it, I'll try to stir up some interest at Great Bridge.\n\nSomething like this, I think, is more appropriate on the project site,\nthat's all ...\n\n\n", "msg_date": "Wed, 21 Feb 2001 11:03:04 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "The Hermit Hacker <scrappy@hub.org> writes:\n> Vince, is this something that PostgreSQL.Org can have on the web page\n> relatively quickly?\n\n> On Wed, 21 Feb 2001, Hannu Krosing wrote:\n>> It would be nice if someone (pgsql inc., great bridge, etc.) provided a\n>> central web page for registering the results so that you won't need to\n>> scan athe whole list to find out if your platform is already tested.\n\nSounded like a great idea to me too. If Vince doesn't want to mess with\nit, I'll try to stir up some interest at Great Bridge.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 10:04:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 ... " }, { "msg_contents": "Hannu Krosing writes:\n\n> It would be nice if someone (pgsql inc., great bridge, etc.) provided a\n> central web page for registering the results so that you won't need to\n> scan athe whole list to find out if your platform is already tested.\n\n\"Platform already tested\" is a misguided concept. Almost any machine is\ncustomized or deviating in some form or other. 
A listing of the form\n\n\tbeta1\tbeta2\t...\nLinux\tok\tok\nSolaris\tok\tbroken\n...\n\nis, IMHO, worse than useless, because it would actually decrease the\namount of wide-spread, diverse testing.\n\nI wouldn't mind an automated process that builds and runs the test suite\nregularly on many machines to inform developers during the development\ncycle that they broke something really bad, but to make this part of the\nbeta testing process is sheer folly.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 21 Feb 2001 17:10:08 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "Peter Eisentraut wrote:\n> Hannu Krosing writes:\n> > It would be nice if someone (pgsql inc., great bridge, etc.) provided a\n> > central web page for registering the results so that you won't need to\n> > scan athe whole list to find out if your platform is already tested.\n \n> \"Platform already tested\" is a misguided concept. Almost any machine is\n> customized or deviating in some form or other. A listing of the form\n \n> beta1 beta2 ...\n> Linux ok ok\n> Solaris ok broken\n> ...\n \n> is, IMHO, worse than useless, because it would actually decrease the\n> amount of wide-spread, diverse testing.\n \nIt goes even further than that, of course -- there are different\nversions to worry about. As a hypothetical, suppose for a moment that\nPostgreSQL works fine on a Linux 2.2.17 box with glibc 2.1.3, but does\nnot work fine on Linux 2.4.1 with glibc 2.2 due to some undocumented\nchange to strncmp() (;-)). 
Currently, we're not that fine-grained --\nwhile regression results are useful for ascertaining what is and isn't\nsupported, the regression results are very dependent on the environment\nof the machine.\n\nAny process that would discourage widespread testing is not good, IMHO.\n\nHaving a form by which you could register pass/fail/diffs for your\nparticular platform/environment could be good, with a blank slate each\nrelease. But a blanket 'we support Linux' is, IMHO, not good -- _which_\nLinux? 1.0? 1.2? 2.0? 2.0.38 but not 2.0.15? With libc 4 in a.out? Or\ndo you have to have ELF Libc 5? Libc 5.2.38 works, but 5.4.44 doesn't? \nGlibc 2.0.5 but not 2.1.3? RedHat kernel 2.2.17 but not SuSE kernel\n2.2.17? And, the worst: RedHat kernel 2.2.18 for RedHat 7 versus RedHat\nkernel 2.2.18 for RedHat 6.2 (the kernel patches applied could in fact\nbe different enough to matter)?\n\nThese are all hypothetical examples, of course -- but Linux is not the\nonly platform that has these versioning problems just waiting to bite. \nLinux probably has more of them than most, but it is not alone in having\nthem.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 21 Feb 2001 11:44:03 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "On Wed, 21 Feb 2001, The Hermit Hacker wrote:\n\n>\n> Vince, is this something that PostgreSQL.Org can have on the web page\n> relatively quickly?\n\nThe beta or registering the results? After the last time I won't\nput beta releases on the website, but if you want the results thing\nit can be done in a short time. 
Just tell me what info you want\nin it and it'll be there.\n\nVince.\n\n>\n> On Wed, 21 Feb 2001, Hannu Krosing wrote:\n>\n> > Justin Clift wrote:\n> > >\n> > > I was just thinking that perhaps as part of the \"beta\" release process\n> > > it would be worthwhile saying \"New beta about to be released\" (or\n> > > similar) and then have the appropriate people for each platform/OS do a\n> > > compile against the up-to-the-minute CVS and give a yes/no for each of\n> > > their platforms.\n> >\n> > What would the big advantage to releasing the beta and testing _that_ be\n> > ?\n> >\n> > Apart from delaying the beta, that is ;) ?\n> >\n> > It would be nice if someone (pgsql inc., great bridge, etc.) provided a\n> > central web page for registering the results so that you won't need to\n> > scan athe whole list to find out if your platform is already tested.\n> >\n> > ----------\n> > Hannu\n> >\n>\n> Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n>\n>\n\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 21 Feb 2001 15:05:00 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "On Wed, 21 Feb 2001, Vince Vielhaber wrote:\n\n> On Wed, 21 Feb 2001, The Hermit Hacker wrote:\n>\n> >\n> > Vince, is this something that PostgreSQL.Org can have on the web page\n> > relatively quickly?\n>\n> The beta or registering the results? 
After the last time I won't\n> put beta releases on the website, but if you want the results thing\n> it can be done in a short time. Just tell me what info you want\n> in it and it'll be there.\n\nHrmmm ... some sort of input form where someone can enter OS specific\ninfo, and maybe upload the results of the regression tests as far as\n'failed' or 'succeeded'? the report generated would list the OS info and\nx out of y tests failed ... and a link to a full listing of which\nfailed/succeeded?\n\n > > Vince.\n>\n> >\n> > On Wed, 21 Feb 2001, Hannu Krosing wrote:\n> >\n> > > Justin Clift wrote:\n> > > >\n> > > > I was just thinking that perhaps as part of the \"beta\" release process\n> > > > it would be worthwhile saying \"New beta about to be released\" (or\n> > > > similar) and then have the appropriate people for each platform/OS do a\n> > > > compile against the up-to-the-minute CVS and give a yes/no for each of\n> > > > their platforms.\n> > >\n> > > What would the big advantage to releasing the beta and testing _that_ be\n> > > ?\n> > >\n> > > Apart from delaying the beta, that is ;) ?\n> > >\n> > > It would be nice if someone (pgsql inc., great bridge, etc.) provided a\n> > > central web page for registering the results so that you won't need to\n> > > scan athe whole list to find out if your platform is already tested.\n> > >\n> > > ----------\n> > > Hannu\n> > >\n> >\n> > Marc G. 
Fournier ICQ#7615664 IRC Nick: Scrappy\n> > Systems Administrator @ hub.org\n> > primary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n> >\n> >\n>\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n>\n>\n>\n>\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Wed, 21 Feb 2001 18:16:14 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "I have a bunch of machines here, some are rather old (K6-200s,P133s, some\n486s etc) but they're just collecting dust now. I would be more than happy\nto install any OS and do build testing for PostgreSQL is there is a need..\n\nWhat OSes need to have PostgreSQL built/tested on that the developers don't\nhave access to? If I can get the OS (and install it), I would be happy to\ndedicate those machines to PG build testing.. I could set them up on the\nnetwork here (proxy, cable modem) or at the office (T1) and give developers\naccess if needed too.\n\nI have several FreeBSD boxes running PG beta 4 now, but I'd bet at least one\nof you is using FreeBSD (and it compiles and installs rather nicely\nanyway)..\n\n-Mitch\n\n\n", "msg_date": "Wed, 21 Feb 2001 17:22:41 -0500", "msg_from": "\"Mitch Vincent\" <mitch@venux.net>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." 
}, { "msg_contents": "On Wed, 21 Feb 2001, The Hermit Hacker wrote:\n\n> On Wed, 21 Feb 2001, Vince Vielhaber wrote:\n>\n> > On Wed, 21 Feb 2001, The Hermit Hacker wrote:\n> >\n> > >\n> > > Vince, is this something that PostgreSQL.Org can have on the web page\n> > > relatively quickly?\n> >\n> > The beta or registering the results? After the last time I won't\n> > put beta releases on the website, but if you want the results thing\n> > it can be done in a short time. Just tell me what info you want\n> > in it and it'll be there.\n>\n> Hrmmm ... some sort of input form where someone can enter OS specific\n> info, and maybe upload the results of the regression tests as far as\n> 'failed' or 'succeeded'? the report generated would list the OS info and\n> x out of y tests failed ... and a link to a full listing of which\n> failed/succeeded?\n\nLemme see what I can cobble together taking into consideration some of\nthe things Lamar and Peter also mentioned. Note: I'm probably 450\nmessagees behind due to a 2 day dsl outage; I may have missed some of\nthe conversation. Some messages trickled in, the rest flooded in over\nnight. I may be nearing the time for incoming mail folders :)\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 21 Feb 2001 17:27:23 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "Vince Vielhaber wrote:\n> the things Lamar and Peter also mentioned. Note: I'm probably 450\n> messagees behind due to a 2 day dsl outage; I may have missed some of\n> the conversation. 
Some messages trickled in, the rest flooded in over\n> night. I may be nearing the time for incoming mail folders :)\n\nJoin the club. I have just finished configuring my Netscape e-mail for\nincoming folders -- _important_ direct e-mails (having to to with my\nactual job) were getting lost in and amongst the various lists I am a\nmember of. I get around 600 e-mails per day on fifteen or so different\nmailing lists (the ones at PostgreSQL.org, a half dozen at\nBroadcast.net, Bugtraq/Linux-alert/RedHat-announce, redhat-beta,\nAOLserver/OpenNSD/OpenACS, and a handful of Linux announce lists,\nunixODBC, CERT, plus all of our Internet listeners). Netscapes filters\nare a lifesaver! Of course, there are other more capable packages out\nthere, but Netscape works the same on Win9x and Linux, both of which are\nin use on my notebook.\n\nI have to keep up, or the e-mail flood after a couple of days is just\nabout unbearable.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 21 Feb 2001 17:35:26 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "* Lamar Owen <lamar.owen@wgcr.org> [010221 16:36]:\n> Vince Vielhaber wrote:\n> > the things Lamar and Peter also mentioned. Note: I'm probably 450\n> > messagees behind due to a 2 day dsl outage; I may have missed some of\n> > the conversation. Some messages trickled in, the rest flooded in over\n> > night. I may be nearing the time for incoming mail folders :)\n> \n> Join the club. I have just finished configuring my Netscape e-mail for\n> incoming folders -- _important_ direct e-mails (having to to with my\n> actual job) were getting lost in and amongst the various lists I am a\n> member of. 
I get around 600 e-mails per day on fifteen or so different\n> mailing lists (the ones at PostgreSQL.org, a half dozen at\n> Broadcast.net, Bugtraq/Linux-alert/RedHat-announce, redhat-beta,\n> AOLserver/OpenNSD/OpenACS, and a handful of Linux announce lists,\n> unixODBC, CERT, plus all of our Internet listeners). Netscapes filters\n> are a lifesaver! Of course, there are other more capable packages out\n> there, but Netscape works the same on Win9x and Linux, both of which are\n> in use on my notebook.\n> \n> I have to keep up, or the e-mail flood after a couple of days is just\n> about unbearable.\nslocal/procmail/mutt on a Unix Box makes it easier.\n\nMy Mailing list stuff gets filtered off. \n\nLER\n\n> --\n> Lamar Owen\n> WGCR Internet Radio\n> 1 Peter 4:11\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Wed, 21 Feb 2001 16:44:04 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "On Wed, 21 Feb 2001, The Hermit Hacker wrote:\n\n> On Wed, 21 Feb 2001, Vince Vielhaber wrote:\n>\n> > On Wed, 21 Feb 2001, The Hermit Hacker wrote:\n> >\n> > >\n> > > Vince, is this something that PostgreSQL.Org can have on the web page\n> > > relatively quickly?\n> >\n> > The beta or registering the results? After the last time I won't\n> > put beta releases on the website, but if you want the results thing\n> > it can be done in a short time. Just tell me what info you want\n> > in it and it'll be there.\n>\n> Hrmmm ... some sort of input form where someone can enter OS specific\n> info, and maybe upload the results of the regression tests as far as\n> 'failed' or 'succeeded'? the report generated would list the OS info and\n> x out of y tests failed ... 
and a link to a full listing of which\n> failed/succeeded?\n\nhttp://hub.org/~vev/regress.php\n\nWhat other info is needed to distinguish these systems?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 21 Feb 2001 20:21:24 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "Hi Vince,\n\nThat's really nifty.\n\nI don't know how to word it, but I think it might be worth including\nsomething to find out if the machine was \"out-of-the box\" with just the\nrecommended installation utils (i.e. a \"new build\" of AIX, NT, Solaris,\netc, then gcc, bison or whatever) vs. 
a machine that has been actively\nused/developed with for a while.\n\nThis is so we can accurately know if a particular version/beta of\nPostgreSQL compiles on a stock(-ish) system or if the successful/failed\nreports are only coming from those machines with updated/newer/different\nthings added.\n\nRegards and best wishes,\n\nJustin Clift\nDatabase Administrator\n\nVince Vielhaber wrote:\n> \n<snip>\n> \n> http://hub.org/~vev/regress.php\n> \n> What other info is needed to distinguish these systems?\n> \n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n", "msg_date": "Thu, 22 Feb 2001 15:51:47 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "What about adding a field where they paste the output of 'uname -a' on their\nsystem...?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Justin Clift\n> Sent: Thursday, February 22, 2001 12:52 PM\n> To: Vince Vielhaber\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] beta5 ...\n>\n>\n> Hi Vince,\n>\n> That's really nifty.\n>\n> I don't know how to word it, but I think it might be worth including\n> something to find out if the machine was \"out-of-the box\" with just the\n> recommended installation utils (i.e. a \"new build\" of AIX, NT, Solaris,\n> etc, then gcc, bison or whatever) vs. 
a machine that has been actively\n> used/developed with for a while.\n>\n> This is so we can accurately know if a particular version/beta of\n> PostgreSQL compiles on a stock(-ish) system or if the successful/failed\n> reports are only coming from those machines with updated/newer/different\n> things added.\n>\n> Regards and best wishes,\n>\n> Justin Clift\n> Database Administrator\n>\n> Vince Vielhaber wrote:\n> >\n> <snip>\n> >\n> > http://hub.org/~vev/regress.php\n> >\n> > What other info is needed to distinguish these systems?\n> >\n> > Vince.\n> > --\n> >\n> ==========================================================================\n> > Vince Vielhaber -- KA8CSH email: vev@michvhf.com\nhttp://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n\n", "msg_date": "Thu, 22 Feb 2001 13:07:36 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "RE: beta5 ..." }, { "msg_contents": "On Thu, 22 Feb 2001, Christopher Kings-Lynne wrote:\n\n> What about adding a field where they paste the output of 'uname -a' on their\n> system...?\n\nGot this and Justin's changes along with compiler version. 
Anyone think\nof anything else?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 22 Feb 2001 10:12:39 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "RE: beta5 ..." }, { "msg_contents": "Vince Vielhaber writes:\n > On Thu, 22 Feb 2001, Christopher Kings-Lynne wrote:\n > \n > > What about adding a field where they paste the output of 'uname\n > > -a' on their system...?\n > \n > Got this and Justin's changes along with compiler version. Anyone\n > think of anything else?\n\nArchitecture. IRIX, Solaris and AIX allow applications and libraries\nto be built 32 or 64 bit.\n\nYou may also like to add a field for configure options used. Or is\nthis just for results OOTB?\n-- \nPete Forman -./\\.- Disclaimer: This post is originated\nWesternGeco -./\\.- by myself and does not represent\npete.forman@westerngeco.com -./\\.- opinion of Schlumberger, Baker\nhttp://www.crosswinds.net/~petef -./\\.- Hughes or their divisions.\n", "msg_date": "Thu, 22 Feb 2001 16:00:44 +0000", "msg_from": "Pete Forman <pete.forman@westerngeco.com>", "msg_from_op": false, "msg_subject": "RE: beta5 ..." }, { "msg_contents": "Vince Vielhaber writes:\n\n> http://hub.org/~vev/regress.php\n>\n> What other info is needed to distinguish these systems?\n\nThe operating systems should be ordered by some key other than maybe\nauthor's preference. 
;-)\n\nLinux needs to be split into one for each distribution.\n\n'Sun' should probably be SunOS.\n\nAlso of interest:\n\n- config.guess output\n\n- Linker version\n\n- GNU make version\n\n- configure command line (`pg_config --configure`)\n\nBison version is probably not interesting, since anything but 1.28 is not\nto be considered serious.\n\n'Platform' could be better named 'CPU type'. 'CPU speed' and 'Total RAM'\nare probably not interesting for anything but statistics.\n\n'libc' version is probably not interesting for anything but Linux? If\nso, it is already implied if you name the distributor.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Thu, 22 Feb 2001 17:33:20 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "> Got this and Justin's changes along with compiler version. Anyone think\n> of anything else?\n\nHmm. Any suggestions on how we collate the test results for our release\ndocs? And how we solicit tests for remaining platforms?\n\nIn previous releases (and until now), I have kept track of results\nposted on the -hackers mailing list, and then when the beta cycle winds\ndown would send out a list containing those platforms which have not yet\nbeen tested.\n\nIt was easy for me to do, and it gave visibility on the developers' list\nfor the current status of testing.\n\nShould the procedure now change? And if so, have we just signed me up\nfor more work rummaging around a web page to transcribe results? :/\n\nCould we perhaps have a reference on that page to the current\ndeveloper's doc page of \"supported platforms\"? That would help tie the\ncurrent state of the docs to the current state of the web site report\nform, and it would let people know that they might also post their\nresults to the -hackers list to make sure that their results are known\nto others. 
If we are storing this stuff in a database, then perhaps it\nwould be easy to dump those results in a form which maps into the docs? \n\n<philosophy style=randomthought mode=aside>\nI *know* that having web pages for data entry, etc etc are good things.\nBut at some point, the fun of working on PG is (at least for me)\ninteracting with *people*, not web sites, and I'd like to avoid building\nin procedures which inadvertently discourage that interaction.\n</philosophy>\n\nSuggestions?\n\n - Thomas\n", "msg_date": "Thu, 22 Feb 2001 16:38:01 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "On Thu, 22 Feb 2001, Pete Forman wrote:\n\n> Vince Vielhaber writes:\n> > On Thu, 22 Feb 2001, Christopher Kings-Lynne wrote:\n> >\n> > > What about adding a field where they paste the output of 'uname\n> > > -a' on their system...?\n> >\n> > Got this and Justin's changes along with compiler version. Anyone\n> > think of anything else?\n>\n> Architecture. IRIX, Solaris and AIX allow applications and libraries\n> to be built 32 or 64 bit.\n\nAdded.\n\n> You may also like to add a field for configure options used. Or is\n> this just for results OOTB?\n\nThat comes later. This part is just for identifying the system itself.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 22 Feb 2001 11:57:10 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "RE: beta5 ..." 
}, { "msg_contents": "On Thu, 22 Feb 2001, Peter Eisentraut wrote:\n\n> Vince Vielhaber writes:\n>\n> > http://hub.org/~vev/regress.php\n> >\n> > What other info is needed to distinguish these systems?\n>\n> The operating systems should be ordered by some key other than maybe\n> author's preference. ;-)\n\nActually it's more random than by preference. FreeBSD came first 'cuze\nI run it and I always list it first (and alphabetically it comes before\nLinux). I then kept the bsds together, but those were actually added\nlast. Some of the others came from looking at the directory where the\nFAQs reside and going in that order.\n\n> Linux needs to be split into one for each distribution.\n\nI need a list of them since the only ones I can think of are redhat, suse\nand slackware (does slackware even still exist?).\n\n> 'Sun' should probably be SunOS.\n\nOk.\n\n> Also of interest:\n>\n> - config.guess output\n\ncomes later. This is mainly for machine identification. But it is noted\nsince I didn't think of it.\n\n> - Linker version\n>\n> - GNU make version\n>\n> - configure command line (`pg_config --configure`)\n\nComes later.\n\n> Bison version is probably not interesting, since anything but 1.28 is not\n> to be considered serious.\n>\n> 'Platform' could be better named 'CPU type'. 'CPU speed' and 'Total RAM'\n> are probably not interesting for anything but statistics.\n\nChanged platform.\n\n> 'libc' version is probably not interesting for anything but Linux? If\n> so, it is already implied if you name the distributor.\n\nAnd if someone upgrades libc? 
I add that 'cuze when a friend of mine was\nusing redhat for his isp (quite a while ago) someone upgraded his libc for\nhim - what a mess that made!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 22 Feb 2001 12:07:12 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "On Thu, 22 Feb 2001, Thomas Lockhart wrote:\n\n> > Got this and Justin's changes along with compiler version. Anyone think\n> > of anything else?\n>\n> Hmm. Any suggestions on how we collate the test results for our release\n> docs? And how we solicit tests for remaining platforms?\n>\n> In previous releases (and until now), I have kept track of results\n> posted on the -hackers mailing list, and then when the beta cycle winds\n> down would send out a list containing those platforms which have not yet\n> been tested.\n\nCan you provide me with a list of platforms it should be tested on?\n\n>\n> It was easy for me to do, and it gave visibility on the developers' list\n> for the current status of testing.\n>\n> Should the procedure now change? And if so, have we just signed me up\n> for more work rummaging around a web page to transcribe results? :/\n\nNo, I wouldn't do that to you. You tell me how you want the results\nto look and I'll give you copy-n-paste. All of this info will be stored\nin a table so the output is however it's wanted.\n\n> Could we perhaps have a reference on that page to the current\n> developer's doc page of \"supported platforms\"? 
That would help tie the\n> current state of the docs to the current state of the web site report\n> form, and it would let people know that they might also post their\n> results to the -hackers list to make sure that their results are known\n> to others. If we are storing this stuff in a database, then perhaps it\n> would be easy to dump those results in a form which maps into the docs?\n\nNo problem.\n\n> <philosophy style=randomthought mode=aside>\n> I *know* that having web pages for data entry, etc etc are good things.\n> But at some point, the fun of working on PG is (at least for me)\n> interacting with *people*, not web sites, and I'd like to avoid building\n> in procedures which inadvertently discourage that interaction.\n> </philosophy>\n>\n> Suggestions?\n\nIf anything this will make it easier for you and give you more time to\ninteract and less time to have to dig for results which may not be as\ncomplete as you'd like.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Thu, 22 Feb 2001 12:14:07 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "Hi Vince,\n\nHere's the next thing... how do you want to distinguish between Solaris\nSPARC, Solaris INTEL (and maybe even Solaris MAC even though it isn't\nsold any longer)? 
Each of these has a 32 and 64 bit mode also.\n\nI thought that might be what \"Platform\" could be used for, but\n\"Architecture\" sounds right.\n\nRegards and best wishes,\n\nJustin Clift\nDatabase Administrator\n\nVince Vielhaber wrote:\n> \n<snip> \n> > Bison version is probably not interesting, since anything but 1.28 is not\n> > to be considered serious.\n> >\n> > 'Platform' could be better named 'CPU type'. 'CPU speed' and 'Total RAM'\n> > are probably not interesting for anything but statistics.\n> \n> Changed platform.\n> \n> > 'libc' version is probably not interesting for anything but Linux? If\n> > so, it is already implied if you name the distributor.\n> \n> And if someone upgrades libc? I add that 'cuze when a friend of mine was\n> using redhat for his isp (quite a while ago) someone upgraded his libc for\n> him - what a mess that made!\n> \n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n", "msg_date": "Fri, 23 Feb 2001 11:08:09 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." }, { "msg_contents": "> Can you provide me with a list of platforms it should be tested on?\n\nThe current list is at\n\n \nhttp://www.postgresql.org/devel-corner/docs/admin/supported-platforms.html\n\n> No, I wouldn't do that to you. You tell me how you want the results\n> to look and I'll give you copy-n-paste. 
All of this info will be stored\n> in a table so the output is however it's wanted.\n\nOK, if we entered in the current list of supported platforms, or even if\nnot, that could form the basis for the \"solicitation email\" to get the\nrest of the platforms tested. Look at what I collate in the docs, but if\nyou give me more that won't be a problem.\n\nbtw, for docs I'm not sure how to include much more information, since\nit has to fit on a page (in tabular form, presumably). Suggestions?\n\n> > Could we perhaps have a reference on that page to the current\n> > developer's doc page of \"supported platforms\"? <blah blah blah>\n> No problem.\n\nOk, the URL would be the same as above, for *development*. Not sure how\nwe will do the same info on the \"released side\" of the web site?\n\n> If anything this will make it easier for you and give you more time to\n> interact and less time to have to dig for results which may not be as\n> complete as you'd like.\n\nYup, you are right. Thanks.\n\nHmm, would generating an email to the -hackers list when something gets\nupdated be useful? istm it would not end up being spam, but what do\ny'all think?\n\n - Thomas\n", "msg_date": "Fri, 23 Feb 2001 01:26:18 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." } ]
[ { "msg_contents": "Over the weekend I noticed that Tatsuo's pgbench benchmark seems to\nspend an awful lot of its time down inside _bt_check_unique. This\nhappens because with table declarations like\n\ncreate table accounts(aid int, primary key(aid), bid int,\n\t\t\tabalance int, filler char(84)) \n\nand commands like\n\nupdate accounts set abalance = abalance + x where aid = y\n\nthe \"update\" is inserting a new tuple and the index on aid wants to\nmake sure this insertion doesn't violate the uniqueness constraint.\nTo do that, it has to visit all active *and dead* tuples with the same\naid index value, to make sure they're all deleted or being deleted by\nthe current transaction. That's expensive if they're scattered all over\nthe table.\n\nHowever, since we have not changed the aid column from its prior value,\nit seems like this check is wasted effort. We should be able to deduce\nthat if the prior state of the row was OK then this one is too.\n\nI'm not quite sure how to implement this, but I wanted to toss the idea\nout for discussion. Probably we'd have to have some cooperation between\nthe heap_update level (where the fact that it's an update is known, and\nwhere we'd have a chance to test for changes in particular columns) and\nthe index access level. Maybe it's wrong for the index access level to\nhave primary responsibility for uniqueness checks in the first place.\n\nObviously this isn't going to happen for 7.1, but it might make a nice\nperformance improvement for 7.2.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 15:59:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Performance-improvement idea: shortcircuit unique-index checks" }, { "msg_contents": "> I'm not quite sure how to implement this, but I wanted to toss the idea\n> out for discussion. 
Probably we'd have to have some cooperation between\n> the heap_update level (where the fact that it's an update is known, and\n> where we'd have a chance to test for changes in particular columns) and\n> the index access level. Maybe it's wrong for the index access level to\n> have primary responsibility for uniqueness checks in the first place.\n> \n> Obviously this isn't going to happen for 7.1, but it might make a nice\n> performance improvement for 7.2.\n\nSeems a better solution would be to put a 'deleted' bit in the index so\nwe would have to visit those heap tuples only once for a committed\nstatus. Similar to what we do with heap tuples so we don't have to\nvisit pg_log repeatedly.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 16:12:43 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance-improvement idea: shortcircuit unique-index\n checks" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Seems a better solution would be to put a 'deleted' bit in the index so\n> we would have to visit those heap tuples only once for a committed\n> status. Similar to what we do with heap tuples so we don't have to\n> visit pg_log repeatedly.\n\nThat's only a partial solution, since the index is still going to have\nto visit the row's existing tuple (which is, by definition, not yet\ncommitted dead). 
My point is that the index scanning done for\nuniqueness checks can be eliminated *entirely* for one pretty-common\ncase.\n\nA deleted bit in index entries might be useful too, but I think it\nattacks a different set of cases.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 16:21:29 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Performance-improvement idea: shortcircuit unique-index checks " }, { "msg_contents": "On Mon, 19 Feb 2001, Tom Lane wrote:\n\n> I'm not quite sure how to implement this, but I wanted to toss the idea\n> out for discussion. Probably we'd have to have some cooperation between\n> the heap_update level (where the fact that it's an update is known, and\n> where we'd have a chance to test for changes in particular columns) and\n> the index access level. Maybe it's wrong for the index access level to\n> have primary responsibility for uniqueness checks in the first place.\n> \n> Obviously this isn't going to happen for 7.1, but it might make a nice\n> performance improvement for 7.2.\n> \n> Comments?\n\nThis sounds like a win for alot of updates where keys don't change.\n\nAlso, if work is going to be done here, it might be nice to make the\nunique constraint have the correct semantics for checking after statement\nrather than per-row when multiple rows are changed in the same statement\nsince I'm pretty sure the standard semantics is that as long as the\nrows are different at the end of the statement it's okay (which is\nnot what we do currently AFAICS). 
I'm really not sure what's involved in\nthat though.\n\n", "msg_date": "Mon, 19 Feb 2001 13:48:00 -0800 (PST)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: Performance-improvement idea: shortcircuit unique-index\n checks" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Seems a better solution would be to put a 'deleted' bit in the index so\n> > we would have to visit those heap tuples only once for a committed\n> > status. Similar to what we do with heap tuples so we don't have to\n> > visit pg_log repeatedly.\n> \n> That's only a partial solution, since the index is still going to have\n> to visit the row's existing tuple (which is, by definition, not yet\n> committed dead). My point is that the index scanning done for\n> uniqueness checks can be eliminated *entirely* for one pretty-common\n> case.\n\nI see.\n\n> \n> A deleted bit in index entries might be useful too, but I think it\n> attacks a different set of cases.\n\nYes. Let me add some TODO items:\n\n\t* Add deleted bit to index tuples to reduce heap access \n\t* Prevent index uniqueness checks when UPDATE does not modify column\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 19 Feb 2001 16:52:46 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Performance-improvement idea: shortcircuit unique-index\n checks" }, { "msg_contents": "Bruce Momjian wrote:\n> \n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> Yes. Let me add some TODO items:\n> \n> * Add deleted bit to index tuples to reduce heap access\n\nISTM this isn't a bad idea. 
However note that there remains only\n1 bit unused in IndexTupleData.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Tue, 20 Feb 2001 09:51:12 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: Performance-improvement idea: shortcircuit unique-index checks" } ]
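The shortcut Tom Lane proposes in the thread above can be sketched compactly: on UPDATE, skip the btree uniqueness scan when none of the uniquely-indexed columns changed value. The following is a minimal, hypothetical Python model — not PostgreSQL source; the function and row representations are invented for illustration:

```python
def needs_unique_check(unique_cols, old_row, new_row):
    # If no uniquely-indexed column changed, the prior row version
    # already satisfied the constraint, so the new version must too,
    # and the scan over equal-keyed (possibly dead) index entries
    # can be skipped entirely.
    return any(old_row[c] != new_row[c] for c in unique_cols)

# pgbench-style UPDATE: abalance changes, the unique key aid does not
old = {"aid": 7, "bid": 1, "abalance": 100}
new = {"aid": 7, "bid": 1, "abalance": 150}
print(needs_unique_check(["aid"], old, new))                 # False: scan skipped

# the key itself changes, so uniqueness must still be verified
print(needs_unique_check(["aid"], old, {**new, "aid": 8}))   # True
```

In the real backend this decision would need cooperation between the heap_update level (which knows which columns changed) and the index access method, as the thread discusses; the sketch only captures the predicate itself.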
[ { "msg_contents": "A friend asked me to figure out how to access PostgreSQL from Tcl via\nODBC. For posterity, here's the step by step \"how I did it\" that I\nemailed to him. I don't know Tcl, this was just about getting the\ncompile options correct and doing the proper sysadminning to make\nthings work.\n\nComments, suggestions and clarifications appreciated, hopefully this\nwill save the next person going through the pain a few steps:\n\nhttp://www.flutterby.com/archives/2001_Feb/19_PostgreSQLfromTclwithODBC.html\n", "msg_date": "Mon, 19 Feb 2001 14:49:45 -0800 (PST)", "msg_from": "Dan Lyke <danlyke@flutterby.com>", "msg_from_op": true, "msg_subject": "A How-To: PostgreSQL from Tcl via ODBC" }, { "msg_contents": "\nHooray! These instructions are just what an\nalmost-novice needs.\n\nWith the exception of changing the password to\n'postgresql', the procedures started smoothly.\n\nRan into a hitch at 'make' which reported that 'bison'\nwas not installed. I'm running debian potato, so used\nthe apt-get install of bison. Bison is installed in\n/usr/bin. I copied it to /home/billb/pgsql.\n\nStill getting the 'bison missing' message.\n\nCan anyone show me the error of my ways.\n\nTIA\nBill\n\n\n\n--- Dan Lyke <danlyke@flutterby.com> wrote:\n> A friend asked me to figure out how to access\n> PostgreSQL from Tcl via\n> ODBC. For posterity, here's the step by step \"how I\n> did it\" that I\n> emailed to him. I don't know Tcl, this was just\n> about getting the\n> compile options correct and doing the proper\n> sysadminning to make\n> things work.\n> \n> Comments, suggestions and clarifications\n> appreciated, hopefully this\n> will save the next person going through the pain a\n> few steps:\n> \n>\nhttp://www.flutterby.com/archives/2001_Feb/19_PostgreSQLfromTclwithODBC.html\n\n\n__________________________________________________\nDo You Yahoo!?\nGet personalized email addresses from Yahoo! Mail - only $35 \na year! 
http://personal.mail.yahoo.com/\n", "msg_date": "Tue, 20 Feb 2001 08:40:17 -0800 (PST)", "msg_from": "Bill Barnes <kgbsoft@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: A How-To: PostgreSQL from Tcl via ODBC" }, { "msg_contents": "Bill Barnes <kgbsoft@yahoo.com> writes:\n> Ran into a hitch at 'make' which reported that 'bison'\n> was not installed. I'm running debian potato, so used\n> the apt-get install of bison. Bison is installed in\n> /usr/bin. I copied it to /home/billb/pgsql.\n\n> Still getting the 'bison missing' message.\n\nRe-run configure, and watch to make sure that it finds bison this time.\nYou'll need flex too, if you intend to build from CVS sources.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 12:28:39 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: A How-To: PostgreSQL from Tcl via ODBC " }, { "msg_contents": "\nThanks. That cleared the bison problem.\n\nflex didn't work the same way though. Copied it also\nto /home/billb/pgsql. Reported missing. Needs to go\nsomeplace else?\n\nTIA\nBill\n\n--- Tom Lane <tgl@sss.pgh.pa.us> wrote:\n> Bill Barnes <kgbsoft@yahoo.com> writes:\n> > Ran into a hitch at 'make' which reported that\n> 'bison'\n> > was not installed. I'm running debian potato, so\n> used\n> > the apt-get install of bison. Bison is installed\n> in\n> > /usr/bin. I copied it to /home/billb/pgsql.\n> \n> > Still getting the 'bison missing' message.\n> \n> Re-run configure, and watch to make sure that it\n> finds bison this time.\n> You'll need flex too, if you intend to build from\n> CVS sources.\n> \n> \t\t\tregards, tom lane\n\n\n__________________________________________________\nDo You Yahoo!?\nGet personalized email addresses from Yahoo! Mail - only $35 \na year! 
http://personal.mail.yahoo.com/\n", "msg_date": "Tue, 20 Feb 2001 10:15:01 -0800 (PST)", "msg_from": "Bill Barnes <kgbsoft@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: Re: A How-To: PostgreSQL from Tcl via ODBC " }, { "msg_contents": "> Ran into a hitch at 'make' which reported that 'bison'\n> was not installed. I'm running debian potato, so used\n> the apt-get install of bison. Bison is installed in\n> /usr/bin. I copied it to /home/billb/pgsql.\n>\n> Still getting the 'bison missing' message.\n\nYou need to remove config.cache before reconfiguring.\n\nHere's a hint for all who are getting PostgreSQL from CVS, or anyone else\nreally: Run configure with --cache=/dev/null. There is never a reason\nwhy you would need that cache, and there is an infinite number of reasons\nwhy you don't want it. It's going to save you a lot of headaches.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 20 Feb 2001 19:23:40 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re: A How-To: PostgreSQL from Tcl via ODBC" }, { "msg_contents": "Bill Barnes <kgbsoft@yahoo.com> writes:\n> Thanks. That cleared the bison problem.\n\n> flex didn't work the same way though. Copied it also\n> to /home/billb/pgsql. Reported missing. Needs to go\n> someplace else?\n\nHmm, should work the same: configure will find it if it's in your PATH.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 13:39:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: A How-To: PostgreSQL from Tcl via ODBC " }, { "msg_contents": "Bill Barnes writes:\n> Ran into a hitch at 'make' which reported that 'bison'\n> was not installed. I'm running debian potato, so used\n> the apt-get install of bison. Bison is installed in\n> /usr/bin. I copied it to /home/billb/pgsql.\n\nI'm a Debian user too, but I had bison installed. 
Remove it from\n/home/billb/pgsql (as long as it's in your path things should be\nokay), and try running \"configure\" again.\n\nDan\n", "msg_date": "Tue, 20 Feb 2001 11:03:48 -0800 (PST)", "msg_from": "Dan Lyke <danlyke@flutterby.com>", "msg_from_op": true, "msg_subject": "Re: A How-To: PostgreSQL from Tcl via ODBC" }, { "msg_contents": "Tom Lane writes:\n> Re-run configure, and watch to make sure that it finds bison this time.\n> You'll need flex too, if you intend to build from CVS sources.\n\nAnd if you're going to use the ODBC drivers under Linux (or any other\nOS that links C \"strings\" into read only memory) you'll need pretty\nrecent CVS sources.\n\nOne of the bugs I had to track down even though my original CVS update\nwas only a few weeks old.\n\nDan\n", "msg_date": "Tue, 20 Feb 2001 11:06:06 -0800 (PST)", "msg_from": "Dan Lyke <danlyke@flutterby.com>", "msg_from_op": true, "msg_subject": "Re: Re: A How-To: PostgreSQL from Tcl via ODBC " }, { "msg_contents": "Dan Lyke <danlyke@flutterby.com> writes:\n> Bill Barnes writes:\n>> Ran into a hitch at 'make' which reported that 'bison'\n>> was not installed. I'm running debian potato, so used\n>> the apt-get install of bison. Bison is installed in\n>> /usr/bin. I copied it to /home/billb/pgsql.\n\n> I'm a Debian user too, but I had bison installed. Remove it from\n> /home/billb/pgsql (as long as it's in your path things should be\n> okay), and try running \"configure\" again.\n\nYeah, if it's in your path to begin with then it shouldn't be necessary\nto copy it (and it's hard to believe that /usr/bin isn't in your path).\n\nI suspect Peter Eisentraut had the right answer: flush the config.cache\nfile before running configure, else it'll reuse its previous result\nabout whether bison exists.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 16:04:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: A How-To: PostgreSQL from Tcl via ODBC " } ]
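The config.cache behaviour that tripped Bill up in the thread above — configure kept reporting bison missing even after it was installed — is ordinary memoization: Autoconf records each test's first answer and reuses it on later runs. A toy Python model of that effect (illustrative only, not Autoconf code; the names are invented):

```python
cache = {}  # plays the role of config.cache

def check_prog(name, installed):
    # The first configure run records the answer; later runs reuse
    # it verbatim, even if the system has changed in the meantime.
    if name not in cache:
        cache[name] = name in installed
    return cache[name]

print(check_prog("bison", installed=set()))      # False: not installed yet
print(check_prog("bison", installed={"bison"}))  # False: stale cached answer
cache.clear()                                    # the "rm config.cache" step
print(check_prog("bison", installed={"bison"}))  # True
```

This is why both remedies in the thread work: deleting config.cache clears the memo, and pointing the cache at /dev/null prevents the memo from persisting between runs at all.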
[ { "msg_contents": "I notice that if the platform's template doesn't set CFLAGS, then\nconfigure will give you -g in CFLAGS whether you ask for it or not\n(courtesy of AC_PROG_CC). The --enable-debug configure switch thus does\nnot function as advertised. If we are going to say that --enable-debug\nisn't recommended for production, don't you think there should be a way\nto turn it off? Perhaps this means that all the template files should\nforce a setting of CFLAGS; or else that we should not use the stock\nversion of AC_PROG_CC. Or maybe just set CFLAGS to empty right before\nAC_PROG_CC?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 18:08:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "enable-debug considered pointless" }, { "msg_contents": "Tom Lane writes:\n\n> I notice that if the platform's template doesn't set CFLAGS, then\n> configure will give you -g in CFLAGS whether you ask for it or not\n> (courtesy of AC_PROG_CC). The --enable-debug configure switch thus does\n> not function as advertised. If we are going to say that --enable-debug\n> isn't recommended for production, don't you think there should be a way\n> to turn it off? Perhaps this means that all the template files should\n> force a setting of CFLAGS;\n\nThis was sort of the idea, but I see some disappeared.\n\n> or else that we should not use the stock version of AC_PROG_CC. Or\n> maybe just set CFLAGS to empty right before AC_PROG_CC?\n\nProbably best for now. Eventually, I'd like it to look more like the\nAC_PROG_CXX code, all in one place. Right now the templates are a\nsafeguard against trying to build on a platform that's not supported at\nall, but it should actually be possible to do just that, without shared\nlibraries maybe, and with the software-TAS that you implemented. 
But ISTM\nthat we've covered the recent wave of new operating systems, so this is\nnot a pressing issue to me.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 20 Feb 2001 16:47:04 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: enable-debug considered pointless" } ]
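The behavior Tom complains about above, stock AC_PROG_CC supplying -g whenever the template leaves CFLAGS unset, so --enable-debug effectively cannot be turned off, fits in a few lines. The function below is an invented Python sketch of that decision logic, not autoconf code; it also shows why "set CFLAGS to empty" is a sufficient remedy.

```python
def pick_cflags(template_cflags, enable_debug):
    """Sketch of how configure ends up choosing CFLAGS.

    template_cflags is None when the platform template sets nothing,
    in which case stock AC_PROG_CC falls back to "-g" on its own.
    (Invented model for illustration; not the real configure script.)
    """
    cflags = template_cflags if template_cflags is not None else "-g"
    if enable_debug and "-g" not in cflags.split():
        cflags += " -g"
    return cflags.strip()

# Template silent, debug NOT requested: you still get -g.
assert pick_cflags(None, enable_debug=False) == "-g"
# Setting CFLAGS to empty (Tom's third option) suppresses the default.
assert pick_cflags("", enable_debug=False) == ""
# Template forces a setting: -g appears only when actually requested.
assert pick_cflags("-O2", enable_debug=False) == "-O2"
assert pick_cflags("-O2", enable_debug=True) == "-O2 -g"
```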
[ { "msg_contents": "I find that if one specifies, say,\n\n\tconfigure --with-includes=/usr/local/include\n\none gets compiler commands like\n\n\tcc -Ae -g +z -I/usr/local/include -I../../../src/include -I../../../src/interfaces/libpq -c -o pgtcl.o pgtcl.c\n\nbecause the -I commands are added to CPPFLAGS which appears before any\n-I commands the makefiles themselves add. This strikes me as uncool.\nFor example, it will be impossible to compile Postgres if there are\nheaders from an old version lurking in /usr/local/include, because those\nwill be read instead of the ones from our source tree. How hard would\nit be to make the --with-includes -I directives appear after our own?\n\nThe same problem arises for --with-libs, btw.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 19 Feb 2001 18:36:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Ordering problem with --with-includes" }, { "msg_contents": "Tom Lane writes:\n\n> How hard would it be to make the --with-includes -I directives appear\n> after our own?\n\nNot hard, but tedious.\n\n> The same problem arises for --with-libs, btw.\n\nNot tedious, but hard.\n\nI'll look into it.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 20 Feb 2001 18:10:11 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Ordering problem with --with-includes" } ]
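The ordering problem in this thread comes down to the compiler taking the first directory on its -I list that contains the requested header, so a stale copy in /usr/local/include shadows the source tree's copy whenever the --with-includes directives come first. Here is a minimal Python model of that left-to-right search; the `resolve_header` name and the directory layout are invented for illustration and only mimic cc's behavior.

```python
import os
import tempfile

def resolve_header(name, include_dirs):
    """Return the first copy of `name` found along include_dirs,
    mimicking how cc walks its -I list left to right."""
    for d in include_dirs:
        candidate = os.path.join(d, name)
        if os.path.exists(candidate):
            return candidate
    return None

with tempfile.TemporaryDirectory() as tmp:
    stale = os.path.join(tmp, "usr_local_include")   # old installed headers
    ours = os.path.join(tmp, "src_include")          # current source tree
    os.mkdir(stale)
    os.mkdir(ours)
    for d in (stale, ours):
        open(os.path.join(d, "postgres.h"), "w").close()

    # --with-includes prepended: the stale header shadows the source tree's.
    assert resolve_header("postgres.h", [stale, ours]).startswith(stale)
    # With the project's own -I directives first, the right header wins.
    assert resolve_header("postgres.h", [ours, stale]).startswith(ours)
```

The same first-match logic applies to -L and library search paths, which is why --with-libs has the mirror-image problem.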
[ { "msg_contents": "Hi,\nin my java code I am creating 3 temporary tables, then calling a stored\nprocedure which calls another stored procedure.\nthen I drop the temporary tables. \n\nthe first time around , eveything is OK , then when repeating the action I\nget \n\"ExecOpenScanR: failed to open relation 2808495 \"\ncould it be that an index exists when the table doesn't or\ndoes it mean the functions did not stop properly ?\nplease help.\nPam Withnall\n\n", "msg_date": "Tue, 20 Feb 2001 16:30:14 +1100", "msg_from": "Pam Withnall <Pamw@zoom.com.au>", "msg_from_op": true, "msg_subject": "ExecOpenScanR: failed to open relation " }, { "msg_contents": "Pam Withnall <Pamw@zoom.com.au> writes:\n> in my java code I am creating 3 temporary tables, then calling a stored\n> procedure which calls another stored procedure.\n> then I drop the temporary tables. \n> the first time around , eveything is OK , then when repeating the action I\n> get \n> \"ExecOpenScanR: failed to open relation 2808495 \"\n\nIf you're using plpgsql, you can't drop and recreate temp tables between\nprocedure executions, because the cached query plans for the procedure\nwill still refer to the old version of the tables.\n\nEither create the temp table *once* per backend, or use pltcl, which\ndoesn't try to cache query plans.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 15:30:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: ExecOpenScanR: failed to open relation " }, { "msg_contents": "Tom Lane wrote:\n> Pam Withnall <Pamw@zoom.com.au> writes:\n> > in my java code I am creating 3 temporary tables, then calling a stored\n> > procedure which calls another stored procedure.\n> > then I drop the temporary tables.\n> > the first time around , eveything is OK , then when repeating the action I\n> > get\n> > \"ExecOpenScanR: failed to open relation 2808495 \"\n>\n> If you're using plpgsql, you can't drop and recreate temp tables between\n> 
procedure executions, because the cached query plans for the procedure\n> will still refer to the old version of the tables.\n>\n> Either create the temp table *once* per backend, or use pltcl, which\n> doesn't try to cache query plans.\n\n as long as you don't tell it to (using spi_prepare and\n spi_execp explicitly in PL/Tcl) :-)\n\n\nJan\n\n--\n\n#======================================================================#\n# It's easier to get forgiveness for being wrong than for being right. #\n# Let's break this rule - forgive me. #\n#================================================== JanWieck@Yahoo.com #\n\n\n\n_________________________________________________________\nDo You Yahoo!?\nGet your free @yahoo.com address at http://mail.yahoo.com\n\n", "msg_date": "Mon, 26 Feb 2001 12:16:19 -0500 (EST)", "msg_from": "Jan Wieck <janwieck@Yahoo.com>", "msg_from_op": false, "msg_subject": "Re: ExecOpenScanR: failed to open relation" } ]
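The failure mode Tom and Jan describe above can be modeled compactly: a cached plan remembers the table by its OID, and dropping and recreating the temp table assigns a new OID, so the cached plan now points at a relation that no longer exists. The Python below is an invented analogy (the `Catalog`/`run_function` names and the toy OID numbering are made up), not backend code, but it reproduces the shape of the error.

```python
class Catalog:
    """Toy stand-in for the system catalog: names map to numeric OIDs."""
    def __init__(self):
        self.next_oid = 2808494
        self.by_name = {}
        self.live_oids = set()

    def create_table(self, name):
        self.next_oid += 1
        self.by_name[name] = self.next_oid
        self.live_oids.add(self.next_oid)
        return self.next_oid

    def drop_table(self, name):
        self.live_oids.discard(self.by_name.pop(name))

def run_function(catalog, plan_cache, table):
    # plpgsql-style behavior: plan once, keyed by OID, then reuse blindly
    if table not in plan_cache:
        plan_cache[table] = catalog.by_name[table]
    oid = plan_cache[table]
    if oid not in catalog.live_oids:
        raise RuntimeError("ExecOpenScanR: failed to open relation %d" % oid)
    return oid

cat, plans = Catalog(), {}
cat.create_table("tmp_work")
run_function(cat, plans, "tmp_work")          # first call: fine

cat.drop_table("tmp_work")
cat.create_table("tmp_work")                  # same name, but a new OID
try:
    run_function(cat, plans, "tmp_work")      # cached plan -> stale OID
except RuntimeError as e:
    assert "failed to open relation" in str(e)
```

Creating the temp table once per backend, or avoiding plan caching entirely, sidesteps the stale-OID lookup, which is exactly the advice given in the thread.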
[ { "msg_contents": "Hi,\n\n While trying 7.1b4 I got this using JDBC2:\n\nERROR: A request from 10.0.0.46 (10.0.0.46) resulted in\njava.lang.NumberFormatException: 20 18:46:53+09\njava.lang.NumberFormatException: 20 18:46:53+09\n at java.lang.Integer.parseInt(Integer.java, Compiled Code)\n at java.lang.Integer.parseInt(Integer.java, Compiled Code)\n at java.sql.Date.valueOf(Date.java, Compiled Code)\n at org.postgresql.jdbc2.ResultSet.getDate(ResultSet.java,\nCompiled Code)\n at org.postgresql.jdbc2.ResultSet.getDate(ResultSet.java,\nCompiled Code)\n\n\nSorry for the incomplete stack trace (and lack of line numbers) but the\nrest of it shouldn't matter. BTW, I am using the new 7.1 JDBC driver.\nI'll try to look at the Java code tomorrow but I'm hoping someone\nalready has a fix.\n\n\n\n--Rainer\n\n", "msg_date": "Tue, 20 Feb 2001 18:58:18 +0900", "msg_from": "Rainer Mager <rmager@vgkk.com>", "msg_from_op": true, "msg_subject": "JDBC bug in 7.1b4" }, { "msg_contents": "We have a Unicode (UTF-8) database that we are trying to upgrade to 7.1b4.\nWe did a pg_dumpall (yes, using the old version) and then tried a restore.\nWe hit the following 3 problems:\n\n1. Some of the text is large, about 20k characters, and is multiline. For\nalmost all of the lines this was fine (postgres put a \\ at the end of the\nprevious line) but for some it was not. The lines I looked at all had\nnon-English characters (Japanese and/or Korean) at the end of the line. When\nthe restore encountered these lines it failed and, since the dump uses COPY,\nthe entire table was left blank.\n\n2. Some two-byte dash/hyphen characters DID get correctly imported into the\ndatabase but could not be read out again via JDBC, that is, when read the\nrecord was truncated at the character. This _might_ be related to a long\nstanding Java core bug regarding improper conversions between certain\nlanguages and the internal Unicode representation for hyphens.\n\n3. 
One other character, a two-byte apostrophe, was not restorable,\nsimilarly to the hyphen problem.\n\n\nAfter fighting the above, I decided to try doing the dump with the -dn\nflags. This fixed problem #1 but not 2 or 3. If needed I can try to get\ndetails about the problem characters.\n\n\nFinally, not a bug but, we have written a small perl script that inserts\ntransactions around every 500 INSERT lines in a PG dump. This speeds up\nlarge restores by about 100 times! Really! I think this might be a good\nthing for the dump command to do automatically.\n\n\nBest regards,\n\n--Rainer\n\n", "msg_date": "Thu, 22 Feb 2001 17:40:09 +0900", "msg_from": "\"Rainer Mager\" <rmager@vgkk.com>", "msg_from_op": false, "msg_subject": "Problem with 7.0.3 dump -> 7.1b4 restore" }, { "msg_contents": "> We have a Unicode (UTF-8) database that we are trying to upgrade to 7.1b4.\n> We did a pg_dumpall (yes, using the old version) and then tried a restore.\n> We hit the following 3 problems:\n> \n> 1. Some of the text is large, about 20k characters, and is multiline. For\n> almost all of the lines this was fine (postgres put a \\ at the end of the\n> previous line) but for some it was not. The lines I looked at all had\n> non-English characters (Japanese and/or Korean) at the end of the line. When\n> the restore encountered these lines it failed and, since the dump uses COPY,\n> the entire table was left blank.\n> \n> 2. Some two-byte dash/hyphen characters DID get correctly imported into the\n> database but could not be read out again via JDBC, that is, when read the\n> record was truncated at the character. This _might_ be related to a long\n> standing Java core bug regarding improper conversions between certain\n> languages and the internal Unicode representation for hyphens.\n> \n> 3. One other character, a two-byte apostrophe, was not restorable,\n> similarly to the hyphen problem.\n> \n> \n> After fighting the above, I decided to try doing the dump with the -dn\n> flags. 
This fixed problem #1 but not 2 or 3. If needed I can try to get\n> details about the problem characters.\n\nThis might be related to a known bug with 7.0.x. Can you grab a patch\nfrom ftp://ftp.sra.co.jp/pub/cmd/postgres/7.0.3/patches/copy.patch.gz\nand try again?\n\nOr even better, can you give me a minimum set of data that reproduces\nyour problem?\n--\nTatsuo Ishii\n", "msg_date": "Fri, 23 Feb 2001 10:31:30 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Problem with 7.0.3 dump -> 7.1b4 restore" }, { "msg_contents": "Well, I tried the patch and the newly produced dump was identical to the bad\ndump from before, so the patch had no effect. I will try to trim it down to\na reasonably small file and email it to you.\n\n--Rainer\n\n> -----Original Message-----\n> From: pgsql-bugs-owner@postgresql.org\n> [mailto:pgsql-bugs-owner@postgresql.org]On Behalf Of Tatsuo Ishii\n> Sent: Friday, February 23, 2001 10:32 AM\n> To: rmager@vgkk.com\n> Cc: pgsql-bugs@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: Re: [BUGS] Problem with 7.0.3 dump -> 7.1b4 restore\n> This might be related to a known bug with 7.0.x. Can you grab a patch\n> from ftp://ftp.sra.co.jp/pub/cmd/postgres/7.0.3/patches/copy.patch.gz\n> and try again?\n>\n> Or even better, can you give me a minimum set of data that reproduces\n> your problem?\n> --\n> Tatsuo Ishii\n\n", "msg_date": "Fri, 23 Feb 2001 17:42:27 +0900", "msg_from": "\"Rainer Mager\" <rmager@vgkk.com>", "msg_from_op": false, "msg_subject": "RE: Problem with 7.0.3 dump -> 7.1b4 restore" }, { "msg_contents": "Attached is a single INSERT that shows the problem. The character after the\nword \"Fiber\" truncates the text when using JDBC. NOTE, the text IS in the\ndatabase, that is, the dump/restore seems ok, the problem is when trying to\nread the text later. 
The database is UTF8 and I just tested with beta 5.\n\nOh, BTW, if I try to set (INSERT) this same character via JDBC and then\nretrieve it again then everything is fine.\n\n\n--Rainer\n\n> -----Original Message-----\n> From: pgsql-bugs-owner@postgresql.org\n> [mailto:pgsql-bugs-owner@postgresql.org]On Behalf Of Tatsuo Ishii\n> Sent: Friday, February 23, 2001 10:32 AM\n>\n> Or even better, can you give me a minimum set of data that reproduces\n> your problem?\n> --\n> Tatsuo Ishii", "msg_date": "Wed, 28 Feb 2001 10:14:48 +0900", "msg_from": "\"Rainer Mager\" <rmager@vgkk.com>", "msg_from_op": false, "msg_subject": "RE: Problem with 7.0.3 dump -> 7.1b4 restore" }, { "msg_contents": "> Attached is a single INSERT that shows the problem. The character after the\n> word \"Fiber\" truncates the text when using JDBC. NOTE, the text IS in the\n> database, that is, the dump/restore seems ok, the problem is when trying to\n> read the text later. 
The database is UTF8 and I just tested with beta 5.\n> \n> Oh, BTW, if I try to set (INSERT) this same character via JDBC and then\n> retreive it again then everything is fine.\n\nI have tested your data using psql:\n\nunicode=# create table pr_prop_info(i1 int, i2 int, i3 int, t text);\nCREATE\nunicode=# \\encoding LATIN1\nunicode=# \\i example.sql \nINSERT 2378114 1\nunicode=# select * from pr_prop_info;\n\nThe character after the word \"Fiber\" looks like \"�Optic Cable\". So as\nlong as the server/client encoding set correctly, it looks ok. I guess\nwe have some problems with JDBC driver. Unfortunately I am not a Java\nguru at all. Can anyone look into our JDBC driver regarding this\nproblem?\n--\nTatsuo Ishii\n", "msg_date": "Wed, 28 Feb 2001 11:02:20 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "RE: Problem with 7.0.3 dump -> 7.1b4 restore" }, { "msg_contents": "Hi all,\n\n\tI haven't been following the current thread on failed tests but I just had\nsome so I thought I'd mention it. If this is a repeat then I apologize.\n\n\tI configured with:\n\n./configure --enable-multibyte --enable-syslog --with-java --with-maxbackend\ns=70\n\n\n\n\tAnd the tests give me this error:\n\nRunning with noclean mode on. 
Mistakes will not be cleaned up.\n/opt/home/rmager/pgsql/src/test/regress/./tmp_check/install//usr/local/pgsql\n/bin/pg_encoding: error while loading shared libraries:\n/opt/home/rmager/pgsql/src/test/regress/./tmp_check/install//usr/local/pgsql\n/bin/pg_encoding: undefined symbol: pg_char_to_encoding\ninitdb: pg_encoding failed\n\nPerhaps you did not configure PostgreSQL for multibyte support or\nthe program was not successfully installed.\n\n\n\n--Rainer\n\n", "msg_date": "Fri, 23 Mar 2001 15:19:36 +0900", "msg_from": "\"Rainer Mager\" <rmager@vgkk.com>", "msg_from_op": false, "msg_subject": "Problems with latest tests" }, { "msg_contents": "I tried to submit the results of my regression tests and got this:\n\nWarning: PostgreSQL query failed: ERROR: parser: parse error at or near \"t\"\nin\n/home/projects/pgsql/developers/vev/public_html/regress/regress.php on line\n359\nDatabase write failed.\n\n\n--Rainer\n\n", "msg_date": "Mon, 26 Mar 2001 09:28:12 +0900", "msg_from": "\"Rainer Mager\" <rmager@vgkk.com>", "msg_from_op": false, "msg_subject": "Problem with test results submission form" }, { "msg_contents": "I'm trying to run the latest CVS code's regression tests and have a problem.\nThey fail at initdb with this:\n\n\nRunning with noclean mode on. 
Mistakes will not be cleaned up.\n/opt/home/rmager/devel/External/pgsql/src/test/regress/./tmp_check/install//\nusr/local/pgsql/bin/pg_encoding: erro\nr while loading shared libraries:\n/opt/home/rmager/devel/External/pgsql/src/test/regress/./tmp_check/install//\nusr\n/local/pgsql/bin/pg_encoding: undefined symbol: pg_char_to_encoding\ninitdb: pg_encoding failed\n\nPerhaps you did not configure PostgreSQL for multibyte support or\nthe program was not successfully installed.\n\n\n\n\nI ran configure with this:\n\n./configure --enable-multibyte --enable-syslog --with-java\n\n\n\n\nAny ideas?\n\n--Rainer\n\n", "msg_date": "Mon, 26 Mar 2001 09:30:34 +0900", "msg_from": "\"Rainer Mager\" <rmager@vgkk.com>", "msg_from_op": false, "msg_subject": "Problems with Multibyte in 7.1 beta?" }, { "msg_contents": "Hi all,\n\n\tWe're using PG 7.0 and 7.1beta and are having dead lock problems. The docs\nsay the Postgres detects dead locks and automatically rolls back 1\ntransaction to recover but this is not our experience. Are the docs\nincorrect or is this more serious?\n\n\nThanks,\n\n--Rainer\n\n", "msg_date": "Mon, 26 Mar 2001 13:46:43 +0900", "msg_from": "\"Rainer Mager\" <rmager@vgkk.com>", "msg_from_op": false, "msg_subject": "Dead locks" }, { "msg_contents": "\"Rainer Mager\" <rmager@vgkk.com> writes:\n> We're using PG 7.0 and 7.1beta and are having dead lock problems. The docs\n> say the Postgres detects dead locks and automatically rolls back 1\n> transaction to recover but this is not our experience. Are the docs\n> incorrect or is this more serious?\n\nWhich beta release?\n\nThere are some known undetected-deadlock cases in 7.0, which were\nrepaired in late January --- that would have been beta4 or possibly\nbeta5, I forget now. 
If you still see this behavior with 7.1RC1 then\nI would like details.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Mar 2001 09:50:23 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Dead locks " }, { "msg_contents": "I just tested a bug I originally fount in 7.1b4 with the new 7.1RC3 and it\nstill exists. I would consider this a major bug because I know of no work\naround.\n\nBasically what happens is that a dump of an existing Unicode database (from\n7.03) has a double-byte hyphen character that becomes \\255 in the dump. When\nthe data is imported into the new 7.1 database it seems to correctly appear\n(verified via psql) BUT when reading this record via JDBC the data is\ntruncated at this character.\n\nI communicated briefly with Ishii-san regarding this a while back but I\nnever followed up. Considering RC3 is now out I thought I should revisit the\nissue. It should be easy to test by editing and postgres Unicode database\ndump and putting \\255 somewhere in a string. I'm not sure if it matters but\nthe dump was done with \"-dn\" flags.\n\nThanks,\n\n--Rainer\n\n\n> -----Original Message-----\n> From: Tatsuo Ishii [mailto:t-ishii@sra.co.jp]\n> Sent: Wednesday, February 28, 2001 11:02 AM\n> To: rmager@vgkk.com\n> Cc: pgsql-bugs@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: RE: [BUGS] Problem with 7.0.3 dump -> 7.1b4 restore\n>\n>\n> > Attached is a single INSERT that shows the problem. The\n> character after the\n> > word \"Fiber\" truncates the text when using JDBC. NOTE, the text\n> IS in the\n> > database, that is, the dump/restore seems ok, the problem is\n> when trying to\n> > read the text later. 
The database is UTF8 and I just tested with beta 5.\n> >\n> > Oh, BTW, if I try to set (INSERT) this same character via JDBC and then\n> > retreive it again then everything is fine.\n>\n> I have tested your data using psql:\n>\n> unicode=# create table pr_prop_info(i1 int, i2 int, i3 int, t text);\n> CREATE\n> unicode=# \\encoding LATIN1\n> unicode=# \\i example.sql\n> INSERT 2378114 1\n> unicode=# select * from pr_prop_info;\n>\n> The character after the word \"Fiber\" looks like \"�Optic Cable\". So as\n> long as the server/client encoding set correctly, it looks ok. I guess\n> we have some problems with JDBC driver. Unfortunately I am not a Java\n> guru at all. Can anyone look into our JDBC driver regarding this\n> problem?\n> --\n> Tatsuo Ishii\n\n", "msg_date": "Wed, 11 Apr 2001 15:58:34 +0900", "msg_from": "\"Rainer Mager\" <rmager@vgkk.com>", "msg_from_op": false, "msg_subject": "RE: Problem with 7.0.3 dump -> 7.1b4 restore" }, { "msg_contents": "I noticed that 7.1 has officially been released. Does anyone know the status\nof the bug I reported regarding encoding problems when dumping a 7.0 db an\nrestoring on 7.1?\n\n\nThanks,\n\n--Rainer\n\n", "msg_date": "Mon, 16 Apr 2001 12:15:20 +0900", "msg_from": "\"Rainer Mager\" <rmager@vgkk.com>", "msg_from_op": false, "msg_subject": "RE: Problem with 7.0.3 dump -> 7.1b4 restore" }, { "msg_contents": "Hi,\n\n\tI'm trying to see if I can patch this bug myself because we are under some\ntime constraints. Can anyone give me a tip regarding where in the postgres\nsource the internal UTF-8 code is converted during a dump?\n\n\tI believe that the character 0xAD is a ASCII character that looks like a\ndash. According to the UTF-8 spec, anything over 0x7F requires another byte\nwith it (which, I think, means that you should never see the 0xAD character\nby itself in a postgres dump, but I am seeing this). So, I'm guessing that\nsome piece of the UTF-8 conversion routine is a bit off.\n\n\tAny tips on where to start? 
I would try to hack a fix by searching for the\noffending character in the dump and replacing it with a normal dash but\nunfortunately 0xAD is a valid byte when paired with other bytes and these\nalso exist in our dump.\n\n\n--Rainer\n\n> -----Original Message-----\n> From: pgsql-bugs-owner@postgresql.org\n> [mailto:pgsql-bugs-owner@postgresql.org]On Behalf Of Rainer Mager\n> Sent: Monday, April 16, 2001 12:15 PM\n> To: pgsql-bugs@postgresql.org; pgsql-hackers@postgresql.org\n> Subject: RE: [BUGS] Problem with 7.0.3 dump -> 7.1b4 restore\n>\n>\n> I noticed that 7.1 has officially been released. Does anyone know\n> the status\n> of the bug I reported regarding encoding problems when dumping a 7.0 db an\n> restoring on 7.1?\n>\n>\n> Thanks,\n>\n> --Rainer\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n", "msg_date": "Wed, 18 Apr 2001 08:24:26 +0900", "msg_from": "\"Rainer Mager\" <rmager@vgkk.com>", "msg_from_op": false, "msg_subject": "RE: [BUGS] Problem with 7.0.3 dump -> 7.1b4 restore" } ]
[ { "msg_contents": "Why not add CFLAGS=+02 to the hpux template?\n\nAlso, it should work now to build with any random combination of C and C++\ncompilers, so maybe try that and remove that point.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 20 Feb 2001 17:26:10 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "About the FAQ_HPUX updates" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Why not add CFLAGS=+02 to the hpux template?\n\nHmm... now that you ask, probably no good reason.\n\n> Also, it should work now to build with any random combination of C and C++\n> compilers, so maybe try that and remove that point.\n\nOK, I will. That definitely didn't work a couple releases ago, but if\nyou think the situation has improved, I'll check it.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 11:29:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: About the FAQ_HPUX updates " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Why not add CFLAGS=+02 to the hpux template?\n\nTested and done.\n\n> Also, it should work now to build with any random combination of C and C++\n> compilers, so maybe try that and remove that point.\n\nYou're right, that works now. FAQ_HPUX updated. Thanks.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 14:07:35 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: About the FAQ_HPUX updates " } ]
[ { "msg_contents": "Would there be any value in setting up a project on sourceforge to make use\nof their compile farm? I know that it doesn't cover all platforms, but it\nwould perhaps be a start to mechanical compile and regression testing.\n\nJust a thought...\n\n\nMikeA\n\n\n\n-----Original Message-----\nFrom: Tom Lane [mailto:tgl@sss.pgh.pa.us]\nSent: 20 February 2001 15:51\nTo: Justin Clift\nCc: pgsql-hackers@postgresql.org\nSubject: Re: [HACKERS] beta5 ... \n\n\n>> As a matter of curiosity, is each beta compiled and then regression\n>> tested against *every* one of the known \"supported\" platforms before\n>> release?\n\nWho are you expecting to do that, exactly?\n\nOne of the differences between Postgres and a proprietary commercial \ndatabase is that there is no vast support machinery behind the scenes.\nWhat you see going on on this list is what you get: beta testing\nconsists of the activities performed and reported by list members.\n\nNormally, if we are about to push out a beta then two or three people\nwill double-check that the current CVS tip builds and passes regression\non their personal machines. But the \"supported platforms\" coverage\ndepicted in the docs consists of all the platforms that are reported to\nus as working during the entire beta test period, including many that\nthe key developers have no direct access to. There's no way that we\ncould reverse the process and cause that to happen before a beta release\ninstead of after; certainly no way that we could cause all that effort\nto be repeated for each beta version.\n\nIf you are using a beta version then you are part of that testing\nprocess, not a beneficiary of something that's happened behind closed\ndoors.\n\n\t\t\tregards, tom lane\n\n\n**********************************************************************\nThis email and any files transmitted with it are confidential and\nintended solely for the use of the individual or entity to whom they\nare addressed. 
If you have received this email in error please notify\nNick West - Global Infrastructure Manager.\n\nThis footnote also confirms that this email message has been swept by\nMIMEsweeper for the presence of computer viruses.\n\nwww.mimesweeper.com\n**********************************************************************", "msg_date": "Tue, 20 Feb 2001 16:26:10 -0000", "msg_from": "Michael Ansley <Michael.Ansley@intec-telecom-systems.com>", "msg_from_op": true, "msg_subject": "RE: beta5 ... " }, { "msg_contents": "> Would there be any value in setting up a project on sourceforge to\n> make use of their compile farm? I know that it doesn't cover all\n> platforms, but it would perhaps be a start to mechanical compile and\n> regression testing.\n\nI haven't looked at the platforms available in the compile farm\nrecently, but afaik regression coverage for over half a dozen platforms\nalready happens without (extra) effort: Tom Lane has three or more\nplatforms, I've got Linux, Bruce has BSDI, Marc has FreeBSD, we have\nsome active W32 developers, etc etc.\n\nWhat would SF add to this mix?\n\n - Thomas\n", "msg_date": "Tue, 20 Feb 2001 16:40:22 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: beta5 ..." 
}, { "msg_contents": "* Thomas Lockhart <lockhart@alumni.caltech.edu> [010220 10:51]:\n> > Would there be any value in setting up a project on sourceforge to\n> > make use of their compile farm? I know that it doesn't cover all\n> > platforms, but it would perhaps be a start to mechanical compile and\n> > regression testing.\n> \n> I haven't looked at the platforms available in the compile farm\n> recently, but afaik regression coverage for over half a dozen platforms\n> already happens without (extra) effort: Tom Lane has three or more\n> platforms, I've got Linux, Bruce has BSDI, Marc has FreeBSD, we have\n> some active W32 developers, etc etc.\nI have a UnixWare 7.1.1 box I run PG on....\n> \n> What would SF add to this mix?\n> \n> - Thomas\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n", "msg_date": "Tue, 20 Feb 2001 10:57:14 -0600", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Re: beta5 ..." }, { "msg_contents": ">> Would there be any value in setting up a project on sourceforge to\n>> make use of their compile farm? I know that it doesn't cover all\n>> platforms, but it would perhaps be a start to mechanical compile and\n>> regression testing.\n\n> I haven't looked at the platforms available in the compile farm\n> recently, but afaik regression coverage for over half a dozen platforms\n> already happens without (extra) effort: Tom Lane has three or more\n> platforms, I've got Linux, Bruce has BSDI, Marc has FreeBSD, we have\n> some active W32 developers, etc etc.\n\nI run HPUX and Linux/PPC routinely, so that's only two here. Still, we\nhave reasonable coverage among the core team and a bunch more platforms\nused by active pgsql-hackers people. 
Also, the project does have an\nAlpha in-house at hub.org (if Marc ever gets it back into commission\nafter that failed OS reinstall...)\n\n> What would SF add to this mix?\n\nThe current list of machines at cf.sourceforge.net seems to be\n\n lqqqqqqqChoose compile farm server...qqqqqqqk\n x A. [x86] Linux 2.2 (Debian 2.2) x\n x C. [x86] FreeBSD (4.2-stable) x\n x x\n x G. [Alpha] Compaq Tru64 (5.1) x\n x H. [Alpha] Linux 2.2 (RedHat 7.0) x\n x x\n x L. [Sparc - E240] Linux 2.2 (Debian 2.2) x\n x M. [Sparc - E240] Sun Solaris (8) x\n x x\n x Exit x\n mqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj\n\nI think I'll go try a build on that Solaris 8 machine, since we've heard\nsome reports of problems on Solaris. However, I'm not sure that we need\nany organized use of their compilefarm. If they made it easy to\n*automatically* run build/install/regress test on multiple machines,\nI could see the facility being useful (especially so once a few more\nplatforms are offered). But right now it looks like it's just shell\naccess to platforms other than your own, which is not going to help us\nall that much.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 15:10:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: beta5 ... " }, { "msg_contents": "As well as Linux I run Solaris 8 SPARC (32-bit not 64), Solaris 7 SPARC\n(SMP, 32-bit not 64), Solaris 7 Intel (both SMP and uni-processor) and\nSolaris 8 Intel (both SMP and uni-processor).\n\nI can be counted on to do testing of these as required in about 2 weeks\nfrom now, after I get a new permanent connection here.\n\nWith luck I'll additionally have the finances to buy some SPARC 64-bit\nmachines in a few months.\n\nRegards and best wishes,\n\nJustin Clift\nDatabase Administrator\n\nTom Lane wrote:\n> > What would SF add to this mix?\n> \n> The current list of machines at cf.sourceforge.net seems to be\n> \n> lqqqqqqqChoose compile farm server...qqqqqqqk\n> x A. 
[x86] Linux 2.2 (Debian 2.2) x\n> x C. [x86] FreeBSD (4.2-stable) x\n> x x\n> x G. [Alpha] Compaq Tru64 (5.1) x\n> x H. [Alpha] Linux 2.2 (RedHat 7.0) x\n> x x\n> x L. [Sparc - E240] Linux 2.2 (Debian 2.2) x\n> x M. [Sparc - E240] Sun Solaris (8) x\n> x x\n> x Exit x\n> mqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqqj\n> \n> I think I'll go try a build on that Solaris 8 machine, since we've heard\n> some reports of problems on Solaris. However, I'm not sure that we need\n> any organized use of their compilefarm. If they made it easy to\n> *automatically* run build/install/regress test on multiple machines,\n> I could see the facility being useful (especially so once a few more\n> platforms are offered). But right now it looks like it's just shell\n> access to platforms other than your own, which is not going to help us\n> all that much.\n> \n> regards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 11:35:25 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: Re: beta5 ..." } ]
[ { "msg_contents": "Hi,\n\nDoes anyone know how postgres/ postmaster handles the\nsituation where the physical hard disk space is full ?\nDoes it crash / corrupt the database, or does it\ncleanly exit with appopriate message so that relevant\ntables can be pruned (by the user) to free up disk\nspace and get it working again ?\n\nThanks,\nRini\n\n__________________________________________________\nDo You Yahoo!?\nGet personalized email addresses from Yahoo! Mail - only $35 \na year! http://personal.mail.yahoo.com/\n", "msg_date": "Tue, 20 Feb 2001 08:34:39 -0800 (PST)", "msg_from": "Rini Dutta <rinid@rocketmail.com>", "msg_from_op": true, "msg_subject": "handling of database size exceeding physical disk space" } ]
[ { "msg_contents": "I happen to know this very well... It handles things very gracefully as far\nas I can tell. I complains that it can't extend the table and bails out of\nthe transaction. I just wish it didn't happen so often... <grin>\n\nMike Diehl,\nNetwork Monitoring Tool Devl.\n284-3137\njdiehl@sandia.gov\n\n\n> -----Original Message-----\n> From: Rini Dutta [mailto:rinid@rocketmail.com]\n> Sent: February 20, 2001 9:35 AM\n> To: pgsql-general@postgresql.org; pgsql-sql@postgresql.org\n> Cc: pgsql-hackers@postgresql.org\n> Subject: [SQL] handling of database size exceeding physical disk space\n> \n> \n> Hi,\n> \n> Does anyone know how postgres/ postmaster handles the\n> situation where the physical hard disk space is full ?\n> Does it crash / corrupt the database, or does it\n> cleanly exit with appopriate message so that relevant\n> tables can be pruned (by the user) to free up disk\n> space and get it working again ?\n> \n> Thanks,\n> Rini\n> \n> __________________________________________________\n> Do You Yahoo!?\n> Get personalized email addresses from Yahoo! Mail - only $35 \n> a year! http://personal.mail.yahoo.com/\n> \n\n", "msg_date": "Tue, 20 Feb 2001 09:53:14 -0700", "msg_from": "\"Diehl, Jeffrey\" <jdiehl@sandia.gov>", "msg_from_op": true, "msg_subject": "RE: [SQL] handling of database size exceeding physical disk\n space" }, { "msg_contents": "Thanks ! I'm using JDBC to insert into the tables.\nWould it throw an SQLException in such a situation ?\n\nRini\n\n--- \"Diehl, Jeffrey\" <jdiehl@sandia.gov> wrote:\n> I happen to know this very well... It handles\n> things very gracefully as far\n> as I can tell. I complains that it can't extend the\n> table and bails out of\n> the transaction. I just wish it didn't happen so\n> often... 
<grin>\n> \n> Mike Diehl,\n> Network Monitoring Tool Devl.\n> 284-3137\n> jdiehl@sandia.gov\n> \n> \n> > -----Original Message-----\n> > From: Rini Dutta [mailto:rinid@rocketmail.com]\n> > Sent: February 20, 2001 9:35 AM\n> > To: pgsql-general@postgresql.org;\n> pgsql-sql@postgresql.org\n> > Cc: pgsql-hackers@postgresql.org\n> > Subject: [SQL] handling of database size exceeding\n> physical disk space\n> > \n> > \n> > Hi,\n> > \n> > Does anyone know how postgres/ postmaster handles\n> the\n> > situation where the physical hard disk space is\n> full ?\n> > Does it crash / corrupt the database, or does it\n> > cleanly exit with appopriate message so that\n> relevant\n> > tables can be pruned (by the user) to free up disk\n> > space and get it working again ?\n> > \n> > Thanks,\n> > Rini\n> > \n> > __________________________________________________\n> > Do You Yahoo!?\n> > Get personalized email addresses from Yahoo! Mail\n> - only $35 \n> > a year! http://personal.mail.yahoo.com/\n> > \n> \n\n\n__________________________________________________\nDo You Yahoo!?\nGet personalized email addresses from Yahoo! Mail - only $35 \na year! http://personal.mail.yahoo.com/\n", "msg_date": "Tue, 20 Feb 2001 11:11:34 -0800 (PST)", "msg_from": "Rini Dutta <rinid@rocketmail.com>", "msg_from_op": false, "msg_subject": "RE: [SQL] handling of database size exceeding physical disk space" } ]
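The graceful behaviour described in the thread above — the backend notices it cannot extend the table and bails out of the transaction — comes down to checking every low-level write for ENOSPC and reporting a clean error instead of pressing on. The sketch below only illustrates that general pattern; the function names are invented for this example and this is not the backend's actual code. (Since the server reports an ordinary error, a JDBC client does see it as an SQLException.)

```c
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Toy version of "extend the table": the writer callback stands in for
 * the real file write.  On failure the caller is expected to abort
 * (roll back) the current transaction, which is why a clean error
 * message is produced instead of continuing with a partial write. */
static int extend_relation(ssize_t (*write_fn)(const void *, size_t),
                           const void *page, size_t len,
                           char *errbuf, size_t errlen)
{
    if (write_fn(page, len) < 0)
    {
        if (errno == ENOSPC)
            snprintf(errbuf, errlen,
                     "cannot extend relation: no space left on device");
        else
            snprintf(errbuf, errlen, "write failed: %s", strerror(errno));
        return -1;              /* caller aborts the transaction */
    }
    return 0;
}
```

The important point is that the error is detected at write time, before any commit, so the transaction can be rolled back and no committed data is corrupted.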
[ { "msg_contents": "\n> > Anyway, your proposal is just fine since we haven't decoupled these\n> > things farther back in the server. But eventually we should hope to have\n> > SQL_ASCII and other character sets enforced in context.\n> \n> Now I'm confused. Are you saying that we *should* treat identifier case\n> under ASCII rules only? That seems like a step backwards to me, but\n> then I don't use any non-US locale myself...\n\nI think we need to treat anything that is not quoted as US_ASCII,\niirc this is how Informix behaves. Users wanting locale aware identifiers\nwould need to double quote those, thus avoiding non ASCII case conversions\nalltogether.\n\nAndreas\n", "msg_date": "Tue, 20 Feb 2001 18:04:36 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: Re: [BUGS] Turkish locale bug " }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n>> Now I'm confused. Are you saying that we *should* treat identifier case\n>> under ASCII rules only? That seems like a step backwards to me, but\n>> then I don't use any non-US locale myself...\n\n> I think we need to treat anything that is not quoted as US_ASCII,\n> iirc this is how Informix behaves. Users wanting locale aware identifiers\n> would need to double quote those, thus avoiding non ASCII case conversions\n> alltogether.\n\nI dug into the SQL99 spec, and I find it appears to have different rules\nfor identifier folding than for keyword recognition. Section 5.2 syntax\nrules 1-12 make it perfectly clear that they have an expansive idea of\nwhat characters are allowed in identifiers (most of Unicode, it looks\nlike ;-)). They also define the case-normalized form of an identifier\nin terms of Unicode case translations (rule 21). 
But they then say\n\n 28) For the purposes of identifying <key word>s, any <simple Latin\n lower case letter> contained in a candidate <key word> shall\n be effectively treated as the corresponding <simple Latin upper\n case letter>.\n\nIt appears to me that to implement the SQL99 rules correctly in a non-C\nlocale, we need to do casefolding twice. First, casefold only 'A'..'Z'\nand test to see if we have a keyword. If not, do the casefolding again\nusing isupper/tolower to produce the normalized form of the identifier.\n\nThis would solve Sezai's problem without adding a special case for\nTurkish, and it doesn't seem unreasonably slow. Anyone object to it?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 12:23:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: Re: [BUGS] Turkish locale bug " } ]
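Tom's two-pass rule from the thread above can be sketched in C: fold only ASCII 'A'..'Z' when probing for a keyword, and apply the locale's isupper()/tolower() only afterwards, when normalizing a non-keyword identifier. This is an illustrative sketch, not the actual scanner code; the keyword list and function names here are made up for the example.

```c
#include <ctype.h>
#include <string.h>

/* Hypothetical keyword list; the real one lives in the scanner. */
static const char *keywords[] = {"select", "insert", "update", NULL};

/* Pass 1: fold only ASCII 'A'..'Z', ignoring the locale, then probe
 * the keyword list.  In a Turkish locale this still maps the 'I' of
 * "INSERT" to 'i', so keywords are recognized correctly even though
 * the locale's tolower('I') would yield dotless i. */
static int is_keyword(const char *word)
{
    char        buf[64];
    int         i;

    for (i = 0; word[i] != '\0' && i < 63; i++)
        buf[i] = (word[i] >= 'A' && word[i] <= 'Z')
            ? (char) (word[i] - 'A' + 'a') : word[i];
    buf[i] = '\0';
    for (i = 0; keywords[i] != NULL; i++)
        if (strcmp(buf, keywords[i]) == 0)
            return 1;
    return 0;
}

/* Pass 2: only for non-keywords, produce the case-normalized
 * identifier using the locale-aware isupper()/tolower(). */
static void downcase_identifier(const char *in, char *out)
{
    for (; *in != '\0'; in++, out++)
        *out = isupper((unsigned char) *in)
            ? (char) tolower((unsigned char) *in) : *in;
    *out = '\0';
}
```

The extra pass costs one ASCII-only scan per token, which is why it "doesn't seem unreasonably slow".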
[ { "msg_contents": "Hello:\n\nI have problems with a LOCK sentence and ECPG. Could anybody tell me if I can execute some sentences similar to:\n\nEXEC SQL LOCK TABLE users IN SHARE ROW EXCLUSIVE MODE;\n\n\nWhere is my error?\n\n\nThanks\n\n\nJhon Orellana\nwebmaster@ecuadorcomercial.com\nsoporte@ecuadorcomercial.com", "msg_date": "Tue, 20 Feb 2001 13:17:47 -0500", "msg_from": "<soporte@ecuadorcomercial.com>", "msg_from_op": true, "msg_subject": "Help, please" } ]
[ { "msg_contents": "I can find no use of LP_DELETE as defined as a flag in storage/itemid.h.\nA few places test it, but no one sets it.\n\nCan anyone else? Can I remove it in 7.2?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Feb 2001 15:55:26 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Use of LP_DELETE in buffer header" }, { "msg_contents": "> I can find no use of LP_DELETE as defined as a flag in storage/itemid.h.\n> A few places test it, but no one sets it.\n> \n> Can anyone else? Can I remove it in 7.2?\n\nNever mind. It is used by work Vadim is doing:\n\n---------------------------------------------------------------------------\n\n************\n*** 31,44 ****\n */\n #define LP_USED 0x01 /* this line pointer is being us ed */\n \n- #ifdef XLOG\n- \n #define LP_DELETE 0x02 /* item is to be deleted */\n \n #define ItemIdDeleted(itemId) \\\n (((itemId)->lp_flags & LP_DELETE) != 0)\n- \n- #endif\n \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Feb 2001 16:07:52 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Use of LP_DELETE in buffer header" } ]
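The ItemIdDeleted() test quoted in the patch above is a plain bit-flag check on lp_flags. A stripped-down illustration of the idiom (the struct here is a simplified stand-in; the real ItemIdData in storage/itemid.h is a bitfield):

```c
/* Flag bits, as in storage/itemid.h. */
#define LP_USED   0x01          /* this line pointer is being used */
#define LP_DELETE 0x02          /* item is to be deleted */

/* Simplified stand-in for ItemIdData. */
typedef struct
{
    unsigned    lp_flags;
} ItemIdData;

/* The test quoted in the patch: a plain bit check. */
#define ItemIdDeleted(itemId) \
    (((itemId)->lp_flags & LP_DELETE) != 0)
```

Code that "sets" the flag would do `itemId->lp_flags |= LP_DELETE;` and clear it with `itemId->lp_flags &= ~LP_DELETE;` — which is why a flag can legitimately be tested in released code before anything sets it yet.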
[ { "msg_contents": "Hi all\nI develop in Visual FoxPro and I need to connect a client of Visual FoxPro\nwith a database PostgreSQL of a Server Linux\ncan somebody help me?\nI don't know English a lot so they excuse me if I write bad\n\nThank's\n\n", "msg_date": "Tue, 20 Feb 2001 18:16:47 -0500", "msg_from": "\"Giuliano\" <ggonzale@delvalle.com.pe>", "msg_from_op": true, "msg_subject": "Client Window-VFP with Linux-PostgreSQL" }, { "msg_contents": "Giuliano wrote:\n> \n> Hi all\n> I develop in Visual FoxPro and I need to connect a client of Visual FoxPro\n> with a database PostgreSQL of a Server Linux\n> can somebody help me?\n\nUse the ODBC driver. You can get it from\n\nftp://www.postgresql.org/pub/odbc/latest/\n\nAnd make sure your postgres runs with tcp-socket access on (switch -i)\nand that your client computer is allowed to connect, see data/pg_hba.conf\n\n----------------\nHannu\n", "msg_date": "Wed, 21 Feb 2001 01:29:22 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: Client Window-VFP with Linux-PostgreSQL" } ]
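Concretely, the two server-side pieces Hannu points at look roughly like this. The client address below is a placeholder, and the column layout follows the old pg_hba.conf record format of that era (host, database, client IP, netmask, auth method):

```
# Start the postmaster with TCP/IP sockets enabled:
#     postmaster -i -D /usr/local/pgsql/data
#
# pg_hba.conf: allow the Windows client machine to connect
# host  <database>  <client-ip>     <netmask>          <auth>
host    all         192.168.1.42    255.255.255.255    password
```

Without the -i switch the server only listens on its Unix-domain socket, so a remote ODBC client cannot reach it at all.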
[ { "msg_contents": "Hiroshi,\nIs there any chance you can send the pgbench changes to me so that I can\ntest this scenario?\nThanks.\nPeter\n\n> -----Original Message-----\n> From: Hiroshi Inoue [mailto:Inoue@tpf.co.jp]\n> Sent: Tuesday, February 20, 2001 3:31 PM\n> To: Tom Lane\n> Cc: Schmidt, Peter; pgsql-hackers@postgresql.org; \n> pgsql-admin@postgresql.org\n> Subject: Re: [HACKERS] Re: [ADMIN] v7.1b4 bad performance\n> \n> \n> Tom Lane wrote:\n> > \n> > \"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> > >> Hmm, you mean you set up a separate test database for \n> each pgbench\n> > >> \"client\", but all under the same postmaster?\n> > \n> > > Yes. Different database is to make the conflict as less \n> as possible.\n> > > The conflict among backends is a greatest enemy of CommitDelay.\n> > \n> > Okay, so this errs in the opposite direction from the \n> original form of\n> > the benchmark: there will be *no* cross-backend locking \n> delays, except\n> > for accesses to the common WAL log. That's good as a \n> comparison point,\n> > but we shouldn't trust it absolutely either.\n> > \n> \n> Of cource it's only one of the test cases.\n> Because I've ever seen one-sided test cases, I had to\n> provide this test case unwillingly.\n> There are some obvious cases that CommitDelay is harmful\n> and I've seen no test case other than such cases i.e\n> 1) There's only one session.\n> 2) The backends always conflict(e.g pgbench with scaling factor 1).\n> \n> > >> What platform is this on --- in particular, how long a delay\n> > >> is CommitDelay=1 in reality? What -B did you use?\n> > \n> > > platform) i686-pc-linux-gnu, compiled by GCC \n> egcs-2.91.60(turbolinux 4.2)\n> > > min delay) 10msec according to your test program.\n> > > -B) 64 (all other settings are default)\n> > \n> > Thanks. Could I trouble you to run it again with a larger -B, say\n> > 1024 or 2048? 
What I've found is that at -B 64, the benchmark is\n> > so constrained by limited buffer space that it doesn't reflect\n> > performance at a more realistic production setting.\n> > \n> \n> OK I would try it later though I'm not sure I could\n> increase -B that large in my current environment.\n> \n> Regards,\n> Hiroshi Inoue\n> \n", "msg_date": "Tue, 20 Feb 2001 15:34:39 -0800", "msg_from": "\"Schmidt, Peter\" <peter.schmidt@prismedia.com>", "msg_from_op": true, "msg_subject": "RE: [HACKERS] Re: v7.1b4 bad performance" } ]
[ { "msg_contents": "Do we put special information into the first page of each heap file? I\nthought we used to, but I don't see that anymore.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 20 Feb 2001 19:34:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "File header" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Do we put special information into the first page of each heap file? I\n> thought we used to, but I don't see that anymore.\n\nYou're thinking of indexes, perhaps? I don't recall ever seeing any\nsign of a header page on heap files.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 19:59:55 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: File header " } ]
[ { "msg_contents": "From: \"Jeff Lu\" <jklcom@mindspring.com>\nSubject: Multibyte examples\nDate: Tue, 20 Mar 2001 12:38:26 -0500\nMessage-ID: <NDBBIHPECLIGKCCLMACAEEPDCJAA.jklcom@mindspring.com>\n\n> Tatsuo Ishii\n> \n> I've finally got the envirnoment set up to work with PostgreSQL, cygwin on\n> NT\n> \n> Couldn't the regression test to run.\n> \n> When tried\n> % mbregress.sh\n> \n> I got lots of errors\n\nWhat kind of errors did you get exactly?\n\n> I want to see some examples on multibytes.\n> I'm trying to write a CGI program that will gather user info from a form\n> through HTTP Request. The information may contain Chinese Characters. The\n> result will be written to PostgreSQL.\n> \n> Do you have any examples like this?\n\nI'm not sure what kind of encodings you are talking about when you say\n\"Chinese Chinese\". Anyway you will find some examples:\n\nBig5:\t src/test/mb/big5.sql\nEUC_CN:\t src/test/mb/euc_cn.sql\nEUC_TW:\t src/test/mb/euc_tw.sql\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Feb 2001 09:59:54 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Multibyte examples" }, { "msg_contents": "From: \"Jeff Lu\" <jklcom@mindspring.com>\nSubject: RE: Multibyte examples\nDate: Tue, 20 Mar 2001 22:17:39 -0500\nMessage-ID: <NDBBIHPECLIGKCCLMACACEPMCJAA.jklcom@mindspring.com>\n\n> Please excuse my ignorance in PostgreSQL for I'm new to this.\n> \n> This is what I got when tried to run mbregress.sh, what am I missing?\n> \n> destroydb: not found\n> destroydb: not found\n> /usr/local/pgsql/bin/pg_encoding: not found\n> destroydb: not found\n> destroydb: not found\n[snip]\n\nIt is likely your PostgreSQL installation or your personal environment\nsettings are not correct. Please read the INSTALL doc carefully and\ntry again.\n\n> I looked in the examples such as\n> Big5:\t src/test/mb/big5.sql\n> EUC_CN:\t src/test/mb/euc_cn.sql\n> EUC_TW:\t src/test/mb/euc_tw.sql\n> \n> \n> Are these files excutable?\n\nNo. 
These are just SQL files. You can execute them like:\n\npsql -e -f src/test/mb/big5.sql your_database_name\n\n> I want to see some examples using C API to\n> read/write to database using Big5 encoding. I'm just not sure what to do\n> before writing the data to the database. Do I need to do any conversion?\n\nThere are C examples somewhere under the source tree.\n\nTo use Big5 in frontend side, you just set an environment variable\ncalled \"PGCLIENTENCODING\" to \"Big5\". Note that PostgreSQL 7.0.x has\nbugs thus it will not work with this feature. Grab\nftp://ftp.sra.co.jp/pub/cmd/postgres/7.0.3/patches/libpq.patch.gz and\napply it. Or you could use 7.1 beta.\n--\nTatsuo Ishii\n\n", "msg_date": "Wed, 21 Feb 2001 13:01:22 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "RE: Multibyte examples" }, { "msg_contents": ">> This is what I got when tried to run mbregress.sh, what am I missing?\n>> \n>> destroydb: not found\n\nHm. destroydb was renamed to dropdb a release or two back, but it seems\nmbregress.sh hasn't gotten the word yet.\n\n>> /usr/local/pgsql/bin/pg_encoding: not found\n\nThis is a little more troubling. Did you configure the system with\nmultibyte support enabled? pg_encoding should have been created and\ninstalled if so.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 20 Feb 2001 23:58:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RE: Multibyte examples " }, { "msg_contents": "> >> This is what I got when tried to run mbregress.sh, what am I missing?\n> >> \n> >> destroydb: not found\n> \n> Hm. destroydb was renamed to dropdb a release or two back, but it seems\n> mbregress.sh hasn't gotten the word yet.\n\nGood point. I didn't realize it. Will fix.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Feb 2001 14:05:22 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: RE: Multibyte examples " } ]
[ { "msg_contents": "I've been working to make PostgreSQL run on SunOS4 (again).\n\nSo far I have found following issues:\n\no c.h 's sunos4 part should not include varargs.h. (Tom has already\n fixed it) Instead, stdlib.h and stdarg.h should be included.\n\no no RAND_MAX or EXIT_FAILURE found. I simply added them to c.h.\n\no regex/utils.h included twice somewhere. I added #ifndef\n UTILS_H... to utils.h\n\no utils/adt/formatting.c rely on sprintf() returns length of formatted\n strings. This is not true for SunOS4. I have changed sprintf to\n snprintf.\n\no SunOS4 does not have strdup, strtoul. --> use backend/port/strtoul.c\n etc.\n\no SunOS4 does not have atexit (used in psql). --> igore it\n\no SunOS4 does not have getopt. --> use utils/getopt.c. also getopt.h\n need to be created, and checking for getopt is needed to configure.in.\n\no to make shared library I have added an entry for SunOS4 in\n Makefile.shlib.\n\no to make shared libraries (such as libpgeasy.so) relying on libpq,\n \"ld foo.o bar.o ... -L ../libpq -lpq\" is executed but fails. I\n changed it to:\n\n ld foo.o bar.o ... ../libpq.a\n\n instead.\n\no pg_id needs Makefile.in.\n\nincluded are patched for *7.0.x*. Sould I make same changes to 7.1?\nComments anyone?", "msg_date": "Wed, 21 Feb 2001 15:52:58 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "SunOS4" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> included are patched for *7.0.x*. Sould I make same changes to 7.1?\n> Comments anyone?\n\nI think some of these issues are already handled in current sources,\nbut sure, we want to be able to run on SunOS. 
Portability is a good\nthing.\n\nI'd suggest consulting with Peter Eisentraut about the shared library\nissues; he's overhauled the Makefiles enough that there may be a\ndifferent or better way to do that part now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 10:11:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS4 " }, { "msg_contents": "Tatsuo Ishii writes:\n\n> o c.h 's sunos4 part should not include varargs.h. (Tom has already\n> fixed it) Instead, stdlib.h and stdarg.h should be included.\n\nThis should be okay by now.\n\n> o no RAND_MAX or EXIT_FAILURE found. I simply added them to c.h.\n\nEXIT_FAILURE is defined in src/bin/psql/settings.h; I can't find it used\noutside psql. RAND_MAX should be inside an #ifndef RAND_MAX, not in a\nSunOS specific section.\n\n> o regex/utils.h included twice somewhere. I added #ifndef\n> UTILS_H... to utils.h\n\nOkay.\n\n> o utils/adt/formatting.c rely on sprintf() returns length of formatted\n> strings. This is not true for SunOS4. I have changed sprintf to\n> snprintf.\n\nOkay.\n\n> o SunOS4 does not have strdup, strtoul. --> use backend/port/strtoul.c\n> etc.\n\nOkay. Instead of ../../etc. in makefiles you should use $(top_srcdir) or\n$(top_builddir).\n\n> o SunOS4 does not have atexit (used in psql). --> igore it\n\nMaybe on_exit() is available, or even more portable?\n\n> o SunOS4 does not have getopt. --> use utils/getopt.c. also getopt.h\n> need to be created, and checking for getopt is needed to configure.in.\n\nUgh.\n\n#include \"../../utils/getopt.h\" is definitely not good.\n\n+ #ifndef HAVE_GETOPT_H\n+ char *__progname = \"pg_id\";\n+ #endif\n\nseems to be misguided.\n\nThe getopt.h file doesn't seem necessary. The external variables should\nbe defined in every program that needs them. 
The getopt() function\ndoesn't need to be declared.\n\n> o to make shared library I have added an entry for SunOS4 in\n> Makefile.shlib.\n\nI'm not sure that entry is right. Libtool wants it to look like\n\n$(LD) -assert pure-text -Bshareable\n\nyou have\n\n$(LD) -dc -dp -Bdynamic\n\n>\n> o to make shared libraries (such as libpgeasy.so) relying on libpq,\n> \"ld foo.o bar.o ... -L ../libpq -lpq\" is executed but fails. I\n> changed it to:\n> ld foo.o bar.o ... ../libpq.a\n> instead.\n\nCan you elaborate on why that's necessary? Perhaps a problem with the\ncommand line (see above)? Why only ecpg?\n\n> o pg_id needs Makefile.in.\n\nNothing needs a Makefile.in. Substitution symbols go in Makefile.global.\n\n> included are patched for *7.0.x*. Sould I make same changes to 7.1?\n> Comments anyone?\n\n7.0 build patches are pretty much useless for 7.1, I'm afraid. You should\nwork with 7.1 before proceeding.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Wed, 21 Feb 2001 17:34:32 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SunOS4" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Tatsuo Ishii writes:\n>> o regex/utils.h included twice somewhere. I added #ifndef\n>> UTILS_H... to utils.h\n\n> Okay.\n\nActually, the problem is probably gone. c.h was including regex/utils.h\nif the platform didn't have memmove(), but I thought that was a very\nugly thing to do and moved the memmove() macro to c.h instead. However,\nan #ifndef UTILS_H is harmless and good practice, so I don't object to\nputting it in anyway. (You might want to make the symbol REGEX_UTILS_H,\nthough, to avoid possible conflicts with other files named utils.h ...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 12:59:58 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS4 " }, { "msg_contents": "> > o no RAND_MAX or EXIT_FAILURE found. 
I simply added them to c.h.\n> \n> EXIT_FAILURE is defined in src/bin/psql/settings.h; I can't find it used\n> outside psql.\n\nSo SunOS should be ok with this in current.\n\n> RAND_MAX should be inside an #ifndef RAND_MAX, not in a\n> SunOS specific section.\n\nOk.\n\n> > o regex/utils.h included twice somewhere. I added #ifndef\n> > UTILS_H... to utils.h\n> \n> Okay.\n\nI will add REGEX_UTILS_H per Tom's suggestion.\n\n> > o SunOS4 does not have strdup, strtoul. --> use backend/port/strtoul.c\n> > etc.\n> \n> Okay. Instead of ../../etc. in makefiles you should use $(top_srcdir) or\n> $(top_builddir).\n\nI see.\n\n> > o SunOS4 does not have atexit (used in psql). --> igore it\n> \n> Maybe on_exit() is available, or even more portable?\n\nLet me check it.\n\n> > o SunOS4 does not have getopt. --> use utils/getopt.c. also getopt.h\n> > need to be created, and checking for getopt is needed to configure.in.\n> \n> Ugh.\n> \n> #include \"../../utils/getopt.h\" is definitely not good.\n> \n> + #ifndef HAVE_GETOPT_H\n> + char *__progname = \"pg_id\";\n> + #endif\n> \n> seems to be misguided.\n\nBut our getopt implementaion requires\n\nchar *__progname = \"pg_id\";\n\nno?\n\n> The getopt.h file doesn't seem necessary. The external variables should\n> be defined in every program that needs them. The getopt() function\n> doesn't need to be declared.\n\nI agree with we don't need getopt.h.\n\n> > o to make shared library I have added an entry for SunOS4 in\n> > Makefile.shlib.\n> \n> I'm not sure that entry is right. Libtool wants it to look like\n> \n> $(LD) -assert pure-text -Bshareable\n> \n> you have\n> \n> $(LD) -dc -dp -Bdynamic\n\nIt comes from our makefiles/Makefile.sunos4. Let me check if what\nLibtool suggests works.\n\n> > o to make shared libraries (such as libpgeasy.so) relying on libpq,\n> > \"ld foo.o bar.o ... -L ../libpq -lpq\" is executed but fails. I\n> > changed it to:\n> > ld foo.o bar.o ... 
../libpq.a\n> > instead.\n> \n> Can you elaborate on why that's necessary? Perhaps a problem with the\n> command line (see above)? Why only ecpg?\n\nNot only ecpg. libpgeasy, libpq++ also.\n\n> > o pg_id needs Makefile.in.\n> \n> Nothing needs a Makefile.in. Substitution symbols go in Makefile.global.\n\nOh, things have been changed dramatically since 7.0. I see now.\n\n> > included are patched for *7.0.x*. Sould I make same changes to 7.1?\n> > Comments anyone?\n> \n> 7.0 build patches are pretty much useless for 7.1, I'm afraid. You should\n> work with 7.1 before proceeding.\n\nOf course.\n\nBTW, I observe some regression failures under SunOS4 due to the\ndifference of strtol. It does not detect overflow. So, following\nINSERT in regress/sql/int4.sql does not throw an error, but inserts\na random value.\n\n -- bad input values -- should give warnings \n INSERT INTO INT4_TBL(f1) VALUES ('1000000000000');\n\nWhat should we do?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 22 Feb 2001 10:08:34 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: SunOS4" }, { "msg_contents": "> > > o SunOS4 does not have atexit (used in psql). --> igore it\n> > \n> > Maybe on_exit() is available, or even more portable?\n> \n> Let me check it.\n\nSunOS4 has on_exit. Can we change atexit to on_exit?\n\n> > > o to make shared library I have added an entry for SunOS4 in\n> > > Makefile.shlib.\n> > \n> > I'm not sure that entry is right. Libtool wants it to look like\n> > \n> > $(LD) -assert pure-text -Bshareable\n> > \n> > you have\n> > \n> > $(LD) -dc -dp -Bdynamic\n> \n> It comes from our makefiles/Makefile.sunos4. Let me check if what\n> Libtool suggests works.\n\nIt doesn't work. Probably that is for GNU ld? On sparc platforms, we\nusually do not use GNU ld (I don't remember the reason\nthough). 
However \n\n$(LD) -assert pure-text -Bdynamic\n\nworks.\n\n> > > o to make shared libraries (such as libpgeasy.so) relying on libpq,\n> > > \"ld foo.o bar.o ... -L ../libpq -lpq\" is executed but fails. I\n> > > changed it to:\n> > > ld foo.o bar.o ... ../libpq.a\n> > > instead.\n> > \n> > Can you elaborate on why that's necessary? Perhaps a problem with the\n> > command line (see above)?\n\n$(LD) -assert pure-text -Bdynamic (eliminating -dc -dp) works with -L\n../libpq -lpq. But is it safe? Can we live without -dc -dp? SunOS4\nguru anywhere?\n--\nTatsuo Ishii\n", "msg_date": "Thu, 22 Feb 2001 10:47:07 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: SunOS4" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> o SunOS4 does not have atexit (used in psql). --> igore it\n> \n> Maybe on_exit() is available, or even more portable?\n\n> SunOS4 has on_exit. Can we change atexit to on_exit?\n\natexit is ANSI C. on_exit is not found here (HPUX) at all. Looks\nlike we need another configure test :-( ... but I recommend we stick\nwith atexit as the preferred form.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 23:04:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS4 " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > o SunOS4 does not have atexit (used in psql). --> igore it\n> > \n> > Maybe on_exit() is available, or even more portable?\n> \n> > SunOS4 has on_exit. Can we change atexit to on_exit?\n> \n> atexit is ANSI C. on_exit is not found here (HPUX) at all. Looks\n> like we need another configure test :-( ... but I recommend we stick\n> with atexit as the preferred form.\n> \n> \t\t\tregards, tom lane\n\nOk. First test if atexit exists. 
on_exit is second.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 22 Feb 2001 13:18:36 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: SunOS4 " }, { "msg_contents": "> I've been working to make PostgreSQL run on SunOS4 (again).\n\nI have committed massive changes for SunOS4 port. Tested on:\n\nSunOS 4.1.4\nVine Linux 2.1 (variant of RedHat Linux 6.2J)\nFreeBSD 4.2-RELEASE\n\nPlease let me know if I have broken something.\n--\nTatsuo Ishii\n", "msg_date": "Tue, 27 Feb 2001 17:45:29 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: SunOS4" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> > I've been working to make PostgreSQL run on SunOS4 (again).\n>\n> I have committed massive changes for SunOS4 port. Tested on:\n>\n> SunOS 4.1.4\n> Vine Linux 2.1 (variant of RedHat Linux 6.2J)\n> FreeBSD 4.2-RELEASE\n>\n> Please let me know if I have broken something.\n\nI think the OPTARG_DECL is not necessary. We've gotten by since the\nbeginning of time with always declaring them explicitly. And you're not\neven using it consistently.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 27 Feb 2001 16:54:39 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: SunOS4" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have committed massive changes for SunOS4 port. Tested on:\n> SunOS 4.1.4\n> Vine Linux 2.1 (variant of RedHat Linux 6.2J)\n> FreeBSD 4.2-RELEASE\n> Please let me know if I have broken something.\n\nEverything still builds and passes regression on HPUX, but I concur with\nPeter that the HAVE_OPTARG configure stuff must be unnecessary. 
Please\nobserve that\n\tsrc/backend/bootstrap/bootstrap.c\n\tsrc/backend/postmaster/postmaster.c\n\tsrc/backend/tcop/postgres.c\n\tsrc/bin/pg_dump/pg_dump.c\n\tsrc/bin/psql/startup.c\n\tsrc/interfaces/ecpg/preproc/ecpg.c\nall seem to be getting along fine with no configure test. There are\nalso a bunch of contrib modules that use optarg, and would also need\nto be changed if you want to apply a configure test.\n\nI suggest reverting the configure and config.h changes and instead\nmaking pg_restore and pg_id follow the coding practices used in the\nabove-mentioned files for optarg.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Feb 2001 14:13:22 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: SunOS4 " }, { "msg_contents": "> Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> > I have committed massive changes for SunOS4 port. Tested on:\n> > SunOS 4.1.4\n> > Vine Linux 2.1 (variant of RedHat Linux 6.2J)\n> > FreeBSD 4.2-RELEASE\n> > Please let me know if I have broken something.\n> \n> Everything still builds and passes regression on HPUX, but I concur with\n> Peter that the HAVE_OPTARG configure stuff must be unnecessary. Please\n> observe that\n> \tsrc/backend/bootstrap/bootstrap.c\n> \tsrc/backend/postmaster/postmaster.c\n> \tsrc/backend/tcop/postgres.c\n> \tsrc/bin/pg_dump/pg_dump.c\n> \tsrc/bin/psql/startup.c\n> \tsrc/interfaces/ecpg/preproc/ecpg.c\n> all seem to be getting along fine with no configure test. 
There are\n> also a bunch of contrib modules that use optarg, and would also need\n> to be changed if you want to apply a configure test.\n> \n> I suggest reverting the configure and config.h changes and instead\n> making pg_restore and pg_id follow the coding practices used in the\n> above-mentioned files for optarg.\n> \n> \t\t\tregards, tom lane\n\ndone.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 01 Mar 2001 14:05:57 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: SunOS4 " } ]
[ { "msg_contents": "\n Hi,\n\n I a little work with encodings (Japanese, Latin(s)) and I see that \nPG use non-standard encoding names.\n\n Why is here SJIS instead Shift-JIS, EUC_JP intead EUC-JP, \nLatin2 instead ISO-8859-2 ?\n\n It is not good for example for applications that output data to HTML and \nneeds set correct meta-tags, for this is needful maintain in application\nPostgreSQL's specific names and standard names and translate between these\n...\n\n Comments?\n\n \n\t\t\tKarel\n\n", "msg_date": "Wed, 21 Feb 2001 10:17:36 +0100 (CET)", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "Encoding names" }, { "msg_contents": "> I a little work with encodings (Japanese, Latin(s)) and I see that \n> PG use non-standard encoding names.\n> \n> Why is here SJIS instead Shift-JIS, EUC_JP intead EUC-JP, \n> Latin2 instead ISO-8859-2 ?\n> \n> It is not good for example for applications that output data to HTML and \n> needs set correct meta-tags, for this is needful maintain in application\n> PostgreSQL's specific names and standard names and translate between these\n> ...\n> \n> Comments?\n\nBut HTML meta tags used to use their own encoding names such as\nx-euc-jp, x-sjis....\n\nWell, the reaons are:\n\n1) shell does not like \"-\" (configure and some Unix commands in\n PostgreSQL accepts encoding names)\n2) I don't like longer names\n\nBTW, I and Thomas (and maybe others) are interested in implementing\nCREATE CHRACATER SET staffs in SQL92/99. 
The encoding names might be\nchanged at that time...\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Feb 2001 18:41:44 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Encoding names" }, { "msg_contents": "\n> But HTML meta tags used to use their own encoding names such as\n> x-euc-jp, x-sjis....\n\n Not sure; my mozilla understands the \"ISO-xxxx-x\" and \"Shift-JIS\" formats too.\nBut it's irrelevant; the important thing is that names like \"Latin2\" or \"SJIS\"\nor \"EUC_JP\" are less standard. And it isn't HTML only, but other\nformats too (I-MODE, Wap, XML ...etc).\n\n> Well, the reasons are:\n> \n> 1) shell does not like \"-\" (configure and some Unix commands in\n>    PostgreSQL accept encoding names)\n>\n> 2) I don't like longer names\n\n Sorry, but both are poor reasons, and please never say \"I don't like\" \nwhen talking about already existing standards; that is the way to chaos.\n \n Sorry for these hard words, but I hope you understand me :-) \n\n> BTW, I and Thomas (and maybe others) are interested in implementing\n> CREATE CHARACTER SET stuff in SQL92/99. The encoding names might be\n\n Well, I look forward.\n\n\t\t\tKarel\n\n", "msg_date": "Wed, 21 Feb 2001 10:54:03 +0100 (CET)", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "Re: Encoding names" }, { "msg_contents": "> > But HTML meta tags used to use their own encoding names such as\n> > x-euc-jp, x-sjis....\n> \n> Not sure; my mozilla understands the \"ISO-xxxx-x\" and \"Shift-JIS\" formats too.\n> But it's irrelevant; the important thing is that names like \"Latin2\" or \"SJIS\"\n> or \"EUC_JP\" are less standard. And it isn't HTML only, but other\n> formats too (I-MODE, Wap, XML ...etc).\n\nThey were introduced recently. 
If I remember correctly, when I started\nto implement the multi-byte functionality, most browsers did not\naccept \"Shift-JIS\" in their meta tags.\n\n> > Well, the reasons are:\n> > \n> > 1) shell does not like \"-\" (configure and some Unix commands in\n> >    PostgreSQL accept encoding names)\n> >\n> > 2) I don't like longer names\n> \n> Sorry, but both are poor reasons, and please never say \"I don't like\" \n> when talking about already existing standards; that is the way to chaos.\n> \n> Sorry for these hard words, but I hope you understand me :-) \n\nPlease understand there is no standard for charset/encoding names in\nSQL92/99 itself. The SQL standard just says \"you can import any\ncharset/encoding from anywhere if you can\". Please correct me if I am\nwrong.\n\nHowever, I do not object to changing the encoding names if there is enough\nagreement (and as long as backward compatibility is kept). \n\n> > BTW, I and Thomas (and maybe others) are interested in implementing\n> > CREATE CHARACTER SET stuff in SQL92/99. The encoding names might be\n> \n> Well, I look forward.\n\nGood.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 21 Feb 2001 19:18:26 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Encoding names" }, { "msg_contents": "\nOn Wed, 21 Feb 2001, Tatsuo Ishii wrote:\n\n> Please understand there is no standard for charset/encoding names in\n> SQL92/99 itself. The SQL standard just says \"you can import any\n> charset/encoding from anywhere if you can\". Please correct me if I am\n> wrong.\n\nNot in the SQL standards, but they are probably all known, for example, as ISO names or \nsome form of them.\n\n> However, I do not object to changing the encoding names if there is enough\n> agreement (and as long as backward compatibility is kept). 
\n\n You do not have to change the current names; you can add to pg_conv_tbl[] new lines\nwith synonym names for already existing encodings.\n\n An example:\n\n {LATIN1, \"LATIN1\",\t0, latin12mic, mic2latin1, 0, 0}, \n {LATIN1, \"ISO-8859-1\",\t0, latin12mic, mic2latin1, 0, 0}, \n\n And if you order this table alphabetically and in pg_char_to_encoding()\nuse Knuth's binary search instead of the current seq. scanning by for(),\neverything will be faster and nicer. It's easy.\n\n What? :-)\n\n\t\tKarel\n\n\n", "msg_date": "Wed, 21 Feb 2001 12:01:28 +0100 (CET)", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "Re: Encoding names" }, { "msg_contents": "> You do not have to change the current names; you can add to pg_conv_tbl[] new lines\n> with synonym names for already existing encodings...\n> {LATIN1, \"LATIN1\", 0, latin12mic, mic2latin1, 0, 0},\n> {LATIN1, \"ISO-8859-1\", 0, latin12mic, mic2latin1, 0, 0},\n> And if you order this table alphabetically and in pg_char_to_encoding()\n> use Knuth's binary search instead of the current seq. scanning by for(),\n> everything will be faster and nicer. It's easy.\n\nAs you probably know, there is already a binary search algorithm coded\nup for the date/time string lookups in utils/adt/datetime.c. Since that\nlookup caches the last value (which could be done here too) most lookups\nare immediate.\n\nAre you proposing to make a change Karel, or just encouraging others? 
:)\n\n - Thomas\n", "msg_date": "Wed, 21 Feb 2001 13:38:45 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Encoding names" }, { "msg_contents": "\nOn Wed, 21 Feb 2001, Thomas Lockhart wrote:\n\n> > You do not have to change the current names; you can add to pg_conv_tbl[] new lines\n> > with synonym names for already existing encodings...\n> > {LATIN1, \"LATIN1\", 0, latin12mic, mic2latin1, 0, 0},\n> > {LATIN1, \"ISO-8859-1\", 0, latin12mic, mic2latin1, 0, 0},\n> > And if you order this table alphabetically and in pg_char_to_encoding()\n> > use Knuth's binary search instead of the current seq. scanning by for(),\n> > everything will be faster and nicer. It's easy.\n> \n> As you probably know, there is already a binary search algorithm coded\n> up for the date/time string lookups in utils/adt/datetime.c. Since that\n> lookup caches the last value (which could be done here too) most lookups\n> are immediate.\n> \n> Are you proposing to make a change Karel, or just encouraging others? :)\n> \n\n No problem for me. Do you want a patch with this by tomorrow breakfast?\nIMHO it's acceptable for the current 7.1 too; it's a really small change.\n\n Or should Tatsuo do it?\n\n\t\t\tKarel\n\n", "msg_date": "Wed, 21 Feb 2001 15:13:24 +0100 (CET)", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "Re: Encoding names" }, { "msg_contents": "> > As you probably know, there is already a binary search algorithm coded\n> > up for the date/time string lookups in utils/adt/datetime.c. Since that\n> > lookup caches the last value (which could be done here too) most lookups\n> > are immediate.\n> > \n> > Are you proposing to make a change Karel, or just encouraging others? :)\n> > \n> \n> No problem for me. Do you want a patch with this by tomorrow breakfast?\n> IMHO it's acceptable for the current 7.1 too; it's a really small change.\n> \n> Or should Tatsuo do it?\n\nPlease go ahead. 
By the way, there is one more place you need to tweak\nthe encoding name table. Take a look at\ninterfaces/libpq/fe-connect.c. It's ugly to have similar tables in\ntwo places, but I did not find a better way to avoid linking the huge\nUnicode conversion tables into the frontend.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 22 Feb 2001 00:05:07 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Encoding names" }, { "msg_contents": "\nOn Thu, 22 Feb 2001, Tatsuo Ishii wrote:\n\n> > > As you probably know, there is already a binary search algorithm coded\n> > > up for the date/time string lookups in utils/adt/datetime.c. Since that\n> > > lookup caches the last value (which could be done here too) most lookups\n> > > are immediate.\n> > > \n> > > Are you proposing to make a change Karel, or just encouraging others? :)\n> > > \n> > \n> > No problem for me. Do you want a patch with this by tomorrow breakfast?\n> > IMHO it's acceptable for the current 7.1 too; it's a really small change.\n> > \n> > Or should Tatsuo do it?\n> \n> Please go ahead. By the way, there is one more place you need to tweak\n> the encoding name table. Take a look at\n> interfaces/libpq/fe-connect.c. It's ugly to have similar tables in\n> two places, but I did not find a better way to avoid linking the huge\n> Unicode conversion tables into the frontend.\n\n Hmm, I see. It's really a little ugly to maintain the same things in two\nplaces. What about this solution:\n\n * split (on the backend) pg_conv_tbl[] into two tables:\n\n\tencstr2enc[] \t- for mapping encoding names (strings) to encoding 'id'.\n\t\t\t  This table will be sorted alphabetically. 
\n\t\t\t\n\tpg_conv_tbl[]\t- a table with the encoding 'id' and the encoding routines.\n\t\t\t  This table will be ordered by encoding 'id', and this\n\t\t\t  order allows finding the relevant routines, for example:\n\n\tpg_conv_tbl[ LATIN1 ]\n\n\n This solution allows libpq and the backend to use and share one encstr2enc[] \ntable and the basic functions that work with this table -- like\npg_char_to_encoding().\n\n Maybe it would be better to define a separate enum datatype for the encoding 'id'\ninstead of the current mistake-prone '#define'. \n\n\t\t\tKarel\n\n", "msg_date": "Wed, 21 Feb 2001 16:42:20 +0100 (CET)", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": true, "msg_subject": "Re: Encoding names" } ]
[ { "msg_contents": "I think a formating mode where only the relevant digits are written\nto the output would be great as an alternative to the discussed \nfixed formatting strings. In this context i think of 'relevant' \nas in the following:\n\n'Output as few characters as possible but ensure that scanf is \nstill able to rebuild the binary reprressentation of the floating \npoint number exactly.'\n\nTo make this happen we would need to compute a seperate formatting\nstring for each floating point value:\n\nE.g. if the binary value is exactly '1.00000E00' then we just\nwrite '1' to the output, because the rest is just 'ASCII noise'\nand not neccessary for rebuilding the identical binary value for\nthe given floating point value.\n\nThe advantage would be, that we only generate as much ASCII data\nas absolutly neccessary to rebuild the original data exactly.\nAt least this is what I would expect from pg_dump.\n\nrobert schrem\n", "msg_date": "Wed, 21 Feb 2001 10:19:41 +0100", "msg_from": "Robert Schrem <Robert.Schrem@WiredMinds.de>", "msg_from_op": true, "msg_subject": "Re: floating point representation" }, { "msg_contents": "At 10:19 21/02/01 +0100, Robert Schrem wrote:\n>The advantage would be, that we only generate as much ASCII data\n>as absolutly neccessary to rebuild the original data exactly.\n>At least this is what I would expect from pg_dump.\n\npg_dump is only one side of thre problem, but the simplest solution might\nbe to add an option to dump the hex mantissa, exponent & sign. This should\nbe low-cost and an exact representation of the machine version of the number.\n\nThe other issues, like what is sent to psql & via interfaces like odbc\n(currently text) should be application/DBA based and setable on a\nper-attribute basis. eg. some applications want 1.0000 because the data\ncame from a piece of hardware with a know error, and 1.0000 means 1.0000+/-\n0.00005 etc. 
Maybe this is just an argument for a new 'number with error'\ntype...\n\n\n----------------------------------------------------------------\nPhilip Warner                    |     __---_____\nAlbatross Consulting Pty. Ltd.   |----/       -  \\\n(A.B.N. 75 008 659 498)          |          /(@)   ______---_\nTel: (+61) 0500 83 82 81         |                 _________  \\\nFax: (+61) 0500 83 82 82         |                 ___________ |\nHttp://www.rhyme.com.au          |                /           \\|\n                                 |    --________--\nPGP key available upon request,  |  /\nand from pgp5.ai.mit.edu:11371   |/\n", "msg_date": "Wed, 21 Feb 2001 21:13:23 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: Re: floating point representation" }, { "msg_contents": "On Wed, 21 Feb 2001, you wrote:\n> At 10:19 21/02/01 +0100, Robert Schrem wrote:\n> >The advantage would be that we only generate as much ASCII data\n> >as absolutely necessary to rebuild the original data exactly.\n> >At least this is what I would expect from pg_dump.\n> \n> pg_dump is only one side of the problem, but the simplest solution might\n> be to add an option to dump the hex mantissa, exponent & sign. This should\n> be low-cost and an exact representation of the machine version of the number.\n\nThe hex dumps should be done in a machine independent way - I think\nthat's what you meant when stating mantissa, exponent & sign separately,\nright? I think this would be a good solution...\n\n> The other issues, like what is sent to psql & via interfaces like odbc\n> (currently text) should be application/DBA based and settable on a\n> per-attribute basis. \n\nAre you thinking of an additional tag in a CREATE TABLE instruction like\n\nCREATE TABLE temperature (\n  id id,\n  messure_time timestamp default now() formatted as \"hhmmmdd\",\n  value float formatted as \"%3.2f\"\n);\n\nor maybe\n\nCREATE TABLE temperature (\n  id id,\n  messure_time(\"hhmmmdd\") timestamp default now(),\n  value(\"%3.2f\") float\n);\n\nor is there something in SQL99 ?\n\n> eg. 
some applications want 1.0000 because the data\n> came from a piece of hardware with a known error, and 1.0000 means 1.0000+/-\n> 0.00005 etc. Maybe this is just an argument for a new 'number with error'\n> type...\n\nI think a float value in a database column has no known error \nrange and therefore we should not care about the 'physical' error \nof a value in this context. Just think of a computed column in a \nVIEW - how can we know for sure how precise such a result is if\nwe don't have any additional information about the measurement \nerrors of all operands (or constants) involved.\n\nIf you would introduce a new type - 'number with error' - this \nwould be totally different and a big contribution to the solution \nof this. Then you can also handle errors like 1.0000+/-0.00002 \nprecisely - which you can't by only formatting 1.0000.\n\nrobert schrem\n\n\n\n\n\n\n\n", "msg_date": "Wed, 21 Feb 2001 12:34:04 +0100", "msg_from": "Robert Schrem <Robert.Schrem@WiredMinds.de>", "msg_from_op": true, "msg_subject": "Re: Re: floating point representation" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> The other issues, like what is sent to psql & via interfaces like odbc\n> (currently text) should be application/DBA based and settable on a\n> per-attribute basis. eg. some applications want 1.0000 because the data\n> came from a piece of hardware with a known error, and 1.0000 means 1.0000+/-\n> 0.00005 etc. Maybe this is just an argument for a new 'number with error'\n> type...\n\nFWIW, there is a number-with-error type in contrib/seg ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 10:19:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: floating point representation " } ]
[ { "msg_contents": "Is there any way in psql to connect to a database and reduce the run\npriority of the child thread it kicks off ?\ni.e. equivalent of 'nice' on the thread?\n\n From first looks at the code, it seems to fork off the process and there is\na pid that can be niced.\nIf an extra run level parameter is passed in to the PQExec interface\n(defaulted for compatibility with older code), would it work?\n\nThis assumes that there isn't already a mechanism to reduce the priority of\nspecific queries.\n\nWhat I am looking for is a postgres system that runs 100 users or so at\n'full speed', and major day long queries at a 'when idle' priority.\n\n\nCheers,\nChris\n\n", "msg_date": "Wed, 21 Feb 2001 15:34:20 -0000", "msg_from": "Chris Storah <cstorah@emis-support.demon.co.uk>", "msg_from_op": true, "msg_subject": "low priority postmaster threads?" }, { "msg_contents": "Chris Storah <cstorah@emis-support.demon.co.uk> writes:\n> Is there any way in psql to connect to a database and reduce the run\n> priority of the child thread it kicks off ?\n> i.e. equivalent of 'nice' on the thread?\n\nNot at the moment, though it'd be a fairly trivial hack on postgres.c\nto add a \"-nice n\" backend switch, which you could then specify at\nconnection time via PGOPTIONS.\n\n> What I am looking for is a postgres system that runs 100 users or so at\n> 'full speed', and major day long queries at a 'when idle' priority.\n\nThe trouble here is that CPU nice doesn't (on most platforms) change the\nbehavior of the I/O scheduler, so this would only be of use to the\nextent that your queries are CPU bound and not I/O bound.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 16:41:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: low priority postmaster threads? 
" }, { "msg_contents": "I wrote:\n> Chris Storah <cstorah@emis-support.demon.co.uk> writes:\n>> What I am looking for is a postgres system that runs 100 users or so at\n>> 'full speed', and major day long queries at a 'when idle' priority.\n\n> The trouble here is that CPU nice doesn't (on most platforms) change the\n> behavior of the I/O scheduler, so this would only be of use to the\n> extent that your queries are CPU bound and not I/O bound.\n\nNow that I think twice, there's an even more severe problem with trying\nto nice() down an individual backend, namely priority inversion.\n\nWhat happens when the low-priority process holds some lock or other,\nand then a higher-priority process comes along and wants the lock?\nThe high-priority process has to wait, that's what. But there's no\nmechanism to raise the priority of the lower-priority lock holder, which\nmeans that the high-priority process is now effectively lowered to the\nlower priority; it may have to wait quite a long time, if there are\nother high-priority processes sucking CPU away from the low-priority\nguy.\n\nIn short, forget about nice'ing an individual backend; you probably\nwon't like the results. Sorry.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 18:10:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: low priority postmaster threads? " } ]
[ { "msg_contents": "I've looked through my archives of pgsql-general and pgsql-hackers and\nhaven't seen this, but I do tend to flush the deleted messages\noccasionally. I'm trying to get a build off the current CVS tree, but\nmy working build is from Sunday evening, so I feel moderately current.\n\nTwo builds of 7.1beta4 from the CVS tree. The one on my production\nmachine was built January 29, the one on my development machine was\nbuilt from sources gotten Sunday evening.\n\nWhen I do this:\n\n $ /usr/local/pgsql/bin/createdb test\n $ /usr/local/pgsql/bin/psql test \n test=# create table abc (id serial, stuff text);\n test=# insert into abc (stuff) values ('xyz');\n test=# insert into abc (stuff) values ('xyz');\n test=# insert into abc (stuff) values ('xyz');\n test=# insert into abc (stuff) values ('qrs');\n test=# insert into abc (stuff) values ('qrs');\n test=# insert into abc (stuff) values ('qrs');\n test=# insert into abc (stuff) values ('qrs');\n test=# insert into abc (stuff) values ('qrs');\n test=# select count(id) from abc;\n\nOn my production machine (PosgreSQL built from CVS on January 29) I\nget the expected result:\n\n count \n -------\n 8\n (1 row)\n\nOn my development machine (Built from CVS late Sunday, February 18), I\nget:\n\n test=# select count(id) from abc;\n ERROR: ExecEvalAggref: no aggregates in this expression context\n test=# \n\nApologies if this has gone by already, I'm in the process of trying to\nget an absolutely current CVS update to build and if anyone's got\nsuggestions on where to start before I dive into the PostgreSQL code\nwhole hog for the first time I'd sure appreciate it.\n\nDon't think it matters, but just in case, development box is:\n\n Linux wynand 2.2.17 #7 Wed Nov 8 09:47:05 PST 2000 i586 unknown\n\nProduction server, and the machine that the production server binaries\nwere built on, both of which work:\n\n Linux mail 2.2.12 #6 Tue Jan 18 17:49:47 PST 2000 i586 unknown\n Linux francon 2.2.18 #2 SMP Sun Jan 28 
20:10:18 PST 2001 i686 unknown\n\nAll using libc-2.1.3.so\n\nDan\n", "msg_date": "Wed, 21 Feb 2001 15:02:08 -0800 (PST)", "msg_from": "Dan Lyke <danlyke@flutterby.com>", "msg_from_op": true, "msg_subject": "Bug: COUNT() and ExecEvalAggref error" }, { "msg_contents": "Dan Lyke <danlyke@flutterby.com> writes:\n> On my development machine (Built from CVS late Sunday, February 18), I\n> get:\n\n> test=# select count(id) from abc;\n> ERROR: ExecEvalAggref: no aggregates in this expression context\n\nTry make distclean, configure, make all. Someone else reported this\nsame symptom recently due to having inconsistent object files.\n\nIn general, anytime you do \"cvs update\", the safest approach is a make\ndistclean and full rebuild. If you have not enabled dependency tracking\nthen that's the *only* procedure that will work reliably. If you do\nuse --enable-depend then you may be able to get away with partial\nrebuilds, but I don't trust that feature myself. I figure I can do an\nawful lot of automatic rebuilds in the time it will cost me to track\ndown even one \"bug\" caused by inconsistent files.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 21 Feb 2001 18:34:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug: COUNT() and ExecEvalAggref error " } ]
[ { "msg_contents": "Oops.\n\nI rechecked the start up script, and the 7.0.3 doesn't have fsync off or\nwhatever. Dunno why I thought it was on (heh maybe because it was a lot\nfaster than 6.5.3!).\n\nHmm, this means 7.0.3 is quite fast...\n\nCheerio,\nLink.\n\n\n", "msg_date": "Thu, 22 Feb 2001 15:26:43 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": true, "msg_subject": "RE: Re: [ADMIN] v7.1b4 bad performance" }, { "msg_contents": "Lincoln Yeoh wrote:\n> \n> Oops.\n> \n> I rechecked the start up script, and the 7.0.3 doesn't have fsync off or\n> whatever. Dunno why I thought it was on (heh maybe because it was a lot\n> faster than 6.5.3!).\n> \n> Hmm, this means 7.0.3 is quite fast...\n> \n\nYour app seems to have many rollbacks.\nYes rollback of 7.0 is a lot faster than 6.5 even\nwhen fsync on.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Fri, 23 Feb 2001 08:27:06 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: [ADMIN] v7.1b4 bad performance" } ]
[ { "msg_contents": "hi,\ni have functin which did compile on 7.0.3 and 7.1beta1, and now it doesn't.\nit includes were:\n\n#include <stdio.h>\n#include <string.h>\n#include <ctype.h>\n#include <postgres.h>\n\nsince in 7.1beta4 there is no postgres.h i changed this to:\n\n#include <stdio.h>\n#include <string.h>\n#include <ctype.h>\n#include <c.h>\n\nall i need this for is to have type declarations, and postgresql versions of\nmalloc, realloc and free - i.e. i dont use spi.\n\nwhen compiling i get this errors:\ngcc -O2 -Wall -ansi -I \"/home/users/pgdba/work/include/postgresql/\" -c dfti.c\n-fpic\nIn file included from dfti.c:4:\n/home/users/pgdba/work/include/postgresql/c.h:312: parse error before `regproc'\n/home/users/pgdba/work/include/postgresql/c.h:312: warning: type defaults to `int' in declaration of `regproc'\n/home/users/pgdba/work/include/postgresql/c.h:312: warning: data definition has no type or storage class\n/home/users/pgdba/work/include/postgresql/c.h:313: parse error before `RegProcedure'\n/home/users/pgdba/work/include/postgresql/c.h:313: warning: type defaults to `int' in declaration of `RegProcedure'\n/home/users/pgdba/work/include/postgresql/c.h:313: warning: data definition has no type or storage class\n/home/users/pgdba/work/include/postgresql/c.h:364: parse error before `oidvector'\n/home/users/pgdba/work/include/postgresql/c.h:364: warning: type defaults to `int' in declaration of `oidvector'\n/home/users/pgdba/work/include/postgresql/c.h:364: warning: data definition has no type or storage class\n/home/users/pgdba/work/include/postgresql/c.h:373: `NAMEDATALEN' undeclared here (not in a function)\ndfti.c: In function `empty_text':\ndfti.c:16: warning: implicit declaration of function `palloc'\ndfti.c: In function `dfti_prepare':\ndfti.c:37: warning: implicit declaration of function `elog'\ndfti.c:37: `ERROR' undeclared (first use in this function)\ndfti.c:37: (Each undeclared identifier is reported only once\ndfti.c:37: for each function it 
appears in.)\ndfti.c:84: warning: implicit declaration of function `repalloc'\ndfti.c:86: warning: implicit declaration of function `pfree'\n\nmy knowledge of c is extremely limited, so i can't work on those errors. \nmy postgresql is built from a cvs snapshot taken 21st of february 11:23.\n\ncan anyone help me with this?\n\ndepesz\n\n-- \nhubert depesz lubaczewski http://www.depesz.pl/\n------------------------------------------------------------------------\n najwspanialszą rzeczą jaką dało nam nowoczesne społeczeństwo,\n jest niesamowita wręcz łatwość unikania kontaktów z nim ...\n", "msg_date": "Thu, 22 Feb 2001 08:28:46 +0100", "msg_from": "hubert depesz lubaczewski <depesz@depesz.pl>", "msg_from_op": true, "msg_subject": "problem while compiling user c functions in 7.1beta4" }, { "msg_contents": "On Thu, 22 Feb 2001 20:28, hubert depesz lubaczewski wrote:\n\n> since in 7.1beta4 there is no postgres.h i changed this to:\n\nI did a cvsup update about 12 hours ago and look:-\n\n22:05:23 chris@berty:/usr/src/cvs/pgsql $ find . -name postgres.h\n./src/include/postgres.h\n22:16:22 chris@berty:/usr/src/cvs/pgsql $ \n\n-- \nSincerely etc.,\n\n NAME Christopher Sawtell\n CELL PHONE 021 257 4451\n ICQ UIN 45863470\n EMAIL csawtell @ xtra . co . nz\n CNOTES ftp://ftp.funet.fi/pub/languages/C/tutorials/sawtell_C.tar.gz\n\n -->> Please refrain from using HTML or WORD attachments in e-mails to me \n<<--\n\n", "msg_date": "Thu, 22 Feb 2001 22:21:12 +1300", "msg_from": "Christopher Sawtell <csawtell@xtra.co.nz>", "msg_from_op": false, "msg_subject": "Re: [GENERAL] problem while compiling user c functions in 7.1beta4" }, { "msg_contents": "On Thu, Feb 22, 2001 at 10:21:12PM +1300, Christopher Sawtell wrote:\n> I did a cvsup update about 12 hours ago and look:-\n> 22:05:23 chris@berty:/usr/src/cvs/pgsql $ find . 
-name postgres.h\n> ./src/include/postgres.h\n> 22:16:22 chris@berty:/usr/src/cvs/pgsql $ \n\nsorry. my fault. i was wrong because the files were not installed in the working\ndirectory. strange. an error in the makefiles?\n\ndepesz\n\n-- \nhubert depesz lubaczewski http://www.depesz.pl/\n------------------------------------------------------------------------\n najwspanialszą rzeczą jaką dało nam nowoczesne społeczeństwo,\n jest niesamowita wręcz łatwość unikania kontaktów z nim ...\n", "msg_date": "Thu, 22 Feb 2001 10:39:19 +0100", "msg_from": "hubert depesz lubaczewski <depesz@depesz.pl>", "msg_from_op": true, "msg_subject": "Re: problem while compiling user c functions in 7.1beta4" }, { "msg_contents": "On Thu, 22 Feb 2001 22:39, hubert depesz lubaczewski wrote:\n> On Thu, Feb 22, 2001 at 10:21:12PM +1300, Christopher Sawtell wrote:\n> > I did a cvsup update about 12 hours ago and look:-\n> > 22:05:23 chris@berty:/usr/src/cvs/pgsql $ find . -name postgres.h\n> > ./src/include/postgres.h\n> > 22:16:22 chris@berty:/usr/src/cvs/pgsql $\n>\n> sorry. my fault. i was wrong because the files were not installed in the\n> working directory. strange. an error in the makefiles?\n\nVery strange indeed.\n\nI have found that using cvsup is a very reliable way to keep the code\nin order. In my experience postgresql is of ultra-superior quality and \neverything just makes \"out of the box\".\n\n-- \nSincerely etc.,\n\n NAME Christopher Sawtell\n CELL PHONE 021 257 4451\n ICQ UIN 45863470\n EMAIL csawtell @ xtra . co . 
nz\n CNOTES ftp://ftp.funet.fi/pub/languages/C/tutorials/sawtell_C.tar.gz\n\n -->> Please refrain from using HTML or WORD attachments in e-mails to me \n<<--\n\n", "msg_date": "Thu, 22 Feb 2001 23:24:48 +1300", "msg_from": "Christopher Sawtell <csawtell@xtra.co.nz>", "msg_from_op": false, "msg_subject": "Re: problem while compiling user c functions in 7.1beta4" }, { "msg_contents": "On Thu, Feb 22, 2001 at 11:24:48PM +1300, Christopher Sawtell wrote:\n> Very strange indeed.\n> I have found that using cvsup is a very reliable way to keep the code\n> in order. In my experience postgresql is of ultra-superior quality and \n> everything just makes \"out of the box\".\n\ni had the same problems in past too. i'm usings cvs -z9 update, and ...\neverything builds great. works out of the box, just some files in include\ndirectory are missing.\n\ndepesz\n\n-- \nhubert depesz lubaczewski http://www.depesz.pl/\n------------------------------------------------------------------------\n najwspanialszą rzeczą jaką dało nam nowoczesne społeczeństwo,\n jest niesamowita wręcz łatwość unikania kontaktów z nim ...\n", "msg_date": "Thu, 22 Feb 2001 12:44:38 +0100", "msg_from": "hubert depesz lubaczewski <depesz@depesz.pl>", "msg_from_op": true, "msg_subject": "Re: problem while compiling user c functions in 7.1beta4" }, { "msg_contents": "hubert depesz lubaczewski <depesz@depesz.pl> writes:\n> sorry. my fault. i was wrong because the files were not installed in working\n> directory. strange. error in makefile's?\n\nNo, an extremely deliberate change, which was discussed at length in the\nmailing lists. The default install now installs only client-side header\nfiles, no server-side files. 
If you want to compile server-side code,\neither point your -I path at pgsql/src/include or do \"make\ninstall-all-headers\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 22 Feb 2001 22:50:48 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: problem while compiling user c functions in 7.1beta4 " } ]
[ { "msg_contents": "Tom Lane Wrote:\n> The trouble here is that CPU nice doesn't (on most platforms) change the\n> behavior of the I/O scheduler, so this would only be of use to the\n> extent that your queries are CPU bound and not I/O bound.\n\nAssuming there is a major processor hit, and the backend has a UW-SCSI RAID\nbox with enough I/O capability...\n\n\n>What happens when the low-priority process holds some lock or other,\n>and then a higher-priority process comes along and wants the lock?\n\nIf the query was a select only, would the locking problem still apply?\n(The long queries in this case are in the form of 'select * from [all tables\njoined together] where x')\n\nI will make a couple of changes and test it to see if there are any\nperformance gains in particular cases.\nThe other option is to add another processor :)\n\nChris\n\n", "msg_date": "Thu, 22 Feb 2001 09:25:42 -0000", "msg_from": "Chris Storah <cstorah@emis-support.demon.co.uk>", "msg_from_op": true, "msg_subject": "RE: low priority postmaster threads? " } ]
[ { "msg_contents": "Here's my dillema:\n\nWe are currently building a site with multiple machines to run our website \nand client sites as well. I would like to run the postgres binary on 2 \nmachines concurrently to assist in load balancing. $PGDATA will be kept on \na RAID 1+0. I need to know where postgres does it's row & table locking. \n If it's done in memory, I've got some problems! If it's done at or near \nthe $PGDATA directory (which sounds like bad performance decision) that \nwould be piece of cake. Any advice or ideas on this issue would be \nGREATLY appreciated.\n\nThanks in advance!!\n\nLarry\n\n\n-- \nLawrence M. Kagan\nAllied Infosystems, Inc.\nE-mail: larry@alliedinfosystems.com\nWeb: www.alliedinfo.com\nPhone: (954) 647-4600\nToll-free: (877) WEB-5888\n", "msg_date": "Thu, 22 Feb 2001 10:43:58 +0000", "msg_from": "\"Lawrence M. Kagan\" <larry@alliedinfosystems.com>", "msg_from_op": true, "msg_subject": "Where is locking done?" }, { "msg_contents": "Hi,\n\ni've tried to fetch a TIMESTAMP column from the database into a Java\nTimestamp instance using the ResultSet.getTimestamp(int index) method.\nWhenever i call this method i get some error message:\n\n User.findUser: Bad Timestamp Format at 19 in 2001-03-19 22:05:50.45+01\n Bad Timestamp Format at 19 in 2001-03-19 22:05:50.45+01\n at\norg.postgresql.jdbc2.ResultSet.getTimestamp(ResultSet.java:447)\n at de.reswi.portal.User_DO.bind(User_DO.java:85)\n\nIf i try to bind this column to a java.sql.Date instance using\nResultSet.getDate(int index) everything works fine but i loose the precision\ni need.\n\nBTW: it's possible to write Timestamp type objects into the column. The\nMethod ResultSet.setTimestamp(int index, Timestamp stamp) works fine.\n\nCiao,\n - ralf\n\n\n----- Original Message -----\nFrom: \"Lawrence M. 
Kagan\" <larry@alliedinfosystems.com>\nTo: <pgsql-hackers@postgresql.org>\nSent: Thursday, February 22, 2001 11:43 AM\nSubject: [HACKERS] Where is locking done?\n\n\n> Here's my dillema:\n>\n> We are currently building a site with multiple machines to run our website\n> and client sites as well. I would like to run the postgres binary on 2\n> machines concurrently to assist in load balancing. $PGDATA will be kept\non\n> a RAID 1+0. I need to know where postgres does it's row & table\nlocking.\n> If it's done in memory, I've got some problems! If it's done at or near\n> the $PGDATA directory (which sounds like bad performance decision) that\n> would be piece of cake. Any advice or ideas on this issue would be\n> GREATLY appreciated.\n>\n> Thanks in advance!!\n>\n> Larry\n>\n>\n> --\n> Lawrence M. Kagan\n> Allied Infosystems, Inc.\n> E-mail: larry@alliedinfosystems.com\n> Web: www.alliedinfo.com\n> Phone: (954) 647-4600\n> Toll-free: (877) WEB-5888\n>\n\n", "msg_date": "Mon, 12 Mar 2001 22:12:58 +0100", "msg_from": "\"Ralf Edmund Stranzenbach\" <ralf@reswi.de>", "msg_from_op": false, "msg_subject": "JDBC handling of a Timestamp-Column" } ]
[ { "msg_contents": "Hi all:\n The attachement is the Chinese (GB) patch for PgAccess, don't know\nif it's correct to post here.\nIt's simple to do the translation, And I've test in 7.0.2 & current CVS,\nseems pretty good.\nIf anyone want this little thing, I'll very happy.\nuse it is very simple, just gunzip it and copy to\n$PGDIR/share/pgaccess/lib/languages/ for current CVS version,\nand $PGDIR/pgaccess/lib/languages/ for 7.0*\nBTW: I havn't got the tools to translate it to BIG5 encoding, is there\nanybody to to it?\n\nRegards\n\nLaser", "msg_date": "Thu, 22 Feb 2001 21:12:00 +0800", "msg_from": "\"He Weiping(Laser Henry)\" <laser@zhengmai.com.cn>", "msg_from_op": true, "msg_subject": "Chinese patch for Pgaccess" }, { "msg_contents": "> Hi all:\n> The attachement is the Chinese (GB) patch for PgAccess, don't know\n> if it's correct to post here.\n> It's simple to do the translation, And I've test in 7.0.2 & current CVS,\n> seems pretty good.\n> If anyone want this little thing, I'll very happy.\n> use it is very simple, just gunzip it and copy to\n> $PGDIR/share/pgaccess/lib/languages/ for current CVS version,\n> and $PGDIR/pgaccess/lib/languages/ for 7.0*\n> BTW: I havn't got the tools to translate it to BIG5 encoding, is there\n> anybody to to it?\n\nOK, I have added this to the other pgaccess language files.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 22 Feb 2001 10:38:32 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Chinese patch for Pgaccess" }, { "msg_contents": "> > Hi all:\n> > The attachement is the Chinese (GB) patch for PgAccess, don't know\n> > if it's correct to post here.\n> > It's simple to do the translation, And I've test in 7.0.2 & current CVS,\n> > seems pretty good.\n> > If anyone want this little thing, I'll very happy.\n> > use it is very simple, just gunzip it and copy to\n> > $PGDIR/share/pgaccess/lib/languages/ for current CVS version,\n> > and $PGDIR/pgaccess/lib/languages/ for 7.0*\n> > BTW: I havn't got the tools to translate it to BIG5 encoding, is there\n> > anybody to to it?\n> \n> OK, I have added this to the other pgaccess language files.\n\nI think the file name (src/bin/pgaccess/lib/languages/chinese) is not\nappropriate. There are several encodings for Chinese including\nGB(EUC-CN), Big5, EUC-TW. At least we should be able to distinguish\nthem. 
What about \"chinese(GB)\" or whatever?\n--\nTatsuo Ishii\n\n", "msg_date": "Fri, 23 Feb 2001 10:02:11 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: Chinese patch for Pgaccess" }, { "msg_contents": "> > > Hi all:\n> > > The attachement is the Chinese (GB) patch for PgAccess, don't know\n> > > if it's correct to post here.\n> > > It's simple to do the translation, And I've test in 7.0.2 & current CVS,\n> > > seems pretty good.\n> > > If anyone want this little thing, I'll very happy.\n> > > use it is very simple, just gunzip it and copy to\n> > > $PGDIR/share/pgaccess/lib/languages/ for current CVS version,\n> > > and $PGDIR/pgaccess/lib/languages/ for 7.0*\n> > > BTW: I havn't got the tools to translate it to BIG5 encoding, is there\n> > > anybody to to it?\n> > \n> > OK, I have added this to the other pgaccess language files.\n> \n> I think the file name (src/bin/pgaccess/lib/languages/chinese) is not\n> appropriate. There are several encodings for Chinese including\n> GB(EUC-CN), Big5, EUC-TW. At least we should be able to distinguish\n> them. What about \"chinese(GB)\" or whatever?\n\nRenamed to chinese-gb.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 22 Feb 2001 20:21:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] Re: Chinese patch for Pgaccess" }, { "msg_contents": ">\n> > appropriate. There are several encodings for Chinese including\n> > GB(EUC-CN), Big5, EUC-TW. At least we should be able to distinguish\n> > them. 
What about \"chinese(GB)\" or whatever?\n>\n> Renamed to chinese-gb.\n>\n\nI think chinese-gb is ok, thanks!\n\nRegards\n\nLaser\n\n", "msg_date": "Fri, 23 Feb 2001 11:14:12 +0800", "msg_from": "\"He Weiping(Laser Henry)\" <laser@zhengmai.com.cn>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Re: Chinese patch for Pgaccess" }, { "msg_contents": "> >\n> > > appropriate. There are several encodings for Chinese including\n> > > GB(EUC-CN), Big5, EUC-TW. At least we should be able to distinguish\n> > > them. What about \"chinese(GB)\" or whatever?\n> >\n> > Renamed to chinese-gb.\n> >\n> \n> I think chinese-gb is ok, thanks!\n\n\nI ended up using chinese_gb. The underscore was more consistent.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 22 Feb 2001 23:04:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] Re: [INTERFACES] Re: Chinese patch for Pgaccess" } ]
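On the open question of translating the GB-encoded language file to Big5: one workable approach (a sketch using Python's standard codecs, not whatever tool the posters had in mind) is to round-trip each string through Unicode.

```python
def gb_to_big5(data: bytes) -> bytes:
    # Decode EUC-CN (GB2312) bytes to Unicode, then re-encode as Big5.
    # Characters absent from Big5 would raise UnicodeEncodeError here.
    return data.decode("gb2312").encode("big5")

# The two characters meaning "Chinese (language)", encoded as GB2312:
gb_bytes = "\u4e2d\u6587".encode("gb2312")
big5_bytes = gb_to_big5(gb_bytes)
```

Applied line by line to the pgaccess language file, this would produce a chinese_big5 counterpart to the chinese_gb file added above.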
[ { "msg_contents": "> It may be that WAL has changed the rollback\n> time-characteristics to worse than pre-wal ?\n\nNothing changed ... yet. And in future rollbacks\nof read-only transactions will be as fast as now,\nanyway.\n\n> > So my guess is that the 7.1 updates (with default\n> > fsync) are significantly slower than 7.0.3 fsync=off\n> > now.\n\nDo you update tables with foreign keys?\nDid you run tests in multi-user or single-user\nenvironment?\n\nVadim\n\n-----------------------------------------------\nFREE! The World's Best Email Address @email.com\nReserve your name now at http://www.email.com\n\n\n", "msg_date": "Thu, 22 Feb 2001 09:40:09 -0500 (EST)", "msg_from": "Vadim Mikheev <vadim4o@email.com>", "msg_from_op": true, "msg_subject": "Re: RE: Re: [ADMIN] v7.1b4 bad performance" }, { "msg_contents": "Vadim Mikheev wrote:\n> \n> > It may be that WAL has changed the rollback\n> > time-characteristics to worse than pre-wal ?\n> \n> Nothing changed ... yet. And in future rollbacks\n> of read-only transactions will be as fast as now,\n> anyway.\n\nWhat about rollbacks of a bunch uf inserts/updates/deletes?\n\nI remember a scenario where an empty table was used by several \nbackends for gathering report data, and when the report is \ndone they will rollback to keep the table empty.\n\nShould this kind of usage be replaced in the future by \nhaving backend id as a key and then doing delete by that \nkey in the end ?\n\n--------------\nHannu\n", "msg_date": "Thu, 22 Feb 2001 19:32:25 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: RE: Re: [ADMIN] v7.1b4 bad performance" }, { "msg_contents": "At 09:40 AM 22-02-2001 -0500, Vadim Mikheev wrote:\n>> It may be that WAL has changed the rollback\n>> time-characteristics to worse than pre-wal ?\n>\n>Nothing changed ... yet. 
And in future rollbacks\n>of read-only transactions will be as fast as now,\n>anyway.\n\nThe rollbacks are ok for me at least - even the 6.5.3 rollbacks are ok.\n\n>> > So my guess is that the 7.1 updates (with default\n>> > fsync) are significantly slower than 7.0.3 fsync=off\n>> > now.\n>\n>Do you update tables with foreign keys?\n>Did you run tests in multi-user or single-user\n>environment?\n\nNo foreign keys. Multiuser- I had apachebench do a concurrency of two.\ne.g.\nab -n 100 -c 2 \"<url>\"\n\n7.1beta4 snapshot was giving about 22 hits per sec max. 7.0.3 was doing\nabout 60 hits per sec max. That's a significant difference in speed to me.\n\nThing is, it was only updating _one_ row in a table with only one row (and\ncommitting). Everything else was selects.\n\nThe typical sequence was:\n\nrollback;\nbegin;\nselect (session where supplied id and cookie matches and not timed out)\nupdate session and set lastactive to 'now'\ncommit;\nbegin;\nselect (bunch of stuff);\n.. (selects but no updates or inserts )\nselect (bunch of stuff);\nrollback;\n\nAny reason for the gradual slow down in both 7.0.3 (e.g. 60 hits/sec to 30)\nand 7.1beta4 (e.g. 22 hits/sec to 15)? The session table expands due to the\nconcurrency? \n\nShould I switch to \"select session .... for update\"? Would that reduce the\ngradual slowdown? \n\nThe reason why I do so many rollbacks is because that appears to be the\nrecommended way to begin a new transaction using perl DBI - not supposed to\nissue an explicit BEGIN. \n\nI do the first rollback/begin so I don't get stale transaction timestamps\nfrom the previous \"last rollback\".\n\nI do the last rollback/begin in order to free up any locks, before waiting\nfor an undeterminable time for the next connection.\n\nCheerio,\nLink.\n\n", "msg_date": "Sat, 24 Feb 2001 10:22:08 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: RE: Re: [ADMIN] v7.1b4 bad performance" } ]
[ { "msg_contents": "Hi,\n\nI happenned to come across the following in the\ndocumentation on WAL implementation in v7.1 -\n\n***************************************************** \nBefore WAL, any crash during writing could result in: \n\n 1.index tuples pointing to non-existent table rows\n\n 2.index tuples lost in split operations\n\n 3.totally corrupted table or index page content,\nbecause of partially written data pages\n*****************************************************\n\nDoes anybody know what kind of a problem this refers\nto ? Does this mean that incomplete transactions would\nbe stored or does this mean that the entire table\nmight get corrupted and unusable, implying loss of all\ndata ?\n\n( I am using postgresql v7.0.x , and would ideally\nlike to migrate to v7.1 after a few months ... unless\nit is critical enough to do so earlier. )\n\nThanks,\nRini\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Auctions - Buy the things you want at great prices! http://auctions.yahoo.com/\n", "msg_date": "Thu, 22 Feb 2001 06:44:01 -0800 (PST)", "msg_from": "Rini Dutta <rinid@rocketmail.com>", "msg_from_op": true, "msg_subject": "how critical is WAL ?" } ]
[ { "msg_contents": "Here is an article about GPL and GPL version 3.0.\n\n http://icd.pennnet.com/Articles/Article_Display.cfm?Section=Articles&SubSection=Display&ARTICLE_ID=92350&VERSION_NUM=1\n\nThe interesting thing is that Stallman says:\n\n \"Our position is that it makes no difference whether programs are linked\n statically or dynamically,\" explains Stallman. \"Either one makes a\n combined program.\n\nThis would seem to imply that our dynamic linking of libreadline in\nPostgreSQL backend binaries makes the distribution of backend binaries\nfall under the GPL. (Of course, we can use *BSD libedit now.)\n\nLet me add I don't agree with this, and find the whole GPL\nheavy-handedness very distasteful.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 22 Feb 2001 10:50:17 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "GPL, readline, and static/dynamic linking" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n\n> Here is an article about GPL and GPL version 3.0.\n> \n> http://icd.pennnet.com/Articles/Article_Display.cfm?Section=Articles&SubSection=Display&ARTICLE_ID=92350&VERSION_NUM=1\n> \n> The interesting thing is that Stallman says:\n> \n> \"Our position is that it makes no difference whether programs are linked\n> statically or dynamically,\" explains Stallman. \"Either one makes a\n> combined program.\n> \n> This would seem to imply that our dynamic linking of libreadline in\n> PostgreSQL backend binaries makes the distribution of backend binaries\n> fall under the GPL.\n\nThis was discussed extensively earlier. Linking dynamically or\nstatically doesn't make a difference in the case of a library, but as\nlong as readline is an optional feature for the user it's not a\nproblem. 
\n\n\n-- \nTrond Eivind Glomsrød\nRed Hat, Inc.\n", "msg_date": "22 Feb 2001 11:57:48 -0500", "msg_from": "teg@redhat.com (Trond Eivind =?iso-8859-1?q?Glomsr=F8d?=)", "msg_from_op": false, "msg_subject": "Re: GPL, readline, and static/dynamic linking" }, { "msg_contents": "Mensaje citado por: Trond Eivind Glomsrød <teg@redhat.com>:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> \n> > Here is an article about GPL and GPL version 3.0.\n> > \n> > \n>\nhttp://icd.pennnet.com/Articles/Article_Display.cfm?Section=Articles&SubSection=Display&ARTICLE_ID=92350&VERSION_NUM=1\n> > \n> > The interesting thing is that Stallman says:\n> > \n> > \"Our position is that it makes no difference whether programs are\n> linked\n> > statically or dynamically,\" explains Stallman. \"Either one makes a\n> > combined program.\n> > \n> > This would seem to imply that our dynamic linking of libreadline in\n> > PostgreSQL backend binaries makes the distribution of backend\n> binaries\n> > fall under the GPL.\n> \n> This was discussed extensively earlier. Linking dynamically or\n> statically doesn't make a difference in the case of a library, but as\n> long as readline is an optional feature for the user it's not a\n> problem. \n\nI agree with Trond on this. It's like the problem that PHP had with bc until it\ngot LGPLed. All they did was say you could compile PHP with it, but you had to\ndownloaded by ourself.\n\nSaludos... :-)\n\n\n\n\nSystem Administration: It's a dirty job,\nbut someone told I had to do it.\n-----------------------------------------------------------------\nMartín Marqués email: martin@math.unl.edu.ar\nSanta Fe - Argentina http://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Thu, 22 Feb 2001 17:45:43 -0300 (ART)", "msg_from": "\"Martin A. 
Marques\" <martin@math.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: GPL, readline, and static/dynamic linking" }, { "msg_contents": "On Thu, Feb 22, 2001 at 10:50:17AM -0500, Bruce Momjian wrote:\n> Let me add I don't agree with this, and find the whole GPL\n> heavy-handedness very distasteful.\n\nPlease, not this again. Is there a piss-and-moan-about-the-GPL \nschedule posted somewhere? \n\nEither PG is in compliance, or it's not. Only libreadline's copyright \nholder has the right to complain if it's not. There is no need to \nspeculate; if we care about compliance, we need only ask the owner. \nIf the owner says we're violating his license, then we can comply, or\nnegotiate, or stop using the code. The GPL is no different from any \nother license, that way.\n\nComplaining about the terms on something you got for nothing has to be \nthe biggest waste of time and attention I've seen on this list.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Thu, 22 Feb 2001 13:03:27 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: GPL, readline, and static/dynamic linking" }, { "msg_contents": "> > This was discussed extensively earlier. Linking dynamically or\n> > statically doesn't make a difference in the case of a library, but as\n> > long as readline is an optional feature for the user it's not a\n> > problem. \n> \n> I agree with Trond on this. It's like the problem that PHP had with bc until it\n> got LGPLed. All they did was say you could compile PHP with it, but you had to\n> downloaded by ourself.\n\nYes, we don't distribute libreadline. We just check in 'configure' for\nit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Thu, 22 Feb 2001 16:04:37 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: GPL, readline, and static/dynamic linking" }, { "msg_contents": "Mensaje citado por: Bruce Momjian <pgman@candle.pha.pa.us>:\n\n> > > This was discussed extensively earlier. Linking dynamically or\n> > > statically doesn't make a difference in the case of a library, but\n> as\n> > > long as readline is an optional feature for the user it's not a\n> > > problem. \n> > \n> > I agree with Trond on this. It's like the problem that PHP had with bc\n> until it\n> > got LGPLed. All they did was say you could compile PHP with it, but\n> you had to\n> > downloaded by ourself.\n> \n> Yes, we don't distribute libreadline. We just check in 'configure' for\n> it.\n\nIn that case, I would say that there is nothing to discuss. RMS has nothing to\nsay about this.\nThe only problem would be if Postgres would be distributed with libreadline.\n\nSaludos... :-)\n\n\nSystem Administration: It's a dirty job,\nbut someone told I had to do it.\n-----------------------------------------------------------------\nMartín Marqués email: martin@math.unl.edu.ar\nSanta Fe - Argentina http://math.unl.edu.ar/~martin/\nAdministrador de sistemas en math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Thu, 22 Feb 2001 18:36:36 -0300 (ART)", "msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: GPL, readline, and static/dynamic linking" } ]
[ { "msg_contents": "I think UP or SMP should be an option to check, perhaps just a box for the\nnumber of processors. Also something to capture the compile flags. I have\na dual Ppro, and it compiles fine unless I use the -j3 or -j4 commands,\nthen I get an error.\n\nMatt\n\n> -----Original Message-----\n> From:\tVince Vielhaber [SMTP:vev@michvhf.com]\n> Sent:\tThursday, February 22, 2001 10:57 AM\n> To:\tPete Forman\n> Cc:\tpgsql-hackers@postgresql.org\n> Subject:\tRE: [HACKERS] beta5 ...\n> \n> On Thu, 22 Feb 2001, Pete Forman wrote:\n> \n> > Vince Vielhaber writes:\n> > > On Thu, 22 Feb 2001, Christopher Kings-Lynne wrote:\n> > >\n> > > > What about adding a field where they paste the output of 'uname\n> > > > -a' on their system...?\n> > >\n> > > Got this and Justin's changes along with compiler version. Anyone\n> > > think of anything else?\n> >\n> > Architecture. IRIX, Solaris and AIX allow applications and libraries\n> > to be built 32 or 64 bit.\n> \n> Added.\n> \n> > You may also like to add a field for configure options used. Or is\n> > this just for results OOTB?\n> \n> That comes later. This part is just for identifying the system itself.\n> \n> Vince.\n> -- \n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n> Online Campground Directory http://www.camping-usa.com\n> Online Giftshop Superstore http://www.cloudninegifts.com\n> ==========================================================================\n> \n> \n", "msg_date": "Thu, 22 Feb 2001 11:36:28 -0600", "msg_from": "Matthew <matt@ctlno.com>", "msg_from_op": true, "msg_subject": "RE: beta5 ..." }, { "msg_contents": "Matthew writes:\n\n> I think UP or SMP should be an option to check, perhaps just a box for the\n> number of processors. Also something to capture the compile flags. 
I have\n> a dual Ppro, and it compiles fine unless I use the -j3 or -j4 commands,\n> then I get an error.\n\nWhich error?\n\nParallel make doesn't work when you build from a CVS tree, but it should\nwork with a distribution tarball.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Thu, 22 Feb 2001 19:54:12 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "RE: beta5 ..." } ]
[ { "msg_contents": "> I believe it was straight from CVS, perhaps it was the beta4 tarball.\n> Don't know if that counts as a distribution tarball or not. I will test\n> the 7.0.3 release, and double check what the error I'm getting if you\n> would like.\n> \n> -----Original Message-----\n> From:\tPeter Eisentraut [SMTP:peter_e@gmx.net]\n> Sent:\tThursday, February 22, 2001 12:54 PM\n> To:\tMatthew\n> Cc:\t'Vince Vielhaber'; Pete Forman; pgsql-hackers@postgresql.org\n> Subject:\tRE: [HACKERS] beta5 ...\n> \n> Matthew writes:\n> \n> > I think UP or SMP should be an option to check, perhaps just a box for\n> the\n> > number of processors. Also something to capture the compile flags. I\n> have\n> > a dual Ppro, and it compiles fine unless I use the -j3 or -j4 commands,\n> > then I get an error.\n> \n> Which error?\n> \n> Parallel make doesn't work when you build from a CVS tree, but it should\n> work with a distribution tarball.\n> \n> -- \n> Peter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n", "msg_date": "Thu, 22 Feb 2001 13:05:33 -0600", "msg_from": "Matthew <matt@ctlno.com>", "msg_from_op": true, "msg_subject": "FW: beta5 ..." } ]
[ { "msg_contents": "Appears that permissions applied to a table using the SERIAL element\naren't carried over to the sequence itself.\n\nCreate table with SERIAL, set-up permissions so table is accessible,\nattempt to use inserts. No permission to sequence :(\n\nGuess I won't be using this shortcut, between this and them being left\nbehind it makes SERIAL a pain in the ass. Atleast names for the\nsequence are usually predictable.\n\n--\nRod Taylor\n\nThere are always four sides to every story: your side, their side, the\ntruth, and what really happened.", "msg_date": "Thu, 22 Feb 2001 23:48:15 -0500", "msg_from": "\"Rod Taylor\" <rod.taylor@inquent.com>", "msg_from_op": true, "msg_subject": "Permissions on SERIAL" } ]
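The "usually predictable" name the poster relies on is `<table>_<column>_seq`, cut down to fit an identifier (NAMEDATALEN was 32 in this era, so 31 usable characters). A simplified sketch — the server's actual logic trims the table and column parts individually rather than truncating the whole string, so treat this as an approximation:

```python
NAMEDATALEN = 32  # PostgreSQL 7.x default; identifiers hold NAMEDATALEN-1 chars

def serial_sequence_name(table: str, column: str) -> str:
    # Approximate the implicit sequence name behind a SERIAL column.
    return f"{table}_{column}_seq"[: NAMEDATALEN - 1]

name = serial_sequence_name("products", "id")
```

Once the name is known, a separate GRANT must be issued on that sequence alongside the table's, since (as reported) table permissions are not carried over.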
[ { "msg_contents": "Can someone explain why LockMethodCtl is in shared memory while\nLockMethodTable is in postmaster memory context?\n\nI realize LockMethodCtl has a spinlock, so it has to be in shared\nmemory, but couldn't it all be put in shared memory?\n\nAlso, the code:\n\nLockShmemSize(int maxBackends)\n{\n int size = 0;\n\n size += MAXALIGN(sizeof(PROC_HDR)); /* ProcGlobal */\n size += MAXALIGN(maxBackends * sizeof(PROC)); /* each MyProc*/\n size += MAXALIGN(maxBackends * sizeof(LOCKMETHODCTL)); /* each\n * lockMethodTable->ctl */\n\nIs there one LOCKMETHODCTL for every backend? I thought there was only\none of them.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 01:38:39 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Lock structures" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can someone explain why LockMethodCtl is in shared memory while\n> LockMethodTable is in postmaster memory context?\n> I realize LockMethodCtl has a spinlock, so it has to be in shared\n> memory, but couldn't it all be put in shared memory?\n\nI think the original point was not to assume that the shared-memory\npointers would be the same in each backend. Right now we don't need\nthat, but I see no good reason to change the data structure.\n\n> size += MAXALIGN(maxBackends * sizeof(LOCKMETHODCTL)); /* each\n> * lockMethodTable->ctl */\n\n> Is there one LOCKMETHODCTL for every backend? 
I thought there was only\n> one of them.\n\nYou're right, that line is erroneous; it should read\n\n size += MAX_LOCK_METHODS * MAXALIGN(sizeof(LOCKMETHODCTL));\n\nNot a significant error but it should be changed for clarity ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 10:29:59 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lock structures " }, { "msg_contents": "> > Is there one LOCKMETHODCTL for every backend? I thought there was only\n> > one of them.\n> \n> You're right, that line is erroneous; it should read\n> \n> size += MAX_LOCK_METHODS * MAXALIGN(sizeof(LOCKMETHODCTL));\n> \n> Not a significant error but it should be changed for clarity ...\n\nI assume the fix should be done in 7.2, not 7.1, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 13:01:01 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Lock structures" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is there one LOCKMETHODCTL for every backend? I thought there was only\n> one of them.\n>> \n>> You're right, that line is erroneous; it should read\n>> \n>> size += MAX_LOCK_METHODS * MAXALIGN(sizeof(LOCKMETHODCTL));\n>> \n>> Not a significant error but it should be changed for clarity ...\n\n> I assume the fix should be done in 7.2, not 7.1, right?\n\nI see no reason to put it off ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 13:05:24 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lock structures " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is there one LOCKMETHODCTL for every backend? 
I thought there was only\n> > one of them.\n> >> \n> >> You're right, that line is erroneous; it should read\n> >> \n> >> size += MAX_LOCK_METHODS * MAXALIGN(sizeof(LOCKMETHODCTL));\n> >> \n> >> Not a significant error but it should be changed for clarity ...\n> \n> > I assume the fix should be done in 7.2, not 7.1, right?\n> \n> I see no reason to put it off ...\n\nTom, what about the names? There is LOCKMETHODCTL, LOCKMETHODTABLE,\nand LOCKMODES. How about LOCKSTYLESHARED, LOCKSTYLE, and leave\nLOCKMODES unchanged?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 13:10:04 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Lock structures" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, what about the names? There is LOCKMETHODCTL, LOCKMETHODTABLE,\n> and LOCKMODES. How about LOCKSTYLESHARED, LOCKSTYLE, and leave\n> LOCKMODES unchanged?\n\nI think both of those names are worse (less descriptive) than what\nwe have ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 13:28:14 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Lock structures " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is there one LOCKMETHODCTL for every backend? I thought there was only\n> > one of them.\n> >> \n> >> You're right, that line is erroneous; it should read\n> >> \n> >> size += MAX_LOCK_METHODS * MAXALIGN(sizeof(LOCKMETHODCTL));\n> >> \n> >> Not a significant error but it should be changed for clarity ...\n> \n> > I assume the fix should be done in 7.2, not 7.1, right?\n> \n> I see no reason to put it off ...\n> \n\nThanks. 
Fix applied.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 13:28:45 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Lock structures" } ]
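[Editor's note on the thread above: the corrected line belongs to the shared-memory size estimate, which accumulates one MAXALIGN-rounded struct size per lock method. The following Python sketch only models the arithmetic; the alignment boundary and struct size are invented stand-ins for the compiler-provided `MAXIMUM_ALIGNOF` and `sizeof(LOCKMETHODCTL)`, chosen to show why multiplying by `MAX_LOCK_METHODS` matters.]

```python
# Hypothetical stand-ins for compiler-provided constants; only the
# shape of the computation mirrors the backend's size estimate.
MAXIMUM_ALIGNOF = 8          # assumed platform alignment boundary
SIZEOF_LOCKMETHODCTL = 60    # assumed struct size, deliberately unaligned
MAX_LOCK_METHODS = 2         # default lock method plus user lock method

def maxalign(n, align=MAXIMUM_ALIGNOF):
    # Round n up to the next multiple of the platform alignment.
    return (n + align - 1) & ~(align - 1)

# The erroneous line reserved space for a single control struct ...
wrong = maxalign(SIZEOF_LOCKMETHODCTL)
# ... while the corrected line reserves one per lock method.
right = MAX_LOCK_METHODS * maxalign(SIZEOF_LOCKMETHODCTL)
print(wrong, right)  # -> 64 128
```

With these made-up numbers the one-struct estimate undercounts by a full aligned struct, which is exactly the "not significant but worth fixing for clarity" gap discussed above.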
[ { "msg_contents": "Hi,\nI cannot use any kind of odbc because my customers have their local m$\naccess db's locally, then export them to .txt with tab or | separators, then\nput them on my server through ftp.\n\nThis works ok except that the customers are on spanish databases, so\ndata like:\n--DATE-----NAME---------LANG------\n 1/6/2000|Ferran Adrià|Castellano|\n\nwhen sent through ftp to my server is converted to:\n--DATE-----NAME------------LANG------\n 1/6/2000|Ferran Adri\\xe0|Castellano|\n\nso when imported on Postgresql with:\nCOPY products FROM '/var/lib/postgres/iii2.txt' USING DELIMITERS '|' \\g\nit produces:\n--DATE-----NAME-----------------------LANG------\n 1/6/2000|Ferran Adri\\xe0|Castellano|NULL\n\nall on the same cell, ignoring the '|' completely\n\nin 'postmaster.init' I have: LANG=es_ES but it doesn't work...\nusing tabulators as separators also causes the same problem...\n\nany pointers to solve this will be really appreciated\n\nthe other problem is that if a m$ access database has a carriage return in\na text cell the import also fails.\n\n\nbests from barcelona,\nteixi.\n", "msg_date": "Fri, 23 Feb 2001 12:04:30 +0100", "msg_from": "Jaume Teixi <teixi@6tems.com>", "msg_from_op": true, "msg_subject": "HELP: m$ access -> psql howto ?" }, { "msg_contents": "On Sat, 24 Feb 2001 18:48:15 -0800 \"Richard T. 
Robino\"\n<rickspam@wavedivision.com> wrote:\n> Use binary type for transferring files via FTP.\nit's the same\n\nOn Sat, 24 Feb 2001 09:27:43 +0100 Stefan Huber\n<schweinsaug@crosswinds.net> wrote:\n> wild guess: maybe you must escape the pipe-symbol: ...USING DELIMITERS\n'\\|'\nthe same :(\n\nfinally I managed to export LANG=es_ES on the system, and in postmaster.ini I\nalso have LANG=es_ES, but when I do:\n\nCOPY products FROM '/var/lib/postgres/dadesi.txt' USING DELIMITERS '|' \\g\n\nit causes:\n\nSELECT edicion FROM products;\n edicion \n-----------------\n España|Nacional <-------puts on the same cell even though there's a '|' in\nthe middle!!!\n\nSELECT protagonista FROM products;\n protagonista \n------------------------------\n Ferran Adrià|Castellano <-------puts on the same cell\n el Bulli taller\n ICC\n Ferran Adrià|Francés|Francia <-------puts on the same cell with 2 '|'\n\nI have also tried with @ or tabs as delimiters, without any result at all!!\n\nwhy does this only occur on some cells? what more could I check?\n\nbest regards,\njaume.\n\n\n\n\n> On 2/23/01 3:04 AM, \"Jaume Teixi\" <teixi@6tems.com> wrote:\n> \n> > Hi,\n> > I cannot use any kind of odbc because my customers have his local m$\n> > access db's locally then export them on .txt with tab or | separated,\nthen\n> > put on my server trought ftp.\n> > \n> > and is working ok except that the customers are on spanish databases\nthen\n> > a data like:\n> > --DATE-----NAME---------LANG------\n> > 1/6/2000|Ferran Adrià|Castellano|\n> > \n> > when sended trought ftp on my server is converted to:\n> > --DATE-----NAME------------LANG------\n> > 1/6/2000|Ferran Adri\\xe0|Castellano|\n> > \n> > so when imported on Postgresql with:\n> > COPY products FROM '/var/lib/postgres/iii2.txt' USING DELIMITERS '|'\n\\g\n> > --DATE-----NAME-----------------------LANG------\n> > 1/6/2000|Ferran Adri\\xe0|Castellano|NULL\n> > \n> > on the same cell, ignoring the '|' completelly\n> > \n> > on 'postmaster.init' I have: LANG=es_ES but 
doesnt' works...\n> > using tabulators as a separators also causes same problem...\n> > \n> > any pointers to solve this will be really apreciated\n> > \n> > the other problem is that if a m$ access database has a return\ncarraige on\n> > a text cell the import also fails.\n> > \n> > \n> > bests from barcelona,\n> > teixi.\n> > \n", "msg_date": "Mon, 26 Feb 2001 11:48:12 +0100", "msg_from": "Jaume Teixi <teixi@6tems.com>", "msg_from_op": true, "msg_subject": "Re: NOSUCCESS: m$ access -> psql howto ?" } ]
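[Editor's note on the thread above: neither reply spells out the usual fix, so here is a hedged sketch. PostgreSQL's COPY text format treats backslash, the chosen delimiter, newline, and carriage return specially, so each field should be escaped before the file is handed to COPY; that also covers the carriage-return-inside-a-text-cell failure mentioned in the original message. The sample row below is illustrative, not Jaume's real data.]

```python
def copy_escape(field, delim="|"):
    # Escape one field for PostgreSQL's COPY text format: the backslash
    # is doubled first (order matters, or the added backslashes would be
    # re-escaped), the delimiter is backslash-escaped, and embedded line
    # breaks become the two-character sequences \n and \r.
    return (field.replace("\\", "\\\\")
                 .replace(delim, "\\" + delim)
                 .replace("\n", "\\n")
                 .replace("\r", "\\r"))

# Example row containing an embedded delimiter and a carriage return,
# the two things reported to break the import in this thread.
row = ["1/6/2000", "Ferran Adri\xe0", "Caste|llano\r\nrest"]
line = "|".join(copy_escape(f) for f in row)
print(line)
```

After this pass each logical record occupies exactly one physical line, and any '|' or line break inside a field arrives in the table intact instead of shifting columns.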
[ { "msg_contents": "dear all\nI have 2 problems with views:\n1. I can't insert into a view\n2. I can't create a view with UNION\n\nI try to insert into a view as follows:\ncreate table t1 (id int,name varchar(12) check(id<=10));\ncreate table t2 (id int,name varchar(12) check(id>10));\ncreate view v1 as select * from t1,t2;\ninsert into v1 values(1,'wan1');\ninsert into v1 values(12,'wan12');\n\nIt does not report any problem, but there is no data in table t1 or table\nt2.\n\n------------------------------\nI want to distribute a database over 2 database servers:\nI want to insert into database1.table1 when database1.table1.id <=100\nand I want to insert into database2.table2 when database2.table2.id >100\n\nHow can I do that with create view ......... as ....... union all ............\nand insert into the view, so that the view then checks the condition and\ndistributes the data into the different databases\ndepending on the condition?\n\nAnd how do I configure the postgres sql server?\n\nIf you have an idea or an example for solving this problem, please help me.\nThank you so much, I'm looking forward to seeing your response.\nRegards,\n\n\n", "msg_date": "Fri, 23 Feb 2001 18:14:34 +0700", "msg_from": "\"Jaruwan Laongmal\" <jaruwan@gits.net.th>", "msg_from_op": true, "msg_subject": "ask for help !!! (emergency case)" }, { "msg_contents": "Jaruwan Laongmal <jaruwan@gits.net.th> wrote:\n>create view v1 as select * from t1,t2;\n>insert into v1 values(1,'wan1');\n\nQuoting http://www.postgresql.org/docs/aw_pgsql_book/node149.html:\n: Because views are not ordinary tables, INSERTs , UPDATEs , and DELETEs on\n: views have no effect. The next section shows how rules can correct this\n: problem. \n\nHTH,\nRay\n-- \nPinky, Are You Pondering What I'm Pondering?\nEwww, I think so Brain, but I think I'd rather eat the Macarena. \n\tPinky and the Brain in \"Plan Brain From Outer Space\"\n\n", "msg_date": "Sun, 25 Feb 2001 16:29:35 +0000 (UTC)", "msg_from": "jdassen@cistron.nl (J.H.M. 
Dassen (Ray))", "msg_from_op": false, "msg_subject": "Re: ask for help !!! (emergency case)" } ]
[ { "msg_contents": "\n> I hava 2 problems about view\n> 1. i can't insert into view\n\n> I try to insert into view as following\n> create table t1 (id int,name varchar(12) check(id<=10));\n> create table t2 (id int,name varchar(12) check(id>10));\n> create view v1 as select * from t1,t2;\n\nThis is not an updateable view in any database product.\nIt is a cartesian product join of t1 and t2.\n\nYou probably wanted:\ncreate view v1 as \nselect * from t1\nunion all\nselect * from t2;\n\n> insert into v1 values(1,'wan1');\n> insert into v1 values(12,'wan12');\n> \n> it does not show any problem but it doen't have data in table \n> t1 and table t2\n\nVersion 7.1 will give you an error if you don't create an appropriate\ninsert and update rule for the view.\n\nInsert and update rules are not yet automatically created for views.\n\nAndreas\n", "msg_date": "Fri, 23 Feb 2001 12:19:49 +0100", "msg_from": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at>", "msg_from_op": true, "msg_subject": "AW: ask for help !!! (emergency case)" }, { "msg_contents": "Zeugswetter Andreas SB <ZeugswetterA@wien.spardat.at> writes:\n> You probably wanted:\n> create view v1 as \n> select * from t1\n> union all\n> select * from t2;\n\nProbably, but we don't support UNION in views before 7.1 :-(\n\nI'm not real clear on why t1 and t2 are separate tables at all in this\nexample. Seems like making v1 be the real table, and t1 and t2 be\nselective views of it, would work a lot easier.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 10:41:34 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AW: ask for help !!! (emergency case) " } ]
[ { "msg_contents": "> > > It may be that WAL has changed the rollback\n> > > time-characteristics to worse than pre-wal ?\n> >\n> > Nothing changed ... yet. And in future rollbacks\n> > of read-only transactions will be as fast as now,\n> > anyway.\n> \n> What about rollbacks of a bunch uf inserts/updates/deletes?\n>\n> I remember a scenario where an empty table was used\n> by several backends for gathering report data, and\n> when the report is done they will rollback to keep\n> the table empty.\n>\n> Should this kind of usage be replaced in the future by \n> having backend id as a key and then doing delete by that \n> key in the end ?\n\nIsn't it what we have right now?\nBut I believe that in future we must remove\nmodifications made by aborted transactions\nimmediately, without keeping them till vacuum.\nSo - yes: rollback of read-write transactions\nwill take longer time.\n\nVadim\n\n-----------------------------------------------\nFREE! The World's Best Email Address @email.com\nReserve your name now at http://www.email.com\n\n\n", "msg_date": "Fri, 23 Feb 2001 09:41:57 -0500 (EST)", "msg_from": "Vadim Mikheev <vadim4o@email.com>", "msg_from_op": true, "msg_subject": "Re: RE: Re: [ADMIN] v7.1b4 bad performance" }, { "msg_contents": "Vadim Mikheev wrote:\n\n>> Should this kind of usage be replaced in the future by \n>> having backend id as a key and then doing delete by that \n>> key in the end ?\n> \n> \n> Isn't it what we have right now?\n\nI meant doing it at the application level, not what backend does internally.\n\nLike we are supposed to implement time-travel now that it is (mostly) \ngone from core functionality :c)\n\n> But I believe that in future we must remove\n> modifications made by aborted transactions\n> immediately, without keeping them till vacuum.\n> So - yes: rollback of read-write transactions\n> will take longer time.\n\nbut will\n\nINSERT-DELETE-COMMIT\ntake longer than\n\nINSERT-ABORT\n\n?\n\n----------------\nHannu\n\n", "msg_date": 
"Fri, 23 Feb 2001 17:00:21 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: RE: Re: [ADMIN] v7.1b4 bad performance" } ]
[ { "msg_contents": "Hi,\nmy customers have their local m$ access db's locally, then export them to\n.txt with tab or | separators, then put them on my server through ftp.\n\nThis works ok except that the customers are on spanish databases, so\ndata like:\n--DATE-----NAME---------LANG------\n 1/6/2000|Ferran Adrià|Castellano|\n\nwhen sent through ftp to my server is converted to:\n--DATE-----NAME------------LANG------\n 1/6/2000|Ferran Adri\\xe0|Castellano|\n\nso when imported on Postgresql with:\nCOPY products FROM '/var/lib/postgres/iii2.txt' USING DELIMITERS '|' \\g\nproduces:\n--DATE-----NAME-----------------------LANG------\n 1/6/2000|Ferran Adri\\xe0|Castellano|NULL\n\nall on the same cell, ignoring the '|' completely\n\nin 'postmaster.init' I have: LANG=es_ES but it doesn't work...\nusing tabulators as separators also causes the same problem...\n\nany pointers to solve this will be really appreciated\n\nthe other problem is that if a m$ access database has a carriage return in\na text cell the import also fails.\n\n\nbests from barcelona,\nteixi.\n", "msg_date": "Fri, 23 Feb 2001 16:49:16 +0100", "msg_from": "Jaume Teixi <teixi@6tems.com>", "msg_from_op": true, "msg_subject": "hacker mechanism for m$access -> pgsql on different language systems" } ]
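[Editor's note on the message above: when escaping at export time is not possible, the carriage-return problem can also be patched up on the server before COPY, by re-joining physical lines until each logical record carries the expected number of delimiters. The sketch below assumes a fixed three-column export and a '|' delimiter (assumptions, not facts about these files), and it will guess wrong if a field itself contains the delimiter.]

```python
def rejoin_records(lines, delim="|", ncols=3):
    # An Access text field containing a line break splits one record over
    # several physical lines; a complete record must contain exactly
    # ncols - 1 delimiters, so keep accumulating lines until it does.
    records, buf = [], ""
    for line in lines:
        buf = line if not buf else buf + " " + line
        if buf.count(delim) >= ncols - 1:
            records.append(buf)
            buf = ""
    if buf:
        records.append(buf)  # keep any trailing incomplete record
    return records

broken = ["1/6/2000|Ferran", "Adria|Castellano", "2/6/2000|el Bulli|Catalan"]
print(rejoin_records(broken))
# -> ['1/6/2000|Ferran Adria|Castellano', '2/6/2000|el Bulli|Catalan']
```

The repaired lines can then be written back out and loaded with the same COPY ... USING DELIMITERS '|' statement shown above.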
[ { "msg_contents": "Looking at the XLOG stuff, I notice that we already have a field\n(logRec) in the per-backend PROC structures that shows whether a\ntransaction is currently in progress with at least one change made\n(ie at least one XLOG entry written).\n\nIt would be very easy to extend the existing code so that the commit\ndelay is not done unless there is at least one other backend with\nnonzero logRec --- or, more generally, at least N other backends with\nnonzero logRec. We cannot tell if any of them are actually nearing\ntheir commits, but this seems better than just blindly waiting. Larger\nvalues of N would presumably improve the odds that at least one of them\nis nearing its commit.\n\nA further refinement, still quite cheap to implement since the info is\nin the PROC struct, would be to not count backends that are blocked\nwaiting for locks. These guys are less likely to be ready to commit\nin the next few milliseconds than the guys who are actively running;\nindeed they cannot commit until someone else has committed/aborted to\nrelease the lock they need.\n\nComments? What should the threshold N be ... or do we need to make\nthat a tunable parameter?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 11:32:21 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "CommitDelay performance improvement" }, { "msg_contents": "> Looking at the XLOG stuff, I notice that we already have a field\n> (logRec) in the per-backend PROC structures that shows whether a\n> transaction is currently in progress with at least one change made\n> (ie at least one XLOG entry written).\n> \n> It would be very easy to extend the existing code so that the commit\n> delay is not done unless there is at least one other backend with\n> nonzero logRec --- or, more generally, at least N other backends with\n> nonzero logRec. 
We cannot tell if any of them are actually nearing\n> their commits, but this seems better than just blindly waiting. Larger\n> values of N would presumably improve the odds that at least one of them\n> is nearing its commit.\n\nWhy not just set a flag in there when someone nears commit and clear\nwhen they are about to commit?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 13:23:49 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Why not just set a flag in there when someone nears commit and clear\n> when they are about to commit?\n\nDefine \"nearing commit\", in such a way that you can specify where you\nplan to set that flag.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 13:30:12 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Why not just set a flag in there when someone nears commit and clear\n> > when they are about to commit?\n> \n> Define \"nearing commit\", in such a way that you can specify where you\n> plan to set that flag.\n\nIs there significant time between entry of CommitTransaction() and the\nfsync()? Maybe not.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 13:37:07 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Is there significant time between entry of CommitTransaction() and the\n> fsync()? Maybe not.\n\nI doubt it. No I/O anymore, anyway, unless the commit record happens to\noverrun an xlog block boundary.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 14:55:27 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Is there significant time between entry of CommitTransaction() and the\n> > fsync()? Maybe not.\n> \n> I doubt it. No I/O anymore, anyway, unless the commit record happens to\n> overrun an xlog block boundary.\n\nThat's what I was afraid of. Since we don't write the dirty blocks to\nthe kernel anymore, we don't really have much happening before someone\nsays they are about to commit. In the old days, we were write()'ing\nthose buffers, and we had some delay and kernel calls in there.\n\nGuess that idea is dead.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 15:05:27 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "On Fri, Feb 23, 2001 at 11:32:21AM -0500, Tom Lane wrote:\n> A further refinement, still quite cheap to implement since the info is\n> in the PROC struct, would be to not count backends that are blocked\n> waiting for locks. 
These guys are less likely to be ready to commit\n> in the next few milliseconds than the guys who are actively running;\n> indeed they cannot commit until someone else has committed/aborted to\n> release the lock they need.\n> \n> Comments? What should the threshold N be ... or do we need to make\n> that a tunable parameter?\n\nOnce you make it tuneable, you're stuck with it. You can always add\na knob later, after somebody discovers a real need.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Fri, 23 Feb 2001 13:21:33 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "> On Fri, Feb 23, 2001 at 11:32:21AM -0500, Tom Lane wrote:\n> > A further refinement, still quite cheap to implement since the info is\n> > in the PROC struct, would be to not count backends that are blocked\n> > waiting for locks. These guys are less likely to be ready to commit\n> > in the next few milliseconds than the guys who are actively running;\n> > indeed they cannot commit until someone else has committed/aborted to\n> > release the lock they need.\n> > \n> > Comments? What should the threshold N be ... or do we need to make\n> > that a tunable parameter?\n> \n> Once you make it tuneable, you're stuck with it. You can always add\n> a knob later, after somebody discovers a real need.\n\nI wonder if Tom should implement it, but leave it at zero until people\ncan report that a non-zero helps. We already have the parameter, we can\njust make it smarter and let people test it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 16:49:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n>> Comments? What should the threshold N be ... or do we need to make\n>> that a tunable parameter?\n\n> Once you make it tuneable, you're stuck with it. You can always add\n> a knob later, after somebody discovers a real need.\n\nIf we had a good idea what the default level should be, I'd be willing\nto go without a knob. I'm thinking of a default of about 5 (ie, at\nleast 5 other active backends to trigger a commit delay) ... but I'm not\nso confident of that that I think it needn't be tunable. It's really\ndependent on your average and peak transaction lengths, and that's\ngoing to vary across installations, so unless we want to try to make it\nself-adjusting, a knob seems like a good idea.\n\nA self-adjusting delay might well be a great idea, BTW, but I'm trying\nto be conservative about how much complexity we should add right now.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 17:18:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "> ncm@zembu.com (Nathan Myers) writes:\n> >> Comments? What should the threshold N be ... or do we need to make\n> >> that a tunable parameter?\n> \n> > Once you make it tuneable, you're stuck with it. You can always add\n> > a knob later, after somebody discovers a real need.\n> \n> If we had a good idea what the default level should be, I'd be willing\n> to go without a knob. I'm thinking of a default of about 5 (ie, at\n> least 5 other active backends to trigger a commit delay) ... but I'm not\n> so confident of that that I think it needn't be tunable. 
It's really\n> dependent on your average and peak transaction lengths, and that's\n> going to vary across installations, so unless we want to try to make it\n> self-adjusting, a knob seems like a good idea.\n> \n> A self-adjusting delay might well be a great idea, BTW, but I'm trying\n> to be conservative about how much complexity we should add right now.\n\nOH, so you are saying N backends should have dirtied buffers before\ndoing the delay? Hmm, that seems almost untunable to me.\n\nLet's suppose we decide to sleep. When we wake up, can we know that\nsomeone else has fsync'ed for us? And if they have, should we be more\nlikely to fsync() in the future?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 17:26:53 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> A self-adjusting delay might well be a great idea, BTW, but I'm trying\n>> to be conservative about how much complexity we should add right now.\n\n> OH, so you are saying N backends should have dirtied buffers before\n> doing the delay? Hmm, that seems almost untunable to me.\n\n> Let's suppose we decide to sleep. When we wake up, can we know that\n> someone else has fsync'ed for us?\n\nXLogFlush will find that it has nothing to do, so yes we can.\n\n> And if they have, should we be more\n> likely to fsync() in the future?\n\nYou mean less likely. My thought for a self-adjusting delay was to\nratchet the delay up a little every time it succeeds in avoiding an\nfsync, and down a little every time it fails to do so. No change when\nwe don't delay at all (because of no other active backends). 
But\ntesting this and making sure it behaves reasonably seems like more work\nthan we should try to accomplish before 7.1.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 17:33:56 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "> > And if they have, should we be more\n> > likely to fsync() in the future?\n\nI meant more likely to sleep().\n\n> You mean less likely. My thought for a self-adjusting delay was to\n> ratchet the delay up a little every time it succeeds in avoiding an\n> fsync, and down a little every time it fails to do so. No change when\n> we don't delay at all (because of no other active backends). But\n> testing this and making sure it behaves reasonably seems like more work\n> than we should try to accomplish before 7.1.\n\nIt could be tough. Imagine the delay increasing to 3 seconds? Seems\nthere has to be an upper bound on the sleep. The more you delay, the\nmore likely you will be to find someone to fsync you. Are we waking\nprocesses up after we have fsync()'ed them? If so, we can keep\nincreasing the delay.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 17:38:48 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "On Fri, Feb 23, 2001 at 05:18:19PM -0500, Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> >> Comments? What should the threshold N be ... or do we need to make\n> >> that a tunable parameter?\n> \n> > Once you make it tuneable, you're stuck with it. 
You can always add\n> > a knob later, after somebody discovers a real need.\n> \n> If we had a good idea what the default level should be, I'd be willing\n> to go without a knob. I'm thinking of a default of about 5 (ie, at\n> least 5 other active backends to trigger a commit delay) ... but I'm not\n> so confident of that that I think it needn't be tunable. It's really\n> dependent on your average and peak transaction lengths, and that's\n> going to vary across installations, so unless we want to try to make it\n> self-adjusting, a knob seems like a good idea.\n> \n> A self-adjusting delay might well be a great idea, BTW, but I'm trying\n> to be conservative about how much complexity we should add right now.\n\nWhen thinking about tuning N, I like to consider what are the interesting \npossible values for N:\n\n 0: Ignore any other potential committers.\n 1: The minimum possible responsiveness to other committers.\n 5: Tom's guess for what might be a good choice.\n 10: Harry's guess.\n ~0: Always delay.\n\nI would rather release with N=1 than with 0, because it actually responds \nto conditions. What N might best be, >1, probably varies on a lot of \nhard-to-guess parameters.\n\nIt seems to me that comparing various choices (and other, more interesting,\nalgorithms) to the N=1 case would be more productive than comparing them \nto the N=0 case, so releasing at N=1 would yield better statistics for \nactually tuning in 7.2.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Fri, 23 Feb 2001 14:57:36 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> It could be tough. Imagine the delay increasing to 3 seconds? Seems\n> there has to be an upper bound on the sleep. 
The more you delay, the\n> more likely you will be to find someone to fsync you.\n\nGood point, and an excellent illustration of the fact that\nself-adjusting algorithms aren't that easy to get right the first\ntime ;-)\n\n> Are we waking processes up after we have fsync()'ed them?\n\nNot at the moment. That would be another good mechanism to investigate\nfor 7.2; but right now there's no infrastructure that would allow a\nbackend to discover which other ones were sleeping for fsync.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 18:02:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "> When thinking about tuning N, I like to consider what are the interesting \n> possible values for N:\n> \n> 0: Ignore any other potential committers.\n> 1: The minimum possible responsiveness to other committers.\n> 5: Tom's guess for what might be a good choice.\n> 10: Harry's guess.\n> ~0: Always delay.\n> \n> I would rather release with N=1 than with 0, because it actually responds \n> to conditions. What N might best be, >1, probably varies on a lot of \n> hard-to-guess parameters.\n> \n> It seems to me that comparing various choices (and other, more interesting,\n> algorithms) to the N=1 case would be more productive than comparing them \n> to the N=0 case, so releasing at N=1 would yield better statistics for \n> actually tuning in 7.2.\n\nWe don't release code becuase it has better tuning oportunities for\nlater releases. What we can do is give people parameters where the\ndefault is safe, and they can play and report to us.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 18:37:06 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > It could be tough. Imagine the delay increasing to 3 seconds? Seems\n> > there has to be an upper bound on the sleep. The more you delay, the\n> > more likely you will be to find someone to fsync you.\n> \n> Good point, and an excellent illustration of the fact that\n> self-adjusting algorithms aren't that easy to get right the first\n> time ;-)\n\nI see. I am concerned that anything done to 7.1 at this point may cause\nproblems with performance under certain circumstances. Let's see what\nthe new code shows our testers.\n\n> \n> > Are we waking processes up after we have fsync()'ed them?\n> \n> Not at the moment. That would be another good mechanism to investigate\n> for 7.2; but right now there's no infrastructure that would allow a\n> backend to discover which other ones were sleeping for fsync.\n\nCan we put the backends to sleep waiting for a lock, and have them wake\nup later?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 18:40:59 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Can we put the backends to sleep waiting for a lock, and have them wake\n> up later?\n\nLocks don't have timeouts. 
There is no existing mechanism that will\nserve this purpose; we'll have to create a new one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 19:12:41 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "On Fri, Feb 23, 2001 at 06:37:06PM -0500, Bruce Momjian wrote:\n> > When thinking about tuning N, I like to consider what are the interesting \n> > possible values for N:\n> > \n> > 0: Ignore any other potential committers.\n> > 1: The minimum possible responsiveness to other committers.\n> > 5: Tom's guess for what might be a good choice.\n> > 10: Harry's guess.\n> > ~0: Always delay.\n> > \n> > I would rather release with N=1 than with 0, because it actually\n> > responds to conditions. What N might best be, >1, probably varies on\n> > a lot of hard-to-guess parameters.\n> >\n> > It seems to me that comparing various choices (and other, more\n> > interesting, algorithms) to the N=1 case would be more productive\n> > than comparing them to the N=0 case, so releasing at N=1 would yield\n> > better statistics for actually tuning in 7.2.\n>\n> We don't release code because it has better tuning opportunities for\n> later releases. What we can do is give people parameters where the\n> default is safe, and they can play and report to us.\n\nPerhaps I misunderstood. 
I had perceived N=1 as a conservative choice\nthat was nevertheless preferable to N=0.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Fri, 23 Feb 2001 17:20:46 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "> > > It seems to me that comparing various choices (and other, more\n> > > interesting, algorithms) to the N=1 case would be more productive\n> > > than comparing them to the N=0 case, so releasing at N=1 would yield\n> > > better statistics for actually tuning in 7.2.\n> >\n> > We don't release code because it has better tuning opportunities for\n> > later releases. What we can do is give people parameters where the\n> > default is safe, and they can play and report to us.\n> \n> Perhaps I misunderstood. I had perceived N=1 as a conservative choice\n> that was nevertheless preferable to N=0.\n\nI think zero delay is the conservative choice at this point, unless we\nhear otherwise from testers.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 21:05:20 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Can we put the backends to sleep waiting for a lock, and have them wake\n> > up later?\n> \n> Locks don't have timeouts. There is no existing mechanism that will\n> serve this purpose; we'll have to create a new one.\n\nThat is what I suspected.\n\nHaving thought about it, We currently have a few options:\n\n\t1) let every backend fsync on its own\n\t2) try to delay backends so they all fsync() at the same time\n\t3) delay fsync until after commit\n\nItems 2 and 3 attempt to bunch up fsyncs. 
Option 2 has backends waiting\nto fsync() on the expectation that some other backend may commit soon. \nOption 3 may turn out to be the best solution. No matter how smart we\nmake the code, we will never know for sure if someone is about to commit\nand whether it is worth waiting.\n\nMy idea would be to let committing backends return \"COMMIT\" to the user,\nand set a need_fsync flag that is guaranteed to cause an fsync within X\nmilliseconds. This way, if other backends commit in the next X\nmillisecond, they can all use one fsync().\n\nNow, I know many will complain that we are returning commit while not\nhaving the stuff on the platter. But consider, we only lose data from an\nOS crash or hardware failure. Do people who commit something, and then\nthe machine crashes 2 milliseconds after the commit, really expect the\ndata to be on the disk when they restart? Maybe they do, but it seems\nthe benefit of grouped fsyncs() is large enough that many will say they\nwould rather have this option.\n\nThis was my point long ago that we could offer sub-second reliability\nwith no-fsync performance if we just had some process running that wrote\ndirty pages and fsynced every 20 milliseconds.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 21:31:12 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "At 14:57 23/02/01 -0800, Nathan Myers wrote:\n>\n>When thinking about tuning N, I like to consider what are the interesting \n>possible values for N:\n>\n\nIt may have been much earlier in the debate, but has anyone checked to see\nwhat the maximum possible gains might be - or is it self-evident to people\nwho know the code?\n\nWould it be worth considering creating a test case with no flush in\nRecordTransactionCommit, and rely on checkpointing to flush? I realize this\nis never an option in production, but is it possible to modify the code in\nthis way? It *should* give an upper limit on the gains that can be made by\nflushing at the best possible time.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 24 Feb 2001 14:54:12 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "At 21:31 23/02/01 -0500, Bruce Momjian wrote:\n>Now, I know many will complain that we are returning commit while not\n>having the stuff on the platter. 
\n\nYou're definitely right there.\n\n>Maybe they do, but it seems\n>the benefit of grouped fsyncs() is large enough that many will say they\n>would rather have this option.\n\nI'd prefer to wait for a lock manager that supports timeouts and contention\nnotification.\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 24 Feb 2001 14:56:42 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "At 11:32 23/02/01 -0500, Tom Lane wrote:\n>Looking at the XLOG stuff, I notice that we already have a field\n>(logRec) in the per-backend PROC structures that shows whether a\n>transaction is currently in progress with at least one change made\n>(ie at least one XLOG entry written).\n\nWould it be worth adding a field 'waiting for fsync since xxx', so the\nsecond process can (a) log that it is expecting someone else to FSYNC (for\nperf stats, if we want them), and (b) wait for (xxx + delta)ms/us etc?\n\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 24 Feb 2001 14:59:42 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "> At 21:31 23/02/01 -0500, Bruce Momjian wrote:\n> >Now, I know many will complain that we are returning commit while not\n> >having the stuff on the platter. \n> \n> You're definitely right there.\n> \n> >Maybe they do, but it seems\n> >the benefit of grouped fsyncs() is large enough that many will say they\n> >would rather have this option.\n> \n> I'd prefer to wait for a lock manager that supports timeouts and contention\n> notification.\n\nI understand, and if that was going to fix the problem completely, but\nit isn't. It is just going to allow us more flexibility at guessing who\nmay be about to commit.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 23:03:40 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "> At 21:31 23/02/01 -0500, Bruce Momjian wrote:\n> >Now, I know many will complain that we are returning commit while not\n> >having the stuff on the platter. \n> \n> You're definitely right there.\n> \n> >Maybe they do, but it seems\n> >the benefit of grouped fsyncs() is large enough that many will say they\n> >would rather have this option.\n> \n> I'd prefer to wait for a lock manager that supports timeouts and contention\n> notification.\n> \n\nThere is one more thing. 
Even though the kernel says the data is on the\nplatter, it still may not be there. Some OS's may return from fsync\nwhen the data is _queued_ to the disk, rather than actually waiting for\nthe drive return code to say it completed. Second, some disks report\nback that the data is on the disk when it is actually in the disk memory\nbuffer, not really on the disk.\n\nBasically, I am not sure how much we lose by doing the delay after\nreturning COMMIT, and I know we gain quite a bit by enabling us to group\nfsync calls.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 23:14:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "On Fri, Feb 23, 2001 at 09:05:20PM -0500, Bruce Momjian wrote:\n> > > > It seems to me that comparing various choices (and other, more\n> > > > interesting, algorithms) to the N=1 case would be more productive\n> > > > than comparing them to the N=0 case, so releasing at N=1 would yield\n> > > > better statistics for actually tuning in 7.2.\n> > >\n> > > We don't release code because it has better tuning opportunities for\n> > > later releases. What we can do is give people parameters where the\n> > > default is safe, and they can play and report to us.\n> > \n> > Perhaps I misunderstood. I had perceived N=1 as a conservative choice\n> > that was nevertheless preferable to N=0.\n> \n> I think zero delay is the conservative choice at this point, unless we\n> hear otherwise from testers.\n\nI see, I had it backwards: N=0 corresponds to \"always delay\", and \nN=infinity (~0) is \"never delay\", or what you call zero delay. N=1 is \nnot interesting. 
N=M/2 or N=sqrt(M) or N=log(M) might be interesting, \nwhere M is the number of backends, or the number of backends with begun \ntransactions, or something. N=10 would be conservative (and maybe \npointless) just because it would hardly ever trigger a delay.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Fri, 23 Feb 2001 20:24:40 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "At 23:14 23/02/01 -0500, Bruce Momjian wrote:\n>\n>There is one more thing. Even though the kernel says the data is on the\n>platter, it still may not be there.\n\nThis is true, but it does not mean we should say 'the disk is slightly\nunreliable, so we can be too'. Also, IIRC, the last time this was\ndiscussed, someone commented that buying expensive disks and a UPS gets you\nreliability (barring a direct lightning strike) - it had something to do\nwith write-ordering and hardware caches. In any case, I'd hate to see DB\ndesign decisions based closely on hardware capability. At least two of my\ncustomers use high performance ram disks for databases - do these also\nsuffer from 'flush is not really flush' problems?\n\n>Basically, I am not sure how much we lose by doing the delay after\n>returning COMMIT, and I know we gain quite a bit by enabling us to group\n>fsync calls.\n\nIf included, this should be an option only, and not the default option. In\nfact I'd quite like to see such a feature, although I'd not only do a\n'flush every X ms', but I'd also do a 'flush every X transactions' - this\nway a DBA can say 'I don't mind losing the last 20 TXs in a crash'. Bear in\nmind that on a fast system, 20ms is a lot of transactions.\n\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sat, 24 Feb 2001 15:26:14 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "> At 23:14 23/02/01 -0500, Bruce Momjian wrote:\n> >\n> >There is one more thing. Even though the kernel says the data is on the\n> >platter, it still may not be there.\n> \n> This is true, but it does not mean we should say 'the disk is slightly\n> unreliable, so we can be too'. Also, IIRC, the last time this was\n> discussed, someone commented that buying expensive disks and a UPS gets you\n> reliability (barring a direct lightining strike) - it had something to do\n> with write-ordering and hardware caches. In any case, I'd hate to see DB\n> design decisions based closely on harware capability. At least two of my\n> customers use high performance ram disks for databases - do these also\n> suffer from 'flush is not really flush' problems?\n\nWell, I am saying we are being pretty rigid here when we may be on top\nof a system that is not, meaning that our rigidity is buying us little.\n\n> \n> >Basically, I am not sure how much we lose by doing the delay after\n> >returning COMMIT, and I know we gain quite a bit by enabling us to group\n> >fsync calls.\n> \n> If included, this should be an option only, and not the default option. In\n> fact I'd quite like to see such a feature, although I'd not only do a\n> 'flush every X ms', but I'd also do a 'flush every X transactions' - this\n> way a DBA can say 'I dont mind losing the last 20 TXs in a crash'. Bear in\n> mind that on a fast system, 20ms is a lot of transactions.\n\nYes, I can see this as a good option for many users. 
My old complaint\nwas that we allowed only two very extreme options, fsync() all the time,\nor fsync() never and recover from a crash.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 23:37:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> My idea would be to let committing backends return \"COMMIT\" to the user,\n> and set a need_fsync flag that is guaranteed to cause an fsync within X\n> milliseconds. This way, if other backends commit in the next X\n> millisecond, they can all use one fsync().\n\nGuaranteed by what? We have no mechanism available to make an fsync\nhappen while the backend is waiting for input.\n\n> Now, I know many will complain that we are returning commit while not\n> having the stuff on the platter.\n\nI think that's unacceptable on its face. A remote client may take\naction on the basis that COMMIT was returned. If the server then\ncrashes, the client is unlikely to realize this for some time (certainly\nat least one TCP timeout interval). It won't look like a \"milliseconds\nlater\" situation to that client. In fact, the client might *never*\nrealize there was a problem; what if it disconnects after getting the\nCOMMIT?\n\nIf the dbadmin thinks he doesn't need fsync before commit, he'll likely\nbe running with fsync off anyway. 
For the ones who do think they need\nfsync, I don't believe that we get to rearrange the fsync to occur after\ncommit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Feb 2001 00:00:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > My idea would be to let committing backends return \"COMMIT\" to the user,\n> > and set a need_fsync flag that is guaranteed to cause an fsync within X\n> > milliseconds. This way, if other backends commit in the next X\n> > millisecond, they can all use one fsync().\n> \n> Guaranteed by what? We have no mechanism available to make an fsync\n> happen while the backend is waiting for input.\n\nWe would need a separate binary that can look at shared memory and fsync\nis someone requested it. Again, nothing for 7.1.X.\n\n> > Now, I know many will complain that we are returning commit while not\n> > having the stuff on the platter.\n> \n> I think that's unacceptable on its face. A remote client may take\n> action on the basis that COMMIT was returned. If the server then\n> crashes, the client is unlikely to realize this for some time (certainly\n> at least one TCP timeout interval). It won't look like a \"milliseconds\n> later\" situation to that client. In fact, the client might *never*\n> realize there was a problem; what if it disconnects after getting the\n> COMMIT?\n> \n> If the dbadmin thinks he doesn't need fsync before commit, he'll likely\n> be running with fsync off anyway. For the ones who do think they need\n> fsync, I don't believe that we get to rearrange the fsync to occur after\n> commit.\n\nI can see someone wanting some fsync, but not take the hit. 
My argument\nis that having this ability, there would be no need to turn off fsync.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Feb 2001 00:04:23 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> It may have been much earler in the debate, but has anyone checked to see\n> what the maximum possible gains might be - or is it self-evident to people\n> who know the code?\n\nfsync off provides an upper bound to the speed achievable from being\nsmarter about when to fsync... I doubt that fsync-once-per-checkpoint\nwould be much different.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Feb 2001 00:07:09 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "Preliminary results from experimenting with an\nN-transactions-must-be-running-to-cause-commit-delay heuristic are\nattached. It seems to be a pretty definite win. I'm currently running\na more extensive set of cases on another machine for comparison.\n\nThe test case is pgbench, unmodified, but run at scalefactor 10\nto reduce write contention on the 'branch' rows. Postmaster\nparameters are -N 100 -B 1024 in all cases. The fsync-off (with,\nof course, no commit delay either) case is shown for comparison.\n\"commit siblings\" is the number of other backends that must be\nrunning active (unblocked, at least one XLOG entry made) transactions\nbefore we will do a precommit delay.\n\ncommit delay=1 is effectively commit delay=10000 (10msec) on this\nhardware. 
Interestingly, it seems that we can push the delay up\nto two or three clock ticks without degradation, given positive N.\n\n\t\t\tregards, tom lane", "msg_date": "Sat, 24 Feb 2001 00:22:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> I see, I had it backwards: N=0 corresponds to \"always delay\", and \n> N=infinity (~0) is \"never delay\", or what you call zero delay. N=1 is \n> not interesting. N=M/2 or N=sqrt(M) or N=log(M) might be interesting, \n> where M is the number of backends, or the number of backends with begun \n> transactions, or something. N=10 would be conservative (and maybe \n> pointless) just because it would hardly ever trigger a delay.\n\nWhy is N=1 not interesting? That requires at least one other backend\nto be in a transaction before you'll delay. That would seem to be\nthe minimum useful value --- N=0 (always delay) seems clearly to be\ntoo stupid to be useful.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Feb 2001 01:07:17 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "> Philip Warner <pjw@rhyme.com.au> writes:\n> > It may have been much earler in the debate, but has anyone checked to see\n> > what the maximum possible gains might be - or is it self-evident to people\n> > who know the code?\n> \n> fsync off provides an upper bound to the speed achievable from being\n> smarter about when to fsync... I doubt that fsync-once-per-checkpoint\n> would be much different.\n\nThat was my point, people should be doing fsync once per checkpoint\nrather than never.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Feb 2001 01:36:00 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "On Sat, Feb 24, 2001 at 01:07:17AM -0500, Tom Lane wrote:\n> ncm@zembu.com (Nathan Myers) writes:\n> > I see, I had it backwards: N=0 corresponds to \"always delay\", and \n> > N=infinity (~0) is \"never delay\", or what you call zero delay. N=1 is \n> > not interesting. N=M/2 or N=sqrt(M) or N=log(M) might be interesting, \n> > where M is the number of backends, or the number of backends with begun \n> > transactions, or something. N=10 would be conservative (and maybe \n> > pointless) just because it would hardly ever trigger a delay.\n> \n> Why is N=1 not interesting? That requires at least one other backend\n> to be in a transaction before you'll delay. That would seem to be\n> the minimum useful value --- N=0 (always delay) seems clearly to be\n> too stupid to be useful.\n\nN=1 seems arbitrarily aggressive. It assumes any open transaction will \ncommit within a few milliseconds; otherwise the delay is wasted. On a \nfairly busy system, it seems to me to impose a strict upper limit on \ntransaction rate for any client, regardless of actual system I/O load. \n(N=0 would impose that strict upper limit even for a single client.)\n\nDelaying isn't free, because it means that the client can't turn around \nand do even a cheap query for a while. In a sense, when you delay you are \ncharging the committer a tax to try to improve overall throughput. 
If the \ndelay lets you reduce I/O churn enough to increase the total bandwidth, \nthen it was worthwhile; if not, you just cut system performance, and \nresponsiveness to each client, for nothing.\n\nThe above suggests that maybe N should depend on recent disk I/O activity,\nso you get a larger N (and thus less likely delay and more certain payoff) \nfor a more lightly-loaded system. On a system that has maxed its I/O \nbandwidth, clients will suffer delays anyhow, so they might as well \nsuffer controlled delays that result in better total throughput. On a \nlightly-loaded system there's no need, or payoff, for such throttling.\n\nCan we measure disk system load by averaging the times taken for fsyncs?\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Sat, 24 Feb 2001 17:21:38 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "Attached are graphs from more thorough runs of pgbench with a commit\ndelay that occurs only when at least N other backends are running active\ntransactions.\n\nMy initial try at this proved to be too noisy to tell much. The noise\nseems to be coming from WAL checkpoints that occur during a run and\npush down the reported TPS value for the particular case that's running.\nWhile we'd need to include WAL checkpoints to make an honest performance\ncomparison against another RDBMS, I think they are best ignored for the\npurpose of figuring out what the commit-delay behavior ought to be.\nAccordingly, I modified my test script to minimize the occurrence of\ncheckpoint activity during runs (see attached script). 
There are still\nsome data points that are unexpectedly low compared to their neighbors;\npresumably these were affected by checkpoints or other system activity.\n\nIt's not entirely clear what set of parameters is best, but it is\nabsolutely clear that a flat zero-commit-delay policy is NOT best.\n\nThe test conditions are postmaster options -N 100 -B 1024, pgbench scale\nfactor 10, pgbench -t (transactions per client) 100. (Hence the results\nfor a single client rely on only 100 transactions, and are pretty noisy.\nThe noise level should decrease as the number of clients increases.)\n\nComments anyone?\n\n\t\t\tregards, tom lane\n\n\n\n#! /bin/sh\n\n# Expected postmaster options: -N 100 -B 1024 -c checkpoint_timeout=1800\n# Recommended pgbench setup: pgbench -i -s 10 bench\n\nfor del in 0 ; do\nfor sib in 1 ; do\nfor cli in 1 10 20 30 40 50 ; do\necho \"commit_delay = $del\"\necho \"commit_siblings = $sib\"\npsql -c \"vacuum branches; vacuum tellers; delete from history; vacuum history; checkpoint;\" bench\nPGOPTIONS=\"-c commit_delay=$del -c commit_siblings=$sib\" \\\n\tpgbench -c $cli -t 100 -n bench\ndone\ndone\ndone\n\nfor del in 10000 30000 50000 100000 ; do\nfor sib in 1 5 10 20 ; do\nfor cli in 1 10 20 30 40 50 ; do\necho \"commit_delay = $del\"\necho \"commit_siblings = $sib\"\npsql -c \"vacuum branches; vacuum tellers; delete from history; vacuum history; checkpoint;\" bench\nPGOPTIONS=\"-c commit_delay=$del -c commit_siblings=$sib\" \\\n\tpgbench -c $cli -t 100 -n bench\ndone\ndone\ndone", "msg_date": "Sun, 25 Feb 2001 00:41:28 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "At 00:41 25/02/01 -0500, Tom Lane wrote:\n>\n>Comments anyone?\n>\n\nDon't suppose you could post the original data?\n\n\n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 25 Feb 2001 18:01:45 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> Don't suppose you could post the original data?\n\nSure.\n\n\t\t\tregards, tom lane\n\ncommit_delay = 0\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 1\nnumber of transactions per client: 100\nnumber of transactions actually processed: 100/100\ntps = 10.996953(including connections establishing)\ntps = 11.051216(excluding connections establishing)\ncommit_delay = 0\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 100\nnumber of transactions actually processed: 1000/1000\ntps = 17.779923(including connections establishing)\ntps = 17.924390(excluding connections establishing)\ncommit_delay = 0\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 20\nnumber of transactions per client: 100\nnumber of transactions actually processed: 2000/2000\ntps = 17.289815(including connections establishing)\ntps = 17.429343(excluding connections establishing)\ncommit_delay = 0\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 30\nnumber of transactions per client: 100\nnumber of transactions actually processed: 3000/3000\ntps = 17.292171(including connections establishing)\ntps = 17.432905(excluding connections establishing)\ncommit_delay = 0\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of 
clients: 40\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 17.733478(including connections establishing)\ntps = 17.913251(excluding connections establishing)\ncommit_delay = 0\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 50\nnumber of transactions per client: 100\nnumber of transactions actually processed: 5000/5000\ntps = 18.325273(including connections establishing)\ntps = 18.534556(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 1\nnumber of transactions per client: 100\nnumber of transactions actually processed: 100/100\ntps = 10.449347(including connections establishing)\ntps = 10.500278(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 100\nnumber of transactions actually processed: 1000/1000\ntps = 17.865721(including connections establishing)\ntps = 18.015078(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 20\nnumber of transactions per client: 100\nnumber of transactions actually processed: 2000/2000\ntps = 17.980234(including connections establishing)\ntps = 18.131986(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 30\nnumber of transactions per client: 100\nnumber of transactions actually processed: 3000/3000\ntps = 18.858489(including connections establishing)\ntps = 19.027436(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 
40\nnumber of transactions per client: 100\nnumber of transactions actually processed: 4000/4000\ntps = 19.320221(including connections establishing)\ntps = 19.496999(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 1\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 50\nnumber of transactions per client: 100\nnumber of transactions actually processed: 5000/5000\ntps = 19.440978(including connections establishing)\ntps = 19.621221(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 5\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 1\nnumber of transactions per client: 100\nnumber of transactions actually processed: 100/100\ntps = 11.298701(including connections establishing)\ntps = 11.357102(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 5\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 10\nnumber of transactions per client: 100\nnumber of transactions actually processed: 1000/1000\ntps = 19.722266(including connections establishing)\ntps = 19.903373(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 5\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 20\nnumber of transactions per client: 100\nnumber of transactions actually processed: 2000/2000\ntps = 19.042737(including connections establishing)\ntps = 19.214042(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 5\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 30\nnumber of transactions per client: 100\nnumber of transactions actually processed: 3000/3000\ntps = 19.013869(including connections establishing)\ntps = 19.185863(excluding connections establishing)\ncommit_delay = 10000\ncommit_siblings = 5\nCHECKPOINT\ntransaction type: TPC-B (sort of)\nscaling factor: 10\nnumber of clients: 
pgbench results by commit_delay (microseconds) and commit_siblings.
Every run: TPC-B (sort of), scaling factor 10, 100 transactions per
client, all transactions processed, with a CHECKPOINT issued before
the run. Cells are tps including/excluding connections establishing,
to two decimals ("n/a" = run lies outside this portion of the output):

delay/sibs       1 client     10           20           30           40           50
10000/5          n/a          n/a          n/a          n/a          20.08/20.27  20.38/20.58
10000/10         10.90/10.95  19.51/19.69  18.80/18.97  19.86/20.04  20.56/20.76  20.28/20.47
10000/20         11.10/11.16  18.64/18.80  19.82/20.00  20.03/20.23  20.68/20.88  20.69/20.90
30000/1          11.16/11.22  18.83/19.00  20.24/20.43  20.69/20.89  21.01/21.22  21.32/21.53
30000/5          11.38/11.44  18.61/18.78  20.46/20.66  20.77/20.98  19.28/19.46  20.85/21.06
30000/10         11.13/11.19  19.15/19.33  19.49/19.67  20.39/20.59  21.19/21.40  20.87/21.08
30000/20         11.12/11.18  18.99/19.16  19.77/19.96  20.28/20.47  20.74/20.94  18.89/19.06
50000/1          11.01/11.06  18.24/18.40  19.82/20.00  20.26/20.46  20.93/21.14  21.22/21.43
50000/5          11.36/11.42  18.88/19.05  20.10/20.29  20.11/20.33  20.88/21.08  20.93/21.14
50000/10         11.04/11.09  16.20/16.32  19.41/19.59  20.63/20.83  20.69/20.89  21.07/21.29
50000/20         11.11/11.17  19.56/19.74  18.63/18.80  19.83/20.01  20.09/20.28  20.30/20.49
100000/1         15.44/15.55  19.69/19.88  18.95/19.12  18.45/18.62  20.28/20.48  20.50/20.70
100000/5         10.95/11.01  17.37/17.51  19.54/19.73  20.12/20.31  20.22/20.42  20.15/20.34
100000/10        10.75/10.81  17.25/17.39  18.97/19.14  20.25/20.45  18.62/18.78  20.10/20.29
100000/20        10.63/10.68  17.31/17.45  18.04/18.20  18.61/18.78  19.52/19.71  20.09/20.28
", "msg_date": "Sun, 25 Feb 2001 02:03:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "On Sun, Feb 25, 2001 at 12:41:28AM -0500, Tom Lane wrote:\n> Attached are graphs from more thorough runs of pgbench with a commit\n> delay that occurs only when at least N other backends are running active\n> transactions. ...\n> It's not entirely clear what set of parameters is best, but it is\n> absolutely clear that a flat zero-commit-delay policy is NOT best.\n> \n> The test conditions are postmaster options -N 100 -B 1024, pgbench scale\n> factor 10, pgbench -t (transactions per client) 100. (Hence the results\n> for a single client rely on only 100 transactions, and are pretty noisy.\n> The noise level should decrease as the number of clients increases.)\n\nIt's hard to interpret these results. In particular, \"delay 10k, sibs 20\"\n(10k,20), or cyan-triangle, is almost the same as \"delay 50k, sibs 1\" \n(50k,1), or green X. Those are pretty different parameters to get such\nsimilar results.\n\nThe only really bad performers were (0), (10k,1), (100k,20). The best\nwere (30k,1) and (30k,10), although (30k,5) also did well except at 40.\nWhy would 30k be a magic delay, regardless of siblings? 
What happened\nat 40?\n\nAt low loads, it seems (100k,1) (brown +) did best by far, which seems\nvery odd. Even more odd, it did pretty well at very high loads but had \nproblems at intermediate loads. \n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Sun, 25 Feb 2001 00:42:49 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "At 00:42 25/02/01 -0800, Nathan Myers wrote:\n>\n>The only really bad performers were (0), (10k,1), (100k,20). The best\n>were (30k,1) and (30k,10), although (30k,5) also did well except at 40.\n>Why would 30k be a magic delay, regardless of siblings? What happened\n>at 40?\n>\n\nI had assumed that 40 was one of the glitches - it would be good if Tom (or\nsomeone else) could rerun the suite, to see if we see the same dip.\n\nI agree that 30k looks like the magic delay, and probably 30/5 would be a\ngood conservative choice. But now I think about the choice of number, I\nthink it must vary with the speed of the machine and length of the\ntransactions; at 20tps, each TX is completing in around 50ms. Probably the\ndelay needs to be set at a value related to the average TX duration, and\nsince that is not really a known figure, perhaps we should go with 30% of\nTX duration, with a max of 100k. \n\nAlternatively, can PG monitor the commits/second, then set the delay to\nreflect half of the average TX time (or 100ms, whichever is smaller)? Is\nthis too baroque?\n\n \n----------------------------------------------------------------\nPhilip Warner | __---_____\nAlbatross Consulting Pty. Ltd. |----/ - \\\n(A.B.N. 
75 008 659 498) | /(@) ______---_\nTel: (+61) 0500 83 82 81 | _________ \\\nFax: (+61) 0500 83 82 82 | ___________ |\nHttp://www.rhyme.com.au | / \\|\n | --________--\nPGP key available upon request, | /\nand from pgp5.ai.mit.edu:11371 |/\n", "msg_date": "Sun, 25 Feb 2001 20:12:15 +1100", "msg_from": "Philip Warner <pjw@rhyme.com.au>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "> -----Original Message-----\n> From: Tom Lane\n> \n> Attached are graphs from more thorough runs of pgbench with a commit\n> delay that occurs only when at least N other backends are running active\n> transactions.\n> \n> My initial try at this proved to be too noisy to tell much. The noise\n> seems to be coming from WAL checkpoints that occur during a run and\n> push down the reported TPS value for the particular case that's running.\n> While we'd need to include WAL checkpoints to make an honest performance\n> comparison against another RDBMS, I think they are best ignored for the\n> purpose of figuring out what the commit-delay behavior ought to be.\n> Accordingly, I modified my test script to minimize the occurrence of\n> checkpoint activity during runs (see attached script). There are still\n> some data points that are unexpectedly low compared to their neighbors;\n> presumably these were affected by checkpoints or other system activity.\n> \n> It's not entirely clear what set of parameters is best, but it is\n> absolutely clear that a flat zero-commit-delay policy is NOT best.\n> \n> The test conditions are postmaster options -N 100 -B 1024, pgbench scale\n> factor 10, pgbench -t (transactions per client) 100. (Hence the results\n> for a single client rely on only 100 transactions, and are pretty noisy.\n> The noise level should decrease as the number of clients increases.)\n> \n> Comments anyone?\n>\n\nHow about the case with scaling factor 1 ? i.e Could your\nproposal detect lock conflicts in reality ? 
If so, I agree with\nyour proposal.\n\nBTW there seems to be a misunderstanding about CommitDelay,\ni.e\n\n CommitDelay is completely a waste of time unless there's\n an overlap of commit.\n\nIf other backends use the delay(cpu cycle) the delay is never\na waste of time totally.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Sun, 25 Feb 2001 23:25:15 +0900", "msg_from": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "RE: CommitDelay performance improvement " }, { "msg_contents": "> >Basically, I am not sure how much we lose by doing the delay after\n> >returning COMMIT, and I know we gain quite a bit by enabling us to group\n> >fsync calls.\n> \n> If included, this should be an option only, and not the default option. \n\n Sure it should never become the default, because the \"D\" in ACID is just\nabout forbidding this kind of behaviour...\n\n-- \n Dominique\n", "msg_date": "Sun, 25 Feb 2001 17:27:06 +0100", "msg_from": "Dominique Quatravaux <dom@idealx.com>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "\"Hiroshi Inoue\" <Inoue@tpf.co.jp> writes:\n> How about the case with scaling factor 1 ? i.e Could your\n> proposal detect lock conflicts in reality ?\n\nThe code is set up to not count backends that are waiting on locks.\nThat is, to do a commit delay there must be at least N other backends\nthat are in transactions, have written at least one XLOG entry in\ntheir transaction (so it's not a read-only xact and will need to\nwrite a commit record), and are not waiting on a lock.\n\nIs that what you meant?\n\n> BTW there seems to be a misunderstanding about CommitDelay,\n> i.e\n> CommitDelay is completely a waste of time unless there's\n> an overlap of commit.\n> If other backends use the delay(cpu cycle) the delay is never\n> a waste of time totally.\n\nGood point. 
In fact, if we measure only the total throughput in\ntransactions per second then the commit delay will not appear to be\nhurting performance no matter how long it is, so long as other backends\nare in the RUN state for the whole delay. This suggests that pgbench\nshould also measure the average transaction time seen by any one client.\nIs that a simple change?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Feb 2001 12:35:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "Philip Warner <pjw@rhyme.com.au> writes:\n> At 00:42 25/02/01 -0800, Nathan Myers wrote:\n>> The only really bad performers were (0), (10k,1), (100k,20). The best\n>> were (30k,1) and (30k,10), although (30k,5) also did well except at 40.\n>> Why would 30k be a magic delay, regardless of siblings? What happened\n>> at 40?\n\n> I had assumed that 40 was one of the glitches - it would be good if Tom (or\n> someone else) could rerun the suite, to see if we see the same dip.\n\nYes, I assumed the same. I posted the script; could someone else make\nthe same run? We really need more than one test case ;-)\n\n> I agree that 30k looks like the magic delay, and probably 30/5 would be a\n> good conservative choice. But now I think about the choice of number, I\n> think it must vary with the speed of the machine and length of the\n> transactions; at 20tps, each TX is completing in around 50ms.\n\nYes, I think so too. This machine is able to do about 40 pgbench tr/sec\nsingle-client with fsync off, so the computational load is right about\n25msec per transaction. That's presumably why 30msec looks like a good\ndelay number. What interested me was that there doesn't seem to be a\nvery sharp peak; anything from 10 to 100 msec yields fairly comparable\nresults. This is a good thing ... 
if there *were* a sharp peak at the\naverage xact length, tuning the delay parameter would be an impossible\ntask in real-world cases where the transactions aren't all alike.\n\nOn the data so far, I'm inclined to go with 10k/5 as the default, so as\nnot to risk wasting time with overly long delays on machines that are\nfaster than this one. But we really need some data from other machines\nbefore deciding. It'd be nice to see some results with <10k delays too,\nfrom a machine where the kernel supports better-than-10msec delay\nresolution. Where's the Alpha contingent??\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Feb 2001 12:49:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> At low loads, it seems (100k,1) (brown +) did best by far, which seems\n> very odd. Even more odd, it did pretty well at very high loads but had \n> problems at intermediate loads. \n\nIn theory, all these variants should behave exactly the same for a\nsingle client, since there will be no commit delay in any of 'em in\nthat case. I'm inclined to write off the aberrant result for 100k/1\nas due to outside factors --- maybe the WAL file happened to be located\nin a particularly convenient place on the disk during that run, or\nsome such. Since there's only 100 transactions in that test, it wouldn't\ntake much to affect the result.\n\nLikewise, the places where one mid-load datapoint is well below either\nneighbor are probably due to outside factors --- either a background\nWAL checkpoint or other activity on the machine, mail arrival for\ninstance. 
I left the machine alone during the test, but I didn't bother\nto shut down the usual system services.\n\nMy feeling is that this test run tells us that zero commit delay is\ninferior to nonzero under these test conditions, but there's too much\nnoise to pick out one of the nonzero-delay parameter combinations as\nbeing clearly better than the rest. (BTW, I did repeat the zero-delay\nseries just to be sure it wasn't itself an outlier...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Feb 2001 13:19:13 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": "Tom Lane wrote:\n> \n> Philip Warner <pjw@rhyme.com.au> writes:\n> > At 00:42 25/02/01 -0800, Nathan Myers wrote:\n> >> The only really bad performers were (0), (10k,1), (100k,20). The best\n> >> were (30k,1) and (30k,10), although (30k,5) also did well except at 40.\n> >> Why would 30k be a magic delay, regardless of siblings? What happened\n> >> at 40?\n> \n> > I had assumed that 40 was one of the glitches - it would be good if Tom (or\n> > someone else) could rerun the suite, to see if we see the same dip.\n> \n> Yes, I assumed the same. I posted the script; could someone else make\n> the same run? We really need more than one test case ;-)\n> \n\nI could find the script but seem to have missed your change\nabout commit_siblings. Where could I get it ?\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Mon, 26 Feb 2001 09:17:03 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: CommitDelay performance improvement" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n>> Yes, I assumed the same. I posted the script; could someone else make\n>> the same run? We really need more than one test case ;-)\n\n> I could find the script but seem to have missed your change\n> about commit_siblings. Where could I get it ?\n\nEr ... duh ... I didn't commit it yet. 
Well, it's harmless enough\nas long as commit_delay defaults to 0, so I'll go ahead and commit.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Feb 2001 19:35:47 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " }, { "msg_contents": ">> I could find the script but seem to have missed your change\n>> about commit_siblings. Where could I get it ?\n\n> Er ... duh ... I didn't commit it yet. Well, it's harmless enough\n> as long as commit_delay defaults to 0, so I'll go ahead and commit.\n\nIn CVS now.\n\nHowever, it might be well to wait to run tests until we tweak pgbench\nto measure the average elapsed time for a transaction. As you pointed\nout earlier today, overall TPS is not the only figure of merit we need\nto worry about.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Feb 2001 19:55:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CommitDelay performance improvement " } ]
[ { "msg_contents": "I seem to be having some problems w/ the psql jdbc driver. I'm able to load the driver, but as soon as I try to connect w/ the database. here's my code:\n\nimport java.sql.*;\n\npublic class dataBase {\n public static void main(String [] args){\n try {\n Class.forName(\"org.postgresql.DriverClass\").newInstance();\n System.out.println(\"Driver Loaded Successfully\");\n } catch (Exception e) {\n System.out.println(\"Unable to Load Driver \" + e.getMessage() );\n }\n try {\n Connection conn = DriverManager.getConnection(\"jdbc:postgresql://localhost/mapping\",\n \"mapping\", \"\");\n } catch (SQLException e) {\n System.out.println(\"SQLException: \" + e.getMessage());\n }\n }\n}\n\nWhen I run this code as shown, I get the following output:\nDriver Loaded Successfully\nSQLException: No suitable driver\n\nI'm trying to connect to the database \"mapping\" under the user \"mapping\" w/ no password. Does anybody know what I'm doing wrong? I'm running psql ver 7.0.2 and jdbc driver \"jdbc7.0-1.2.jar\" and the java 1.3 jdk. Any comments or suggestions would be greatly appreciated. thanks.", "msg_date": "Fri, 23 Feb 2001 09:58:45 -0700", "msg_from": "\"Andy Engdahl\" <andy@crophailmanagement.com>", "msg_from_op": true, "msg_subject": "PostgreSQL JDBC" }, { "msg_contents": "On Fri, 23 Feb 2001, Andy Engdahl wrote:\n\n> I seem to be having some problems w/ the psql jdbc driver. I'm able to load the driver, but as soon as I try to connect w/ the database. here's my code:\n> \n> import java.sql.*;\n> \n> public class dataBase {\n> public static void main(String [] args){\n> try {\n> Class.forName(\"org.postgresql.DriverClass\").newInstance();\n\n The class instance will load successfully, but this isn't what you\nwant. 
Replace org.postgresql.DriverClass with org.postgresql.Driver.\n\nJeff\n\n-- \nErrors have occurred.\nWe won't tell you where or why.\nLazy programmers.\n\t\t-- Hacking haiku\n\n", "msg_date": "Fri, 23 Feb 2001 13:05:00 -0500 (EST)", "msg_from": "Jeff Duffy <jduffy@greatbridge.com>", "msg_from_op": false, "msg_subject": "Re: PostgreSQL JDBC" }, { "msg_contents": "At 13:05 23/02/01 -0500, Jeff Duffy wrote:\n>On Fri, 23 Feb 2001, Andy Engdahl wrote:\n>\n> > I seem to be having some problems w/ the psql jdbc driver. I'm able to \n> load the driver, but as soon as I try to connect w/ the database. here's \n> my code:\n> >\n> > import java.sql.*;\n> >\n> > public class dataBase {\n> > public static void main(String [] args){\n> > try {\n> > Class.forName(\"org.postgresql.DriverClass\").newInstance();\n>\n> The class instance will load successfully, but this isn't what you\n>want. Replace org.postgresql.DriverClass with org.postgresql.Driver.\n\n\nI'd also remove the newInstance() as it will create a second object that \nwill just occupy memory (the class has a static initialiser).\n\nPeter\n\n\n", "msg_date": "Sat, 24 Feb 2001 17:04:59 +0000", "msg_from": "Peter Mount <peter@retep.org.uk>", "msg_from_op": false, "msg_subject": "Re: Re: PostgreSQL JDBC" } ]
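Putting the two fixes from this thread together -- the corrected driver class name from Jeff and the dropped newInstance() from Peter -- the original test program becomes something like the sketch below. It is untested here and still assumes the setup from the first message (jdbc7.0-1.2.jar on the classpath, a local server with a passwordless "mapping" user and database):

```java
import java.sql.*;

public class dataBase {
    public static void main(String[] args) {
        try {
            // Correct class name, and no newInstance(): the class's
            // static initialiser registers the driver when it loads.
            Class.forName("org.postgresql.Driver");
            System.out.println("Driver Loaded Successfully");
            Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mapping", "mapping", "");
            conn.close();
        } catch (ClassNotFoundException e) {
            System.out.println("Unable to Load Driver " + e.getMessage());
        } catch (SQLException e) {
            System.out.println("SQLException: " + e.getMessage());
        }
    }
}
```

The original program printed "Driver Loaded Successfully" only because, per Jeff's note, org.postgresql.DriverClass does exist in the jar -- but it is not a JDBC driver, so nothing was registered for the URL and DriverManager reported "No suitable driver".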
[ { "msg_contents": "Mark Stosberg (mark@summersault.com) reports a bug with a severity of 3\nThe lower the number the more severe it is.\n\nShort Description\nDate calculation produces wrong output with 7.02\n\nLong Description\nI use Postgres nearly every day and am very appreciative of the project. \n\nI think this example will show my bug: \n\n[PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.96]\ncascade=> select date(CURRENT_DATE + ('30 days'::reltime));\n date\n----------\n9097-10-20\n\n#############\n\nIt's quite likely my \"date math\" syntax is wrong, but it seems that Postgres should either return the right result, or let me know something is at fault. \n\nSample Code\n\n\nNo file was uploaded with this report\n\n", "msg_date": "Fri, 23 Feb 2001 12:18:15 -0500 (EST)", "msg_from": "pgsql-bugs@postgresql.org", "msg_from_op": true, "msg_subject": "Date calculation produces wrong output with 7.02" }, { "msg_contents": "Mark Stosberg (mark@summersault.com) writes:\n\n> [PostgreSQL 7.0.2 on i686-pc-linux-gnu, compiled by gcc 2.96]\n> cascade=> select date(CURRENT_DATE + ('30 days'::reltime));\n> date\n> ----------\n> 9097-10-20\n\nUgh. What is happening here is that there is no '+' operator between\ntypes date and reltime, but there is one between date and int4 (with\nbehavior of adding that many days to the date). And reltime is\nconsidered binary-compatible with int4, so you get\n\nselect date(CURRENT_DATE + ('30 days'::reltime)::int4);\n\nNow '30 days'::reltime::int4 yields 2592000, so you get a silly final\nresult.\n\nThe correct query for Mark is\n\n\tselect date(CURRENT_DATE + ('30 days'::interval));\n\nbut I wonder whether the binary equivalence between reltime and int4\nmight not be ill-advised. 
Thomas, any thoughts here?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 12:40:40 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Date calculation produces wrong output with 7.02 " }, { "msg_contents": "> Date calculation produces wrong output with 7.02\n> cascade=> select date(CURRENT_DATE + ('30 days'::reltime));\n> date\n> ----------\n> 9097-10-20\n> It's quite likely my \"date math\" syntax is wrong, but it seems\n> that Postgres should either return the right result, or let me\n> know something is at fault.\n\nYour syntax is right, and Postgres is wrong :(\n\nThe problem is that there is no explicit date+reltime math operator.\nBut, there *is* a date+int operator which assumes the int is in days,\nand there *is* a \"binary compatible\" entry for reltime->int and vice\nversa.\n\nSo, Postgres is actually doing\n\n select date(CURRENT_DATE + int('30 days'::reltime));\n\nbut the units are \"seconds\" coming from reltime, and the subsequent math\nassumes it was \"days\".\n\nYou can work around the problem with\n\n select date(CURRENT_DATE + interval('30 days'::reltime));\n\nor with\n\n select date(CURRENT_DATE + '30 days'::reltime/86400);\n\nThis problem is in the current CVS tree also. 
A workaround of removing\nthe reltime==int assumed compatibility could be applied to 7.1 (I\nhaven't thought of what that would affect) or we can build some explicit\noperators to make sure that the seconds->days conversion happens (which\nwould require an initdb).\n\nbtw, \"interval\" is to be preferred over \"reltime\" for most operations,\nas recommended in the PostgreSQL docs on data types.\n\nComments?\n\n - Thomas\n", "msg_date": "Fri, 23 Feb 2001 17:48:45 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: Date calculation produces wrong output with 7.02" }, { "msg_contents": "Thomas Lockhart <lockhart@alumni.caltech.edu> writes:\n> This problem is in the current CVS tree also. A workaround of removing\n> the reltime==int assumed compatibility could be applied to 7.1 (I\n> haven't thought of what that would affect) or we can build some explicit\n> operators to make sure that the seconds->days conversion happens (which\n> would require an initdb).\n> btw, \"interval\" is to be preferred over \"reltime\" for most operations,\n> as recommended in the PostgreSQL docs on data types.\n\nRemoving the binary compatibility was my thought also. If we are trying\nto discourage use of reltime, then this seems like a good change to\nmake...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 13:36:19 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Date calculation produces wrong output with 7.02 " } ]
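The size of the error is easy to reproduce outside the server. Here is a small Python sketch (not PostgreSQL code -- it just redoes the same proleptic-Gregorian day arithmetic) showing how '30 days' worth of reltime seconds, misread as days, lands exactly on the date in the bug report:

```python
from datetime import date, timedelta

# '30 days'::reltime is stored in seconds; the assumed reltime->int4
# compatibility hands those seconds to the date + int operator,
# which counts them as *days*.
seconds = 30 * 86400                                   # 2592000
wrong = date(2001, 2, 23) + timedelta(days=seconds)    # seconds read as days
right = date(2001, 2, 23) + timedelta(days=30)         # what was meant

print(wrong)   # 9097-10-20, the date in the report
print(right)   # 2001-03-25
```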
[ { "msg_contents": "... in the sense that they are reduced to constants instantly, rather\nthan being preserved as function calls. For example:\n\nregression=# create table foo (f1 time default current_time);\nCREATE\nregression=# insert into foo default values;\nINSERT 618879 1\n\n<< wait a few seconds >>\n\nregression=# insert into foo default values;\nINSERT 618880 1\nregression=# select * from foo;\n f1\n----------\n 12:41:45\n 12:41:45\n(2 rows)\n\nThe problem appears to be that Thomas inserted new pg_proc entries on\n11-Nov-2000 that create direct text-to-date and text-to-time\nconversions, replacing the old indirect text-to-timestamp-to-date/time\nimplementation of CURRENT_DATE/TIME. Unfortunately, whereas\ntext-to-timestamp is marked noncachable, these new entries are not,\nand so the parser decides it can fold date('now'::text) to a constant.\n\nWe have three choices:\n\n1. Change these pg_proc entries. This does not force an initdb,\nexactly, but it won't take effect without one either.\n\n2. Change the function calls emitted by the parser for\nCURRENT_DATE/TIME. This doesn't force an initdb either, but it's a\nworkaround whereas #1 actually fixes the real bug. (Although #2 might\nappear to break stored rules in beta databases, any such rules are\nalready broken because they've already been reduced to constants...)\n\n3. Ship 7.1 with broken CURRENT_DATE/TIME functionality.\n\nI tend to favor #1, but need agreement to change it. Comments?\nIf we do #1, should we bump catversion.h, or leave it alone?\n(I'd vote for not changing catversion, I think.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 12:51:00 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "CURRENT_DATE and CURRENT_TIME are broken" }, { "msg_contents": "> We have three choices:\n> \n> 1. Change these pg_proc entries. This does not force an initdb,\n> exactly, but it won't take effect without one either.\n> \n> 2. 
Change the function calls emitted by the parser for\n> CURRENT_DATE/TIME. This doesn't force an initdb either, but it's a\n> workaround whereas #1 actually fixes the real bug. (Although #2 might\n> appear to break stored rules in beta databases, any such rules are\n> already broken because they've already been reduced to constants...)\n> \n> 3. Ship 7.1 with broken CURRENT_DATE/TIME functionality.\n> \n> I tend to favor #1, but need agreement to change it. Comments?\n> If we do #1, should we bump catversion.h, or leave it alone?\n> (I'd vote for not changing catversion, I think.)\n\nI vote for anything but #2, and agree catversion should not be changed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 13:27:18 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CURRENT_DATE and CURRENT_TIME are broken" } ]
[ { "msg_contents": "Linux 2.2.14\nPostgres 7.0.3\nDBI 1.14\n\nI am working on Freeside and need to have above 31 character column\nnames. I need postgresql to stop auto-truncating when a create command is\nexecuted. \n\nI have tried editing /src/include/postgres_ext.h and set the NAMEDATELEN\nto 64 and it still gives me a NOTICE: truncating ...\n\nYour help is greatly appreciated.\n\n-- \nAdam Rose\nSystems Programmer/Jr. Systems/Network Administrator\nadamr@eaze.net\n\n\n\n\n", "msg_date": "Fri, 23 Feb 2001 11:59:15 -0600 (CST)", "msg_from": "Adam Rose <adamr@eaze.net>", "msg_from_op": true, "msg_subject": "Truncating column names" }, { "msg_contents": "Adam Rose <adamr@eaze.net> writes:\n> I have tried editing /src/include/postgres_ext.h and set the NAMEDATELEN\n> to 64 and it still gives me a NOTICE: truncating ...\n\nThat should work (did work, last I tried it). I suspect you failed to\ncomplete the follow-through: full rebuild, reinstall, initdb.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 15:54:20 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Truncating column names " }, { "msg_contents": "To clarify further, I am using the RPMs from the postgresql.org\nwebsite. I installed the SRPM, changed the /src/include/postgres_ext.h,\nretar-balled the postgres-7.0.3 dir, and reinstalled the rpms (rpm -e,\nrpm -Uvh).\n\nI don't know if this helps or if there is something I need to do further\n(probably so). If needed, what docs explain your followup procedure.\n\nI really appreciate everyone's help.\n\nAdam Rose\n\nOn Fri, 23 Feb 2001, Tom Lane wrote:\n\n> Adam Rose <adamr@eaze.net> writes:\n> > I have tried editing /src/include/postgres_ext.h and set the NAMEDATELEN\n> > to 64 and it still gives me a NOTICE: truncating ...\n> \n> That should work (did work, last I tried it). 
I suspect you failed to\n> complete the follow-through: full rebuild, reinstall, initdb.\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nAdam Rose\nSystems Programmer/Jr. Systems/Network Administrator\nadamr@eaze.net\n(817)557-3038\n\n\n\n", "msg_date": "Fri, 23 Feb 2001 15:12:01 -0600 (CST)", "msg_from": "Adam Rose <adamr@eaze.net>", "msg_from_op": true, "msg_subject": "Re: Truncating column names " } ]
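For reference, the edit being discussed here is a one-line change to the header -- the macro is spelled NAMEDATALEN in the actual PostgreSQL sources, despite the spelling in the messages above -- and, as Tom says, it only takes effect after a full rebuild, reinstall, and initdb:

```c
/* src/include/postgres_ext.h -- sketch of the change discussed above */
#define NAMEDATALEN 64   /* default is 32 in 7.0.x; identifiers are
                          * truncated to NAMEDATALEN-1 characters to
                          * leave room for the terminating NUL */
```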
[ { "msg_contents": "\n... if anyone wants to take a quick gander at it while I wait to announce\nits availability ... let me know if there are any obvious problems with it\n...\n\n\nMarc G. Fournier ICQ#7615664 IRC Nick: Scrappy\nSystems Administrator @ hub.org\nprimary: scrappy@hub.org secondary: scrappy@{freebsd|postgresql}.org\n\n", "msg_date": "Fri, 23 Feb 2001 14:25:55 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "beta5 packages ..." }, { "msg_contents": "> \n> ... if anyone wants to take a quick gander at it while I wait to announce\n> its availability ... let me know if there are any obvious problems with it\n> ...\n\nI was wondering what open items are left? Are we ready to start the\nrelease process with a docs freeze?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 15:19:28 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 packages ..." }, { "msg_contents": "The Hermit Hacker wrote:\n> \n> ... if anyone wants to take a quick gander at it while I wait to announce\n> its availability ... let me know if there are any obvious problems with it\n> ...\n\nQuick note: it will be Sunday at the earliest before I can build RPM's\nof beta5. 
If the package release is after Sunday, it will be the\nfollowing Sunday, as my day job has its busiest time this coming week\n(IOW, I'm going to be swamped -- actually, I am already swamped right\nnow preparing for next week, but I will be virtually off-line next\nweek).\n\nIt has already been requested that I get the contrib stuff in beta5's\nRPMset -- I will attempt to do that, but I'm making no guarantees at\nthis point.\n--\nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 23 Feb 2001 15:26:40 -0500", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: beta5 packages ..." }, { "msg_contents": "Message quoted by: Bruce Momjian <pgman@candle.pha.pa.us>:\n\n> > \n> > ... if anyone wants to take a quick gander at it while I wait to\n> announce\n> > its availability ... let me know if there are any obvious problems\n> with it\n> > ...\n> \n> I was wondering what open items are left? Are we ready to start the\n> release process with a docs freeze?\n\nHow did the PHP-4.0.4pl1+Postgres-7.1Beta5 end?\nI followed it, but don't remember where it ended. Is it OK for compiling? Is\nsome hacking needed?\n\nRegards... :-)\n\n\nSystem Administration: It's a dirty job,\nbut someone told I had to do it.\n-----------------------------------------------------------------\nMartín Marqués email: martin@math.unl.edu.ar\nSanta Fe - Argentina http://math.unl.edu.ar/~martin/\nSystems administrator at math.unl.edu.ar\n-----------------------------------------------------------------\n", "msg_date": "Fri, 23 Feb 2001 17:51:51 -0300 (ART)", "msg_from": "\"Martin A. Marques\" <martin@math.unl.edu.ar>", "msg_from_op": false, "msg_subject": "Re: beta5 packages ..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I was wondering what open items are left? Are we ready to start the\n> release process with a docs freeze?\n\nI need some feedback on my commitdelay proposal first. 
If we add a\nruntime parameter to control that, it had better be documented.\n\nI have a couple of other bugs outstanding, but nothing in docs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 15:58:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 packages ... " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I was wondering what open items are left? Are we ready to start the\n> > release process with a docs freeze?\n> \n> I need some feedback on my commitdelay proposal first. If we add a\n> runtime parameter to control that, it had better be documented.\n\nI think we need to give up on the delay for 7.1.X. I don't see any\ngood/easy solutions. Looking at the existing proc bit seems like it\ndoesn't give us enough information to know if we should wait, and\nbecause there really isn't much time between start commit and fsync(),\nmy idea is dead. I think we have to keep it at zero and try again in\n7.2. We may have to get ugly and hack a bit change in the executor when\nwe are winding up the query. (yikes!)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 16:16:41 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 packages ..." }, { "msg_contents": "> Bruce Momjian writes:\n> \n> > I was wondering what open items are left? Are we ready to start the\n> > release process with a docs freeze?\n> \n> I still have the JDBC docs to finish and someone was going to send some\n> PL/pgSQL stuff, but I guess I'll have to remind him again. 
What exactly\n> is the goal of a docs freeze?\n\nIt is so Thomas can package the docs into various formats.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 16:17:08 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 packages ..." }, { "msg_contents": "Bruce Momjian writes:\n\n> I was wondering what open items are left? Are we ready to start the\n> release process with a docs freeze?\n\nI still have the JDBC docs to finish and someone was going to send some\nPL/pgSQL stuff, but I guess I'll have to remind him again. What exactly\nis the goal of a docs freeze?\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Fri, 23 Feb 2001 22:24:56 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: beta5 packages ..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I think we need to give up on the delay for 7.1.X. I don't see any\n> good/easy solutions.\n\nI take it you think my idea is not even worth trying. Why not?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 16:35:10 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 packages ... " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I think we need to give up on the delay for 7.1.X. I don't see any\n> > good/easy solutions.\n> \n> I take it you think my idea is not even worth trying. 
Why not?\n\nYou are suggesting looking at the \"I have modified something\" bit in\nProc, and using that to trigger the delay, right?\n\nWell, clearly it would help because a single backend would not do any\ndelay, however, that is the same as doing a zero delay all the time,\nwhich is what we are doing now.\n\nSo, the change would have to show that doing the delay when some other\nbackend has dirtied a buffer is _better_ than doing no delay.\n\nI guess the question is \"What is the average time from that bit being\nset to the actual commit, and what is its relation to the duration of an\nfsync()?\" If the bit set/commit time is small by comparison, it would\nbe worth using the bit. However, we have also seen that the delay\nitself is usually 10ms, which is pretty long by itself.\n\nYour bit does allow us to _not_ wait if there aren't other backends in\nprocess, which is a good thing.\n\nOK, let's look at the average duration from bit set to commit. If the\nuser is in a multi-statement transaction, the delay could be quite long.\nIf they are doing an UPDATE/DELETE that is doing a sequential scan, that will\nbe long too. If they are doing an INSERT, that should be quick, though\nINSERT/SELECT could be long.\n\nI guess the 10ms minimum delay time is a problem for me. The good thing\nis that this delay happens _only_ if other backends are actually\nrunning, though if someone is sitting in psql and they are inside a\ntransaction, that is going to cause a wait too.\n\nLet's keep talking. I see us so near release, I am not sure if we can\nget something that is a clear win, and we saw how the 5us fix almost got\nout in the final before we realized the performance problems with it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 16:46:02 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: beta5 packages ..." }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> So, the change would have to show that doing the delay when some other\n> backend has dirtied a buffer is _better_ than doing no delay.\n\nAgreed. However, we have as yet no data that proves nonzero commit\ndelay is bad in the presence of multiple active backends. As Hiroshi\npointed out, all the pgbench results we did last weekend are garbage\n(unless they were done with scale factor > 1) because write conflicts on\nthe single \"branch\" row would prevent more than one backend from ever\nbeing ready to commit at the same time. Hiroshi's results suggest that\npositive commit delay can be worthwhile when there are nonconflicting\ntransactions.\n\nNote that with the extension to ignore blocked backends, my proposal\nwould not count backends waiting on a write conflict, and would\ntherefore not execute the delay in the scalefactor=1 pgbench case.\nSo those benchmarks do not prove it would hurt anything to have\ncommit delay > 0 with my proposal.\n\n> I guess the question is \"What is the average time from that bit being\n> set to the actual commit,\n\nThis is obviously very application-dependent, but we know that pgbench\nspeeds of 40-200 tr/sec are easily achieved by 7.1 for single backends\nwith fsync off. So it's evident that the total transaction time before\ncommit starts is a small number of milliseconds for transactions of that\ncomplexity.\n\n> and what is its relation to the duration of an fsync()?\n\nfsync is slow, slow, slow, at least on my platform ... 
I did kernel\ntraces on pgbench last weekend and saw multiple clock-tick interrupts\nduring the fsync call.\n\n> I guess the 10ms minimum delay time is a problem for me.\n\nYeah, the whole thing would be a lot better if we could get a shorter\ndelay. But that doesn't mean it's no good at all.\n\n> The good thing\n> is that this delay happens _only_ if other backends are actually\n> running, though if someone is sitting in psql and the are inside a\n> transaction, that is going to cause a wait too.\n\nHmm. A further refinement would be to add a waiting-for-client-input\nbit to PROC, although if you have a fast-responding client, ignoring\nsuch backends wouldn't necessarily be a good thing. Notice that the\npgbench transaction involves multiple client requests ...\n\n> Let's keep talking. I see us so near release, I am not sure if we can\n> get something that is a clear win, and we saw how the 5us fix almost got\n> out in the final before we realized the performance problems with it.\n\nYeah, because our attention hadn't been drawn to it. It won't escape\nso easily now ;-). The real concern here is that I'm not currently\nconvinced that commit_delay = 0 is a good answer under heavy load.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 17:05:07 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Commit delay (was Re: beta5 packages)" }, { "msg_contents": "> Hmm. A further refinement would be to add a waiting-for-client-input\n> bit to PROC, although if you have a fast-responding client, ignoring\n> such backends wouldn't necessarily be a good thing. Notice that the\n> pgbench transaction involves multiple client requests ...\n> \n> > Let's keep talking. I see us so near release, I am not sure if we can\n> > get something that is a clear win, and we saw how the 5us fix almost got\n> > out in the final before we realized the performance problems with it.\n> \n> Yeah, because our attention hadn't been drawn to it. 
It won't escape\n> so easily now ;-). The real concern here is that I'm not currently\n> convinced that commit_delay = 0 is a good answer under heavy load.\n\nOK, clearly your looking at the bit is better than what we have now, so\nhow about committing something that looks at the bit, but leave the\ndefault at zero. Then, let people test zero and non-zero delays and\nlet's see what they find. That seems safe because we aren't enabling\nthe problematic delay by default, at least until we find it is a help in\nmost cases.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 17:10:07 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commit delay (was Re: beta5 packages)" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> OK, clearly your looking at the bit is better than what we have now, so\n> how about committing something that looks at the bit, but leave the\n> default at zero. Then, let people test zero and non-zero delays and\n> let's see what they find. That seems safe because we aren't enabling\n> the problematic delay by default, at least until we find it is a help in\n> most cases.\n\nWhat I think I will do is write the code and try some pgbench tests with\nscalefactor > 1. If that looks promising, I'll post or commit the code\nand ask people to do more tests. 
We can hold off changing the default\ndelay back to nonzero until we have more data...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 17:23:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Commit delay (was Re: beta5 packages) " }, { "msg_contents": "Hi,\n\nIs it desirable for me to build Solaris 8 SPARC packages (Solaris .pkg\nformat) of beta5?\n\nI have experience in doing this.\n\nRegards and best wishes,\n\nJustin Clift\nDatabase Administrator\n", "msg_date": "Mon, 26 Feb 2001 11:18:37 +1100", "msg_from": "Justin Clift <aa2@bigpond.net.au>", "msg_from_op": false, "msg_subject": "Re: beta5 packages ..." } ]
[ { "msg_contents": "I just talked to Tom Lane, and have added the following to the TODO\nlist:\n\n * Merge LockMethodCtl and LockMethodTable into one shared structure\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 14:21:22 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Lock structures" } ]
[ { "msg_contents": "Sorry to have been too brief:\n\n * Merge LockMethodCtl and LockMethodTable into one shared structure (Bruce)\n\nBasically, lock methods are now stored in structures, one in shared\nmemory with a spinlock, and another in normal memory created by the\npostmaster. The goal for 7.2 is to merge these into one shared memory\nstructure to clarify the code.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Fri, 23 Feb 2001 14:23:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Lock structures" } ]
[ { "msg_contents": "\nOk, what am I missing? Don't bother trying to run it, it's not hooked\nup :)\n\nYes there are some extra linuxes, if noone comes up with another distro\nI'll lop the extras off. BTW, is VA Linux a distribution or just a tool\ncompany??\n\nhttp://hub.org/~vev/regress.php\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 23 Feb 2001 15:53:14 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "regression test form" }, { "msg_contents": "On Fri, Feb 23, 2001 at 03:53:14PM -0500, Vince Vielhaber wrote:\n> \n> Yes there are some extra linuxes, if noone comes up with another distro\n> I'll lop the extras off. BTW, is VA Linux a distribution or just a tool\n> company??\n\nDebian is a pretty important Linux distribution, probably second only\nto Red Hat in number of installations. PG is packaged for it by \nOliver Elphick, who is on this list. Debian is currently supported \non x86, SPARC, PowerPC, M68K, ARM, and Alpha architectures.\n\nVA Linux is a hardware vendor. They ship with any of Red Hat, Debian, \nor Suse distributions installed, per customer preference.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Fri, 23 Feb 2001 13:05:20 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: regression test form" }, { "msg_contents": "On Fri, 23 Feb 2001, Nathan Myers wrote:\n\n> On Fri, Feb 23, 2001 at 03:53:14PM -0500, Vince Vielhaber wrote:\n> >\n> > Yes there are some extra linuxes, if noone comes up with another distro\n> > I'll lop the extras off. 
BTW, is VA Linux a distribution or just a tool\n> > company??\n>\n> Debian is a pretty important Linux distribution, probably second only\n> to Red Hat in number of installations. PG is packaged for it by\n> Oliver Elphick, who is on this list. Debian is currently supported\n> on x86, SPARC, PowerPC, M68K, ARM, and Alpha architectures.\n\nDebian's already on the list. Right above Corel, at most I may reorder\nthem alphabetically.\n\n> VA Linux is a hardware vendor. They ship with any of Red Hat, Debian,\n> or Suse distributions installed, per customer preference.\n\nNow that's really interesting. I saw something on tv in business news\nand they never mentioned hardware, but were showing people shrinkwrapping\nsoftware and packing it in boxes. That'll teach me to look in on CNN!\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Fri, 23 Feb 2001 17:23:01 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": true, "msg_subject": "Re: regression test form" } ]
[ { "msg_contents": "FYI, I downloaded / compiled / installed beta5 and did a select version()\nfrom psql and got:\n\ntemplate1=# select version();\n version\n------------------------------------------------------------------------\n PostgreSQL 7.1beta4 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n(1 row)\n\n\n> -----Original Message-----\n> From:\tThe Hermit Hacker [SMTP:scrappy@hub.org]\n> Sent:\tFriday, February 23, 2001 12:26 PM\n> To:\tpgsql-hackers@postgresql.org\n> Subject:\t[HACKERS] beta5 packages ...\n> \n> \n> ... if anyone wants to take a quick gander at it while I wait to announce\n> its availability ... let me know if there are any obvious problems with it\n> ...\n> \n> \n> Marc G. Fournier ICQ#7615664 IRC Nick:\n> Scrappy\n> Systems Administrator @ hub.org\n> primary: scrappy@hub.org secondary:\n> scrappy@{freebsd|postgresql}.org\n", "msg_date": "Fri, 23 Feb 2001 15:26:55 -0600", "msg_from": "Matthew <matt@ctlno.com>", "msg_from_op": true, "msg_subject": "RE: beta5 packages ..." } ]
[ { "msg_contents": "Scratch that... my fault, I started the wrong one. I'm getting the proper\nversion now.\n\n> -----Original Message-----\n> From:\tMatthew \n> \n> FYI, I downloaded / compiled / installed beta5 and did a select version()\n> from psql and got:\n> \n> template1=# select version();\n> version\n> ------------------------------------------------------------------------\n> PostgreSQL 7.1beta4 on i686-pc-linux-gnu, compiled by GCC egcs-2.91.66\n> (1 row)\n> \n> \n> \n", "msg_date": "Fri, 23 Feb 2001 15:29:52 -0600", "msg_from": "Matthew <matt@ctlno.com>", "msg_from_op": true, "msg_subject": "RE: beta5 packages ..." } ]
[ { "msg_contents": "With current sources:\n\nDEBUG: copy: line 629980, XLogWrite: new log file created - try to increase WAL_FILES\nDEBUG: copy: line 694890, XLogWrite: new log file created - try to increase WAL_FILES\nFATAL 2: copy: line 759383, ZeroFill(logfile 0 seg 13) failed: No space left on device\nServer process (pid 3178) exited with status 512 at Fri Feb 23 21:53:19 2001\nTerminating any active server processes...\nServer processes were terminated at Fri Feb 23 21:53:19 2001\nReinitializing shared memory and semaphores\nDEBUG: starting up\nDEBUG: database system was interrupted at 2001-02-23 21:53:11\nDEBUG: CheckPoint record at (0, 21075456)\nDEBUG: Redo record at (0, 21075456); Undo record at (0, 0); Shutdown TRUE\nDEBUG: NextTransactionId: 4296; NextOid: 145211\nDEBUG: database system was not properly shut down; automatic recovery in progress...\nDEBUG: redo starts at (0, 21075520)\nThe Data Base System is starting up\nDEBUG: open(logfile 0 seg 0) failed: No such file or directory\nTRAP: Failed Assertion(\"!(readOff > 0):\", File: \"xlog.c\", Line: 1441)\n!(readOff > 0) (0) [Bad file descriptor]\npostmaster: Startup proc 3179 exited with status 134 - abort\n\n\nRegardless of whether this particular behavior is fixable, this brings\nup something that I think we *must* do before 7.1 release: create a\nutility that blows away a corrupted logfile to allow the system to\nrestart with whatever is in the datafiles. Otherwise, there is no\nrecovery technique for WAL restart failures, short of initdb and\nrestore from last backup. 
I'd rather be able to get at data of\nquestionable up-to-dateness than not have any chance of recovery\nat all.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 23 Feb 2001 22:10:38 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "WAL does not recover gracefully from out-of-disk-space" }, { "msg_contents": "Tom Lane wrote:\n> \n> With current sources:\n> \n> DEBUG: copy: line 629980, XLogWrite: new log file created - try to increase WAL_FILES\n> DEBUG: copy: line 694890, XLogWrite: new log file created - try to increase WAL_FILES\n> FATAL 2: copy: line 759383, ZeroFill(logfile 0 seg 13) failed: No space left on device\n> Server process (pid 3178) exited with status 512 at Fri Feb 23 21:53:19 2001\n> Terminating any active server processes...\n> Server processes were terminated at Fri Feb 23 21:53:19 2001\n> Reinitializing shared memory and semaphores\n> DEBUG: starting up\n> DEBUG: database system was interrupted at 2001-02-23 21:53:11\n> DEBUG: CheckPoint record at (0, 21075456)\n> DEBUG: Redo record at (0, 21075456); Undo record at (0, 0); Shutdown TRUE\n> DEBUG: NextTransactionId: 4296; NextOid: 145211\n> DEBUG: database system was not properly shut down; automatic recovery in progress...\n> DEBUG: redo starts at (0, 21075520)\n> The Data Base System is starting up\n> DEBUG: open(logfile 0 seg 0) failed: No such file or directory\n> TRAP: Failed Assertion(\"!(readOff > 0):\", File: \"xlog.c\", Line: 1441)\n> !(readOff > 0) (0) [Bad file descriptor]\n> postmaster: Startup proc 3179 exited with status 134 - abort\n> \n> Regardless of whether this particular behavior is fixable, this brings\n> up something that I think we *must* do before 7.1 release: create a\n> utility that blows away a corrupted logfile to allow the system to\n> restart with whatever is in the datafiles. Otherwise, there is no\n> recovery technique for WAL restart failures, short of initdb and\n> restore from last backup. 
I'd rather be able to get at data of\n> questionable up-to-dateness than not have any chance of recovery\n> at all.\n> \n\nI've asked 2 or 3 times how to recover from recovery failure but\ngot no answer. We should have some recipe for the failure before 7.1\nrelease.\n\nRegards,\nHiroshi Inoue\n", "msg_date": "Sat, 24 Feb 2001 17:41:13 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: WAL does not recover gracefully from out-of-disk-space" }, { "msg_contents": "Was the following bug already fixed ?\n\nRegards,\nHiroshi Inoue\n\nTom Lane wrote:\n> \n> With current sources:\n> \n> DEBUG: copy: line 629980, XLogWrite: new log file created - try to increase WAL_FILES\n> DEBUG: copy: line 694890, XLogWrite: new log file created - try to increase WAL_FILES\n> FATAL 2: copy: line 759383, ZeroFill(logfile 0 seg 13) failed: No space left on device\n> Server process (pid 3178) exited with status 512 at Fri Feb 23 21:53:19 2001\n> Terminating any active server processes...\n> Server processes were terminated at Fri Feb 23 21:53:19 2001\n> Reinitializing shared memory and semaphores\n> DEBUG: starting up\n> DEBUG: database system was interrupted at 2001-02-23 21:53:11\n> DEBUG: CheckPoint record at (0, 21075456)\n> DEBUG: Redo record at (0, 21075456); Undo record at (0, 0); Shutdown TRUE\n> DEBUG: NextTransactionId: 4296; NextOid: 145211\n> DEBUG: database system was not properly shut down; automatic recovery in progress...\n> DEBUG: redo starts at (0, 21075520)\n> The Data Base System is starting up\n> DEBUG: open(logfile 0 seg 0) failed: No such file or directory\n> TRAP: Failed Assertion(\"!(readOff > 0):\", File: \"xlog.c\", Line: 1441)\n> !(readOff > 0) (0) [Bad file descriptor]\n> postmaster: Startup proc 3179 exited with status 134 - abort\n", "msg_date": "Thu, 08 Mar 2001 13:16:01 +0900", "msg_from": "Hiroshi Inoue <Inoue@tpf.co.jp>", "msg_from_op": false, "msg_subject": "Re: WAL does not recover gracefully from 
out-of-disk-space" }, { "msg_contents": "> Was the following bug already fixed ?\n\nI was going to ask same Q.\n\nI see that seek+write was changed to write-s in XLogFileInit\n(that was induced by subj, right?), but what about problem\nitself?\n\n> > DEBUG: redo starts at (0, 21075520)\n> > The Data Base System is starting up\n> > DEBUG: open(logfile 0 seg 0) failed: No such file or directory\n ^^^^^^^^^^^^^^^\nredo started in seg 1 and shouldn't try to read seg 0...\n\nBTW, were performance tests run after seek+write --> write-s\nchange? Write-s were not obviously faster to me, that's why I've\nused seek+write, but never tested that area -:(\n\nVadim\n\n\n", "msg_date": "Thu, 8 Mar 2001 04:37:42 -0800", "msg_from": "\"Vadim Mikheev\" <vmikheev@sectorbase.com>", "msg_from_op": false, "msg_subject": "Re: WAL does not recover gracefully from out-of-disk-space" }, { "msg_contents": "Hiroshi Inoue <Inoue@tpf.co.jp> writes:\n> Was the following bug already fixed ?\n\nDunno. I've changed the WAL ReadRecord code so that it fails soft (no\nAsserts or elog(STOP)s) for all failure cases, so the particular crash\nmode exhibited here should be gone. But I'm not sure why the code\nappears to be trying to open the wrong log segment, as Vadim comments.\nThat bug might still be there. Need to try to reproduce the problem\nwith new code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 11:35:51 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL does not recover gracefully from out-of-disk-space " }, { "msg_contents": "\"Vadim Mikheev\" <vmikheev@sectorbase.com> writes:\n> I see that seek+write was changed to write-s in XLogFileInit\n> (that was induced by subj, right?), but what about problem\n> itself?\n\n> BTW, were performance tests run after seek+write --> write-s\n> change?\n\nThat change was for safety, not for performance. 
It might be a\nperformance win on systems that support fdatasync properly (because it\nlets us use fdatasync), otherwise it's probably not a performance win.\nBut we need it regardless --- if you didn't want a fully-allocated WAL\nfile, why'd you bother with the original seek-and-write-1-byte code?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 08 Mar 2001 11:39:26 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: WAL does not recover gracefully from out-of-disk-space " } ]
[ { "msg_contents": "Hi Teodorescu,\n\nI have made patches which enable pgaccess to input Japanese characters\nin the table editing window. As you might know, to input Japanese\ncharacters, we first type in \"hiragana\" then convert it to \"kanji\". To\nmake this proccess transparent to tcl application programs, libraries\nare provided with localized version of Tcl/Tk. The patches bind\ncertain keys to initiate a function (kanjiInput) that is responsible\nfor the conversion process. If the function is not available, those\nkeys will not be binded.\n\nComments?\n--\nTatsuo Ishii", "msg_date": "Sat, 24 Feb 2001 21:41:14 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "pgaccess Japanese input capability patch" }, { "msg_contents": "> Tatsuo Ishii wrote:\n> > \n> > Hi Teodorescu,\n> > \n> > I have made patches which enable pgaccess to input Japanese characters\n> > in the table editing window. As you might know, to input Japanese\n> > characters, we first type in \"hiragana\" then convert it to \"kanji\". To\n> > make this proccess transparent to tcl application programs, libraries\n> > are provided with localized version of Tcl/Tk. The patches bind\n> > certain keys to initiate a function (kanjiInput) that is responsible\n> > for the conversion process. If the function is not available, those\n> > keys will not be binded.\n> > \n> > Comments?\n> \n> Applied!!\n> \n> Cannot test them :-) , still lack of time to learn japanese :-)\n\nThanks. I will apply same patches to the PostgreSQL current.\n\n> BTW, please send me a GIF snapshot of PgAccess showing text in japanese!\n> Might be interesting to put it into the web page!\n\nSure. I will send you it by private email.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 26 Feb 2001 13:19:33 +0900", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: pgaccess Japanese input capability patch" } ]
[ { "msg_contents": "Hi,\n\nHere is a patch against 7.1beta5 to use mmap(), and thus a\nsingle write, to initialise xlogs. It may well improve\nperformance of this on platforms/filesystems which write\nmetadata synchronously.\n\nIt needs a configure test, but certainly builds and runs\nOK.\n\nIt also wraps the file reopening in an \"ifdef WIN32\", since\nit certainly isn't needed for UNIX-like platforms (which I\nassume includes BeOS).\n\nMatthew.\n\n\ndiff -ruN postgresql-7.1beta5-clean/src/backend/access/transam/xlog.c postgresql-7.1beta5/src/backend/access/transam/xlog.c\n--- postgresql-7.1beta5-clean/src/backend/access/transam/xlog.c\tFri Feb 23 18:12:00 2001\n+++ postgresql-7.1beta5/src/backend/access/transam/xlog.c\tSat Feb 24 15:23:41 2001\n@@ -24,6 +24,10 @@\n #include <locale.h>\n #endif\n \n+#ifdef\t_HAVE_MMAP\n+#include <sys/mman.h>\n+#endif\n+\n #include \"access/transam.h\"\n #include \"access/xact.h\"\n #include \"catalog/catversion.h\"\n@@ -36,6 +40,7 @@\n #include \"access/xlogutils.h\"\n #include \"utils/builtins.h\"\n #include \"utils/relcache.h\"\n+#include \"utils/pfile.h\"\n \n #include \"miscadmin.h\"\n \n@@ -53,6 +58,10 @@\n StartUpID\tThisStartUpID = 0;\n XLogRecPtr\tRedoRecPtr;\n \n+#ifdef\t_HAVE_MMAP\n+void\t\t*zmmap = NULL;\n+#endif\n+\n int\t\t\tXLOG_DEBUG = 0;\n \n /* To read/update control file and create new log file */\n@@ -955,7 +964,6 @@\n {\n \tchar\t\tpath[MAXPGPATH];\n \tchar\t\ttpath[MAXPGPATH];\n-\tchar\t\tzbuffer[BLCKSZ];\n \tint\t\t\tfd;\n \tint\t\t\tnbytes;\n \n@@ -987,28 +995,36 @@\n \t\telog(STOP, \"InitCreate(logfile %u seg %u) failed: %m\",\n \t\t\t logId, logSeg);\n \n-\t/*\n-\t * Zero-fill the file. We have to do this the hard way to ensure that\n-\t * all the file space has really been allocated --- on platforms that\n-\t * allow \"holes\" in files, just seeking to the end doesn't allocate\n-\t * intermediate space. 
This way, we know that we have all the space\n-\t * and (after the fsync below) that all the indirect blocks are down\n-\t * on disk. Therefore, fdatasync(2) will be sufficient to sync future\n-\t * writes to the log file.\n-\t */\n-\tMemSet(zbuffer, 0, sizeof(zbuffer));\n-\tfor (nbytes = 0; nbytes < XLogSegSize; nbytes += sizeof(zbuffer))\n+#ifdef\t_HAVE_MMAP\n+\tif (!zmmap || (write(fd, zmmap, XLogSegSize) != XLogSegSize))\n+#endif\n \t{\n-\t\tif ((int) write(fd, zbuffer, sizeof(zbuffer)) != (int) sizeof(zbuffer))\n-\t\t\telog(STOP, \"ZeroFill(logfile %u seg %u) failed: %m\",\n-\t\t\t\t logId, logSeg);\n+\t\t/*\n+\t \t* Zero-fill the file. We have to do this the hard way to ensure that\n+\t \t* all the file space has really been allocated --- on platforms that\n+\t \t* allow \"holes\" in files, just seeking to the end doesn't allocate\n+\t \t* intermediate space. This way, we know that we have all the space\n+\t \t* and (after the fsync below) that all the indirect blocks are down\n+\t \t* on disk. 
Therefore, fdatasync(2) will be sufficient to sync future\n+\t \t* writes to the log file.\n+\t \t*/\n+\t\tchar\t\tzbuffer[BLCKSZ];\n+\t\tMemSet(zbuffer, 0, sizeof(zbuffer));\n+\t\tfor (nbytes = 0; nbytes < XLogSegSize; nbytes += sizeof(zbuffer))\n+\t\t{\n+\t\t\tif ((int) write(fd, zbuffer, sizeof(zbuffer)) != (int) sizeof(zbuffer))\n+\t\t\t\telog(STOP, \"ZeroFill(logfile %u seg %u) failed: %m\",\n+\t\t\t\t \tlogId, logSeg);\n+\t\t}\n \t}\n \n \tif (pg_fsync(fd) != 0)\n \t\telog(STOP, \"fsync(logfile %u seg %u) failed: %m\",\n \t\t\t logId, logSeg);\n \n+#ifdef\tWIN32\n \tclose(fd);\n+#endif\n \n \t/*\n \t * Prefer link() to rename() here just to be sure that we don't overwrite\n@@ -1026,10 +1042,12 @@\n \t\t\t logId, logSeg);\n #endif\n \n+#ifdef\tWIN32\n \tfd = BasicOpenFile(path, O_RDWR | PG_BINARY, S_IRUSR | S_IWUSR);\n \tif (fd < 0)\n \t\telog(STOP, \"InitReopen(logfile %u seg %u) failed: %m\",\n \t\t\t logId, logSeg);\n+#endif\n \n \treturn (fd);\n }\n@@ -1255,11 +1273,8 @@\n \tif (noBlck || readOff != (RecPtr->xrecoff % XLogSegSize) / BLCKSZ)\n \t{\n \t\treadOff = (RecPtr->xrecoff % XLogSegSize) / BLCKSZ;\n-\t\tif (lseek(readFile, (off_t) (readOff * BLCKSZ), SEEK_SET) < 0)\n-\t\t\telog(STOP, \"ReadRecord: lseek(logfile %u seg %u off %u) failed: %m\",\n-\t\t\t\t readId, readSeg, readOff);\n-\t\tif (read(readFile, readBuf, BLCKSZ) != BLCKSZ)\n-\t\t\telog(STOP, \"ReadRecord: read(logfile %u seg %u off %u) failed: %m\",\n+\t\tif (pg_pread(readFile, readBuf, BLCKSZ, (readOff * BLCKSZ)) != BLCKSZ)\n+\t\t\telog(STOP, \"ReadRecord: pg_pread(logfile %u seg %u off %u) failed: %m\",\n \t\t\t\t readId, readSeg, readOff);\n \t\tif (((XLogPageHeader) readBuf)->xlp_magic != XLOG_PAGE_MAGIC)\n \t\t{\n@@ -1415,19 +1430,13 @@\n \t\telog(LOG, \"Formatting logfile %u seg %u block %u at offset %u\",\n \t\t\t readId, readSeg, readOff, EndRecPtr.xrecoff % BLCKSZ);\n \t\treadFile = XLogFileOpen(readId, readSeg, false);\n-\t\tif (lseek(readFile, (off_t) (readOff * BLCKSZ), SEEK_SET) 
< 0)\n-\t\t\telog(STOP, \"ReadRecord: lseek(logfile %u seg %u off %u) failed: %m\",\n-\t\t\t\t readId, readSeg, readOff);\n-\t\tif (read(readFile, readBuf, BLCKSZ) != BLCKSZ)\n-\t\t\telog(STOP, \"ReadRecord: read(logfile %u seg %u off %u) failed: %m\",\n+\t\tif (pg_pread(readFile, readBuf, BLCKSZ, (readOff * BLCKSZ)) != BLCKSZ)\n+\t\t\telog(STOP, \"ReadRecord: pg_pread(logfile %u seg %u off %u) failed: %m\",\n \t\t\t\t readId, readSeg, readOff);\n \t\tmemset(readBuf + EndRecPtr.xrecoff % BLCKSZ, 0,\n \t\t\t BLCKSZ - EndRecPtr.xrecoff % BLCKSZ);\n-\t\tif (lseek(readFile, (off_t) (readOff * BLCKSZ), SEEK_SET) < 0)\n-\t\t\telog(STOP, \"ReadRecord: lseek(logfile %u seg %u off %u) failed: %m\",\n-\t\t\t\t readId, readSeg, readOff);\n-\t\tif (write(readFile, readBuf, BLCKSZ) != BLCKSZ)\n-\t\t\telog(STOP, \"ReadRecord: write(logfile %u seg %u off %u) failed: %m\",\n+\t\tif (pg_pwrite(readFile, readBuf, BLCKSZ, (readOff * BLCKSZ)) != BLCKSZ)\n+\t\t\telog(STOP, \"ReadRecord: pg_pwrite(logfile %u seg %u off %u) failed: %m\",\n \t\t\t\t readId, readSeg, readOff);\n \t\treadOff++;\n \t}\n@@ -1797,6 +1806,28 @@\n \treturn buf;\n }\n \n+\n+#ifdef\t_HAVE_MMAP\n+static void\n+ZeroMapInit(void)\n+{\n+\tint zfd;\n+\n+\tzfd = BasicOpenFile(\"/dev/zero\", O_RDONLY, 0);\n+\tif (zfd < 0) {\n+\t\telog(LOG, \"Can't open /dev/zero: %m\");\n+\t\treturn;\n+\t}\n+\tzmmap = mmap(NULL, XLogSegSize, PROT_READ, MAP_SHARED, zfd, 0);\n+\tif (!zmmap)\n+\t\telog(LOG, \"Can't mmap /dev/zero: %m\");\n+\tclose(zfd);\n+}\n+#else\n+#define\tZeroMapInit()\n+#endif\n+\n+\n /*\n * This func must be called ONCE on system startup\n */\n@@ -1811,6 +1842,9 @@\n \tchar\t\tbuffer[_INTL_MAXLOGRECSZ + SizeOfXLogRecord];\n \n \telog(LOG, \"starting up\");\n+\n+\tZeroMapInit();\n+\n \tCritSectionCount++;\n \n \tXLogCtl->xlblocks = (XLogRecPtr *) (((char *) XLogCtl) + sizeof(XLogCtlData));\n\n", "msg_date": "Sat, 24 Feb 2001 15:49:37 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", 
"msg_from_op": true, "msg_subject": "A patch for xlog.c" }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> Here is a patch against 7.1beta5 to use mmap(), and thus a\n> single write, to initialise xlogs. It may well improve\n> performance of this on platforms/filesystems which write\n> metadata synchronously.\n\nHave you *demonstrated* any actual performance improvement from this?\nHow much? On what platforms?\n\nI don't believe in adding unportable alternative implementations without\npretty darn compelling reasons ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Feb 2001 11:41:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c " }, { "msg_contents": "On Sat, 24 Feb 2001, Tom Lane wrote:\n\n> > Here is a patch against 7.1beta5 to use mmap(), and thus a\n> > single write, to initialise xlogs. It may well improve\n> > performance of this on platforms/filesystems which write\n> > metadata synchronously.\n>\n> Have you *demonstrated* any actual performance improvement from this?\n> How much? On what platforms?\n\nForgive me if I posted it to the wrong place -- I was far from\nproposing this for inclusion. It is but a small step on the\nway to my plan of mmap()ifying all of the WAL stuff (which may\nalso prove a waste of effort).\n\nOn Linux 2.4 w/asynchronous ext2, it's good for about 5%, which\ncertainly wouldn't alone be worth the effort. I tried synchronous\next2, but the numbers were so poor with both that nobody who cared\nabout performance would be using it (1.2 sec per file, vs. over a\nminute).\n\nI don't have access to any kind machine running UFS/FFS. Perhaps\nsomeone on the list might do me the favour of trying the attached\ntest on such a platform with synchronous metadata writes (see top\nof file for #ifdefs).\n\n> I don't believe in adding unportable alternative implementations\n> without pretty darn compelling reasons ...\n\nmmap() is hardly unportable. 
From a quick look, all the current\nnames in include/port/ (which must surely make up a vast majority\nof deployed recent postgresql versions) except QNX and Win32 can\nsupport POSIX mmap.\n\nThanks for the reply,\n\nMatthew.", "msg_date": "Sat, 24 Feb 2001 20:37:20 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Re: A patch for xlog.c " }, { "msg_contents": "I am confused why mmap() is better than writing to a real file. Don't\nwe need to write to a real file so it is available for database\nrecovery?\n\n\n> Hi,\n> \n> Here is a patch against 7.1beta5 to use mmap(), and thus a\n> single write, to initialise xlogs. It may well improve\n> performance of this on platforms/filesystems which write\n> metadata synchronously.\n> \n> It needs a configure test, but certainly builds and runs\n> OK.\n> \n> It also wraps the file reopening in an \"ifdef WIN32\", since\n> it certainly isn't needed for UNIX-like platforms (which I\n> assume includes BeOS).\n> \n> Matthew.\n> \n> \n> diff -ruN postgresql-7.1beta5-clean/src/backend/access/transam/xlog.c postgresql-7.1beta5/src/backend/access/transam/xlog.c\n> --- postgresql-7.1beta5-clean/src/backend/access/transam/xlog.c\tFri Feb 23 18:12:00 2001\n> +++ postgresql-7.1beta5/src/backend/access/transam/xlog.c\tSat Feb 24 15:23:41 2001\n> @@ -24,6 +24,10 @@\n> #include <locale.h>\n> #endif\n> \n> +#ifdef\t_HAVE_MMAP\n> +#include <sys/mman.h>\n> +#endif\n> +\n> #include \"access/transam.h\"\n> #include \"access/xact.h\"\n> #include \"catalog/catversion.h\"\n> @@ -36,6 +40,7 @@\n> #include \"access/xlogutils.h\"\n> #include \"utils/builtins.h\"\n> #include \"utils/relcache.h\"\n> +#include \"utils/pfile.h\"\n> \n> #include \"miscadmin.h\"\n> \n> @@ -53,6 +58,10 @@\n> StartUpID\tThisStartUpID = 0;\n> XLogRecPtr\tRedoRecPtr;\n> \n> +#ifdef\t_HAVE_MMAP\n> +void\t\t*zmmap = NULL;\n> +#endif\n> +\n> int\t\t\tXLOG_DEBUG = 0;\n> \n> /* To read/update control file and create new 
log file */\n> @@ -955,7 +964,6 @@\n> {\n> \tchar\t\tpath[MAXPGPATH];\n> \tchar\t\ttpath[MAXPGPATH];\n> -\tchar\t\tzbuffer[BLCKSZ];\n> \tint\t\t\tfd;\n> \tint\t\t\tnbytes;\n> \n> @@ -987,28 +995,36 @@\n> \t\telog(STOP, \"InitCreate(logfile %u seg %u) failed: %m\",\n> \t\t\t logId, logSeg);\n> \n> -\t/*\n> -\t * Zero-fill the file. We have to do this the hard way to ensure that\n> -\t * all the file space has really been allocated --- on platforms that\n> -\t * allow \"holes\" in files, just seeking to the end doesn't allocate\n> -\t * intermediate space. This way, we know that we have all the space\n> -\t * and (after the fsync below) that all the indirect blocks are down\n> -\t * on disk. Therefore, fdatasync(2) will be sufficient to sync future\n> -\t * writes to the log file.\n> -\t */\n> -\tMemSet(zbuffer, 0, sizeof(zbuffer));\n> -\tfor (nbytes = 0; nbytes < XLogSegSize; nbytes += sizeof(zbuffer))\n> +#ifdef\t_HAVE_MMAP\n> +\tif (!zmmap || (write(fd, zmmap, XLogSegSize) != XLogSegSize))\n> +#endif\n> \t{\n> -\t\tif ((int) write(fd, zbuffer, sizeof(zbuffer)) != (int) sizeof(zbuffer))\n> -\t\t\telog(STOP, \"ZeroFill(logfile %u seg %u) failed: %m\",\n> -\t\t\t\t logId, logSeg);\n> +\t\t/*\n> +\t \t* Zero-fill the file. We have to do this the hard way to ensure that\n> +\t \t* all the file space has really been allocated --- on platforms that\n> +\t \t* allow \"holes\" in files, just seeking to the end doesn't allocate\n> +\t \t* intermediate space. This way, we know that we have all the space\n> +\t \t* and (after the fsync below) that all the indirect blocks are down\n> +\t \t* on disk. 
Therefore, fdatasync(2) will be sufficient to sync future\n> +\t \t* writes to the log file.\n> +\t \t*/\n> +\t\tchar\t\tzbuffer[BLCKSZ];\n> +\t\tMemSet(zbuffer, 0, sizeof(zbuffer));\n> +\t\tfor (nbytes = 0; nbytes < XLogSegSize; nbytes += sizeof(zbuffer))\n> +\t\t{\n> +\t\t\tif ((int) write(fd, zbuffer, sizeof(zbuffer)) != (int) sizeof(zbuffer))\n> +\t\t\t\telog(STOP, \"ZeroFill(logfile %u seg %u) failed: %m\",\n> +\t\t\t\t \tlogId, logSeg);\n> +\t\t}\n> \t}\n> \n> \tif (pg_fsync(fd) != 0)\n> \t\telog(STOP, \"fsync(logfile %u seg %u) failed: %m\",\n> \t\t\t logId, logSeg);\n> \n> +#ifdef\tWIN32\n> \tclose(fd);\n> +#endif\n> \n> \t/*\n> \t * Prefer link() to rename() here just to be sure that we don't overwrite\n> @@ -1026,10 +1042,12 @@\n> \t\t\t logId, logSeg);\n> #endif\n> \n> +#ifdef\tWIN32\n> \tfd = BasicOpenFile(path, O_RDWR | PG_BINARY, S_IRUSR | S_IWUSR);\n> \tif (fd < 0)\n> \t\telog(STOP, \"InitReopen(logfile %u seg %u) failed: %m\",\n> \t\t\t logId, logSeg);\n> +#endif\n> \n> \treturn (fd);\n> }\n> @@ -1255,11 +1273,8 @@\n> \tif (noBlck || readOff != (RecPtr->xrecoff % XLogSegSize) / BLCKSZ)\n> \t{\n> \t\treadOff = (RecPtr->xrecoff % XLogSegSize) / BLCKSZ;\n> -\t\tif (lseek(readFile, (off_t) (readOff * BLCKSZ), SEEK_SET) < 0)\n> -\t\t\telog(STOP, \"ReadRecord: lseek(logfile %u seg %u off %u) failed: %m\",\n> -\t\t\t\t readId, readSeg, readOff);\n> -\t\tif (read(readFile, readBuf, BLCKSZ) != BLCKSZ)\n> -\t\t\telog(STOP, \"ReadRecord: read(logfile %u seg %u off %u) failed: %m\",\n> +\t\tif (pg_pread(readFile, readBuf, BLCKSZ, (readOff * BLCKSZ)) != BLCKSZ)\n> +\t\t\telog(STOP, \"ReadRecord: pg_pread(logfile %u seg %u off %u) failed: %m\",\n> \t\t\t\t readId, readSeg, readOff);\n> \t\tif (((XLogPageHeader) readBuf)->xlp_magic != XLOG_PAGE_MAGIC)\n> \t\t{\n> @@ -1415,19 +1430,13 @@\n> \t\telog(LOG, \"Formatting logfile %u seg %u block %u at offset %u\",\n> \t\t\t readId, readSeg, readOff, EndRecPtr.xrecoff % BLCKSZ);\n> \t\treadFile = XLogFileOpen(readId, 
readSeg, false);\n> -\t\tif (lseek(readFile, (off_t) (readOff * BLCKSZ), SEEK_SET) < 0)\n> -\t\t\telog(STOP, \"ReadRecord: lseek(logfile %u seg %u off %u) failed: %m\",\n> -\t\t\t\t readId, readSeg, readOff);\n> -\t\tif (read(readFile, readBuf, BLCKSZ) != BLCKSZ)\n> -\t\t\telog(STOP, \"ReadRecord: read(logfile %u seg %u off %u) failed: %m\",\n> +\t\tif (pg_pread(readFile, readBuf, BLCKSZ, (readOff * BLCKSZ)) != BLCKSZ)\n> +\t\t\telog(STOP, \"ReadRecord: pg_pread(logfile %u seg %u off %u) failed: %m\",\n> \t\t\t\t readId, readSeg, readOff);\n> \t\tmemset(readBuf + EndRecPtr.xrecoff % BLCKSZ, 0,\n> \t\t\t BLCKSZ - EndRecPtr.xrecoff % BLCKSZ);\n> -\t\tif (lseek(readFile, (off_t) (readOff * BLCKSZ), SEEK_SET) < 0)\n> -\t\t\telog(STOP, \"ReadRecord: lseek(logfile %u seg %u off %u) failed: %m\",\n> -\t\t\t\t readId, readSeg, readOff);\n> -\t\tif (write(readFile, readBuf, BLCKSZ) != BLCKSZ)\n> -\t\t\telog(STOP, \"ReadRecord: write(logfile %u seg %u off %u) failed: %m\",\n> +\t\tif (pg_pwrite(readFile, readBuf, BLCKSZ, (readOff * BLCKSZ)) != BLCKSZ)\n> +\t\t\telog(STOP, \"ReadRecord: pg_pwrite(logfile %u seg %u off %u) failed: %m\",\n> \t\t\t\t readId, readSeg, readOff);\n> \t\treadOff++;\n> \t}\n> @@ -1797,6 +1806,28 @@\n> \treturn buf;\n> }\n> \n> +\n> +#ifdef\t_HAVE_MMAP\n> +static void\n> +ZeroMapInit(void)\n> +{\n> +\tint zfd;\n> +\n> +\tzfd = BasicOpenFile(\"/dev/zero\", O_RDONLY, 0);\n> +\tif (zfd < 0) {\n> +\t\telog(LOG, \"Can't open /dev/zero: %m\");\n> +\t\treturn;\n> +\t}\n> +\tzmmap = mmap(NULL, XLogSegSize, PROT_READ, MAP_SHARED, zfd, 0);\n> +\tif (!zmmap)\n> +\t\telog(LOG, \"Can't mmap /dev/zero: %m\");\n> +\tclose(zfd);\n> +}\n> +#else\n> +#define\tZeroMapInit()\n> +#endif\n> +\n> +\n> /*\n> * This func must be called ONCE on system startup\n> */\n> @@ -1811,6 +1842,9 @@\n> \tchar\t\tbuffer[_INTL_MAXLOGRECSZ + SizeOfXLogRecord];\n> \n> \telog(LOG, \"starting up\");\n> +\n> +\tZeroMapInit();\n> +\n> \tCritSectionCount++;\n> \n> \tXLogCtl->xlblocks = 
(XLogRecPtr *) (((char *) XLogCtl) + sizeof(XLogCtlData));\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Feb 2001 16:01:15 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> Forgive me if I posted it to the wrong place -- I was far from\n> proposing this for inclusion.\n\nDiffs posted to pgsql-patches are generally considered to be requests\nfor application of a patch. If this is only an experiment it had best\nbe clearly labeled as such.\n\n> It is but a small step on the way to my plan of mmap()ifying all of\n> the WAL stuff (which may also prove a waste of effort).\n\nVery probably. What are your grounds for thinking that's a good idea?\nI can't see any reason to think that mmap is more efficient than write\nfor simple sequential writes, which is what we need to do.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Feb 2001 17:20:06 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c " }, { "msg_contents": "On Sat, 24 Feb 2001, Bruce Momjian wrote:\n\n> I am confused why mmap() is better than writing to a real file.\n\nIt isn't, except that it allows to initialise the logfile in\none syscall, without first allocating and zeroing (and hence\ndirtying) 16Mb of memory.\n\n> Don't we need to write to a real file so it is available for database\n> recovery?\n\nThe mmap isn't used for the destination, but for the source;\nit's just a cheap way to get your hands on 16Mb of zeroes.\n\nMatthew.\n\n", "msg_date": "Sat, 24 Feb 2001 23:01:06 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Re: A patch 
for xlog.c" }, { "msg_contents": "On Sat, 24 Feb 2001, Tom Lane wrote:\n\n> > Forgive me if I posted it to the wrong place -- I was far from\n> > proposing this for inclusion.\n>\n> Diffs posted to pgsql-patches are generally considered to be requests\n> for application of a patch. If this is only an experiment it had best\n> be clearly labeled as such.\n\nOK. Is there a better place for discussion of such?\n\n> > It is but a small step on the way to my plan of mmap()ifying all\n> > of the WAL stuff (which may also prove a waste of effort).\n>\n> Very probably. What are your grounds for thinking that's a good idea?\n> I can't see any reason to think that mmap is more efficient than write\n> for simple sequential writes, which is what we need to do.\n\nPotential pros:\n\na. msync(MS_ASYNC) seems to be exactly\nb. Potential to reduce contention\nc. Removing syscalls is rarely a bad thing\nd. Fewer copies, better cache behaviour\n\nPotential cons:\n\na. Portability\nb. A bad pointer can cause a scribble on the log\n\nMatthew.\n\n", "msg_date": "Sat, 24 Feb 2001 23:45:31 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Re: A patch for xlog.c " }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n>> Diffs posted to pgsql-patches are generally considered to be requests\n>> for application of a patch. If this is only an experiment it had best\n>> be clearly labeled as such.\n\n> OK. 
Is there are better place for discussion of such?\n\npgsql-hackers is the place to discuss anything that's experimental or\notherwise concerned with future development.\n\n> [ possible merits of mmap ]\n\nLet's take up that discussion in pghackers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Feb 2001 23:10:43 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c " }, { "msg_contents": "> Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> >> Diffs posted to pgsql-patches are generally considered to be requests\n> >> for application of a patch. If this is only an experiment it had best\n> >> be clearly labeled as such.\n> \n> > OK. Is there are better place for discussion of such?\n> \n> pgsql-hackers is the place to discuss anything that's experimental or\n> otherwise concerned with future development.\n> \n> > [ possible merits of mmap ]\n> \n> Let's take up that discussion in pghackers.\n\nI always felt the real benefit of mmap() would be to remove use of SysV\nshared memory and use anon mmap() to prevent problems with SysV share\nmemory limits.\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sat, 24 Feb 2001 23:25:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "[ redirected to pgsql-hackers instead of -patches ]\n\nMatthew Kirkwood <matthew@hairy.beasts.org> writes:\n> On Sat, 24 Feb 2001, Bruce Momjian wrote:\n>> I am confused why mmap() is better than writing to a real file.\n\n> It isn't, except that it allows to initialise the logfile in\n> one syscall, without first allocating and zeroing (and hence\n> dirtying) 16Mb of memory.\n\nUh, the existing code does not zero 16Mb of memory... 
it zeroes\n8K and then writes that block repeatedly. It's possible that the\noverhead of a syscall for each 8K block is significant, but on the\nother hand writing a block at a time is a heavily used and heavily\noptimized path in all Unixen. It's at least as plausible that the\nmmap-as-source-of-zeroes path will be slower!\n\nI think this is worth looking into, but I'm very far from being\nsold on it...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Feb 2001 23:28:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "mmap for zeroing WAL log" }, { "msg_contents": "On Sat, 24 Feb 2001, Bruce Momjian wrote:\n\n> > Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> > >> Diffs posted to pgsql-patches are generally considered to be requests\n> > >> for application of a patch. If this is only an experiment it had best\n> > >> be clearly labeled as such.\n> >\n> > > OK. Is there are better place for discussion of such?\n> >\n> > pgsql-hackers is the place to discuss anything that's experimental or\n> > otherwise concerned with future development.\n> >\n> > > [ possible merits of mmap ]\n> >\n> > Let's take up that discussion in pghackers.\n>\n> I always felt the real benefit of mmap() would be to remove use of SysV\n> shared memory and use anon mmap() to prevent problems with SysV share\n> memory limits.\n\nYou'll still have memory limits to overcome ... per user memory limits\nbeing one ... 
there is no such thing as a 'cure-all' ...\n\n\n", "msg_date": "Sun, 25 Feb 2001 16:07:17 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "> > > pgsql-hackers is the place to discuss anything that's experimental or\n> > > otherwise concerned with future development.\n> > >\n> > > > [ possible merits of mmap ]\n> > >\n> > > Let's take up that discussion in pghackers.\n> >\n> > I always felt the real benefit of mmap() would be to remove use of SysV\n> > shared memory and use anon mmap() to prevent problems with SysV share\n> > memory limits.\n> \n> You'll still have memory limits to overcome ... per user memory limits\n> being one ... there is no such thing as a 'cure-all' ...\n\nYes, but typical SysV shared memory limits are much lower than\nper-process limits.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Feb 2001 15:10:09 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "On Sun, 25 Feb 2001, Bruce Momjian wrote:\n\n> > > > pgsql-hackers is the place to discuss anything that's experimental or\n> > > > otherwise concerned with future development.\n> > > >\n> > > > > [ possible merits of mmap ]\n> > > >\n> > > > Let's take up that discussion in pghackers.\n> > >\n> > > I always felt the real benefit of mmap() would be to remove use of SysV\n> > > shared memory and use anon mmap() to prevent problems with SysV share\n> > > memory limits.\n> >\n> > You'll still have memory limits to overcome ... per user memory limits\n> > being one ... 
there is no such thing as a 'cure-all' ...\n>\n> Yes, but typical SysV shared memory limits are much lower than\n> per-process limits.\n\nwell, come up with suitable patches for v7.2 and we can see where it goes\n... you seem to think mmap() will do what we require, but, so far, have\nbeen unable to convince anyone to dedicate the time to converting to using\nit. \"having to raise/set SysV limits\", IMHO, isn't worth the overhaul\nthat I see having to happen, but, if you can show us the benefits of doing\nit other then removing a 'one time administrative config' of an OS, I\nimagine that nobody will be able to argue it ...\n\n\n", "msg_date": "Sun, 25 Feb 2001 16:17:38 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "> > Yes, but typical SysV shared memory limits are much lower than\n> > per-process limits.\n> \n> well, come up with suitable patches for v7.2 and we can see where it goes\n> ... you seem to think mmap() will do what we require, but, so far, have\n> been unable to convince anyone to dedicate the time to converting to using\n> it. \"having to raise/set SysV limits\", IMHO, isn't worth the overhaul\n> that I see having to happen, but, if you can show us the benefits of doing\n> it other then removing a 'one time administrative config' of an OS, I\n> imagine that nobody will be able to argue it ...\n\nYea, it is pretty low priority, especially since most OS's don't support\nANON mmap(). Most BSD's support it, but I don't think Linux or others\ndo.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Feb 2001 15:36:35 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "On Sun, 25 Feb 2001, Bruce Momjian wrote:\n\n> > > Yes, but typical SysV shared memory limits are much lower than\n> > > per-process limits.\n> >\n> > well, come up with suitable patches for v7.2 and we can see where it goes\n> > ... you seem to think mmap() will do what we require, but, so far, have\n> > been unable to convince anyone to dedicate the time to converting to using\n> > it. \"having to raise/set SysV limits\", IMHO, isn't worth the overhaul\n> > that I see having to happen, but, if you can show us the benefits of doing\n> > it other then removing a 'one time administrative config' of an OS, I\n> > imagine that nobody will be able to argue it ...\n>\n> Yea, it is pretty low priority, especially since most OS's don't support\n> ANON mmap(). Most BSD's support it, but I don't think Linux or others\n> do.\n\nah, then not a low priority, a non-starter, period ... maybe when all the\nOSs we support move to supporting ANON mmap() :(\n\n", "msg_date": "Sun, 25 Feb 2001 17:16:28 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "> On Sun, 25 Feb 2001, Bruce Momjian wrote:\n> \n> > > > Yes, but typical SysV shared memory limits are much lower than\n> > > > per-process limits.\n> > >\n> > > well, come up with suitable patches for v7.2 and we can see where it goes\n> > > ... you seem to think mmap() will do what we require, but, so far, have\n> > > been unable to convince anyone to dedicate the time to converting to using\n> > > it. 
\"having to raise/set SysV limits\", IMHO, isn't worth the overhaul\n> > > that I see having to happen, but, if you can show us the benefits of doing\n> > > it other then removing a 'one time administrative config' of an OS, I\n> > > imagine that nobody will be able to argue it ...\n> >\n> > Yea, it is pretty low priority, especially since most OS's don't support\n> > ANON mmap(). Most BSD's support it, but I don't think Linux or others\n> > do.\n> \n> ah, then not a low priority, a non-starter, period ... maybe when all the\n> OSs we support move to supporting ANON mmap() :(\n\nYea, we would have to take a poll to see if the majority support it. \nRight now, I think it is clearly a minority, and not worth the added\nconfusion for a few platforms.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Feb 2001 16:17:40 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "The Hermit Hacker writes:\n\n> > Yea, it is pretty low priority, especially since most OS's don't support\n> > ANON mmap(). Most BSD's support it, but I don't think Linux or others\n> > do.\n>\n> ah, then not a low priority, a non-starter, period ... maybe when all the\n> OSs we support move to supporting ANON mmap() :(\n\nIt would be worthwhile for those operating systems that don't have SysV\nshared memory but do have mmap(). But I don't have one of those, so I\nain't gonna do it. 
;-)\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 27 Feb 2001 16:24:02 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "> The Hermit Hacker writes:\n> \n> > > Yea, it is pretty low priority, especially since most OS's don't support\n> > > ANON mmap(). Most BSD's support it, but I don't think Linux or others\n> > > do.\n> >\n> > ah, then not a low priority, a non-starter, period ... maybe when all the\n> > OSs we support move to supporting ANON mmap() :(\n> \n> It would be worthwhile for those operating systems that don't have SysV\n> shared memory but do have mmap(). But I don't have one of those, so I\n> ain't gonna do it. ;-)\n\nAll have SysV memory. mmap() usage is only useful in enabling larger\nbuffers without kernel changes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Feb 2001 11:36:29 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "Bruce Momjian writes:\n\n> All have SysV memory.\n\nAll that we currently support...\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 27 Feb 2001 18:13:54 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "On Tue, 27 Feb 2001, Bruce Momjian wrote:\n\n> mmap() usage is only useful in enabling larger\n> buffers without kernel changes.\n\nMy plan was not to replace the shared buffer pool with an\nmmap()ed area, but rather to use mmap() on the data files\nthemselves to eliminate it.\n\nClearly this is rather controversial, since it may have\nsafety implications, but it should allow the kernel better\nto choose what to cache.\n\nMatthew.\n\n", "msg_date": "Tue, 27 Feb 2001 17:17:24 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Re: A patch for xlog.c" }, { "msg_contents": "On Sat, 24 Feb 2001, Tom Lane wrote:\n\n> >> I am confused why mmap() is better than writing to a real file.\n> \n> > It isn't, except that it allows to initialise the logfile in\n> > one syscall, without first allocating and zeroing (and hence\n> > dirtying) 16Mb of memory.\n> \n> Uh, the existing code does not zero 16Mb of memory... 
it zeroes\n> 8K and then writes that block repeatedly.\n\nSee the \"one syscall\" bit above.\n\n> It's possible that the overhead of a syscall for each 8K block is\n> significant,\n\nI had assumed that the overhead would come from synchronous\nmetadata incurring writes of at least the inode, block bitmap\nand probably an indirect block for each syscall.\n\n> but on the other hand writing a block at a time is a heavily used and\n> heavily optimized path in all Unixen. It's at least as plausible that\n> the mmap-as-source-of-zeroes path will be slower!\n\nResults:\n\nOn Linux/ext2, it appears good for a gain of 3-5% for log\ncreations (via a fairly minimal test program).\n\nOn FreeBSD 4.1-RELEASE/ffs (with all of sync/async/softupdates)\nit is a couple of percent worse in elapsed time, but consumes\naround a third more system CPU time (12sec vs 9sec on one test\nsystem).\n\nI am awaiting numbers from reiserfs but, for now, it looks like\nI am far from vindicated.\n\nMatthew.\n\n", "msg_date": "Tue, 27 Feb 2001 22:20:57 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Re: mmap for zeroing WAL log" }, { "msg_contents": "Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> I had assumed that the overhead would come from synchronous\n> metadata incurring writes of at least the inode, block bitmap\n> and probably an indirect block for each syscall.\n\nNo Unix that I've ever heard of forces metadata to disk after each\n\"write\" call; anyone who tried it would have abysmal performance.\nThat's what fsync and the syncer daemon are for.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 27 Feb 2001 17:25:03 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: mmap for zeroing WAL log " }, { "msg_contents": "On Tue, 27 Feb 2001, Tom Lane wrote:\n\n> Matthew Kirkwood <matthew@hairy.beasts.org> writes:\n> > I had assumed that the overhead would come from synchronous\n> > 
metadata incurring writes of at least the inode, block bitmap\n> > and probably an indirect block for each syscall.\n>\n> No Unix that I've ever heard of forces metadata to disk after each\n> \"write\" call; anyone who tried it would have abysmal performance.\n> That's what fsync and the syncer daemon are for.\n\nMy understanding was that that's exactly what ffs' synchronous\nmetadata writes do.\n\nAm I missing something here? Do they just schedule I/O, but\nreturn without waiting for its completion?\n\nMatthew.\n\n", "msg_date": "Wed, 28 Feb 2001 10:32:56 +0000 (GMT)", "msg_from": "Matthew Kirkwood <matthew@hairy.beasts.org>", "msg_from_op": true, "msg_subject": "Re: mmap for zeroing WAL log " } ]
[ { "msg_contents": "\tI've just upgraded to the beta4 in order to fix an RI\ndeadlock we seemed to be having with 7.0.3 -- and it seems\nthat one of the engineers has been writing some queries that\ncast a text field to an int and take advantage of the\nfact that we used to turn text fields with no digits into 0,\nmuch as C's atoi function works.\n\n\tThe new behavior is to throw a parse error, which causes\nall kinds of problem. Is this intentional? I dimly remember\nseeing a whole lot of atoi discussion, but I can't seem to\nfind it in my last two files of this mailing list.\n\n-- \nAdam Haberlach | All your base are belong to us.\nadam@newsnipple.com |\nhttp://www.newsnipple.com |\n'88 EX500 '00 >^< |\n", "msg_date": "Sat, 24 Feb 2001 10:18:10 -0800", "msg_from": "Adam Haberlach <adam@newsnipple.com>", "msg_from_op": true, "msg_subject": "pg_atoi() behavior change? Intentional?" }, { "msg_contents": "Adam Haberlach <adam@newsnipple.com> writes:\n> ... one of the engineers has been writing some queries that\n> cast a text field to an int and take advantage of the\n> fact that we used to turn text fields with no digits into 0,\n> much as C's atoi function works.\n\n> \tThe new behavior is to throw a parse error, which causes\n> all kinds of problem. Is this intentional?\n\nWhat new behavior?\n\nregression=# select ''::text::int4;\n ?column?\n----------\n 0\n(1 row)\n\n7.0.* behaves the same as far as I can tell. I think this is actually\na bug, and it *should* throw an error ... but it doesn't.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Feb 2001 16:58:16 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pg_atoi() behavior change? Intentional? " } ]
[ { "msg_contents": "Hi,\n\nIt appears that limit and offset do not work in a subselect such as:\n\n\tupdate my_table set my_col = new_val where oid = (select oid from \nmy_table offset row_number limit 1);\n\nBasically, I need to update rows by offset from the beginning of the \ntable. Even nicer would be\n\n\tupdate my_table set my_col = new_val offset row_number limit 1;\n\nBut this is not supported either.\n\nTim\n\n-- \nTimothy H. Keitt\nDepartment of Ecology and Evolution\nState University of New York at Stony Brook\nPhone: 631-632-1101, FAX: 631-632-7626\nhttp://life.bio.sunysb.edu/ee/keitt/\n\n", "msg_date": "Sat, 24 Feb 2001 15:08:17 -0500", "msg_from": "\"Timothy H. Keitt\" <Timothy.Keitt@SUNYSB.Edu>", "msg_from_op": true, "msg_subject": "offset and limit in update and subselect" }, { "msg_contents": "I see this (subselect) is available in >=7.1.\n\nTim\n\nTimothy H. Keitt wrote:\n\n> Hi,\n> \n> It appears that limit and offset do not work in a subselect such as:\n> \n> update my_table set my_col = new_val where oid = (select oid from \n> my_table offset row_number limit 1);\n> \n> Basically, I need to update rows by offset from the beginning of the \n> table. Even nicer would be\n> \n> update my_table set my_col = new_val offset row_number limit 1;\n> \n> But this is not supported either.\n> \n> Tim\n\n\n-- \nTimothy H. Keitt\nDepartment of Ecology and Evolution\nState University of New York at Stony Brook\nPhone: 631-632-1101, FAX: 631-632-7626\nhttp://life.bio.sunysb.edu/ee/keitt/\n\n", "msg_date": "Sat, 24 Feb 2001 15:22:33 -0500", "msg_from": "\"Timothy H. Keitt\" <Timothy.Keitt@SUNYSB.Edu>", "msg_from_op": true, "msg_subject": "Re: offset and limit in update and subselect" }, { "msg_contents": "\"Timothy H. Keitt\" <Timothy.Keitt@SUNYSB.Edu> writes:\n> Basically, I need to update rows by offset from the beginning of the \n> table.\n\nI think you'd better rethink your data design. Tuple order in a table\nis not a defined concept according to SQL. 
Even if we allowed queries\nsuch as you've described, the results would not be well-defined, but\nwould change at the slightest provocation. The implementation feels\nitself entitled to rearrange tuple order whenever the whim strikes it.\n\nAs the documentation tries hard to make plain, LIMIT/OFFSET are only\nguaranteed to produce reproducible results if there's also an ORDER BY\nthat constrains the tuples into a unique ordering.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Feb 2001 17:07:54 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: offset and limit in update and subselect " }, { "msg_contents": "At 05:07 PM 2/24/01 -0500, Tom Lane wrote:\n>is not a defined concept according to SQL. Even if we allowed queries\n>such as you've described, the results would not be well-defined, but\n>would change at the slightest provocation. The implementation feels\n>itself entitled to rearrange tuple order whenever the whim strikes it.\n>\n>As the documentation tries hard to make plain, LIMIT/OFFSET are only\n>guaranteed to produce reproducible results if there's also an ORDER BY\n>that constrains the tuples into a unique ordering.\n\nHi,\n\nWould it then be fine to use update ... limit in the following scenario?\n\nI have a todo queue:\n\ncreate table todo ( task text, pid int default 0);\n\nThe tasks are inserted into the todo table.\n\nThen the various worker processes do the following update to grab tasks\nwithout duplication.\n\nupdate todo set pid=$mypid where pid=0 limit 1;\n\nFor me it doesn't matter what which row each worker gets, as long as they\nonly get one each and they are not the same.\n\nWhat would the performance impact of \"order by\" be in a LIMIT X case? 
Would\nit require a full table scan?\n\nThanks,\nLink.\n\n", "msg_date": "Sun, 25 Feb 2001 19:42:48 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: offset and limit in update and subselect " }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n> Would it then be fine to use update ... limit in the following scenario?\n> I have a todo queue:\n> create table todo ( task text, pid int default 0);\n> The tasks are inserted into the todo table.\n> Then the various worker processes do the following update to grab tasks\n> without duplication.\n> update todo set pid=$mypid where pid=0 limit 1;\n\nThere's no LIMIT clause in UPDATE. You could do something like\n\n\tBEGIN\n\tSELECT taskid FROM todo WHERE pid = 0 FOR UPDATE LIMIT 1;\n\tUPDATE todo SET pid = $mypid WHERE taskid = $selectedid;\n\tCOMMIT\n\n(assuming taskid is unique; you could use the OID if you have no\napplication-defined ID).\n\n> What would the performance impact of \"order by\" be in a LIMIT X case? Would\n> it require a full table scan?\n\nYes, unless there's an index on the order-by item. The above example\nshould be fairly efficient if both pid and taskid are indexed.\n\n\nHmm ... trying this out just now, I realize that 7.1 effectively does\nthe LIMIT before the FOR UPDATE, which is not the way 7.0 behaved.\nUgh. Too late to fix it for 7.1, but I guess FOR UPDATE marking ought\nto become a plan node just like LIMIT did.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Feb 2001 16:58:04 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: offset and limit in update and subselect " }, { "msg_contents": "At 04:58 PM 25-02-2001 -0500, Tom Lane wrote:\n>\n>There's no LIMIT clause in UPDATE. You could do something like\n\nOh. 
I thought 7.1 had that.\n\n>\tBEGIN\n>\tSELECT taskid FROM todo WHERE pid = 0 FOR UPDATE LIMIT 1;\n>\tUPDATE todo SET pid = $mypid WHERE taskid = $selectedid;\n>\tCOMMIT\n\nThis is very similar to what I'm testing out in 7.0.3 - except I'm\ncurrently trying \"order by random\" to prevent blocking. This is because\nall worker processes will tend to select stuff in the same order (in the\nabsence of inserts or updates on that table), and thus they will hit the\nsame first row (this is what I encountered last week - and I got the wrong\nimpression that all rows were locked).\n\nWhat would happen if I rewrite that query to:\n\nupdate todo set pid = $mypid where exists ( select task id from todo where\npid = 0 for update limit 1);\n\nThis is pushing it, but I'm curious on what would happen :). \n\nI'll stick to doing it in two queries, and leave out the \"order by random\"-\nfaster select vs low blocking.\n\nCheerio,\nLink.\n\n", "msg_date": "Mon, 26 Feb 2001 09:26:47 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: Re: offset and limit in update and subselect " }, { "msg_contents": "Lincoln Yeoh <lyeoh@pop.jaring.my> writes:\n>> BEGIN\n>> SELECT taskid FROM todo WHERE pid = 0 FOR UPDATE LIMIT 1;\n>> UPDATE todo SET pid = $mypid WHERE taskid = $selectedid;\n>> COMMIT\n\n> This is very similar to what I'm testing out in 7.0.3 - except I'm\n> currently trying \"order by random\" to prevent blocking. This is because\n> all worker processes will tend to select stuff in the same order (in the\n> absence of inserts or updates on that table), and thus they will hit the\n> same first row (this is what I encountered last week - and I got the wrong\n> impression that all rows were locked).\n\nRight. Only the first row is locked, but that doesn't help any. 
\"order\nby random\" sounds like it might be a good answer, if there aren't many\nrows that need to be sorted.\n\n> What would happen if I rewrite that query to:\n\n> update todo set pid = $mypid where exists ( select task id from todo where\n> pid = 0 for update limit 1);\n\nRight now you get \n\nERROR: SELECT FOR UPDATE is not allowed in subselects\n\nThis is something that could be fixed if FOR UPDATE were a plan node\ninstead of a function done at the executor top level.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Feb 2001 23:16:01 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: offset and limit in update and subselect " }, { "msg_contents": "At 11:16 PM 25-02-2001 -0500, Tom Lane wrote:\n>\n>Right. Only the first row is locked, but that doesn't help any. \"order\n>by random\" sounds like it might be a good answer, if there aren't many\n>rows that need to be sorted.\n\nYep. I'll just see what happens in the testing stages.\n\n>> What would happen if I rewrite that query to:\n>\n>> update todo set pid = $mypid where exists ( select task id from todo where\n>> pid = 0 for update limit 1);\n>\n>Right now you get \n>\n>ERROR: SELECT FOR UPDATE is not allowed in subselects\n>\n>This is something that could be fixed if FOR UPDATE were a plan node\n>instead of a function done at the executor top level.\n\nOK. Sounds like it won't be worth the trouble to do, plus deadlocks would\nbe real fun ;).\n\nCheerio,\nLink.\n\n", "msg_date": "Mon, 26 Feb 2001 12:39:39 +0800", "msg_from": "Lincoln Yeoh <lyeoh@pop.jaring.my>", "msg_from_op": false, "msg_subject": "Re: offset and limit in update and subselect " }, { "msg_contents": "Hmmm... that's good to know. Basically, I'm trying to model fixed order \ntables in another application through a proxy mechanism (see \nhttp://rpgsql.sourceforge.net/). I guess I will have to force row \nordering on all proxied tables.\n\nTim\n\nTom Lane wrote:\n\n> \"Timothy H. 
Keitt\" <Timothy.Keitt@SUNYSB.Edu> writes:\n> \n>> Basically, I need to update rows by offset from the beginning of the \n>> table.\n> \n> \n> I think you'd better rethink your data design. Tuple order in a table\n> is not a defined concept according to SQL. Even if we allowed queries\n> such as you've described, the results would not be well-defined, but\n> would change at the slightest provocation. The implementation feels\n> itself entitled to rearrange tuple order whenever the whim strikes it.\n> \n> As the documentation tries hard to make plain, LIMIT/OFFSET are only\n> guaranteed to produce reproducible results if there's also an ORDER BY\n> that constrains the tuples into a unique ordering.\n> \n> \t\t\tregards, tom lane\n\n\n-- \nTimothy H. Keitt\nDepartment of Ecology and Evolution\nState University of New York at Stony Brook\nPhone: 631-632-1101, FAX: 631-632-7626\nhttp://life.bio.sunysb.edu/ee/keitt/\n\n", "msg_date": "Mon, 26 Feb 2001 09:54:14 -0500", "msg_from": "\"Timothy H. Keitt\" <Timothy.Keitt@SUNYSB.Edu>", "msg_from_op": true, "msg_subject": "Re: offset and limit in update and subselect" } ]
[ { "msg_contents": "\tHi,\n\n\tI think I finished the HOWTO that I've been writing for a couple of days.\nThe HTML version is at http://www.brasileiro.net/roberto/howto. The\ndocument explains the basic differences from Oracle's PL/SQL to\nPostgreSQL's PL/pgSQL and how to port applications to Postgres. It comes\nwith several examples and the code for a port of Oracle's instr functions.\n\tI wrote it in standard DocBook and would be glad to give it to the PG\nteam if there's interest in including its whole or part in the\ndocumentation. Just let me know who I should send it to.\n\t\n\tAnother issue is that I'd like to revamp the PL/pgSQL main docs. The\nway it is right now makes it hard to find what you want because everything\nis buried under the \"Description\" page. Plus the documentation could use _a lot_\nmore examples, especially of things that people use often, like \"FOR ROW\",\nloops, etc.\n\tWould that be something the PG team would be interested in?\n\n\tThoughts? \n\n\t-Roberto\n\n\tP.S: Thanks so much for PG. I love it!\n\t\n-- \nComputer Science\t\t\tUtah State University\nSpace Dynamics Laboratory\t\tWeb Developer\nUSU Free Software & GNU/Linux Club \thttp://fslc.usu.edu\nMy web site: http://www.brasileiro.net\n", "msg_date": "Sat, 24 Feb 2001 17:37:30 -0700", "msg_from": "Roberto Mello <rmello@cc.usu.edu>", "msg_from_op": true, "msg_subject": "PL/SQL-to-PL/pgSQL-HOWTO + PL/pgSQL documentation" }, { "msg_contents": "Roberto Mello <rmello@cc.usu.edu> writes:\n> \tI think I finished the HOWTO that I've been writing for a couple of days.\n> The HTML version is at http://www.brasileiro.net/roberto/howto. The\n> document explains the basic differences from Oracle's PL/SQL to\n> PostgreSQL's PL/pgSQL and how to port applications to Postgres. 
It comes\n> with several examples and the code for a port of Oracle's instr functions.\n> \tI wrote it in standard DocBook and would be glad to give it to the PG\n> team if there's interest in including its whole or part in the\n> documentation. Just let me know who I should send it to.\n\nThomas Lockhart is our lead documentation guy.\n\t\n> \tAnother issue is that I'd like to revamp the PL/pgSQL main docs. The\n> way it is right now makes it hard to find what you want because everything\n> is buried under the \"Description\" page. Plus the documentation could use _a lot_\n> more examples, especially of things that people use often, like \"FOR ROW\",\n> loops, etc.\n> \tWould that be something the PG team would be interested in?\n\nAbsolutely! You betcha! Go for it!\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 24 Feb 2001 23:30:53 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: PL/SQL-to-PL/pgSQL-HOWTO + PL/pgSQL documentation " }, { "msg_contents": "Roberto Mello writes:\n\n> \tI wrote it in standard DocBook and would be glad to give it to the PG\n> team if there's interest in including its whole or part in the\n> documentation. Just let me know who I should send it to.\n\nSend it to pgsql-docs@postgresql.org, either as a patch or as whatever you\nhave.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Sun, 25 Feb 2001 12:54:45 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: PL/SQL-to-PL/pgSQL-HOWTO + PL/pgSQL documentation" } ]
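As an illustration of the kind of example Roberto says the PL/pgSQL docs are missing, a minimal function with a FOR loop in the 7.x-era quoted-body syntax might look like this (a hypothetical sketch, not taken from the HOWTO or the existing documentation):

```sql
-- Sums the integers from 1 to n with a PL/pgSQL integer FOR loop.
-- The loop variable i is declared automatically by the FOR statement.
CREATE FUNCTION sum_to_n(integer) RETURNS integer AS '
DECLARE
    n ALIAS FOR $1;
    total integer := 0;
BEGIN
    FOR i IN 1 .. n LOOP
        total := total + i;
    END LOOP;
    RETURN total;
END;
' LANGUAGE 'plpgsql';
```

Calling it with SELECT sum_to_n(10); would return 55.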
[ { "msg_contents": "Hello.\n\nI have made a small contribution to the JDBC driver, in the JDBC\nv2.0 stuff. Whom do I send it to?\n\nOla\n\n---\nOla Sundell\nola@miranda.org - olas@wiw.org - ola.sundell@mimer.se\nhttp://miranda.org/~ola\nPGP key information:\npub 1024/744E6D8D 2000/02/13 Ola Sundell <ola@miranda.org>\nKey fingerprint = 8F CA 7C 6F EC 0D C0 23 1E 08 BF 32 FC 37 24 E3\n\n", "msg_date": "Sun, 25 Feb 2001 06:15:07 -0500 (EST)", "msg_from": "Ola Sundell <ola@miranda.org>", "msg_from_op": true, "msg_subject": "jdbc driver hack" }, { "msg_contents": "Send it to the jdbc list please.\n\n> Hello.\n> \n> I have made a small contribution to the JDBC driver, in the JDBC\n> v2.0 stuff. Whom do I send it to?\n> \n> Ola\n> \n> ---\n> Ola Sundell\n> ola@miranda.org - olas@wiw.org - ola.sundell@mimer.se\n> http://miranda.org/~ola\n> PGP key information:\n> pub 1024/744E6D8D 2000/02/13 Ola Sundell <ola@miranda.org>\n> Key fingerprint = 8F CA 7C 6F EC 0D C0 23 1E 08 BF 32 FC 37 24 E3\n> \n> \n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Feb 2001 13:02:21 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: jdbc driver hack" }, { "msg_contents": "At 06:15 25/02/01 -0500, Ola Sundell wrote:\n>Hello.\n>\n>I have made a small contribution to the JDBC driver, in the JDBC\n>v2.0 stuff. 
Whom do I send it to?\n\nThe JDBC list is the best place (which I've seen you already have).\n\nPS: I'm replying this only to get this into the mail archives ;-)\n\nPeter\n\n\n>Ola\n>\n>---\n>Ola Sundell\n>ola@miranda.org - olas@wiw.org - ola.sundell@mimer.se\n>http://miranda.org/~ola\n>PGP key information:\n>pub 1024/744E6D8D 2000/02/13 Ola Sundell <ola@miranda.org>\n>Key fingerprint = 8F CA 7C 6F EC 0D C0 23 1E 08 BF 32 FC 37 24 E3\n\n", "msg_date": "Thu, 01 Mar 2001 20:06:56 +0000", "msg_from": "Peter Mount <peter@retep.org.uk>", "msg_from_op": false, "msg_subject": "Re: jdbc driver hack" } ]
[ { "msg_contents": "Hi\n\nI've tried to search the site, but no usable pages turned up.\n\nMy question is about monitoring PostgreSQL and, if it turns out to be \"down\", \nnotifying a person to take action.\n\nI'm surprised that I couldn't find anything about it. Does anyone have any \nadvice? Anything that will fit into an ordinary NMS. Maybe SNMP will not fit, \nbut then some kind of \"database ping\", or equivalent.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 14.00-18.00 Email: kar@webline.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Sun, 25 Feb 2001 20:06:15 +0100", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": true, "msg_subject": "Monitor status" }, { "msg_contents": "On Sun, 25 Feb 2001, Kaare Rasmussen wrote:\n\n> Hi\n>\n> I've tried to search the site, but no usable pages turned up.\n>\n> My question is about monitoring PostgreSQL and, if it turns out to be \"down\",\n> notifying a person to take action.\n>\n> I'm surprised that I couldn't find anything about it. Does anyone have any\n> advice? Anything that will fit into an ordinary NMS. 
Maybe SNMP will not fit,\n> but then some kind of \"database ping\", or equal.\n\nHave you looked at bigbrother?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 128K ISDN from $22.00/mo - 56K Dialup from $16.00/mo at Pop4 Networking\n Online Campground Directory http://www.camping-usa.com\n Online Giftshop Superstore http://www.cloudninegifts.com\n==========================================================================\n\n\n\n", "msg_date": "Sun, 25 Feb 2001 22:33:45 -0500 (EST)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Monitor status" }, { "msg_contents": "Vince Vielhaber wrote:\n> \n> On Sun, 25 Feb 2001, Kaare Rasmussen wrote:\n> \n> > Hi\n> >\n> > I've tried to search the site, but no usable pages turned up.\n> >\n> > My question is about monitoring PostgreSQL and if it turns out to be \"down\"\n> > to notify a person to take action.\n> >\n> > I'm surprised that I couldn't find anything about it. Does anyone have an\n> > advice ? Anything that will fit into ordinary NMS. Maybe SNMP will not fit,\n> > but then some kind of \"database ping\", or equal.\n> \n> Have you looked at bigbrother?\n\nThere is also a component of netsaint (www.netsaint.org) that will\ncheck PostgreSQL postmasters. I hope to add to that the ability to\ndo specific queries, (right now, it only does a login, then\ndisconnects). (I am the primary maintainer of the plugins that do\nthe actual service checks, another developer originated the overall\nproject and maintains the core software itself).\n\nNetsaint, like big brother, can check a bunch of other services as\nwell. 
Netsaint has native support for notifications, and if you wish\ncan also be set up to automatically issue restarts and so on (I\nsuspect Big Brother can do so also, but I do not know).\n\nNetsaint is Free Software released under GPL.\n__\nKarl DeBisschop \nkdebisschop@alert.infoplease.com\nLearning Network Reference http://www.infoplease.com\nNetsaint Plugin Developer \nkdebisschop@users.sourceforge.net\n", "msg_date": "Mon, 26 Feb 2001 09:08:35 -0500", "msg_from": "kdebisschop@alert.infoplease.com", "msg_from_op": false, "msg_subject": "Re: Monitor status" }, { "msg_contents": "> There is also a component of netsaint (www.netsaint.org) that will\n> check PostgreSQL postmasters. I hope to add to that the ability to\n\nThanks.\n\nI've also seen a lot of announcements of OpenNMS at freshmeat. All I know is \nthat it's Java based, and as far as I can tell will work without agents.\n\nCan any of these operate in a mixed Linux / Win2000 environment? (I'd like to \nhave one tool only for NMS.)\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 14.00-18.00 Email: kar@webline.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Tue, 27 Feb 2001 00:49:41 +0100", "msg_from": "Kaare Rasmussen <kar@webline.dk>", "msg_from_op": true, "msg_subject": "Re: Monitor status" } ]
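The "database ping" Kaare asks about amounts to firing a trivial query on a schedule and alerting when it fails or times out. The check itself can be as small as the following (an illustrative sketch, independent of any particular monitoring tool; how it is scheduled and how failures are reported is left to the NMS):

```sql
-- If this round-trips successfully, the postmaster is up, accepting
-- connections, and able to execute queries. A monitor can run it,
-- e.g. via psql from cron, and page someone when it errors out.
SELECT 1 AS alive;
```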
[ { "msg_contents": "[ Send to hackers]\n\n> I'd be willing to consider using mmap as a compile-time option if it\n> can be shown to be a substantial performance win where it's available.\n> (I suspect that's a very big \"if\".) If it's not a substantial win,\n> I don't think we should accept the change --- the portability risks and\n> testing/maintenance costs loom too large for me.\n> \n\nI was considering it because you can use a much larger amount of shared\nmemory without reconfiguring the kernel.\n\n> BTW, how exactly is mmap a substitute for SysV shared memory? AFAICT\n> it's only defined to map a disk file into your address space, not to\n> allow a shared memory region to be set up that's independent of any\n> disk file.\n\nIt allows no backing store on disk. It is the BSD solution to SysV\nshared memory. 
Here are all the BSDi flags:\n\n> MAP_ANON Map anonymous memory not associated with any specific file.\n> The file descriptor used for creating MAP_ANON must be -1.\n> The offset parameter is ignored.\n\nHmm. Now that I read down to the \"nonstandard extensions\" part of the\nHPUX man page for mmap(), I find\n\n If MAP_ANONYMOUS is set in flags:\n\n o A new memory region is created and initialized to all zeros.\n This memory region can be shared only with descendants of\n the current process.\n\nWhile I've said before that I don't think it's really necessary for\nprocesses that aren't children of the postmaster to access the shared\nmemory, I'm not sure that I want to go over to a mechanism that makes it\n*impossible* for that to be done. Especially not if the only motivation\nis to avoid having to configure the kernel's shared memory settings.\n\nBesides, what makes you think there's not a limit on the size of shmem\nallocatable via mmap()?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Feb 2001 23:28:46 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] A patch for xlog.c " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > It allows no backing store on disk. It is the BSD solution to SysV\n> > share memory. Here are all the BSDi flags:\n> \n> > MAP_ANON Map anonymous memory not associated with any specific file.\n> > The file descriptor used for creating MAP_ANON must be -1.\n> > The offset parameter is ignored.\n> \n> Hmm. 
Now that I read down to the \"nonstandard extensions\" part of the\n> HPUX man page for mmap(), I find\n> \n> If MAP_ANONYMOUS is set in flags:\n> \n> o A new memory region is created and initialized to all zeros.\n> This memory region can be shared only with descendants of\n> the current process.\n> \n> While I've said before that I don't think it's really necessary for\n> processes that aren't children of the postmaster to access the shared\n> memory, I'm not sure that I want to go over to a mechanism that makes it\n> *impossible* for that to be done. Especially not if the only motivation\n> is to avoid having to configure the kernel's shared memory settings.\n\nAgreed. It would make it impossible and a possible limitation.\n\n> Besides, what makes you think there's not a limit on the size of shmem\n> allocatable via mmap()?\n\nI figured mmap() was different than SysV becuase mmap() is file based.\n\nI have had this item on the TODO list for a while:\n\n\t* Use mmap() rather than SYSV shared memory(?)\n\nShould I remove it?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Feb 2001 23:48:11 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I have had this item on the TODO list for a while:\n> \t* Use mmap() rather than SYSV shared memory(?)\n> Should I remove it?\n\nIt's fine as long as it's got that question mark on it ;-).\nI don't say we *shouldn't* do this, I'm just raising questions\nthat would need to be answered.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sun, 25 Feb 2001 23:58:45 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] A patch for xlog.c " }, { "msg_contents": "> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I have had this item on the TODO list for a while:\n> > \t* Use mmap() rather than SYSV shared memory(?)\n> > Should I remove it?\n> \n> It's fine as long as it's got that question mark on it ;-).\n> I don't say we *shouldn't* do this, I'm just raising questions\n> that would need to be answered.\n\nYea, it is one of those question mark things.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Sun, 25 Feb 2001 23:59:48 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "On Sun, Feb 25, 2001 at 11:28:46PM -0500, Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > It allows no backing store on disk. \n\nI.e. it allows you to map memory without an associated inode; the memory\nmay still be swapped. Of course, there is no problem with mapping an \ninode too, so that unrelated processes can join in. 
Solarix has a flag\nto pin the shared pages in RAM so they can't be swapped out.\n\n> > It is the BSD solution to SysV\n> > share memory. Here are all the BSDi flags:\n> \n> > MAP_ANON Map anonymous memory not associated with any specific\n> > file. The file descriptor used for creating MAP_ANON\n> > must be -1. The offset parameter is ignored.\n> \n> Hmm. Now that I read down to the \"nonstandard extensions\" part of the\n> HPUX man page for mmap(), I find\n> \n> If MAP_ANONYMOUS is set in flags:\n> \n> o A new memory region is created and initialized to all zeros.\n> This memory region can be shared only with descendants of\n> the current process.\n\nThis is supported on Linux and BSD, but not on Solarix 7. It's not \nnecessary; you can just map /dev/zero on SysV systems that don't \nhave MAP_ANON.\n\n> While I've said before that I don't think it's really necessary for\n> processes that aren't children of the postmaster to access the shared\n> memory, I'm not sure that I want to go over to a mechanism that makes it\n> *impossible* for that to be done. Especially not if the only motivation\n> is to avoid having to configure the kernel's shared memory settings.\n\nThere are enormous advantages to avoiding the need to configure kernel \nsettings. It makes PG a better citizen. PG is much easier to drop in \nand use if you don't need attention from the IT department.\n\nBut I don't know of any reason to avoid mapping an actual inode,\nso using mmap doesn't necessarily mean giving up sharing among\nunrelated processes.\n\n> Besides, what makes you think there's not a limit on the size of shmem\n> allocatable via mmap()?\n\nI've never seen any mmap limit documented. Since mmap() is how \neverybody implements shared libraries, such a limit would be equivalent \nto a limit on how much/many shared libraries are used. 
mmap() with \nMAP_ANONYMOUS (or its SysV /dev/zero equivalent) is a common, modern \nway to get raw storage for malloc(), so such a limit would be a limit\non malloc() too.\n\nThe mmap architecture comes to us from the Mach microkernel memory\nmanager, backported into BSD and then copied widely. Since it was\nthe fundamental mechanism for all memory operations in Mach, arbitrary\nlimits would make no sense. That it worked so well is the reason it \nwas copied everywhere else, so adding arbitrary limits while copying \nit would be silly. I don't think we'll see any systems like that.\n\nNathan Myers\nncm@zembu.com\n", "msg_date": "Mon, 26 Feb 2001 00:21:25 -0800", "msg_from": "ncm@zembu.com (Nathan Myers)", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "On Mon, 26 Feb 2001, Nathan Myers wrote:\n\n> > While I've said before that I don't think it's really necessary for\n> > processes that aren't children of the postmaster to access the shared\n> > memory, I'm not sure that I want to go over to a mechanism that makes it\n> > *impossible* for that to be done. Especially not if the only motivation\n> > is to avoid having to configure the kernel's shared memory settings.\n>\n> There are enormous advantages to avoiding the need to configure kernel\n> settings. It makes PG a better citizen. PG is much easier to drop in\n> and use if you don't need attention from the IT department.\n\nIs there a reason why Oracle still uses shared memory and hasn't moved to\nmmap()? Are there advantages to it that we aren't seeing, or is oracle\njust too much of a behemoth for that sort of overhaul? Don't go with the\nquick answer either ...\n\n> > Besides, what makes you think there's not a limit on the size of shmem\n> > allocatable via mmap()?\n>\n> I've never seen any mmap limit documented. 
Since mmap() is how\n> everybody implements shared libraries, such a limit would be equivalent\n> to a limit on how much/many shared libraries are used.\n\nThere are/will be limits based on how an admin sets his/her per user\ndatasize limits on their OS ...\n\n\n", "msg_date": "Mon, 26 Feb 2001 08:37:35 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "> On Sun, Feb 25, 2001 at 11:28:46PM -0500, Tom Lane wrote:\n> > Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > > It allows no backing store on disk. \n> \n> I.e. it allows you to map memory without an associated inode; the memory\n> may still be swapped. Of course, there is no problem with mapping an \n> inode too, so that unrelated processes can join in. Solarix has a flag\n> to pin the shared pages in RAM so they can't be swapped out.\n\nWe don't want to generate i/o to disk just for shared memory\nmodifications, that is why we can't use a disk file.\n\n> \n> > > It is the BSD solution to SysV\n> > > share memory. Here are all the BSDi flags:\n> > \n> > > MAP_ANON Map anonymous memory not associated with any specific\n> > > file. The file descriptor used for creating MAP_ANON\n> > > must be -1. The offset parameter is ignored.\n> > \n> > Hmm. Now that I read down to the \"nonstandard extensions\" part of the\n> > HPUX man page for mmap(), I find\n> > \n> > If MAP_ANONYMOUS is set in flags:\n> > \n> > o A new memory region is created and initialized to all zeros.\n> > This memory region can be shared only with descendants of\n> > the current process.\n> \n> This is supported on Linux and BSD, but not on Solarix 7. It's not \n> necessary; you can just map /dev/zero on SysV systems that don't \n> have MAP_ANON.\n\nOh, really. 
Yes, I have seen people do that.\n\n> > While I've said before that I don't think it's really necessary for\n> > processes that aren't children of the postmaster to access the shared\n> > memory, I'm not sure that I want to go over to a mechanism that makes it\n> > *impossible* for that to be done. Especially not if the only motivation\n> > is to avoid having to configure the kernel's shared memory settings.\n> \n> There are enormous advantages to avoiding the need to configure kernel \n> settings. It makes PG a better citizen. PG is much easier to drop in \n> and use if you don't need attention from the IT department.\n\nOne big advantage is that mmap() removes itself when all processes using\nit exit, while SysV stays around and has to be cleaned up manually in\nsome cases.\n\n> But I don't know of any reason to avoid mapping an actual inode,\n> so using mmap doesn't necessarily mean giving up sharing among\n> unrelated processes.\n\nSee above.\n\n> \n> > Besides, what makes you think there's not a limit on the size of shmem\n> > allocatable via mmap()?\n> \n> I've never seen any mmap limit documented. Since mmap() is how \n> everybody implements shared libraries, such a limit would be equivalent \n> to a limit on how much/many shared libraries are used. mmap() with \n> MAP_ANONYMOUS (or its SysV /dev/zero equivalent) is a common, modern \n> way to get raw storage for malloc(), so such a limit would be a limit\n> on malloc() too.\n> \n> The mmap architecture comes to us from the Mach microkernel memory\n> manager, backported into BSD and then copied widely. Since it was\n> the fundamental mechanism for all memory operations in Mach, arbitrary\n> limits would make no sense. That it worked so well is the reason it \n> was copied everywhere else, so adding arbitrary limits while copying \n> it would be silly. 
I don't think we'll see any systems like that.\n\nThis is encouraging.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Feb 2001 11:20:19 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "ncm@zembu.com (Nathan Myers) writes:\n> This is supported on Linux and BSD, but not on Solarix 7. It's not \n> necessary; you can just map /dev/zero on SysV systems that don't \n> have MAP_ANON.\n\nHPUX says:\n\n The mmap() function is supported for regular files. Support for any\n other type of file is unspecified.\n\n> But I don't know of any reason to avoid mapping an actual inode,\n\nHow about wasted I/O due to the kernel thinking it needs to reflect\nwrites to the memory region back out to the underlying file?\n\n> Since mmap() is how everybody implements shared libraries,\n\nNow *there's* a sweeping generalization. Documentation of this\nclaim, please?\n\n> The mmap architecture comes to us from the Mach microkernel memory\n> manager, backported into BSD and then copied widely.\n\nIf everyone copied the Mach implementation, why is it they don't even\nagree on the spellings of the user-visible flags?\n\n\nThis looks a lot like exchanging the devil we know (SysV shmem) for a\ndevil we don't know. Do I need to remind you about, for example, the\nmmap bugs in early Linux releases? (I still vividly remember having to\nabandon mmap on a project a few years back that needed to be portable\nto Linux. Perhaps that colors my opinions here.) 
I don't think the\nproblems with shmem are sufficiently large to justify venturing into\na whole new terra incognita of portability issues and kernel bugs.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Feb 2001 11:23:25 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] A patch for xlog.c " }, { "msg_contents": "Tom Lane <tgl@sss.pgh.pa.us> writes:\n\n> > Since mmap() is how everybody implements shared libraries,\n> \n> Now *there's* a sweeping generalization. Documentation of this\n> claim, please?\n\nI've seen a lot of shared library implementations (I used to be the\nGNU binutils maintainer), and Nathan is approximately correct. Most\nELF systems use a dynamic linker inherited from the original SVR4\nimplementation, which uses mmap. You can see this by running strace\non an SVR4 system. The *BSD and GNU dynamic linker implementations\nare of course independently derived, but they use mmap too.\n\nmmap is the natural way to implement ELF style shared libraries. The\nbasic operation you have to do is to map the shared library into the\nprocess memory space, and then to process a few relocations. Mapping\nthe shared library in can be done either using mmap, or using\nopen/read/close. For a large file, mmap is going to be much faster\nthan open/read/close, because it doesn't require actually reading the\nfile.\n\nThere are, of course, many non-ELF shared libraries implementations.\nSVR3 does not use mmap. SunOS does use mmap (SunOS shared libraries\nwere taken into SVR4 and the ELF standard). 
I don't know offhand\nabout AIX, Digital Unix, or Windows.\n\nmmap is standardized by the most recent version of POSIX.1.\n\nIan\n", "msg_date": "26 Feb 2001 08:57:08 -0800", "msg_from": "Ian Lance Taylor <ian@airs.com>", "msg_from_op": false, "msg_subject": "Re: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "Hello Tom,\n\nTuesday, February 27, 2001, 12:23:25 AM, you wrote:\n\nTL> This looks a lot like exchanging the devil we know (SysV shmem) for a\nTL> devil we don't know. Do I need to remind you about, for example, the\nTL> mmap bugs in early Linux releases? (I still vividly remember having to\nTL> abandon mmap on a project a few years back that needed to be portable\nTL> to Linux. Perhaps that colors my opinions here.) I don't think the\nTL> problems with shmem are sufficiently large to justify venturing into\nTL> a whole new terra incognita of portability issues and kernel bugs.\n\nTL> regards, tom lane\n\nthe only problem is because if we need to tune Postermaster to use\nlarge buffer while system havn't so many SYSV shared memory, in many\nsystemes, we need to recompile OS kernel, this is a small problem to install\nPGSQL to product environment.\n\n-- \nBest regards,\nXuYifeng\n\n\n", "msg_date": "Tue, 27 Feb 2001 10:27:22 +0800", "msg_from": "jamexu <jamexu@telekbird.com.cn>", "msg_from_op": false, "msg_subject": "Re[2]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "On Tue, 27 Feb 2001, jamexu wrote:\n\n> Hello Tom,\n>\n> Tuesday, February 27, 2001, 12:23:25 AM, you wrote:\n>\n> TL> This looks a lot like exchanging the devil we know (SysV shmem) for a\n> TL> devil we don't know. Do I need to remind you about, for example, the\n> TL> mmap bugs in early Linux releases? (I still vividly remember having to\n> TL> abandon mmap on a project a few years back that needed to be portable\n> TL> to Linux. Perhaps that colors my opinions here.) 
I don't think the\n> TL> problems with shmem are sufficiently large to justify venturing into\n> TL> a whole new terra incognita of portability issues and kernel bugs.\n>\n> TL> regards, tom lane\n>\n> the only problem is because if we need to tune Postermaster to use\n> large buffer while system havn't so many SYSV shared memory, in many\n> systemes, we need to recompile OS kernel, this is a small problem to install\n> PGSQL to product environment.\n\nWhat? You don't automatically recompile your OS kernel when you build a\nsystem in the first place?? First step on any OS install of FreeBSD is to\nrid myself of the 'extras' that are in the generic kernel, and enable\nSharedMemory (even if I'm not using PgSQL on that machine) ...\n\n\n", "msg_date": "Mon, 26 Feb 2001 23:00:05 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re[2]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": ">> the only problem is because if we need to tune Postermaster to use\n>> large buffer while system havn't so many SYSV shared memory, in many\n>> systemes, we need to recompile OS kernel, this is a small problem to install\n>> PGSQL to product environment.\n\nOf course, if you haven't got mmap(), a recompile won't help ...\n\nI'd be somewhat more enthusiastic about mmap if I thought we could\nabandon the SysV shmem support completely, but I don't foresee that\nhappening for a long while yet.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Feb 2001 22:05:52 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re[2]: Re: [PATCHES] A patch for xlog.c " }, { "msg_contents": "Hello The,\n\nTuesday, February 27, 2001, 11:00:05 AM, you wrote:\n\nTHH> On Tue, 27 Feb 2001, jamexu wrote:\n\n>> Hello Tom,\n>>\n>> Tuesday, February 27, 2001, 12:23:25 AM, you wrote:\n>>\n>> TL> This looks a lot like exchanging the devil we know (SysV shmem) for a\n>> TL> devil we don't know. 
Do I need to remind you about, for example, the\n>> TL> mmap bugs in early Linux releases? (I still vividly remember having to\n>> TL> abandon mmap on a project a few years back that needed to be portable\n>> TL> to Linux. Perhaps that colors my opinions here.) I don't think the\n>> TL> problems with shmem are sufficiently large to justify venturing into\n>> TL> a whole new terra incognita of portability issues and kernel bugs.\n>>\n>> TL> regards, tom lane\n>>\n>> the only problem is that if we need to tune the Postmaster to use a\n>> large buffer while the system doesn't have enough SYSV shared memory, on many\n>> systems we need to recompile the OS kernel; this is a problem when installing\n>> PGSQL in a production environment.\n\nTHH> What? You don't automatically recompile your OS kernel when you build a\nTHH> system in the first place?? First step on any OS install of FreeBSD is to\nTHH> rid myself of the 'extras' that are in the generic kernel, and enable\nTHH> SharedMemory (even if I'm not using PgSQL on that machine) ...\n\nheihei, why do you think users are always using FreeBSD and not other\nUNIX systems?\nYour assumption is false.\n\n---\nXu Yifeng\n\n\n", "msg_date": "Tue, 27 Feb 2001 11:18:34 +0800", "msg_from": "jamexu <jamexu@telekbird.com.cn>", "msg_from_op": false, "msg_subject": "Re[3]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "On Tue, 27 Feb 2001, jamexu wrote:\n\n> Hello The,\n>\n> Tuesday, February 27, 2001, 11:00:05 AM, you wrote:\n>\n> THH> On Tue, 27 Feb 2001, jamexu wrote:\n>\n> >> Hello Tom,\n> >>\n> >> Tuesday, February 27, 2001, 12:23:25 AM, you wrote:\n> >>\n> >> TL> This looks a lot like exchanging the devil we know (SysV shmem) for a\n> >> TL> devil we don't know. Do I need to remind you about, for example, the\n> >> TL> mmap bugs in early Linux releases? (I still vividly remember having to\n> >> TL> abandon mmap on a project a few years back that needed to be portable\n> >> TL> to Linux. Perhaps that colors my opinions here.) 
I don't think the\n> >> TL> problems with shmem are sufficiently large to justify venturing into\n> >> TL> a whole new terra incognita of portability issues and kernel bugs.\n> >>\n> >> TL> regards, tom lane\n> >>\n> >> the only problem is that if we need to tune the Postmaster to use a\n> >> large buffer while the system doesn't have enough SYSV shared memory, on many\n> >> systems we need to recompile the OS kernel; this is a problem when installing\n> >> PGSQL in a production environment.\n>\n> THH> What? You don't automatically recompile your OS kernel when you build a\n> THH> system in the first place?? First step on any OS install of FreeBSD is to\n> THH> rid myself of the 'extras' that are in the generic kernel, and enable\n> THH> SharedMemory (even if I'm not using PgSQL on that machine) ...\n>\n> heihei, why do you think users are always using FreeBSD and not other\n> UNIX systems?\n> Your assumption is false.\n\nI don't ... I personally admin FreeBSD and Solaris boxen ... FreeBSD,\nfirst step is to always recompile the kernel after an install, to get rid\nof crud and add Shared Memory ... the Solaris boxes, you add a couple of\nlines to /etc/system and reboot, and you have Shared Memory ...\n\nI don't know about other 'commercial OSs', but I'd be shocked if a Linux\nadmin never does any kernel config cleanup before going into production *shrug*\n\n\n\n", "msg_date": "Mon, 26 Feb 2001 23:41:30 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re[3]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "> > the only problem is that if we need to tune the Postmaster to use a\n> > large buffer while the system doesn't have enough SYSV shared memory, on many\n> > systems we need to recompile the OS kernel; this is a problem when installing\n> > PGSQL in a production environment.\n> \n> What? You don't automatically recompile your OS kernel when you build a\n> system in the first place?? 
First step on any OS install of FreeBSD is to\n> rid myself of the 'extras' that are in the generic kernel, and enable\n> SharedMemory (even if I'm not using PgSQL on that machine) ...\n\nHe is saying the machine is already in production. Suppose he has run\nPostgreSQL for a few months, then needs to increase the number of buffers.\nHe can't exceed the kernel limit unless he recompiles.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Feb 2001 22:42:56 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re[2]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "On Mon, 26 Feb 2001, Bruce Momjian wrote:\n\n> > > the only problem is that if we need to tune the Postmaster to use a\n> > > large buffer while the system doesn't have enough SYSV shared memory, on many\n> > > systems we need to recompile the OS kernel; this is a problem when installing\n> > > PGSQL in a production environment.\n> >\n> > What? You don't automatically recompile your OS kernel when you build a\n> > system in the first place?? First step on any OS install of FreeBSD is to\n> > rid myself of the 'extras' that are in the generic kernel, and enable\n> > SharedMemory (even if I'm not using PgSQL on that machine) ...\n>\n> He is saying the machine is already in production. Suppose he has run\n> PostgreSQL for a few months, then needs to increase the number of buffers.\n> He can't exceed the kernel limit unless he recompiles.\n\nOkay ... same applies to MMAP() though, I hate to disappoint ... there are\nkernel limits that, at least under FreeBSD, do require a kernel\nrecompile in order to exceed ... a lot of them have been moved (maybe all\nnow) to sysctl settable values ... 
but, again, under some of the\ncommercial OSs, I don't think that anything but (as in Solaris) modifying\nsomething like /etc/system and rebooting will fix ...\n\n\n", "msg_date": "Mon, 26 Feb 2001 23:56:29 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Re[2]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "> Okay ... same applies to MMAP() though, I hate to disappoint ... there are\n> kernel limits that, at least under FreeBSD, do require a kernel\n> recompile in order to exceed ... a lot of them have been moved (maybe all\n> now) to sysctl settable values ... but, again, under some of the\n> commercial OSs, I don't think that anything but (as in Solaris) modifying\n> something like /etc/system and rebooting will fix ...\n\nBut the mmap() limits are much larger than the SysV limits, aren't they,\nto the point where you would never have to fiddle with the mmap() limits\nto get 100MB of buffers, right?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Feb 2001 23:04:42 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re[2]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "On Mon, 26 Feb 2001, Bruce Momjian wrote:\n\n> > Okay ... same applies to MMAP() though, I hate to disappoint ... there are\n> > kernel limits that, at least under FreeBSD, do require a kernel\n> > recompile in order to exceed ... a lot of them have been moved (maybe all\n> > now) to sysctl settable values ... 
but, again, under some of the\n> > commercial OSs, I don't think that anything but (as in Solaris) modifying\n> > something like /etc/system and rebooting will fix ...\n>\n> But the mmap() limits are much larger than the SysV limits, aren't they,\n> to the point where you would never have to fiddle with the mmap() limits\n> to get 100MB of buffers, right?\n\nNot necessarily ... it depends on the admin of the server ... then again,\nI don't consider it a hassle to add a couple of lines to my kernel config\n(or /etc/system) and reboot *shrug* to me, it's just part of the admin\nprocess ...\n\n\n", "msg_date": "Tue, 27 Feb 2001 00:26:16 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Re[2]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "> > But the mmap() limits are much larger than the SysV limits, aren't they,\n> > to the point where you would never have to fiddle with the mmap() limits\n> > to get 100MB of buffers, right?\n> \n> Not necessarily ... it depends on the admin of the server ... then again,\n> I don't consider it a hassle to add a couple of lines to my kernel config\n> (or /etc/system) and reboot *shrug* to me, it's just part of the admin\n> process ...\n\nAre the kernel SysV defaults smaller than the mmap() kernel defaults?\n\nI know it is easy for you, but the number of reports and problems we\nhear about shows it is an issue for some.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. 
| Drexel Hill, Pennsylvania 19026\n", "msg_date": "Mon, 26 Feb 2001 23:27:48 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re[2]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I know it is easy for you, but the number of reports and problems we\n> hear about shows it is an issue for some.\n\nWe hear some reports, but not a lot. We have no idea whatever what\nproblems might ensue if we used mmap instead. I'm dubious that SysV\nshmem creates enough problems to justify replacing it with a solution\nof essentially unknown portability characteristics...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 26 Feb 2001 23:45:18 -0500", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Re[2]: Re: [PATCHES] A patch for xlog.c " }, { "msg_contents": "Hello Tom,\n\nTuesday, February 27, 2001, 12:45:18 PM, you wrote:\n\nTL> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I know it is easy for you, but the number of reports and problems we\n>> hear about shows it is an issue for some.\n\nTL> We hear some reports, but not a lot. We have no idea whatever what\nTL> problems might ensue if we used mmap instead. I'm dubious that SysV\nTL> shmem creates enough problems to justify replacing it with a solution\nTL> of essentially unknown portability characteristics...\n\nTL> regards, tom lane\n\ncould anyone investigate mmap() in many modern UNIX systems to see whether\nmmap() is really so un-portable?\n\nit seems mmap() is a portability problem, as you said, but I think SYSV\nshmem for PGSQL is an installation problem. 
you push the difficulties to\nthe end user, and take the easy tasks for yourself.\n\nXu Yifeng\n\n\n", "msg_date": "Tue, 27 Feb 2001 13:45:44 +0800", "msg_from": "Xu Yifeng <jamexu@telekbird.com.cn>", "msg_from_op": false, "msg_subject": "Re[4]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "On Tue, 27 Feb 2001, Xu Yifeng wrote:\n\n> Hello Tom,\n>\n> Tuesday, February 27, 2001, 12:45:18 PM, you wrote:\n>\n> TL> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I know it is easy for you, but the number of reports and problems we\n> >> hear about shows it is an issue for some.\n>\n> TL> We hear some reports, but not a lot. We have no idea whatever what\n> TL> problems might ensue if we used mmap instead. I'm dubious that SysV\n> TL> shmem creates enough problems to justify replacing it with a solution\n> TL> of essentially unknown portability characteristics...\n>\n> TL> regards, tom lane\n>\n> could anyone investigate mmap() in many modern UNIX systems to see whether\n> mmap() is really so un-portable?\n>\n> it seems mmap() is a portability problem, as you said, but I think SYSV\n> shmem for PGSQL is an installation problem. you push the difficulties to\n> the end user, and take the easy tasks for yourself.\n\nConsidering that, so far as I can tell, both you and Bruce are the only\nones that are really heavy on moving away from SysV ... how many ppl are\nactually finding it to be that much more difficult? 
:)\n\n\n", "msg_date": "Tue, 27 Feb 2001 08:28:43 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re[4]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "> I don't know about other 'commercial OSs', but I'd be shocked if a Linux\n> admin never does any kernel config cleanup before going into production *shrug*\n\noops...\n\n - Thomas\n", "msg_date": "Tue, 27 Feb 2001 14:31:35 +0000", "msg_from": "Thomas Lockhart <lockhart@alumni.caltech.edu>", "msg_from_op": false, "msg_subject": "Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "> > could anyone investigate mmap() in many modern UNIX systems to see whether\n> > mmap() is really so un-portable?\n> >\n> > it seems mmap() is a portability problem, as you said, but I think SYSV\n> > shmem for PGSQL is an installation problem. you push the difficulties to\n> > the end user, and take the easy tasks for yourself.\n> \n> Considering that, so far as I can tell, both you and Bruce are the only\n> ones that are really heavy on moving away from SysV ... how many ppl are\n> actually finding it to be that much more difficult? :)\n\nI am not sure I would call myself _heavy_ on it. I suggest researching\nit on platforms that support anon. mmap() to reduce administration load\nwhen increasing the number of buffers. If it is not a big win, there is\nno reason to add support for it. We clearly will be keeping SysV for a\nlong time, so adding another shared memory system, mmap(), should only\nbe done for a very good reason.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 853-3000\n + If your life is a hard drive, | 830 Blythe Avenue\n + Christ can be your backup. | Drexel Hill, Pennsylvania 19026\n", "msg_date": "Tue, 27 Feb 2001 11:23:44 -0500 (EST)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Re[4]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "The Hermit Hacker writes:\n\n> I don't ... 
I personally admin FreeBSD and Solaris boxen ... FreeBSD,\n> first step is to always recompile the kernel after an install, to get rid\n> of crud and add Shared Memory ... the Solaris boxes, you add a couple of\n> lines to /etc/system and reboot, and you have Shared Memory ...\n>\n> I don't know about other 'commercial OSs', but I'd be shocked if a Linux\n> admin never does any kernel config cleanup before going into production *shrug*\n\nLinux allows you to load and unload kernel modules, while the system is\nrunning, to add and remove stuff as you need it. But this is moot because\nLinux also allows you to increase shared memory (up to the total\naddressable memory) while the system is running. Recompiling Linux\nkernels is a thing of the past with modern distributions.\n\n-- \nPeter Eisentraut peter_e@gmx.net http://yi.org/peter-e/\n\n", "msg_date": "Tue, 27 Feb 2001 17:25:35 +0100 (CET)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Re[3]: Re: [PATCHES] A patch for xlog.c" }, { "msg_contents": "On Tue, 27 Feb 2001, Peter Eisentraut wrote:\n\n> The Hermit Hacker writes:\n>\n> > I don't ... I personally admin FreeBSD and Solaris boxen ... FreeBSD,\n> > first step is to always recompile the kernel after an install, to get rid\n> > of crud and add Shared Memory ... the Solaris boxes, you add a couple of\n> > lines to /etc/system and reboot, and you have Shared Memory ...\n> >\n> > I don't know about other 'commercial OSs', but I'd be shocked if a Linux\n> > admin never does any kernel config cleanup before going into production *shrug*\n>\n> Linux allows you to load and unload kernel modules, while the system is\n> running, to add and remove stuff as you need it. But this is moot because\n> Linux also allows you to increase shared memory (up to the total\n> addressable memory) while the system is running. 
Recompiling Linux\n> kernels is a thing of the past with modern distributions.\n\nActually, just found that out for FreeBSD too *sigh* You do have to\nenable SYSV* in the kernel itself, but increasing shared memory and\nsemaphores is a simple sysctl that can be run while the system is live ...\n\n\n", "msg_date": "Tue, 27 Feb 2001 13:26:03 -0400 (AST)", "msg_from": "The Hermit Hacker <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Re[3]: Re: [PATCHES] A patch for xlog.c" } ]