[ { "msg_contents": "Does someone from core want to inform bugtraq about 7.2.2?\nCheers,\n- Stuart\n\nWestcountry Design & Print,\nHeron Road, Sowton, Exeter. \nEX2 7NF - also at -\n17 Brest Road, Derriford,\nPlymouth. PL6 5AA\nEngland\nwww.westcountry-design-print.co.uk\n\n\n\n\n\ntell Bugtraq about 7.2.2\n\n\nDoes someone from core want to inform bugtraq about 7.2.2?\nCheers,\n- Stuart\n\nWestcountry Design & Print,\nHeron Road, Sowton, Exeter. \nEX2 7NF - also at -\n17 Brest Road, Derriford,\nPlymouth. PL6 5AA\nEngland\nwww.westcountry-design-print.co.uk", "msg_date": "Wed, 28 Aug 2002 10:27:19 +0100", "msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>", "msg_from_op": true, "msg_subject": "tell Bugtraq about 7.2.2" }, { "msg_contents": "\nhaving never had to do it before, do you know what the procedure is?\n\nOn Wed, 28 Aug 2002, Henshall, Stuart - WCP wrote:\n\n> Does someone from core want to inform bugtraq about 7.2.2?\n> Cheers,\n> - Stuart\n>\n> Westcountry Design & Print,\n> Heron Road, Sowton, Exeter.\n> EX2 7NF - also at -\n> 17 Brest Road, Derriford,\n> Plymouth. PL6 5AA\n> England\n> www.westcountry-design-print.co.uk\n>\n\n", "msg_date": "Wed, 28 Aug 2002 11:35:31 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: tell Bugtraq about 7.2.2" }, { "msg_contents": "On Wed, 28 Aug 2002, Marc G. 
Fournier wrote:\n\n> \n> having never had to do it before, do you know what the procedure is?\n\nI thought the announcement was forwarded to Bugtraq by Lamar?\n\n---\nDate: Fri, 23 Aug 2002 23:35:59 -0400\nFrom: Lamar Owen <lamar.owen@wgcr.org>\nTo: bugtraq@securityfocus.com\nSubject: Fwd: [GENERAL] PostgreSQL 7.2.2: Security Release\n---\n\nGavin\n\n\n\n", "msg_date": "Thu, 29 Aug 2002 00:57:01 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: tell Bugtraq about 7.2.2" }, { "msg_contents": "On Wednesday 28 August 2002 10:35 am, Marc G. Fournier wrote:\n> having never had to do it before, do you know what the procedure is?\n\nPost to bugtraq@securityfocus.com -- it's moderated, and I don't know if \nthere's a subscription requirement.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 28 Aug 2002 14:32:24 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: tell Bugtraq about 7.2.2" }, { "msg_contents": "On Wednesday 28 August 2002 02:32 pm, Lamar Owen wrote:\n> On Wednesday 28 August 2002 10:35 am, Marc G. Fournier wrote:\n> > having never had to do it before, do you know what the procedure is?\n>\n> Post to bugtraq@securityfocus.com -- it's moderated, and I don't know if\n> there's a subscription requirement.\n\nOh, but I did forward this one. This is just an informational message for \nfuture reference.\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Wed, 28 Aug 2002 14:52:20 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: tell Bugtraq about 7.2.2" }, { "msg_contents": "On Wed, 28 Aug 2002, Lamar Owen wrote:\n\n> On Wednesday 28 August 2002 10:35 am, Marc G. 
Fournier wrote:\n> > having never had to do it before, do you know what the procedure is?\n>\n> Post to bugtraq@securityfocus.com -- it's moderated, and I don't know if\n> there's a subscription requirement.\n>\n\nLamar posted the info about 7.2.2 on bugtraq. I saw it.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 28 Aug 2002 15:03:25 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: tell Bugtraq about 7.2.2" } ]
[ { "msg_contents": "\n//@(#) Mordred Labs advisory 0x0005\n\nRelease data: 23/08/02\nName: Several buffer overruns in PostgreSQL\nVersions affected: all versions\nRisk: from average to low\n\n--[ Description:\n\nPostgreSQL provides you with several builint geo types\n(circle,polygon,box...etc). \nUnfortunately the code for geo functions written in a very insecure style\nand should be totally rewritten, as a quick search revealed this:\n\n---[ Details:\n\n1)\n\nUpon invoking a polygon(integer, circle) function \na src/backend/utils/adt/geo_ops.c:circle_poly() function will gets called,\nwhich suffers from a buffer overflow.\n\n2) A src/backend/adt/utils/geo_ops.c:path_encode() fails to detect a buffer\noverrun condition. It is called in multiple places, the most\ninteresting are path_out() and poly_out() functions.\n\n3) Upon converting a char string to a path object, a\nsrc/backend/utils/adt/geo_ops.c:path_in() function will gets called,\nwhich suffers from a buffer overrun, caused by a very long argument.\n\n4) A src/backend/utils/adt/geo_ops.c:poly_in() function fails to detect a\nbuffer \noverrun condition caused by a very long argument.\n\n5) A src/backend/utils/adt/geo_ops.c:path_add() also fails to detect a\nsimple buffer\noverrun.\n\n6)\n\nAnd finally, a truly dumb feature (not a security related though) in\npostmaster: \n$ postmaster -o `perl -e 'print \"\\x66\" x 1200'`\nSegmentation fault (core dumped)\n\n--[ How to reproduce:\n\nI only show how to reproduce a first buffer overrun condition, as the others\ntoo memory consuming :-)\n\n1)\n\ntemplate1=# select polygon(268435455,'((1,2),3)'::circle);\npqReadData() -- backend closed the channel unexpectedly.\n This probably means the backend terminated abnormally\n before or while processing the request.\nThe connection to the server was lost. 
Attempting reset: Failed.\n!#\n\n--[ Solution\n\nDrop the vulnerable functions.\n\n\n\n\n________________________________________________________________________\nThis letter has been delivered unencrypted. We'd like to remind you that\nthe full protection of e-mail correspondence is provided by S-mail\nencryption mechanisms if only both, Sender and Recipient use S-mail.\nRegister at S-mail.com: http://www.s-mail.com/inf/en\n", "msg_date": "Wed, 28 Aug 2002 09:51:31 +0000", "msg_from": "Sir Mordred The Traitor <mordred@s-mail.com>", "msg_from_op": true, "msg_subject": "@(#)Mordre Labs advisory 0x0005: Several buffer overruns in\n PostgreSQL" }, { "msg_contents": "Sir Mordred The Traitor <mordred@s-mail.com> writes:\n> Upon invoking a polygon(integer, circle) function a\n> src/backend/utils/adt/geo_ops.c:circle_poly() function will gets\n> called, which suffers from a buffer overflow.\n> \n> 2) A src/backend/adt/utils/geo_ops.c:path_encode() fails to detect a\n> buffer overrun condition. 
It is called in multiple places, the most\n> interesting are path_out() and poly_out() functions.\n\n> 5) A src/backend/utils/adt/geo_ops.c:path_add() also fails to detect\n> a simple buffer overrun.\n\nI've attached a patch which should fix these problems.\n\n> 3) Upon converting a char string to a path object, a\n> src/backend/utils/adt/geo_ops.c:path_in() function will gets called,\n> which suffers from a buffer overrun, caused by a very long argument.\n\n> 4) A src/backend/utils/adt/geo_ops.c:poly_in() function fails to\n> detect a buffer overrun condition caused by a very long argument.\n\nI wasn't able to reproduce either of these (wouldn't it require an\ninput string with several hundred thousand commas?), can you give me a\ntest-case?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC", "msg_date": "28 Aug 2002 15:10:57 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: @(#)Mordre Labs advisory 0x0005: Several buffer overruns in\n\tPostgreSQL" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nNeil Conway wrote:\n> Sir Mordred The Traitor <mordred@s-mail.com> writes:\n> > Upon invoking a polygon(integer, circle) function a\n> > src/backend/utils/adt/geo_ops.c:circle_poly() function will gets\n> > called, which suffers from a buffer overflow.\n> > \n> > 2) A src/backend/adt/utils/geo_ops.c:path_encode() fails to detect a\n> > buffer overrun condition. 
It is called in multiple places, the most\n> > interesting are path_out() and poly_out() functions.\n> \n> > 5) A src/backend/utils/adt/geo_ops.c:path_add() also fails to detect\n> > a simple buffer overrun.\n> \n> I've attached a patch which should fix these problems.\n> \n> > 3) Upon converting a char string to a path object, a\n> > src/backend/utils/adt/geo_ops.c:path_in() function will gets called,\n> > which suffers from a buffer overrun, caused by a very long argument.\n> \n> > 4) A src/backend/utils/adt/geo_ops.c:poly_in() function fails to\n> > detect a buffer overrun condition caused by a very long argument.\n> \n> I wasn't able to reproduce either of these (wouldn't it require an\n> input string with several hundred thousand commas?), can you give me a\n> test-case?\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 28 Aug 2002 17:17:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: @(#)Mordre Labs advisory 0x0005: Several buffer overruns" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nNeil Conway wrote:\n> Sir Mordred The Traitor <mordred@s-mail.com> writes:\n> > Upon invoking a polygon(integer, circle) function a\n> > src/backend/utils/adt/geo_ops.c:circle_poly() function will gets\n> > called, which suffers from a buffer overflow.\n> > \n> > 2) A src/backend/adt/utils/geo_ops.c:path_encode() fails to detect a\n> > buffer overrun condition. 
It is called in multiple places, the most\n> > interesting are path_out() and poly_out() functions.\n> \n> > 5) A src/backend/utils/adt/geo_ops.c:path_add() also fails to detect\n> > a simple buffer overrun.\n> \n> I've attached a patch which should fix these problems.\n> \n> > 3) Upon converting a char string to a path object, a\n> > src/backend/utils/adt/geo_ops.c:path_in() function will gets called,\n> > which suffers from a buffer overrun, caused by a very long argument.\n> \n> > 4) A src/backend/utils/adt/geo_ops.c:poly_in() function fails to\n> > detect a buffer overrun condition caused by a very long argument.\n> \n> I wasn't able to reproduce either of these (wouldn't it require an\n> input string with several hundred thousand commas?), can you give me a\n> test-case?\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n> \n> http://archives.postgresql.org\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 29 Aug 2002 19:05:49 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: @(#)Mordre Labs advisory 0x0005: Several buffer overruns" } ]
[ { "msg_contents": "No idea sorry :(\n- Stuart\n\n> -----Original Message-----\n> From: Marc G. Fournier [mailto:scrappy@hub.org]\n> Sent: 28 August 2002 15:36\n> To: Henshall, Stuart - WCP\n> Cc: 'pgsql-hackers@postgresql.org'\n> Subject: Re: [HACKERS] tell Bugtraq about 7.2.2\n> \n> \n> \n> having never had to do it before, do you know what the procedure is?\n> \n> On Wed, 28 Aug 2002, Henshall, Stuart - WCP wrote:\n> \n> > Does someone from core want to inform bugtraq about 7.2.2?\n> > Cheers,\n> > - Stuart\n> >\n> > Westcountry Design & Print,\n> > Heron Road, Sowton, Exeter.\n> > EX2 7NF - also at -\n> > 17 Brest Road, Derriford,\n> > Plymouth. PL6 5AA\n> > England\n> > www.westcountry-design-print.co.uk\n> >\n> \n\n\n\n\n\nRE: [HACKERS] tell Bugtraq about 7.2.2\n\n\nNo idea sorry :(\n- Stuart\n\n> -----Original Message-----\n> From: Marc G. Fournier [mailto:scrappy@hub.org]\n> Sent: 28 August 2002 15:36\n> To: Henshall, Stuart - WCP\n> Cc: 'pgsql-hackers@postgresql.org'\n> Subject: Re: [HACKERS] tell Bugtraq about 7.2.2\n> \n> \n> \n> having never had to do it before, do you know what the procedure is?\n> \n> On Wed, 28 Aug 2002, Henshall, Stuart - WCP wrote:\n> \n> > Does someone from core want to inform bugtraq about 7.2.2?\n> > Cheers,\n> > - Stuart\n> >\n> > Westcountry Design & Print,\n> > Heron Road, Sowton, Exeter.\n> > EX2 7NF - also at -\n> > 17 Brest Road, Derriford,\n> > Plymouth. PL6 5AA\n> > England\n> > www.westcountry-design-print.co.uk\n> >\n>", "msg_date": "Wed, 28 Aug 2002 15:40:47 +0100", "msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>", "msg_from_op": true, "msg_subject": "Re: tell Bugtraq about 7.2.2" } ]
[ { "msg_contents": "oops, sorry your correct.\n- Stuart\n\n> -----Original Message-----\n> From: Gavin Sherry [mailto:swm@linuxworld.com.au]\n> Sent: 28 August 2002 15:57\n> To: Marc G. Fournier\n> Cc: Henshall, Stuart - WCP; 'pgsql-hackers@postgresql.org'\n> Subject: Re: [HACKERS] tell Bugtraq about 7.2.2\n> \n> \n> On Wed, 28 Aug 2002, Marc G. Fournier wrote:\n> \n> > \n> > having never had to do it before, do you know what the procedure is?\n> \n> I thought the announcement was forwarded to Bugtraq by Lamar?\n> \n> ---\n> Date: Fri, 23 Aug 2002 23:35:59 -0400\n> From: Lamar Owen <lamar.owen@wgcr.org>\n> To: bugtraq@securityfocus.com\n> Subject: Fwd: [GENERAL] PostgreSQL 7.2.2: Security Release\n> ---\n> \n> Gavin\n> \n> \n> \n\n\n\n\n\nRE: [HACKERS] tell Bugtraq about 7.2.2\n\n\noops, sorry your correct.\n- Stuart\n\n> -----Original Message-----\n> From: Gavin Sherry [mailto:swm@linuxworld.com.au]\n> Sent: 28 August 2002 15:57\n> To: Marc G. Fournier\n> Cc: Henshall, Stuart - WCP; 'pgsql-hackers@postgresql.org'\n> Subject: Re: [HACKERS] tell Bugtraq about 7.2.2\n> \n> \n> On Wed, 28 Aug 2002, Marc G. Fournier wrote:\n> \n> > \n> > having never had to do it before, do you know what the procedure is?\n> \n> I thought the announcement was forwarded to Bugtraq by Lamar?\n> \n> ---\n> Date: Fri, 23 Aug 2002 23:35:59 -0400\n> From: Lamar Owen <lamar.owen@wgcr.org>\n> To: bugtraq@securityfocus.com\n> Subject: Fwd: [GENERAL] PostgreSQL 7.2.2: Security Release\n> ---\n> \n> Gavin\n> \n> \n>", "msg_date": "Wed, 28 Aug 2002 15:51:29 +0100", "msg_from": "\"Henshall, Stuart - WCP\" <SHenshall@westcountrypublications.co.uk>", "msg_from_op": true, "msg_subject": "Re: tell Bugtraq about 7.2.2" } ]
[ { "msg_contents": "What I changed is covered in the CHANGES file.\nNote that this includes a bug fix I already subimtted.\nThese changes are versus 7.3 CVS and may not be backwards compatible with 7.2.\nThey do not include a bug fix for a problem I reported with cube_yyerror.\nA context diff is attached.", "msg_date": "Wed, 28 Aug 2002 09:53:21 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": true, "msg_subject": "contrib/cube update" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi,\n\nI'm very new to this project and inspired by recent security release, I\nstarted to audit postgresql source against common mistakes with sprintf().\n\nI mostly found problems with sprintf() used on statically allocated\nbuffers or dynamically allocated buffers with random constant size.\n\nI used lib/stringinfo.h functions when I was sure palloc()-memory\nallocation was the right thing to do and I felt like code needed to\nconstruct a complete string no matter how complex.\n\nThere were places where I just changes sprintf() to snprintf(). Like in\nsome *BSD dl loading functions etc.\n\nThere were also places where I could identify the possible bug but\ndidn't know 'the' right way to fix it. As I say, I don't know the\ncodebase very well so I really didn't know what auxiliarity functions\nthere are to use. These parts are marked as FIXME and should be easily\nidentified by looking at the patch (link below - it is a big one).\n\nThere were also simple mistakes like in src/backend/tioga/tgRecipe.c\n- -\tsprintf(qbuf, Q_LOOKUP_EDGES_IN_RECIPE, name);\n- -\tpqres = PQexec(qbuf);\n+\tsnprintf(qbuf, MAX_QBUF_LENGTH, Q_LOOKUP_EDGES_IN_RECIPE, name);\n~ \tpqres = PQexec(qbuf);\n~ \tif (*pqres == 'R' || *pqres == 'E')\n\nNotice how previous PQexec() is removed. There were two of them.\n\nSome of my fixes cause code to be a bit slower because of dynamically\nallocated mem, but it also fixes a lot of ptr+strlen(ptr) -style\nperformance problems. I didn't particularly try to fix these but some of\nthem are corrected by simply using lib/stringinfo.h\n\nPlease take look at this patch but since I have worked three long nights\nwith this one, there probably are bugs. 
I tried compiling it with\n\"configure --with-tcl --with-perl --with-python\" and at least it\ncompiled for me :) But that's about all I can promise.\n\ndiffstat postgresql-7.2.2-sprintf.patch\n~ contrib/cube/cube.c | 26 --\n~ contrib/cube/cubeparse.y | 11\n~ contrib/intarray/_int.c | 29 +-\n~ contrib/rserv/rserv.c | 30 +-\n~ contrib/seg/segparse.y | 18 -\n~ contrib/spi/refint.c | 39 +--\n~ contrib/spi/timetravel.c | 12\n~ doc/src/sgml/spi.sgml | 2\n~ src/backend/parser/analyze.c | 2\n~ src/backend/port/dynloader/freebsd.c | 10\n~ src/backend/port/dynloader/netbsd.c | 11\n~ src/backend/port/dynloader/nextstep.c | 2\n~ src/backend/port/dynloader/openbsd.c | 10\n~ src/backend/postmaster/postmaster.c | 2\n~ src/backend/storage/file/fd.c | 1\n~ src/backend/storage/ipc/shmqueue.c | 1\n~ src/backend/tioga/tgRecipe.c | 11\n~ src/backend/utils/adt/ri_triggers.c | 312\n++++++++++++------------\n~ src/bin/pg_dump/pg_dump.c | 14 -\n~ src/bin/pg_passwd/pg_passwd.c | 2\n~ src/bin/psql/command.c | 2\n~ src/bin/psql/describe.c | 3\n~ src/interfaces/ecpg/preproc/pgc.l | 8\n~ src/interfaces/ecpg/preproc/preproc.y | 24 -\n~ src/interfaces/ecpg/preproc/type.c | 16 -\n~ src/interfaces/ecpg/preproc/variable.c | 12\n~ src/interfaces/libpgeasy/examples/pgwordcount.c | 6\n~ src/interfaces/libpgtcl/pgtclCmds.c | 4\n~ src/interfaces/libpq/fe-auth.c | 2\n~ src/interfaces/odbc/connection.c | 2\n~ src/interfaces/odbc/dlg_specific.c | 5\n~ src/interfaces/odbc/info.c | 38 +-\n~ src/interfaces/odbc/qresult.c | 4\n~ src/interfaces/odbc/results.c | 8\n~ src/interfaces/odbc/statement.c | 6\n~ 35 files changed, 365 insertions, 320 deletions\n\nPatch is about 70k and downloadable from\nhttp://suihkari.baana.suomi.net/postgresql/patches/postgresql-7.2.2-sprintf.patch\n\nAt least I didn't just bitch and moan about the bugs. 
;)\n\n- - Jukka\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.7 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQE9bRZSYYWM2XTSwX0RApcSAJ40pTB0DEiucS/4m2aNFHSn5XVXlwCfeyYT\nEL5AF82ZlcqT/dGgd6BRJWM=\n=qojm\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 28 Aug 2002 21:28:35 +0300", "msg_from": "Jukka Holappa <jukkaho@mail.student.oulu.fi>", "msg_from_op": true, "msg_subject": "Sprintf() auditing and a patch" } ]
[ { "msg_contents": "... is now implemented.\n\nBut alterations done to the bootstrap user are not dumped. This is\nconsistent with the fact that other attributes such as the password are\nnot dumped either. It would be rather involved to implement this, since\nat the time of the dump you don't know the name of the future user. So be\nprepared to sell this as a feature somehow ...\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Wed, 28 Aug 2002 20:30:58 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Dumping user-specific configuration" } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nThis is a resend of my previous email which was stucked at moderation\napproval.. and as I don't know if anyone actually does that in your\nlist, I'm resending this now.\n\nHi,\n\nI'm very new to this project and inspired by recent security release, I\nstarted to audit postgresql source against common mistakes with sprintf().\n\nI mostly found problems with sprintf() used on statically allocated\nbuffers or dynamically allocated buffers with random constant size.\n\nI used lib/stringinfo.h functions when I was sure palloc()-memory\nallocation was the right thing to do and I felt like code needed to\nconstruct a complete string no matter how complex.\n\nThere were places where I just changes sprintf() to snprintf(). Like in\nsome *BSD dl loading functions etc.\n\nThere were also places where I could identify the possible bug but\ndidn't know 'the' right way to fix it. As I say, I don't know the\ncodebase very well so I really didn't know what auxiliarity functions\nthere are to use. These parts are marked as FIXME and should be easily\nidentified by looking at the patch (link below - it is a big one).\n\nThere were also simple mistakes like in src/backend/tioga/tgRecipe.c\n- -\tsprintf(qbuf, Q_LOOKUP_EDGES_IN_RECIPE, name);\n- -\tpqres = PQexec(qbuf);\n+\tsnprintf(qbuf, MAX_QBUF_LENGTH, Q_LOOKUP_EDGES_IN_RECIPE, name);\n~ \tpqres = PQexec(qbuf);\n~ \tif (*pqres == 'R' || *pqres == 'E')\n\nNotice how previous PQexec() is removed. There were two of them.\n\nSome of my fixes cause code to be a bit slower because of dynamically\nallocated mem, but it also fixes a lot of ptr+strlen(ptr) -style\nperformance problems. I didn't particularly try to fix these but some of\nthem are corrected by simply using lib/stringinfo.h\n\nPlease take look at this patch but since I have worked three long nights\nwith this one, there probably are bugs. 
I tried compiling it with\n\"configure --with-tcl --with-perl --with-python\" and at least it\ncompiled for me :) But that's about all I can promise.\n\ndiffstat postgresql-7.2.2-sprintf.patch\n~ contrib/cube/cube.c | 26 --\n~ contrib/cube/cubeparse.y | 11\n~ contrib/intarray/_int.c | 29 +-\n~ contrib/rserv/rserv.c | 30 +-\n~ contrib/seg/segparse.y | 18 -\n~ contrib/spi/refint.c | 39 +--\n~ contrib/spi/timetravel.c | 12\n~ doc/src/sgml/spi.sgml | 2\n~ src/backend/parser/analyze.c | 2\n~ src/backend/port/dynloader/freebsd.c | 10\n~ src/backend/port/dynloader/netbsd.c | 11\n~ src/backend/port/dynloader/nextstep.c | 2\n~ src/backend/port/dynloader/openbsd.c | 10\n~ src/backend/postmaster/postmaster.c | 2\n~ src/backend/storage/file/fd.c | 1\n~ src/backend/storage/ipc/shmqueue.c | 1\n~ src/backend/tioga/tgRecipe.c | 11\n~ src/backend/utils/adt/ri_triggers.c | 312\n++++++++++++------------\n~ src/bin/pg_dump/pg_dump.c | 14 -\n~ src/bin/pg_passwd/pg_passwd.c | 2\n~ src/bin/psql/command.c | 2\n~ src/bin/psql/describe.c | 3\n~ src/interfaces/ecpg/preproc/pgc.l | 8\n~ src/interfaces/ecpg/preproc/preproc.y | 24 -\n~ src/interfaces/ecpg/preproc/type.c | 16 -\n~ src/interfaces/ecpg/preproc/variable.c | 12\n~ src/interfaces/libpgeasy/examples/pgwordcount.c | 6\n~ src/interfaces/libpgtcl/pgtclCmds.c | 4\n~ src/interfaces/libpq/fe-auth.c | 2\n~ src/interfaces/odbc/connection.c | 2\n~ src/interfaces/odbc/dlg_specific.c | 5\n~ src/interfaces/odbc/info.c | 38 +-\n~ src/interfaces/odbc/qresult.c | 4\n~ src/interfaces/odbc/results.c | 8\n~ src/interfaces/odbc/statement.c | 6\n~ 35 files changed, 365 insertions, 320 deletions\n\nPatch is about 70k and downloadable from\nhttp://suihkari.baana.suomi.net/postgresql/patches/postgresql-7.2.2-sprintf.patch\n\nAt least I didn't just bitch and moan about the bugs. 
;)\n\n- - Jukka\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.7 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQE9bRlYYYWM2XTSwX0RAndJAJ9C8KDGjteQ2Edngwifb6C876KDsgCfUon6\nPObTTeQfDLmgxkKN7bPnyk4=\n=nFa0\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Wed, 28 Aug 2002 21:41:28 +0300", "msg_from": "Jukka Holappa <jukkaho@mail.student.oulu.fi>", "msg_from_op": true, "msg_subject": "[Resend] Sprintf() auditing and a patch" }, { "msg_contents": "\nI have reviewed your patch, and it is a thorough job. Unfortunately,\nour code has drifted dramatically since 7.2 in the areas you patched. \nWould you be able to download our CVS or current snapshot and submit a\npatch based on that code?\n\nIn fact, we have applied a batch of snprintf fixes already so some of\nthem may already be fixed. You found quite a few so you probably have\nsome fixes we don't have.\n\n---------------------------------------------------------------------------\n\nJukka Holappa wrote:\n[ PGP not available, raw data follows ]\n> -----BEGIN PGP SIGNED MESSAGE-----\n> Hash: SHA1\n> \n> This is a resend of my previous email which was stucked at moderation\n> approval.. and as I don't know if anyone actually does that in your\n> list, I'm resending this now.\n> \n> Hi,\n> \n> I'm very new to this project and inspired by recent security release, I\n> started to audit postgresql source against common mistakes with sprintf().\n> \n> I mostly found problems with sprintf() used on statically allocated\n> buffers or dynamically allocated buffers with random constant size.\n> \n> I used lib/stringinfo.h functions when I was sure palloc()-memory\n> allocation was the right thing to do and I felt like code needed to\n> construct a complete string no matter how complex.\n> \n> There were places where I just changes sprintf() to snprintf(). 
Like in\n> some *BSD dl loading functions etc.\n> \n> There were also places where I could identify the possible bug but\n> didn't know 'the' right way to fix it. As I say, I don't know the\n> codebase very well so I really didn't know what auxiliarity functions\n> there are to use. These parts are marked as FIXME and should be easily\n> identified by looking at the patch (link below - it is a big one).\n> \n> There were also simple mistakes like in src/backend/tioga/tgRecipe.c\n> - -\tsprintf(qbuf, Q_LOOKUP_EDGES_IN_RECIPE, name);\n> - -\tpqres = PQexec(qbuf);\n> +\tsnprintf(qbuf, MAX_QBUF_LENGTH, Q_LOOKUP_EDGES_IN_RECIPE, name);\n> ~ \tpqres = PQexec(qbuf);\n> ~ \tif (*pqres == 'R' || *pqres == 'E')\n> \n> Notice how previous PQexec() is removed. There were two of them.\n> \n> Some of my fixes cause code to be a bit slower because of dynamically\n> allocated mem, but it also fixes a lot of ptr+strlen(ptr) -style\n> performance problems. I didn't particularly try to fix these but some of\n> them are corrected by simply using lib/stringinfo.h\n> \n> Please take look at this patch but since I have worked three long nights\n> with this one, there probably are bugs. 
I tried compiling it with\n> \"configure --with-tcl --with-perl --with-python\" and at least it\n> compiled for me :) But that's about all I can promise.\n> \n> diffstat postgresql-7.2.2-sprintf.patch\n> ~ contrib/cube/cube.c | 26 --\n> ~ contrib/cube/cubeparse.y | 11\n> ~ contrib/intarray/_int.c | 29 +-\n> ~ contrib/rserv/rserv.c | 30 +-\n> ~ contrib/seg/segparse.y | 18 -\n> ~ contrib/spi/refint.c | 39 +--\n> ~ contrib/spi/timetravel.c | 12\n> ~ doc/src/sgml/spi.sgml | 2\n> ~ src/backend/parser/analyze.c | 2\n> ~ src/backend/port/dynloader/freebsd.c | 10\n> ~ src/backend/port/dynloader/netbsd.c | 11\n> ~ src/backend/port/dynloader/nextstep.c | 2\n> ~ src/backend/port/dynloader/openbsd.c | 10\n> ~ src/backend/postmaster/postmaster.c | 2\n> ~ src/backend/storage/file/fd.c | 1\n> ~ src/backend/storage/ipc/shmqueue.c | 1\n> ~ src/backend/tioga/tgRecipe.c | 11\n> ~ src/backend/utils/adt/ri_triggers.c | 312\n> ++++++++++++------------\n> ~ src/bin/pg_dump/pg_dump.c | 14 -\n> ~ src/bin/pg_passwd/pg_passwd.c | 2\n> ~ src/bin/psql/command.c | 2\n> ~ src/bin/psql/describe.c | 3\n> ~ src/interfaces/ecpg/preproc/pgc.l | 8\n> ~ src/interfaces/ecpg/preproc/preproc.y | 24 -\n> ~ src/interfaces/ecpg/preproc/type.c | 16 -\n> ~ src/interfaces/ecpg/preproc/variable.c | 12\n> ~ src/interfaces/libpgeasy/examples/pgwordcount.c | 6\n> ~ src/interfaces/libpgtcl/pgtclCmds.c | 4\n> ~ src/interfaces/libpq/fe-auth.c | 2\n> ~ src/interfaces/odbc/connection.c | 2\n> ~ src/interfaces/odbc/dlg_specific.c | 5\n> ~ src/interfaces/odbc/info.c | 38 +-\n> ~ src/interfaces/odbc/qresult.c | 4\n> ~ src/interfaces/odbc/results.c | 8\n> ~ src/interfaces/odbc/statement.c | 6\n> ~ 35 files changed, 365 insertions, 320 deletions\n> \n> Patch is about 70k and downloadable from\n> http://suihkari.baana.suomi.net/postgresql/patches/postgresql-7.2.2-sprintf.patch\n> \n> At least I didn't just bitch and moan about the bugs. 
;)\n> \n> - - Jukka\n> -----BEGIN PGP SIGNATURE-----\n> Version: GnuPG v1.0.7 (GNU/Linux)\n> Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n> \n> iD8DBQE9bRlYYYWM2XTSwX0RAndJAJ9C8KDGjteQ2Edngwifb6C876KDsgCfUon6\n> PObTTeQfDLmgxkKN7bPnyk4=\n> =nFa0\n> -----END PGP SIGNATURE-----\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n[ End of raw data]\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 28 Aug 2002 17:32:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Resend] Sprintf() auditing and a patch" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nBruce Momjian wrote:\n| I have reviewed your patch, and it is a thorough job. Unfortunately,\n| our code has drifted dramatically since 7.2 in the areas you patched.\n| Would you be able to download our CVS or current snapshot and submit a\n| patch based on that code?\n|\n| In fact, we have applied a batch of snprintf fixes already so some of\n| them may already be fixed. 
You found quite a few so you probably have\n| some fixes we don't have.\n\nSure, I take a look at CVS.\n\n- - Jukka\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.7 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQE9bUWuYYWM2XTSwX0RAlQqAJwMNlwOJLYFnOb3xqUHE/BRZYRVPwCdGu4o\nXJl98uXJlg5ZLhNlfX04pow=\n=cdmM\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Thu, 29 Aug 2002 00:50:38 +0300", "msg_from": "Jukka Holappa <jukkaho@mail.student.oulu.fi>", "msg_from_op": true, "msg_subject": "Re: [Resend] Sprintf() auditing and a patch" }, { "msg_contents": "[ Sorry, never saw the original email ]\n\nBruce Momjian <pgman@candle.pha.pa.us> writes:\n> Jukka Holappa wrote:\n> > I'm very new to this project and inspired by recent security\n> > release, I started to audit postgresql source against common\n> > mistakes with sprintf().\n\nIf you're interested, another common source of problems is integer\noverflow when dealing with numeric input from the user. In fact, far\nmore security problems have been caused by insufficient integer\noverflow checking than by string handling bugs.\n\nFYI, we prefer patches in context diff format (diff -c). Also, there\nare some code style rules that most of the backend code follows. For\nexample,\n\nfor (i = 0; i < x; i++) { ....\n\nrather than:\n\nfor(i=0;i<x;++i) {\n\nAnd indented using tabs. In any case, these should be automatically\ncorrected by Bruce before a release is made, but it would be nice if\npatches followed this style.\n\n> > There were also simple mistakes like in\n> > src/backend/tioga/tgRecipe.c\n\nThat code is long dead, BTW.\n\n> > Some of my fixes cause code to be a bit slower because of\n> > dynamically allocated mem\n\nGiven that you're not using StringInfo in any performance-critical\nareas AFAICT (mostly in contrib/, for example), I would suspect the\nperformance difference wouldn't be too steep (although it's worth\nverifying that before the patch is applied). 
I briefly benchmarked\nsnprintf() versus sprintf() a couple days ago and found no performance\ndifference, but using StringInfo may impose a higher penalty.\n\nI'd agree that StringInfo is appropriate when the string is frequently\nbeing appended to (and the code using the strlen() pointer arithmetic\ntechnique you mentioned); however, you've converted the code to use\nStringInfo on situations in which it is clearly not warranted. To pick\none example at random, seg_atof(char *) in contrib/seg/segparse.y\ndoesn't require anything more than a statically sized buffer and\nsnprintf().\n\nAlso, that routine happens to leak memory, since you forgot to call\npfree(buf.data) -- I believe you made the same mistake in several\nother places, such as seg_yyerror(char *) in the same file.\n\nPersonally, I prefer this:\n\n char *buf[1024];\n\n snprintf(buf, sizeof(buf), \"...\");\n\nrather than this:\n\n char *buf[1024];\n\n snprintf(buf, 1024, \"...\");\n\n(even if the size of the char array is a preprocessor constant).\n\nThe reason being that\n\n (a) it is more clear: the code plainly states \"write to this\n string, up to the declared size of the string but no\n more\".\n\n (b) it is more maintainable: if someone were to change the\n size of the char array to, say, 512 bytes but didn't\n change the snprintf(), you'd have a potential bug.\n\nYou used sizeof(...) in some places but not in others.\n\nThat's all I noticed briefly eye-balling the patch; please re-diff\nagainst CVS HEAD and submit a context diff and I'll take another look.\n\n> > Please take look at this patch but since I have worked three long\n> > nights with this one, there probably are bugs. I tried compiling\n> > it with \"configure --with-tcl --with-perl --with-python\" and at\n> > least it compiled for me :) But that's about all I can promise.\n\nFYI, running the regression tests is an easy way to do some basic\ntesting. 
Since code that causes regression tests to fail won't be\naccepted (period), you may as well run them now, rather than later.\n\n> > At least I didn't just bitch and moan about the bugs. ;)\n\nThank you; frankly, I wish your attitude was more common.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "28 Aug 2002 23:49:19 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: [Resend] Sprintf() auditing and a patch" }, { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nNeil Conway wrote:\n| [ Sorry, never saw the original email ]\n\nBecause it is still hanging in moderation queue ;)\n\n| FYI, we prefer patches in context diff format (diff -c). Also, there\n| are some code style rules that most of the backend code follows. For\n| example,\n\nI tried to use the same style that was used in the code previously.\nApparently I forgot it in some places.\n\n|\n|>>There were also simple mistakes like in\n|>>src/backend/tioga/tgRecipe.c\n|>\n|\n| That code is long dead, BTW.\n\nWell, we'll see what I can dig out of the CVS version :) I think string\nhandling can be very nasty in some places but the problems are so much\neasier to find than with integer overflows.\n\n| I'd agree that StringInfo is appropriate when the string is frequently\n| being appended to (and the code using the strlen() pointer arithmetic\n| technique you mentioned); however, you've converted the code to use\n| StringInfo on situations in which it is clearly not warranted. To pick\n| one example at random, seg_atof(char *) in contrib/seg/segparse.y\n| doesn't require anything more than a statically sized buffer and\n| snprintf().\n\nI'm sure I did that, because I really didn't know in all places, what\nwould be the right thing to do.\n\nUsing snprintf() there would cause a log message of \"using numeric value\nxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx...\" when trying to overflow this. 
I\nagree, being told the number is unpresentable (coming after an erroneous\nstring) is not actually necessary when seeing this :)\n\n|\n| Also, that routine happens to leak memory, since you forgot to call\n| pfree(buf.data) -- I believe you made the same mistake in several\n| other places, such as seg_yyerror(char *) in the same file.\n\nI checked and this is true. However, the code already leaks in the same\nplace (although fewer bytes).\n\n|\n| Personally, I prefer this:\n|\n| char *buf[1024];\n\nYou don't prefer an array of pointers, but I got the point.\n\n|\n| snprintf(buf, sizeof(buf), \"...\");\n|\n| rather than this:\n|\n| char *buf[1024];\n|\n| snprintf(buf, 1024, \"...\");\n[snip]\n| You used sizeof(...) in some places but not in others.\n\nVery true. I did all my checking and fixing in three nights and didn't\nthink about the maintainability at first, but started using\nsizeof() later. I just wanted to get them fixed at first. These should\nall be using sizeof(buf) when the target is an array.\n\nThere were also places where a simple pointer to a buffer was passed to\nanother function which then appended some string to it. I think this was\ndate/time handling somewhere. That kind of thing is impossible to\nfix (without changing the function definition) if the appended string\ndoesn't have a certain maximum size. Dates/times sure have that limit,\nbut I hope no one copies that code to handle arbitrary variable-length\nstrings.\n\n| FYI, running the regression tests is an easy way to do some basic\n| testing. 
Since code that causes regression tests to fail won't be\n| accepted (period), you may as well run them now, rather now later.\n\nAll true.:)\n\n- - Jukka\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.7 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQE9ba3nYYWM2XTSwX0RAoBJAJwK4eA5iPDNaQFF3TCL09MD/dkBwgCdHGmi\nb4RCkBnOPBfQMQAX7wJk4U4=\n=7hvG\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Thu, 29 Aug 2002 08:15:20 +0300", "msg_from": "Jukka Holappa <jukkaho@mail.student.oulu.fi>", "msg_from_op": true, "msg_subject": "Re: [Resend] Sprintf() auditing and a patch" }, { "msg_contents": "Neil Conway wrote:\n> If you're interested, another common source of problems is integer\n> overflow when dealing with numeric input from the user. In fact, far\n> more security problems have been caused by insufficient integer\n> overflow checking than by string handling bugs.\n\nOne other things that bothers me are cases where we allocate memory to\nhold the ASCII representation of an integer, but instead of using a\nmacro that documents this fact, we use a constant, and different\nconstants in different places. That should be cleaned up.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 29 Aug 2002 14:43:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [Resend] Sprintf() auditing and a patch" } ]
[ { "msg_contents": "With beta starting September 1, what time table do we want to have? Is\nfeature freeze September 1? When do we want to package up the beta1\ntarball?\n\nI will try to have the HISTORY/release.sgml file ready for September 1.\n\n\nI am attaching the open items list.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\n P O S T G R E S Q L\n\n 7 . 3 O P E N I T E M S\n\n\nCurrent at ftp://candle.pha.pa.us/pub/postgresql/open_items.\n\nSource Code Changes\n-------------------\nSchema handling - ready? interfaces? client apps?\nDrop column handling - ready for all clients, apps?\nFix implicit type coercions that are worse\nImprove macros in new tuple header code (Tom)\nAllow PL/PgSQL functions to return sets (Neil)\nAllow easy display of usernames in a group (pg_hba.conf uses groups now)\nFix BeOS and QNX4 ports\nGet bison upgrade on postgresql.org\n\nOn Hold\n-------\nPoint-in-time recovery\nWin32 port\nSecurity audit\n\nDocumentation Changes\n---------------------\nDocument need to add permissions to loaded functions and languages", "msg_date": "Wed, 28 Aug 2002 17:40:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Timetable for 7.3 beta" }, { "msg_contents": "Bruce Momjian wrote:\n> With beta starting September 1, what time table do we want to have? Is\n> feature freeze September 1? When do we want to package up the beta1\n> tarball?\n> \n> I will try to have the HISTORY/release.sgml file ready for September 1.\n> \n\nSince September 1 is a Sunday, how about giving us until the end of the \nday to get stuff in? 
:-) That gives everyone most of the weekend to \nfinish up whatever they need to.\n\nJoe\n\n\n\n", "msg_date": "Wed, 28 Aug 2002 15:20:22 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Timetable for 7.3 beta" } ]
[ { "msg_contents": "I've been looking at the sample SRFs (show_all_settings etc) and am not\nhappy about the way memory management is done. As the code is currently\nset up, the functions essentially assume that they are executed in a\ncontext that will never be reset until they're done returning tuples.\n(This is true because tupledescs and so on are blithely constructed in\nCurrentMemoryContext during the first call.)\n\nThis approach means that SRFs cannot afford to leak any memory per-call.\nIf they do, and the result set is large, they will run the backend out\nof memory. I don't think that's acceptable.\n\nThe reason that the code fails to crash is that nodeFunctionscan.c\ndoesn't do a ResetExprContext(econtext) in the loop that collects rows\nfrom the function and stashes them in the tuplestore. But I think it\nmust do so in the long run, and so it would be better to get this right\nthe first time.\n\nI think we should document that any memory that is allocated in the\nfirst call for use in subsequent calls must come from the memory context\nsaved in FuncCallContext (and let's choose a more meaningful name than\nfmctx, please). This would mean adding code like\n\n\toldcontext = MemoryContextSwitchTo(funcctx->fmctx);\n\n\t...\n\n\tMemoryContextSwitchTo(oldcontext);\n\naround the setup code that follows SRF_FIRSTCALL_INIT. Then it would be\nsafe for nodeFunctionscan.c to do a reset before each function call.\n\nComments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Aug 2002 19:08:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Concern about memory management with SRFs" }, { "msg_contents": "Tom Lane wrote:\n> I think we should document that any memory that is allocated in the\n> first call for use in subsequent calls must come from the memory context\n> saved in FuncCallContext (and let's choose a more meaningful name than\n> fmctx, please). 
This would mean adding code like\n> \n> \toldcontext = MemoryContextSwitchTo(funcctx->fmctx);\n> \n> \t...\n> \n> \tMemoryContextSwitchTo(oldcontext);\n> \n> around the setup code that follows SRF_FIRSTCALL_INIT. Then it would be\n> safe for nodeFunctionscan.c to do a reset before each function call.\n\nThat sounds like a good plan.\n\nBut can/should we wrap those calls in either existing or new macros? Or \nis it better to have the function author keenly aware of the memory \nmanagement details? I tend to think the former is better.\n\nMaybe SRF_FIRSTCALL_INIT()(init_MultiFuncCall()) should:\n- save CurrentMemoryContext to funcctx->per_call_memory_ctx\n (new member of the struct)\n- save fcinfo->flinfo->fn_mcxt to funcctx->multi_call_memory_ctx\n (nee funcctx->fmctx)\n- leave fcinfo->flinfo->fn_mcxt as the current memory context when it\n exits\n\nThen SRF_PERCALL_SETUP() (per_MultiFuncCall()) can change back to \nfuncctx->per_call_memory_ctx.\n\nWould this work?\n\nJoe\n\n", "msg_date": "Wed, 28 Aug 2002 18:19:54 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Concern about memory management with SRFs" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Maybe SRF_FIRSTCALL_INIT()(init_MultiFuncCall()) should:\n> - save CurrentMemoryContext to funcctx->per_call_memory_ctx\n> (new member of the struct)\n> - save fcinfo->flinfo->fn_mcxt to funcctx->multi_call_memory_ctx\n> (nee funcctx->fmctx)\n> - leave fcinfo->flinfo->fn_mcxt as the current memory context when it\n> exits\n\n> Then SRF_PERCALL_SETUP() (per_MultiFuncCall()) can change back to \n> funcctx->per_call_memory_ctx.\n\nI thought about that and didn't like it; it may simplify the simple case\nbut I think it actively gets in the way of less-simple cases. For\nexample, the FIRSTCALL code might generate some transient structures\nalong with ones that it wants to keep. 
Also, your recommended\npseudocode allows the author to write code between the end of the\nFIRSTCALL branch and the PERCALL_SETUP call; that code will not execute\nin a predictable context if we do it this way.\n\nI'm also not happy with the implied assumption that every call to the\nfunction executes in the same transient context. That is true at the\nmoment but I'd just as soon not see it as a wired-in assumption.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Aug 2002 21:32:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Concern about memory management with SRFs " }, { "msg_contents": "Tom Lane wrote:\n> I thought about that and didn't like it; it may simplify the simple case\n> but I think it actively gets in the way of less-simple cases. For\n> example, the FIRSTCALL code might generate some transient structures\n> along with ones that it wants to keep. Also, your recommended\n> pseudocode allows the author to write code between the end of the\n> FIRSTCALL branch and the PERCALL_SETUP call; that code will not execute\n> in a predictable context if we do it this way.\n> \n> I'm also not happy with the implied assumption that every call to the\n> function executes in the same transient context. That is true at the\n> moment but I'd just as soon not see it as a wired-in assumption.\n\nFair enough. I'll take a shot at the necessary changes (if you want me \nto). Is it OK to use fcinfo->flinfo->fn_mcxt as the long term memory \ncontext or is there a better choice? 
Is funcctx->multi_call_memory_ctx a \nsuitable name in place of funcctx->fmctx?\n\nJoe\n\n", "msg_date": "Wed, 28 Aug 2002 18:46:55 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Concern about memory management with SRFs" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Is it OK to use fcinfo->flinfo->fn_mcxt as the long term memory \n> context or is there a better choice?\n\nThat is the correct choice.\n\n> Is funcctx->multi_call_memory_ctx a \n> suitable name in place of funcctx->fmctx?\n\nNo objection here.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Aug 2002 21:50:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: Concern about memory management with SRFs " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>Is it OK to use fcinfo->flinfo->fn_mcxt as the long term memory \n>>context or is there a better choice?\n> \n> That is the correct choice.\n> \n>>Is funcctx->multi_call_memory_ctx a \n>>suitable name in place of funcctx->fmctx?\n> \n> No objection here.\n\nHere's a patch to address Tom's SRF API memory management concerns, as \ndiscussed earlier today on HACKERS.\n\nPlease note that, although this should apply cleanly on cvs tip, it will \nhave (two) failed hunks (nodeFunctionscan.c) *if* applied after Neil's \nplpgsql SRF patch. Or it will cause a failure in Neil's patch if it is \napplied first (I think). The fix in either case is to wrap the loop that \ncollects rows from the function and stashes them in the tuplestore as \nfollows:\n\ndo until no more tuples\n+ ExprContext *econtext = scanstate->csstate.cstate.cs_ExprContext;\n\n get one tuple\n put it in the tuplestore\n\n+ ResetExprContext(econtext);\nloop\n\nAlso note that contrib/dblink is intentionally missing, because I'm \nstill working on other aspects of it. 
I'll have an updated dblink in a \nday or so.\n\nIf there are no objections, please apply.\n\nThanks,\n\nJoe", "msg_date": "Wed, 28 Aug 2002 23:53:55 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "SRF memory mgmt patch (was [HACKERS] Concern about memory management\n\twith SRFs)" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Here's a patch to address Tom's SRF API memory management concerns, as \n> discussed earlier today on HACKERS.\n\nPatch committed.\n\nIt seemed to me that pgstattuple.c does not really want to be an SRF,\nbut only a function returning a single tuple. As such, it can provide\na fine example of using the funcapi.h tuple-building machinery *without*\nthe SRF machinery. I changed it accordingly, but am not able to update\nREADME.pgstattuple.euc_jp; Tatsuo-san, would you handle that?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Aug 2002 13:18:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about memory\n\tmanagement with SRFs)" }, { "msg_contents": "Tom Lane wrote:\n> Patch committed.\n\nGiven the change to SRF memory management, is the following still \nnecessary (or possibly even incorrect)?\n\nin funcapi.c:\nper_MultiFuncCall(PG_FUNCTION_ARGS)\n{\n FuncCallContext *retval = (FuncCallContext *)\n fcinfo->flinfo->fn_extra;\n\n /* make sure we start with a fresh slot */\n if(retval->slot != NULL)\n ExecClearTuple(retval->slot);\n\n return retval;\n}\n\nAll but one of the SRFs I've tried don't seem to care, but I have one \nthat is getting an assertion:\n\n0x42029331 in kill () from /lib/i686/libc.so.6\n(gdb) bt\n#0 0x42029331 in kill () from /lib/i686/libc.so.6\n#1 0x4202911a in raise () from /lib/i686/libc.so.6\n#2 0x4202a8c2 in abort () from /lib/i686/libc.so.6\n#3 0x08179ab9 in ExceptionalCondition () at assert.c:48\n#4 0x0818416f in pfree (pointer=0x7f7f7f7f) at mcxt.c:470\n#5 
0x0806bd32 in heap_freetuple (htup=0x832bb80) at heaptuple.c:736\n#6 0x080e47df in ExecClearTuple (slot=0x832b2cc) at execTuples.c:406\n#7 0x0817cf49 in per_MultiFuncCall (fcinfo=0xbfffe8e0) at funcapi.c:88\n#8 0x40017273 in dblink_get_pkey (fcinfo=0xbfffe8e0) at dblink.c:911\n\nNot quite sure why yet, but I'm thinking the ExecClearTuple() is no \nlonger needed/desired anyway.\n\nJoe\n\n", "msg_date": "Thu, 29 Aug 2002 16:32:55 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about memory\n\tmanagement" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Given the change to SRF memory management, is the following still \n> necessary (or possibly even incorrect)?\n\n> /* make sure we start with a fresh slot */\n> if(retval->slot != NULL)\n> ExecClearTuple(retval->slot);\n\nI don't think it was ever necessary... but there's nothing wrong with\nit either.\n\n> All but one of the SRFs I've tried don't seem to care, but I have one \n> that is getting an assertion:\n\n> 0x42029331 in kill () from /lib/i686/libc.so.6\n> (gdb) bt\n> #0 0x42029331 in kill () from /lib/i686/libc.so.6\n> #1 0x4202911a in raise () from /lib/i686/libc.so.6\n> #2 0x4202a8c2 in abort () from /lib/i686/libc.so.6\n> #3 0x08179ab9 in ExceptionalCondition () at assert.c:48\n> #4 0x0818416f in pfree (pointer=0x7f7f7f7f) at mcxt.c:470\n> #5 0x0806bd32 in heap_freetuple (htup=0x832bb80) at heaptuple.c:736\n> #6 0x080e47df in ExecClearTuple (slot=0x832b2cc) at execTuples.c:406\n> #7 0x0817cf49 in per_MultiFuncCall (fcinfo=0xbfffe8e0) at funcapi.c:88\n> #8 0x40017273 in dblink_get_pkey (fcinfo=0xbfffe8e0) at dblink.c:911\n\n> Not quite sure why yet, but I'm thinking the ExecClearTuple() is no \n> longer needed/desired anyway.\n\nYou'll need to fix that anyway because the next ExecStoreTuple will try\nto do an ExecClearTuple. Looks like the same tuple is being freed\ntwice. 
Once you've handed a tuple to ExecStoreTuple it's not yours to\nfree anymore; perhaps some bit of code in dblink has that wrong?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Aug 2002 19:40:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about memory\n\tmanagement with SRFs)" }, { "msg_contents": "Tom Lane wrote:\n> You'll need to fix that anyway because the next ExecStoreTuple will try\n> to do an ExecClearTuple. Looks like the same tuple is being freed\n> twice. Once you've handed a tuple to ExecStoreTuple it's not yours to\n> free anymore; perhaps some bit of code in dblink has that wrong?\n\nThat's just it:\n 0x40017273 in dblink_get_pkey (fcinfo=0xbfffe8e0) at dblink.c:911\n*is*\n funcctx = SRF_PERCALL_SETUP();\nwhich is is a macro\n #define SRF_PERCALL_SETUP() per_MultiFuncCall(fcinfo)\n\nWhen I remove the call to ExecClearTuple() from per_MultiFuncCall(), \neverything starts to work.\n\nAs you said, if the next ExecStoreTuple will try to do an \nExecClearTuple(), ISTM that it should be removed from \nper_MultiFuncCall()/SRF_PERCALL_SETUP(). Or am I crazy?\n\nJoe\n\n\n\n", "msg_date": "Thu, 29 Aug 2002 16:48:54 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about memory\n\tmanagement" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> As you said, if the next ExecStoreTuple will try to do an \n> ExecClearTuple(), ISTM that it should be removed from \n> per_MultiFuncCall()/SRF_PERCALL_SETUP().\n\nNo, it's not necessary: ExecClearTuple knows the difference between a\nfull and an empty TupleSlot.\n\nI'm not sure where the excess free is coming from, but it ain't\nExecClearTuple's fault. 
You might try setting a breakpoint at\nheap_freetuple to see if that helps.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Aug 2002 20:01:32 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about memory\n\tmanagement with SRFs)" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> As you said, if the next ExecStoreTuple will try to do an \n> ExecClearTuple(), ISTM that it should be removed from \n> per_MultiFuncCall()/SRF_PERCALL_SETUP(). Or am I crazy?\n\nActually ... on second thought ...\n\nI bet the real issue here is that we have a long-lived TupleTableSlot\npointing at a short-lived tuple. (I assume you're just forming the\ntuple in the function's working context, no?)\n\nWhen ExecClearTuple is called on the next time through, it tries to\npfree a tuple that has already been recycled along with the rest of\nthe short-term context. Result: coredump.\n\nHowever, if that were the story then *none* of the SRFs returning\ntuple should work, and they do. So I'm still confused.\n\nBut I suspect that what we want to do is take management of the tuples\naway from the Slot: pass should_free = FALSE to ExecStoreTuple in the\nTupleGetDatum macro. The ClearTuple call *is* appropriate when you do\nthat, because it will reset the Slot to empty rather than leaving it\ncontaining a dangling pointer to a long-since-freed tuple.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Aug 2002 20:07:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about memory\n\tmanagement with SRFs)" }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n> \n>>As you said, if the next ExecStoreTuple will try to do an \n>>ExecClearTuple(), ISTM that it should be removed from \n>>per_MultiFuncCall()/SRF_PERCALL_SETUP(). Or am I crazy?\n> \n> \n> Actually ... 
on second thought ...\n> \n> I bet the real issue here is that we have a long-lived TupleTableSlot\n> pointing at a short-lived tuple. (I assume you're just forming the\n> tuple in the function's working context, no?)\n\nyep\n\n> When ExecClearTuple is called on the next time through, it tries to\n> pfree a tuple that has already been recycled along with the rest of\n> the short-term context. Result: coredump.\n> \n> However, if that were the story then *none* of the SRFs returning\n> tuple should work, and they do. So I'm still confused.\n\nThat's what had me confused.\n\nI have found the smoking gun, however. I had changed this function from \nreturning setof text, to returning setof \ntwo_column_named_composite_type. *However*. I had not dropped and \nrecreated the function with the proper declaration. Once I redeclared \nthe function properly, the coredump went away, even with current \nper_MultiFuncCall() code.\n\nThe way I found this was by removing ExecClearTuple() from \nper_MultiFuncCall(). That allowed the function to return without core \ndumping, but it gave me one column of garbage -- that finally clued me in.\n\n> But I suspect that what we want to do is take management of the tuples\n> away from the Slot: pass should_free = FALSE to ExecStoreTuple in the\n> TupleGetDatum macro. The ClearTuple call *is* appropriate when you do\n> that, because it will reset the Slot to empty rather than leaving it\n> containing a dangling pointer to a long-since-freed tuple.\n\nOK. I'll make that change and leave ExecClearTuple() in \nper_MultiFuncCall(). Sound like a plan?\n\nJoe\n\n\n", "msg_date": "Thu, 29 Aug 2002 17:21:44 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about memory\n\tmanagement" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I have found the smoking gun, however. 
I had changed this function from \n> returning setof text, to returning setof \n> two_column_named_composite_type. *However*. I had not dropped and \n> recreated the function with the proper declaration. Once I redeclared \n> the function properly, the coredump went away, even with current \n> per_MultiFuncCall() code.\n\nAh. I think the changes I just committed would have helped:\nnodeFunctionscan.c now runs a tupledesc_mismatch() check regardless of\nwhether it thinks the function returns RECORD or not.\n\n>> But I suspect that what we want to do is take management of the tuples\n>> away from the Slot: pass should_free = FALSE to ExecStoreTuple in the\n>> TupleGetDatum macro. The ClearTuple call *is* appropriate when you do\n>> that, because it will reset the Slot to empty rather than leaving it\n>> containing a dangling pointer to a long-since-freed tuple.\n\n> OK. I'll make that change and leave ExecClearTuple() in \n> per_MultiFuncCall(). Sound like a plan?\n\nFirst let's see if we can figure out why the code is failing to fail\nas it stands. The fact that it's not dumping core says there's\nsomething we don't understand yet ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Aug 2002 20:46:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about memory\n\tmanagement with SRFs)" }, { "msg_contents": "> Joe Conway <mail@joeconway.com> writes:\n> > Here's a patch to address Tom's SRF API memory management concerns, as \n> > discussed earlier today on HACKERS.\n> \n> Patch committed.\n> \n> It seemed to me that pgstattuple.c does not really want to be an SRF,\n> but only a function returning a single tuple.\n\nThank you for modifying pgstattuple.c. You are right, it does not want\nto return more than 1 tuple.\n\n> As such, it can provide\n> a fine example of using the funcapi.h tuple-building machinery *without*\n> the SRF machinery. 
I changed it accordingly, but am not able to update\n> README.pgstattuple.euc_jp; Tatsuo-san, would you handle that?\n\nSure. I'll take care of that.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 30 Aug 2002 10:25:48 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: SRF memory mgmt patch" }, { "msg_contents": "Tatsuo Ishii wrote:\n>>It seemed to me that pgstattuple.c does not really want to be an SRF,\n>>but only a function returning a single tuple.\n> \n> Thank you for modifying pgstattuple.c. You are right, it does not want\n> to return more than 1 tuple.\n> \n\nI noticed that too, but it did occur to me that at some point you might \nwant to make the function return a row for every table in a database. \nPerhaps even another system view (like pg_locks or pg_settings)?\n\nJoe\n\n", "msg_date": "Thu, 29 Aug 2002 18:29:45 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SRF memory mgmt patch" }, { "msg_contents": "Tom Lane wrote:\n> First let's see if we can figure out why the code is failing to fail\n> as it stands. The fact that it's not dumping core says there's\n> something we don't understand yet ...\n\nI'm not sure if the attached will help figure it out, but at the very \nleast it was eye-opening for me. I ran a test on \ndblink_get_pkey('foobar') that returns 5 rows. I had a breakpoint set in \nExecClearTuple. I found that ExecClearTuple was called a total of 32 \ntimes for 5 returned rows!\n\nRelevant to this discussion was that ExecClearTuple was called three \ntimes, with the same slot pointer, for each function call to \ndblink_get_pkey. Once in SRF_PERCALL_SETUP (per_MultiFuncCall), once in \nTupleGetDatum (ExecStoreTuple), and once in FunctionNext in the loop \nthat builds the tuplestore.\n\nUnfortunately I have not been able to get back to a point where I see a \ncoredump :(. 
But, that did seem to be related to calling the function \nwith an inappropriate declaration (now it just gives me garbage instead \nof dumping core, even though I reverted the per_MultiFuncCall changes I \nmade earlier). I'll keep messing with this for a while, but I was hoping \nthe attached info would lead to some more suggestions of where to be \nlooking.\n\nThanks,\n\nJoe", "msg_date": "Thu, 29 Aug 2002 21:29:49 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about" }, { "msg_contents": "Joe Conway wrote:\n> Unfortunately I have not been able to get back to a point where I see a \n> coredump :(. But, that did seem to be related to calling the function \n> with an inappropriate declaration (now it just gives me garbage instead \n> of dumping core, even though I reverted the per_MultiFuncCall changes I \n> made earlier). I'll keep messing with this for a while, but I was hoping \n> the attached info would lead to some more suggestions of where to be \n> looking.\n\nIt's back as of cvs tip. 
This time, it looks like all table functions \nare failing in the same manner:\n\n#4 0x081845fb in pfree (pointer=0x7f7f7f7f) at mcxt.c:470\n#5 0x0806bdb2 in heap_freetuple (htup=0x82fc7b8) at heaptuple.c:736\n#6 0x080e4cbf in ExecClearTuple (slot=0x82f92f4) at execTuples.c:406\n#7 0x0817d3ad in per_MultiFuncCall (fcinfo=0xbfffe9e0) at funcapi.c:88\n#8 0x0814af25 in pg_lock_status (fcinfo=0xbfffe9e0) at lockfuncs.c:69\n#9 0x080e3990 in ExecMakeTableFunctionResult (funcexpr=0x82e9fa0,\n\n#4 0x081845fb in pfree (pointer=0x7f7f7f7f) at mcxt.c:470\n#5 0x0806bdb2 in heap_freetuple (htup=0x82f43a4) at heaptuple.c:736\n#6 0x080e4cbf in ExecClearTuple (slot=0x82e9f2c) at execTuples.c:406\n#7 0x0817d3ad in per_MultiFuncCall (fcinfo=0xbfffe9e0) at funcapi.c:88\n#8 0x40016a4b in dblink_record (fcinfo=0xbfffe9e0) at dblink.c:518\n#9 0x080e3990 in ExecMakeTableFunctionResult (funcexpr=0x82e8df8,\n\n#4 0x081845fb in pfree (pointer=0x7f7f7f7f) at mcxt.c:470\n#5 0x0806bdb2 in heap_freetuple (htup=0x83026f8) at heaptuple.c:736\n#6 0x080e4cbf in ExecClearTuple (slot=0x82f71cc) at execTuples.c:406\n#7 0x0817d3ad in per_MultiFuncCall (fcinfo=0xbfffe9e0) at funcapi.c:88\n#8 0x08181635 in show_all_settings (fcinfo=0xbfffe9e0) at guc.c:2469\n#9 0x080e3990 in ExecMakeTableFunctionResult (funcexpr=0x82f64a0,\n\nCurrently all C language table functions are broken :(, but all sql \nlanguage table functions seem to work -- which is why regression doesn't \nfail (pointing out the need to add a select * from pg_settings to a \nregression test somewhere).\n\nI'm looking at this now. I suspect the easy fix is to remove \nExecClearTuple from per_MultiFuncCall, but I'll try to understand what's \ngoing on first.\n\nJoe\n\n\n", "msg_date": "Fri, 30 Aug 2002 10:34:32 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about" }, { "msg_contents": "Joe Conway wrote:\n> I'm looking at this now. 
I suspect the easy fix is to remove \n> ExecClearTuple from per_MultiFuncCall, but I'll try to understand what's \n> going on first.\n> \n\nOn second thought, *all* functions failing is what you expected, right \nTom? I just changed TupleGetDatum() as we discussed:\n\n#define TupleGetDatum(_slot, _tuple) \\\n PointerGetDatum(ExecStoreTuple(_tuple, _slot, InvalidBuffer, false))\n\nand now everything works again. Is this the preferred fix and/or is it \nworth spending more time to dig into this?\n\nJoe\n\n\n", "msg_date": "Fri, 30 Aug 2002 10:51:35 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about" }, { "msg_contents": "Joe Conway wrote:\n> Joe Conway wrote:\n> \n>> I'm looking at this now. I suspect the easy fix is to remove \n>> ExecClearTuple from per_MultiFuncCall, but I'll try to understand \n>> what's going on first.\n>>\n> \n> On second thought, *all* functions failing is what you expected, right \n> Tom? I just changed TupleGetDatum() as we discussed:\n> \n> #define TupleGetDatum(_slot, _tuple) \\\n> PointerGetDatum(ExecStoreTuple(_tuple, _slot, InvalidBuffer, false))\n> \n> and now everything works again. Is this the preferred fix and/or is it \n> worth spending more time to dig into this?\n\nHere is a patch with the above mentioned fix. It also has an addition to \nrangefuncs.sql and rangefuncs.out to ensure a C language table function \ngets tested. I did this by adding\n SELECT * FROM pg_settings WHERE name LIKE 'enable%';\nto the test. I think this should produce reasonably stable results, but \nobviously will require some maintenance if we add/remove a GUC variable \nmatching this criteria. 
Alternative suggestions welcomed, but if there \nare no objections, please commit.\n\nThanks,\n\nJoe", "msg_date": "Fri, 30 Aug 2002 11:51:43 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n>> On second thought, *all* functions failing is what you expected, right \n>> Tom?\n\nYeah. I coulda sworn I tested pg_settings yesterday after making those\nother changes, but I must not have; it's sure failing for me today.\n\n> Here is a patch with the above mentioned fix. It also has an addition to \n> rangefuncs.sql and rangefuncs.out to ensure a C language table function \n> gets tested.\n\nGood idea. Will apply.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 15:38:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>Here is a patch with the above mentioned fix. It also has an addition to \n>>rangefuncs.sql and rangefuncs.out to ensure a C language table function \n>>gets tested.\n> \n> Good idea. 
Will apply.\n\nBTW, Neil, do you have a sample plpgsql table function that can be \nincluded in the rangefuncs regression test?\n\nThanks,\n\nJoe\n\n", "msg_date": "Fri, 30 Aug 2002 13:18:30 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> BTW, Neil, do you have a sample plpgsql table function that can be \n> included in the rangefuncs regression test?\n\nThe plpgsql regression test has 'em, down at the end.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 17:09:23 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: SRF memory mgmt patch (was [HACKERS] Concern about " } ]
[ { "msg_contents": "Sorry for the off-topic article, but I wonder anyone has been received\nthis kind of mail...\n\nI have been received following from postmaster@postgresql.org. Is this\nmail really posted by postmaster@postgresql.org? If so, could someone\non postgresql.org stop this kind of alert mail? I'm tired of these\nkind of mails. Apparently the mail postmaster@postgresql.org pointed\nout is not sent from me (maybe from 217.117.47.237?)\n\nNote that I do not subscribe pgsql-general list (I once subscried. But\nit seems I was kicked out from the list for unknow reason).\n\n--------------------------------------------------------------------\nFrom: postmaster@postgresql.org\nSubject: VIRUS IN YOUR MAIL (W32/Klez.h@MM)\nDate: Wed, 28 Aug 2002 10:56:02 -0400 (EDT)\nMessage-ID: <20020828145602.55F51476747@postgresql.org>\n\n> V I R U S A L E R T\n> \n> Our virus checker found the\n> \n> W32/Klez.h@MM\n> \n> virus in your email to the following recipient:\n> \n> -> pgsql-general@postgresql.org\n> \n> Delivery of the email was stopped!\n> \n> Please check your system for viruses,\n> or ask your system administrator to do so.\n> \n> For your reference, here are headers from your email:\n> ------------------------- BEGIN HEADERS -----------------------------\n> Return-Path: <ishii@postgresql.org>\n> Received: from server1.pgsql.org (www.postgresql.org [64.49.215.9])\n> \tby postgresql.org (Postfix) with SMTP id 9E8CD476743\n> \tfor <pgsql-general@postgresql.org>; Wed, 28 Aug 2002 10:56:01 -0400 (EDT)\n> Received: (qmail 38960 invoked by alias); 28 Aug 2002 14:55:58 -0000\n> Received: from unknown (HELO Fndbvdxpo) (217.117.47.237)\n> by www.postgresql.org with SMTP; 28 Aug 2002 14:55:58 -0000\n> From: ishii <ishii@postgresql.org>\n> To: pgsql-general@postgresql.org\n> Subject: Bede@profm.ro\n> MIME-Version: 1.0\n> Content-Type: multipart/alternative;\n> \tboundary=Tc04I0Pa2i8h90lG4ZHv\n> Message-Id: <20020828145601.9E8CD476743@postgresql.org>\n> Date: Wed, 28 Aug 
2002 10:56:01 -0400 (EDT)\n> -------------------------- END HEADERS ------------------------------\n> \n", "msg_date": "Thu, 29 Aug 2002 10:06:30 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: VIRUS IN YOUR MAIL (W32/Klez.h@MM)" }, { "msg_contents": "\nYes, I have received it. I assume it is because a virus is picking our\nemail address from a mailbox as the \"From:\" and sending the virus to\nothers.\n\n\n---------------------------------------------------------------------------\n\nTatsuo Ishii wrote:\n> Sorry for the off-topic article, but I wonder anyone has been received\n> this kind of mail...\n> \n> I have been received following from postmaster@postgresql.org. Is this\n> mail really posted by postmaster@postgresql.org? If so, could someone\n> on postgresql.org stop this kind of alert mail? I'm tired of these\n> kind of mails. Apparently the mail postmaster@postgresql.org pointed\n> out is not sent from me (maybe from 217.117.47.237?)\n> \n> Note that I do not subscribe pgsql-general list (I once subscried. 
But\n> it seems I was kicked out from the list for unknow reason).\n> \n> --------------------------------------------------------------------\n> From: postmaster@postgresql.org\n> Subject: VIRUS IN YOUR MAIL (W32/Klez.h@MM)\n> Date: Wed, 28 Aug 2002 10:56:02 -0400 (EDT)\n> Message-ID: <20020828145602.55F51476747@postgresql.org>\n> \n> > V I R U S A L E R T\n> > \n> > Our virus checker found the\n> > \n> > W32/Klez.h@MM\n> > \n> > virus in your email to the following recipient:\n> > \n> > -> pgsql-general@postgresql.org\n> > \n> > Delivery of the email was stopped!\n> > \n> > Please check your system for viruses,\n> > or ask your system administrator to do so.\n> > \n> > For your reference, here are headers from your email:\n> > ------------------------- BEGIN HEADERS -----------------------------\n> > Return-Path: <ishii@postgresql.org>\n> > Received: from server1.pgsql.org (www.postgresql.org [64.49.215.9])\n> > \tby postgresql.org (Postfix) with SMTP id 9E8CD476743\n> > \tfor <pgsql-general@postgresql.org>; Wed, 28 Aug 2002 10:56:01 -0400 (EDT)\n> > Received: (qmail 38960 invoked by alias); 28 Aug 2002 14:55:58 -0000\n> > Received: from unknown (HELO Fndbvdxpo) (217.117.47.237)\n> > by www.postgresql.org with SMTP; 28 Aug 2002 14:55:58 -0000\n> > From: ishii <ishii@postgresql.org>\n> > To: pgsql-general@postgresql.org\n> > Subject: Bede@profm.ro\n> > MIME-Version: 1.0\n> > Content-Type: multipart/alternative;\n> > \tboundary=Tc04I0Pa2i8h90lG4ZHv\n> > Message-Id: <20020828145601.9E8CD476743@postgresql.org>\n> > Date: Wed, 28 Aug 2002 10:56:01 -0400 (EDT)\n> > -------------------------- END HEADERS ------------------------------\n> > \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 28 Aug 2002 21:16:06 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: VIRUS IN YOUR MAIL (W32/Klez.h@MM)" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> Sorry for the off-topic article, but I wonder anyone has been received\n> this kind of mail...\n\nI have.\n\n> Note that I do not subscribe pgsql-general list (I once subscried. But\n> it seems I was kicked out from the list for unknow reason).\n\nI'm subscribed to -general but I haven't received mail from there in ages.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Fri, 30 Aug 2002 00:15:11 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: VIRUS IN YOUR MAIL (W32/Klez.h@MM)" } ]
[ { "msg_contents": "Tom Lane: \"And with the availability of schemas in 7.3, I think that\nmultiple databases per installation is going to become less common to\nbegin with --- people will more often use multiple schemas in one big\ndatabase if they want the option of data sharing, or completely\nseparate installations if they want airtight separation.\"\n\nThis is not a good assumption, in my opinion. Normally, one app is\nassociated with one database. This way, if something happens to the db,\nonly one application is unavailable, others will not be affected, more\nor less. Besides, some databases are huge, so recovery time may take a\nlong time. If everything is in one db, the whole organization will be\nbrought to a halt, all apps will be down for a while. From my\nexperience, this will not be considered acceptable in any reasonable\norganization.\n \nThis is the place where cross-db queries become critically important.\nYou do not want to duplicate data in several databases and, at the same\ntime, you do not want to have one huge unmanageable db that can\npotentially bring down all your apps.\n\nmy $0.02\n\n\n__________________________________________________\nDo You Yahoo!?\nYahoo! Finance - Get real-time stock quotes\nhttp://finance.yahoo.com\n", "msg_date": "Wed, 28 Aug 2002 19:00:51 -0700 (PDT)", "msg_from": "One Way <oneway_111@yahoo.com>", "msg_from_op": true, "msg_subject": "Re: Open 7.3 items " } ]
[ { "msg_contents": "Hi guys,\n\nAfter a crash and several restarts this morning, my postgres 7.2.1 is\nrunning very oddly.\n\nEvery time you execute a query it basically takes forever. Some of them\nactually do take forever. eg: \"select * from users\" just hangs forever. As\npeople use the site all available postgres slots are taken up by various\nrandom queries until the max backends is reached. The requests cannot be\ncancelled within psql, etc.\n\nThe processes are hanging in semwait or sbwait or biord states, but mostly\nsbwait. Restarts don't help. Reboots don't help. There's nothing in the\nlogs. What the heck is going on?\n\nChris\n\n", "msg_date": "Thu, 29 Aug 2002 10:12:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Serious problem with my postgres" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> After a crash and several restarts this morning, my postgres 7.2.1 is\n> running very oddly.\n> The processes are hanging in semwait or sbwait or biord states, but mostly\n> sbwait.\n\nSome random poking around in google says that sbwait state is \"waiting\nfor data to arrive at/drain from a socket buffer\". Network problems,\nperhaps?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Aug 2002 22:36:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Serious problem with my postgres " } ]
[ { "msg_contents": "OK,\n\nI seem to have recovered from my Postgres probs. I decided to do a vacuum\nfull to clean things up. I got this about halfway through:\n\nusa=# vacuum full analyze;\nERROR: No one parent tuple was found\n\nLog:\n\n2002-08-28 19:38:47 DEBUG: Index expiry_users_users_key: Pages 110; Tuples\n41310: Deleted 866.\n CPU 0.04s/0.03u sec elapsed 1.99 sec.\n2002-08-28 19:38:54 DEBUG: Index users_users_email_lower_idx: Pages 269;\nTuples 41310: Deleted 866.\n CPU 0.09s/0.08u sec elapsed 6.75 sec.\n2002-08-28 19:39:25 ERROR: No one parent tuple was found\n\nAny fix for that?\n\nChris\n\n", "msg_date": "Thu, 29 Aug 2002 10:42:07 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Postgres problems" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> I seem to have recovered from my Postgres probs. I decided to do a vacuum\n> full to clean things up. I got this about halfway through:\n\n> usa=# vacuum full analyze;\n> ERROR: No one parent tuple was found\n\n> Any fix for that?\n\nTry a \"SELECT * FROM <table> FOR UPDATE\" outside any transaction block.\nThat should clear the problem and let you vacuum. This is fixed in CVS\ntip but there's no fix in 7.2.2.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 28 Aug 2002 23:11:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Postgres problems " } ]
[ { "msg_contents": "does anybody know who i can talk to about a virus/bug scanning engine for postgresql tables/webportal stuff? i would like to start a proof-of-concept thread for this. sounds strange, \ni know, but chill out, details to come.\n\n\n", "msg_date": "Wed, 28 Aug 2002 22:44:10 -0400 (EDT)", "msg_from": "control <control@phantombrain.org>", "msg_from_op": true, "msg_subject": "Inquiry From Form [pgsql]" } ]
[ { "msg_contents": "In include/c.h, MemSet() is defined to be different than the stock\nfunction memset() only when copying less than or equal to\nMEMSET_LOOP_LIMIT bytes (currently 64). The comments above the macro\ndefinition note:\n\n *\tWe got the 64 number by testing this against the stock memset() on\n *\tBSD/OS 3.0. Larger values were slower.\tbjm 1997/09/11\n *\n *\tI think the crossover point could be a good deal higher for\n *\tmost platforms, actually. tgl 2000-03-19\n\nI decided to investigate Tom's suggestion and determine the\nperformance of MemSet() versus memset() on my machine, for various\nvalues of MEMSET_LOOP_LIMIT. The machine this is being tested on is a\nPentium 4 1.8 Ghz with RDRAM, running Linux 2.4.19pre8 with GCC 3.1.1\nand glibc 2.2.5 -- the results may or may not apply to other\nmachines.\n\nThe test program was:\n\n#include <string.h>\n#include \"postgres.h\"\n\n#undef MEMSET_LOOP_LIMIT\n#define MEMSET_LOOP_LIMIT BUFFER_SIZE\n\nint\nmain(void)\n{\n\tchar buffer[BUFFER_SIZE];\n\tlong long i;\n\n\tfor (i = 0; i < 99000000; i++)\n\t{\n\t\tMemSet(buffer, 0, sizeof(buffer));\n\t}\n\n\treturn 0;\n}\n\n(I manually changed MemSet() to memset() when testing the performance\nof the latter function.)\n\nIt was compiled like so:\n\n gcc -O2 -DBUFFER_SIZE=xxx -Ipgsql/src/include memset.c\n\n(The -O2 optimization flag is important: the results are significantly\ndifferent if it is not used.)\n\nHere are the results (each timing is the 'total' listing from 'time\n./a.out'):\n\nBUFFER_SIZE = 64\n MemSet() -> 2.756, 2.810, 2.789\n memset() -> 13.844, 13.782, 13.778\n\nBUFFER_SIZE = 128\n MemSet() -> 5.848, 5.989, 5.861\n memset() -> 15.637, 15.631, 15.631\n\nBUFFER_SIZE = 256\n MemSet() -> 9.602, 9.652, 9.633\n memset() -> 19.305, 19.370, 19.302\n\nBUFFER_SIZE = 512\n MemSet() -> 17.416, 17.462, 17.353\n memset() -> 26.657, 26.658, 26.678\n\nBUFFER_SIZE = 1024\n MemSet() -> 32.144, 32.179, 32.086\n memset() -> 41.186, 41.115, 41.176\n\nBUFFER_SIZE = 
2048\n MemSet() -> 60.39, 60.48, 60.32\n memset() -> 71.19, 71.18, 71.17\n\nBUFFER_SIZE = 4096\n MemSet() -> 118.29, 120.07, 118.69\n memset() -> 131.40, 131.41\n\n... at which point I stopped benchmarking.\n\nIs the benchmark above a reasonable assessment of memset() / MemSet()\nperformance when copying word-aligned amounts of memory? If so, what's\na good value for MEMSET_LOOP_LIMIT (perhaps 512)?\n\nAlso, if anyone would like to contribute the results of doing the\nbenchmark on their particular system, that might provide some useful\nadditional data points.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "29 Aug 2002 01:27:41 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": true, "msg_subject": "tweaking MemSet() performance" }, { "msg_contents": "\nI consider this a very good test. As you can see from the date of my\nlast test, 1997/09/11, I think I may have had a dual Pentium Pro at that\npoint, and hardware has certainly changed since then. I did try 128 at\nthat time and found it to be slower, but with newer hardware, it is very\npossible it has improved.\n\nI remember in writing that macro how surprised I was that there was any\nimprovements, but obviously there is a gain and the gain is getting\nbigger.\n\nI tested the following program:\n\t\t\n\t#include <string.h>\n\t#include \"postgres.h\"\n\t\n\t#undef\tMEMSET_LOOP_LIMIT\n\t#define\tMEMSET_LOOP_LIMIT 1000000\n\t\n\tint\n\tmain(int argc, char **argv)\n\t{\n\t\tint\t\tlen = atoi(argv[1]);\n\t\tchar\t\tbuffer[len];\n\t\tlong long\ti;\n\t\n\t\tfor (i = 0; i < 9900000; i++)\n\t\t\tMemSet(buffer, 0, len);\n\t\treturn 0;\n\t}\n\nand, yes, -O2 is significant! 
Looks like we use -O2 on all platforms\nthat use GCC so we should be OK there.\n\nI tested with the following script:\n\n\tfor TIME in 64 128 256 512 1024 2048 4096; do echo \"*$TIME\\c\";\n\ttime tst1 $TIME; done\n\nand got for MemSet:\n\t\n\t*64\n\treal 0m1.001s\n\tuser 0m1.000s\n\tsys 0m0.003s\n\t*128\n\treal 0m1.578s\n\tuser 0m1.567s\n\tsys 0m0.013s\n\t*256\n\treal 0m2.723s\n\tuser 0m2.723s\n\tsys 0m0.003s\n\t*512\n\treal 0m5.044s\n\tuser 0m5.029s\n\tsys 0m0.013s\n\t*1024\n\treal 0m9.621s\n\tuser 0m9.621s\n\tsys 0m0.003s\n\t*2048\n\treal 0m18.821s\n\tuser 0m18.811s\n\tsys 0m0.013s\n\t*4096\n\treal 0m37.266s\n\tuser 0m37.266s\n\tsys 0m0.003s\n\nand for memset():\n\t\n\t*64\n\treal 0m1.813s\n\tuser 0m1.801s\n\tsys 0m0.014s\n\t*128\n\treal 0m2.489s\n\tuser 0m2.499s\n\tsys 0m0.994s\n\t*256\n\treal 0m4.397s\n\tuser 0m5.389s\n\tsys 0m0.005s\n\t*512\n\treal 0m5.186s\n\tuser 0m6.170s\n\tsys 0m0.015s\n\t*1024\n\treal 0m6.676s\n\tuser 0m6.676s\n\tsys 0m0.003s\n\t*2048\n\treal 0m9.766s\n\tuser 0m9.776s\n\tsys 0m0.994s\n\t*4096\n\treal 0m15.970s\n\tuser 0m15.954s\n\tsys 0m0.003s\n\nso for BSD/OS, the break-even is 512.\n\nI am on a dual P3/550 using 2.95.2. I will tell you exactly why my\nbreak-even is lower than most --- I have assembly language memset()\nfunctions in libc on BSD/OS.\n\nI suggest changing the MEMSET_LOOP_LIMIT to 512.\n\n---------------------------------------------------------------------------\n\nNeil Conway wrote:\n> In include/c.h, MemSet() is defined to be different than the stock\n> function memset() only when copying less than or equal to\n> MEMSET_LOOP_LIMIT bytes (currently 64). The comments above the macro\n> definition note:\n> \n> *\tWe got the 64 number by testing this against the stock memset() on\n> *\tBSD/OS 3.0. Larger values were slower.\tbjm 1997/09/11\n> *\n> *\tI think the crossover point could be a good deal higher for\n> *\tmost platforms, actually. 
tgl 2000-03-19\n> \n> I decided to investigate Tom's suggestion and determine the\n> performance of MemSet() versus memset() on my machine, for various\n> values of MEMSET_LOOP_LIMIT. The machine this is being tested on is a\n> Pentium 4 1.8 Ghz with RDRAM, running Linux 2.4.19pre8 with GCC 3.1.1\n> and glibc 2.2.5 -- the results may or may not apply to other\n> machines.\n> \n> The test program was:\n> \n> #include <string.h>\n> #include \"postgres.h\"\n> \n> #undef MEMSET_LOOP_LIMIT\n> #define MEMSET_LOOP_LIMIT BUFFER_SIZE\n> \n> int\n> main(void)\n> {\n> \tchar buffer[BUFFER_SIZE];\n> \tlong long i;\n> \n> \tfor (i = 0; i < 99000000; i++)\n> \t{\n> \t\tMemSet(buffer, 0, sizeof(buffer));\n> \t}\n> \n> \treturn 0;\n> }\n> \n> (I manually changed MemSet() to memset() when testing the performance\n> of the latter function.)\n> \n> It was compiled like so:\n> \n> gcc -O2 -DBUFFER_SIZE=xxx -Ipgsql/src/include memset.c\n> \n> (The -O2 optimization flag is important: the results are significantly\n> different if it is not used.)\n> \n> Here are the results (each timing is the 'total' listing from 'time\n> ./a.out'):\n> \n> BUFFER_SIZE = 64\n> MemSet() -> 2.756, 2.810, 2.789\n> memset() -> 13.844, 13.782, 13.778\n> \n> BUFFER_SIZE = 128\n> MemSet() -> 5.848, 5.989, 5.861\n> memset() -> 15.637, 15.631, 15.631\n> \n> BUFFER_SIZE = 256\n> MemSet() -> 9.602, 9.652, 9.633\n> memset() -> 19.305, 19.370, 19.302\n> \n> BUFFER_SIZE = 512\n> MemSet() -> 17.416, 17.462, 17.353\n> memset() -> 26.657, 26.658, 26.678\n> \n> BUFFER_SIZE = 1024\n> MemSet() -> 32.144, 32.179, 32.086\n> memset() -> 41.186, 41.115, 41.176\n> \n> BUFFER_SIZE = 2048\n> MemSet() -> 60.39, 60.48, 60.32\n> memset() -> 71.19, 71.18, 71.17\n> \n> BUFFER_SIZE = 4096\n> MemSet() -> 118.29, 120.07, 118.69\n> memset() -> 131.40, 131.41\n> \n> ... 
at which point I stopped benchmarking.\n> \n> Is the benchmark above a reasonable assessment of memset() / MemSet()\n> performance when copying word-aligned amounts of memory? If so, what's\n> a good value for MEMSET_LOOP_LIMIT (perhaps 512)?\n> \n> Also, if anyone would like to contribute the results of doing the\n> benchmark on their particular system, that might provide some useful\n> additional data points.\n> \n> Cheers,\n> \n> Neil\n> \n> -- \n> Neil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 29 Aug 2002 15:37:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "On Thu, Aug 29, 2002 at 01:27:41AM -0400, Neil Conway wrote:\n> \n> Also, if anyone would like to contribute the results of doing the\n> benchmark on their particular system, that might provide some useful\n> additional data points.\n\nOk, here's a run on a Sun E450, Solaris 7. I presume your \"total\"\ntime label corresponds to my \"real\" time. 
That's what I'm including,\nanyway.\n\nSystem Configuration: Sun Microsystems sun4u Sun Enterprise 450 (2\nX UltraSPARC-II 400MHz)\nSystem clock frequency: 100 MHz\nMemory size: 2560 Megabytes\n\nBUFFER_SIZE = 64\n MemSet(): 0m13.343s,12.567s,13.659s\n memset(): 0m1.255s,0m1.258s,0m1.254s\n \nBUFFER_SIZE = 128\n MemSet(): 0m21.347s,0m21.200s,0m20.541s\n memset(): 0m18.041s,0m17.963s,0m17.990s\n \nBUFFER_SIZE = 256\n MemSet(): 0m38.023s,0m37.480s,0m37.631s\n memset(): 0m25.969s,0m26.047s,0m26.012s\n \nBUFFER_SIZE = 512\n MemSet(): 1m9.226s,1m9.901s,1m10.148s\n memset(): 2m17.897s,2m18.310s,2m17.984s\n\nBUFFER_SIZE = 1024\n MemSet(): 2m13.690s,2m13.981s,2m13.206s\n memset(): 4m43.195s,4m43.405s,4m43.390s\n\n. . .at which point I gave up.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Thu, 29 Aug 2002 17:59:51 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "Andrew Sullivan wrote:\n> On Thu, Aug 29, 2002 at 01:27:41AM -0400, Neil Conway wrote:\n> > \n> > Also, if anyone would like to contribute the results of doing the\n> > benchmark on their particular system, that might provide some useful\n> > additional data points.\n> \n> Ok, here's a run on a Sun E450, Solaris 7. I presume your \"total\"\n> time label corresponds to my \"real\" time. That's what I'm including,\n> anyway.\n\n\nNow, these are unusual results. In the 64 case, MemSet is dramatically\nslower, and it only starts to win around 512, and seems to speed up\nafter that.\n\nThese are strange results. 
The idea of MemSet was to prevent the\nfunction call overhead for memset, but in such a case, you would think\nthe function call overhead would reduce as a percentage of the total\ntime as the buffer got longer.\n\nIn your results it seems to suggest that memset() gets slower for longer\nbuffer lengths, and a for loop starts to win at longer sizes. Should I\npull out my Solaris kernel source and see what memset() is doing?\n\n---------------------------------------------------------------------------\n\n\n\n> System Configuration: Sun Microsystems sun4u Sun Enterprise 450 (2\n> X UltraSPARC-II 400MHz)\n> System clock frequency: 100 MHz\n> Memory size: 2560 Megabytes\n> \n> BUFFER_SIZE = 64\n> MemSet(): 0m13.343s,12.567s,13.659s\n> memset(): 0m1.255s,0m1.258s,0m1.254s\n> \n> BUFFER_SIZE = 128\n> MemSet(): 0m21.347s,0m21.200s,0m20.541s\n> memset(): 0m18.041s,0m17.963s,0m17.990s\n> \n> BUFFER_SIZE = 256\n> MemSet(): 0m38.023s,0m37.480s,0m37.631s\n> memset(): 0m25.969s,0m26.047s,0m26.012s\n> \n> BUFFER_SIZE = 512\n> MemSet(): 1m9.226s,1m9.901s,1m10.148s\n> memset(): 2m17.897s,2m18.310s,2m17.984s\n> \n> BUFFER_SIZE = 1024\n> MemSet(): 2m13.690s,2m13.981s,2m13.206s\n> memset(): 4m43.195s,4m43.405s,4m43.390s\n> \n> . . .at which point I gave up.\n> \n> A\n> \n> -- \n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <andrew@libertyrms.info> M2P 2A8\n> +1 416 646 3304 x110\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 29 Aug 2002 19:35:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "En Thu, 29 Aug 2002 19:35:13 -0400 (EDT)\nBruce Momjian <pgman@candle.pha.pa.us> escribió:\n\n> In your results it seems to suggest that memset() gets slower for longer\n> buffer lengths, and a for loop starts to win at longer sizes. Should I\n> pull out my Solaris kernel source and see what memset() is doing?\n\nNo, because memset() belongs to the libc AFAICS... Do you have source\ncode for that?\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n"Pensar que el espectro que vemos es ilusorio no lo despoja de espanto,\nsólo le suma el nuevo terror de la locura" (Perelandra, CSLewis)\n", "msg_date": "Thu, 29 Aug 2002 19:53:50 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "On Thu, 2002-08-29 at 18:53, Alvaro Herrera wrote:\n> En Thu, 29 Aug 2002 19:35:13 -0400 (EDT)\n> Bruce Momjian <pgman@candle.pha.pa.us> escribió:\n> \n> > In your results it seems to suggest that memset() gets slower for longer\n> > buffer lengths, and a for loop starts to win at longer sizes. Should I\n> > pull out my Solaris kernel source and see what memset() is doing?\n> \n> No, because memset() belongs to the libc AFAICS... Do you have source\n> code for that?\nand if you do, what vintage is it? I believe Solaris has mucked with\nstuff over the last few rev's. 
\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "29 Aug 2002 18:56:20 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "Alvaro Herrera wrote:\n> En Thu, 29 Aug 2002 19:35:13 -0400 (EDT)\n> Bruce Momjian <pgman@candle.pha.pa.us> escribi?:\n> \n> > In your results it seems to suggest that memset() gets slower for longer\n> > buffer lengths, and a for loop starts to win at longer sizes. Should I\n> > pull out my Solaris kernel source and see what memset() is doing?\n> \n> No, because memset() belongs to the libc AFAICS... Do you have source\n> code for that?\n\nYou bet. I have source code to it all, libs, /bin, etc.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 29 Aug 2002 20:08:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "Larry Rosenman wrote:\n> On Thu, 2002-08-29 at 18:53, Alvaro Herrera wrote:\n> > En Thu, 29 Aug 2002 19:35:13 -0400 (EDT)\n> > Bruce Momjian <pgman@candle.pha.pa.us> escribi?:\n> > \n> > > In your results it seems to suggest that memset() gets slower for longer\n> > > buffer lengths, and a for loop starts to win at longer sizes. Should I\n> > > pull out my Solaris kernel source and see what memset() is doing?\n> > \n> > No, because memset() belongs to the libc AFAICS... Do you have source\n> > code for that?\n> and if you do, what vintage is it? I believe Solaris has mucked with\n> stuff over the last few rev's. \n\n8.0. 
Looks like there is interested so I will dig the CD's out of the\nthe box the moves moved and take a look. Now where is that...\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 29 Aug 2002 20:09:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "Larry Rosenman wrote:\n> On Thu, 2002-08-29 at 18:53, Alvaro Herrera wrote:\n> > En Thu, 29 Aug 2002 19:35:13 -0400 (EDT)\n> > Bruce Momjian <pgman@candle.pha.pa.us> escribi?:\n> > \n> > > In your results it seems to suggest that memset() gets slower for longer\n> > > buffer lengths, and a for loop starts to win at longer sizes. Should I\n> > > pull out my Solaris kernel source and see what memset() is doing?\n> > \n> > No, because memset() belongs to the libc AFAICS... Do you have source\n> > code for that?\n> and if you do, what vintage is it? I believe Solaris has mucked with\n> stuff over the last few rev's. \n\nOK, I am not permitted to discuss the contents of the source with anyone\nexcept other Solaris source licensees, but I can say that there isn't\nanything fancy in the source.\n\nThere is nothing that would explain the slowdown of memset >512 bytes\ncompared to MemSet. All lengths 64, 128, ... use the same algorithm in\nthe memset code.\n\nI got the source from the now-cancelled Solaris Foundation Source\nProgram:\n\n\thttp://wwws.sun.com/software/solaris/source/\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 29 Aug 2002 21:20:14 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "Would you please retest this. I have attached my email showing a\nsimpler test that is less error-prone.\n\nI can't come up with any scenario that would produce what you have\nreported. If I look at function call cost, MemSet loop efficiency, and\nmemset loop efficiency, I can't come up with a combination that produces\nwhat you reported.\n\nThe standard assumption is that function call overhead is significant,\nand that memset is faster than C MemSet. What compiler are you using? \nIs the memset() call being inlined by the compiler? You will have to\nlook at the assembler code to be sure.\n\nMy only guess is that memset is inlined and that it is only moving\nsingle bytes. If that is the case, there is no function call overhead\nand it would explain why MemSet gets faster as the buffer gets larger.\n\n---------------------------------------------------------------------------\n\nAndrew Sullivan wrote:\n> On Thu, Aug 29, 2002 at 01:27:41AM -0400, Neil Conway wrote:\n> > \n> > Also, if anyone would like to contribute the results of doing the\n> > benchmark on their particular system, that might provide some useful\n> > additional data points.\n> \n> Ok, here's a run on a Sun E450, Solaris 7. I presume your \"total\"\n> time label corresponds to my \"real\" time. 
That's what I'm including,\n> anyway.\n> \n> System Configuration: Sun Microsystems sun4u Sun Enterprise 450 (2\n> X UltraSPARC-II 400MHz)\n> System clock frequency: 100 MHz\n> Memory size: 2560 Megabytes\n> \n> BUFFER_SIZE = 64\n> MemSet(): 0m13.343s,12.567s,13.659s\n> memset(): 0m1.255s,0m1.258s,0m1.254s\n> \n> BUFFER_SIZE = 128\n> MemSet(): 0m21.347s,0m21.200s,0m20.541s\n> memset(): 0m18.041s,0m17.963s,0m17.990s\n> \n> BUFFER_SIZE = 256\n> MemSet(): 0m38.023s,0m37.480s,0m37.631s\n> memset(): 0m25.969s,0m26.047s,0m26.012s\n> \n> BUFFER_SIZE = 512\n> MemSet(): 1m9.226s,1m9.901s,1m10.148s\n> memset(): 2m17.897s,2m18.310s,2m17.984s\n> \n> BUFFER_SIZE = 1024\n> MemSet(): 2m13.690s,2m13.981s,2m13.206s\n> memset(): 4m43.195s,4m43.405s,4m43.390s\n> \n> . . .at which point I gave up.\n> \n> A\n> \n> -- \n> ----\n> Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <andrew@libertyrms.info> M2P 2A8\n> +1 416 646 3304 x110\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073", "msg_date": "Thu, 29 Aug 2002 23:07:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Would you please retest this. I have attached my email showing a\n> simpler test that is less error-prone.\n\nWhat did you consider less error-prone, exactly?\n\nNeil's original test considered the case where both the value being\nset and the buffer length (second and third args of MemSet) are\ncompile-time constants. Your test used a compile-time-constant second\narg and a variable third arg. 
It's obvious from looking at the source\nof MemSet that this will make a difference in what an optimizing\ncompiler can do.\n\nI believe that both cases are interesting in practice in the Postgres\nsources, but I have no idea about their relative frequency of\noccurrence.\n\nFWIW, I get numbers like the following for the constant-third-arg\nscenario, using \"gcc -O2\" with gcc 2.95.3 on HPUX 10.20, HPPA C180\nprocessor:\n\nbufsize\t\tMemSet\t\tmemset\n64\t\t0m1.71s\t\t0m4.89s\n128\t\t0m2.51s\t\t0m5.36s\n256\t\t0m4.11s\t\t0m7.02s\n512\t\t0m7.32s\t\t0m10.31s\n1024\t\t0m13.74s\t0m16.90s\n2048\t\t0m26.58s\t0m30.08s\n4096\t\t0m52.24s\t0m56.43s\n\nSo I'd go for a crossover point of *at least* 512. IIRC, I got\nsimilar numbers two years ago that led me to put the comment into\nc.h that Neil is reacting to...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 02:17:44 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance " }, { "msg_contents": "On Thu, Aug 29, 2002 at 07:35:13PM -0400, Bruce Momjian wrote:\n> Andrew Sullivan wrote:\n> > On Thu, Aug 29, 2002 at 01:27:41AM -0400, Neil Conway wrote:\n> > > \n> > > Also, if anyone would like to contribute the results of doing the\n> > > benchmark on their particular system, that might provide some useful\n> > > additional data points.\n> > \n> > Ok, here's a run on a Sun E450, Solaris 7. I presume your \"total\"\n> > time label corresponds to my \"real\" time. That's what I'm including,\n> > anyway.\n> \n> \n> Now, these are unusual results. In the 64 case, MemSet is dramatically\n> slower, and it only starts to win around 512, and seems to speed up\n> after that.\n> \n> These are strange results. The idea of MemSet was to prevent the\n\nYes, I was rather surprised, too. In fact, the first couple of runs\nI thought I must have made a mistake and compiled with (for instance)\nMemSet() instead of memset(). 
But I triple-checked, and I hadn't.\n\nFWIW, here's an example of what I used to call the compiler:\n\ngcc -O2 -DBUFFER_SIZE=1024 -Ipostgresql-7.2.1/src/include/ memset.c\n\nA\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 30 Aug 2002 08:01:02 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "On Thu, Aug 29, 2002 at 11:07:03PM -0400, Bruce Momjian wrote:\n\n> and that memset it faster than C MemSet. What compiler are you using? \n\nSorry. Should have included that.\n\n$gcc --version\n2.95.3\n\n> Is the memset() call being inlined by the compiler? You will have to\n> look at the assembler code to be sure.\n\nNo idea. I can maybe check this out later, but I'll have to ask one\nof my colleagues for help. My knowledge of what I am looking at runs\nout way before looking at assembler code :(\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 30 Aug 2002 08:04:14 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Would you please retest this. I have attached my email showing a\n> > simpler test that is less error-prone.\n> \n> What did you consider less error-prone, exactly?\n> \n> Neil's original test considered the case where both the value being\n> set and the buffer length (second and third args of MemSet) are\n> compile-time constants. Your test used a compile-time-constant second\n> arg and a variable third arg. 
It's obvious from looking at the source\n> of MemSet that this will make a difference in what an optimizing\n> compiler can do.\n\nIt was less error-prone because you don't have to recompile for every\nconstant, though your idea that a non-constant length may affect the\noptimizer is possible, though I assumed for >=64, the length would not\nbe significant to the optimizer.\n\nShould we take it to 1024 as a switchover point? I am low at 512, and\nothers are higher, so 1024 seems like a good average.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 Aug 2002 10:53:19 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "On Thu, Aug 29, 2002 at 11:07:03PM -0400, Bruce Momjian wrote:\n> \n> Would you please retest this. I have attached my email showing a\n> simpler test that is less error-prone.\n\nOk, here you go. Same machine as before, 2-way UltraSPARC-II @400\nMHz, 2.5 G, gcc 2.95.3, Solaris 7. This gcc compiles 32 bit apps. 
\n\nMemSet():\n\n*64\n\nreal\t0m1.298s\nuser\t0m1.290s\nsys\t0m0.010s\n*128\n\nreal\t0m2.251s\nuser\t0m2.250s\nsys\t0m0.000s\n*256\n\nreal\t0m3.734s\nuser\t0m3.720s\nsys\t0m0.010s\n*512\n\nreal\t0m7.041s\nuser\t0m7.010s\nsys\t0m0.020s\n*1024\n\nreal\t0m13.353s\nuser\t0m13.350s\nsys\t0m0.000s\n*2048\n\nreal\t0m26.178s\nuser\t0m26.040s\nsys\t0m0.000s\n*4096\n\nreal\t0m51.838s\nuser\t0m51.670s\nsys\t0m0.010s\n\nand memset()\n\n*64\n\nreal\t0m1.469s\nuser\t0m1.460s\nsys\t0m0.000s\n*128\n\nreal\t0m1.813s\nuser\t0m1.810s\nsys\t0m0.000s\n*256\n\nreal\t0m2.747s\nuser\t0m2.730s\nsys\t0m0.010s\n*512\n\nreal\t0m12.478s\nuser\t0m12.370s\nsys\t0m0.010s\n*1024\n\nreal\t0m26.128s\nuser\t0m26.010s\nsys\t0m0.000s\n*2048\n\nreal\t0m57.663s\nuser\t0m57.320s\nsys\t0m0.010s\n*4096\n\nreal\t1m53.772s\nuser\t1m53.290s\nsys\t0m0.000s\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 30 Aug 2002 17:15:24 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "Andrew Sullivan wrote:\n\n>On Thu, Aug 29, 2002 at 01:27:41AM -0400, Neil Conway wrote:\n> \n>\n>>Also, if anyone would like to contribute the results of doing the\n>>benchmark on their particular system, that might provide some useful\n>>additional data points.\n>> \n>>\nLinux 2.4.18 (preempt) gcc 2.95.4 Dual Althon 1600XP 1Gb DDR\n\nmemset() *64 2.999s\nMemSet() *64 3.640s\n\nmemset() *128 4.211s\nMemSet() *128 5.933s\n\nmemset() *256 6.624s\nMemSet()*256 14.889s\n\nmemset() *512 11.182s\nMemSet()*512 28.583s\n\nmemset() *1024 20.288s\nMemSet()*1024 55.853s\n\nmemset() *2048 38.513s\nMemSet()*2048 1m50.555s\n\nmemset()*4096 1m15.010s\nMemSet()*4096 3m40.381s\n\nLinux 2.4.16 gcc 2.95.4 Dual Celeron 400 512Mb PC66\n\nmemset() *64 15.618s\nMemSet() *64 12.864s\n\nmemset() *128 24.524s\nMemSet() *128 
21.852s\n\nmemset() *256 53.963s\nMemSet() *256 52.012s\n\nmemset() *512 1m31.232s\nMemSet() *512 1m39.445s\n\nmemset() *1024 2m44.609s\nMemSet() *1024 3m14.567s\n\nmemset() *2048 5m12.630s\nMemSet() *2048 6m24.916s\n\nmemset() *4096 10m8.183s\nMemSet() *4096 12m43.830s\n\nAshley Cambrell", "msg_date": "Sat, 31 Aug 2002 09:31:25 +1000", "msg_from": "Ashley Cambrell <ash@freaky-namuh.com>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" }, { "msg_contents": "\nOK, seems we have wildly different values for MemSet for different\nmachines. I am going to up the MEMSET_LOOP_LIMIT value to 1024 because\nit seems to be the best value on most machines. We can revisit this in\n7.4.\n\nI wonder if a configure test is going to be required for this\neventually. 
I think random page size needs the same handling.\n\nMaybe I should add to TODO:\n\n o compute optimal MEMSET_LOOP_LIMIT value via configure.\n\nIs there a significant benefit? Can someone run some query with MemSet\nvs. memset and see a timing difference? You can use the new GUC param\nlog_duration I just committed.\n\nRemember, I added MemSet to eliminate the function call overhead, but at\nthis point, we are now seeing that MemSet beats memset() for ordinary\nmemory setting, and function call overhead isn't even an issue with the\nlarger buffer sizes.\n\n---------------------------------------------------------------------------\n\nAndrew Sullivan wrote:\n> On Thu, Aug 29, 2002 at 11:07:03PM -0400, Bruce Momjian wrote:\n> > \n> > Would you please retest this. I have attached my email showing a\n> > simpler test that is less error-prone.\n> \n> Ok, here you go. Same machine as before, 2-way UltraSPARC-II @400\n> MHz, 2.5 G, gcc 2.95.3, Solaris 7. This gcc compiles 32 bit apps. \n> \n> MemSet():\n> \n> *64\n> \n> real\t0m1.298s\n> user\t0m1.290s\n> sys\t0m0.010s\n> *128\n> \n> real\t0m2.251s\n> user\t0m2.250s\n> sys\t0m0.000s\n> *256\n> \n> real\t0m3.734s\n> user\t0m3.720s\n> sys\t0m0.010s\n> *512\n> \n> real\t0m7.041s\n> user\t0m7.010s\n> sys\t0m0.020s\n> *1024\n> \n> real\t0m13.353s\n> user\t0m13.350s\n> sys\t0m0.000s\n> *2048\n> \n> real\t0m26.178s\n> user\t0m26.040s\n> sys\t0m0.000s\n> *4096\n> \n> real\t0m51.838s\n> user\t0m51.670s\n> sys\t0m0.010s\n> \n> and memset()\n> \n> *64\n> \n> real\t0m1.469s\n> user\t0m1.460s\n> sys\t0m0.000s\n> *128\n> \n> real\t0m1.813s\n> user\t0m1.810s\n> sys\t0m0.000s\n> *256\n> \n> real\t0m2.747s\n> user\t0m2.730s\n> sys\t0m0.010s\n> *512\n> \n> real\t0m12.478s\n> user\t0m12.370s\n> sys\t0m0.010s\n> *1024\n> \n> real\t0m26.128s\n> user\t0m26.010s\n> sys\t0m0.000s\n> *2048\n> \n> real\t0m57.663s\n> user\t0m57.320s\n> sys\t0m0.010s\n> *4096\n> \n> real\t1m53.772s\n> user\t1m53.290s\n> sys\t0m0.000s\n> \n> A\n> \n> -- \n> ----\n> 
Andrew Sullivan 204-4141 Yonge Street\n> Liberty RMS Toronto, Ontario Canada\n> <andrew@libertyrms.info> M2P 2A8\n> +1 416 646 3304 x110\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 1 Sep 2002 19:41:12 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: tweaking MemSet() performance" } ]
[ { "msg_contents": "I have removed bunch of #ifdef MULTIBYTE per recent pgsql-hackers\ndiscussion. All regression tests passed.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 29 Aug 2002 16:23:15 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "#ifdef MULTIBYTE removed" }, { "msg_contents": "Tatsuo Ishii <t-ishii@sra.co.jp> writes:\n> I have removed bunch of #ifdef MULTIBYTE per recent pgsql-hackers\n> discussion. All regression tests passed.\n\nI didn't follow the -hackers discussion, but shouldn't the references\nto '--enable-multibyte' and the MULTIBYTE #define be removed from\nsrc/include/pg_config.h.in ?\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "29 Aug 2002 03:49:34 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: #ifdef MULTIBYTE removed" }, { "msg_contents": "> I didn't follow the -hackers discussion, but shouldn't the references\n> to '--enable-multibyte' and the MULTIBYTE #define be removed from\n> src/include/pg_config.h.in ?\n\nThanks for pointing it out. I just forgot to commit the file.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 29 Aug 2002 17:03:44 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: #ifdef MULTIBYTE removed" } ]
[ { "msg_contents": "\n> And dealing with a real name would be nice, IMHO. \n> Otherwise we may end up with 'SMtT' as the nickname, 'SMitTy' perhaps ?\n:-)\nNever camed across with such an offensive bullshit. \nBut we will not end up with 'SMtT' nor with 'SMitTy', i am sure of it.\nAlso , i never camed across with the situation, when people asked me in\nrather offensive \nstyle to provide my real name and i a bit confused...\nThanks anyway. That was a very interesting experience.\n\n> If he wants to call himself 'Sir Modred' or 'Donald Duck' or 'Jack the\nRipper' or whatever...\nI do NOT want to call myself 'Donald Duck' nor 'Jack The Ripper' nor\n'whatever'...\n\n> Even in the security business, where people routinely use pseudonyms ...\nI am not in a security business, in fact i don't give a fuck about the\nsecurity business at all...\nI am just an ordinary javascript programmer.\n\n> When you have to read and process nearly 1,000 e-mails a day (as I have\nhad to do)\nThat'is your problem, if you are not able to do that, change the job.. \n\n> At least I didn't just bitch and moan about the bugs. ;)\nGood boy, but surely that's me who is moaning and bitching about the bugs\nhere?\n\n\n________________________________________________________________________\nThis letter has been delivered unencrypted. We'd like to remind you that\nthe full protection of e-mail correspondence is provided by S-mail\nencryption mechanisms if only both, Sender and Recipient use S-mail.\nRegister at S-mail.com: http://www.s-mail.com\n", "msg_date": "Thu, 29 Aug 2002 09:45:11 +0000", "msg_from": "Sir Mordred The Traitor <mordred@s-mail.com>", "msg_from_op": true, "msg_subject": "Misc replies" } ]
[ { "msg_contents": "Bruce,\n\nplease apply small patch for README.tsearch.\n\nI've documented space usage and using CLUSTER command\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Thu, 29 Aug 2002 15:49:14 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": true, "msg_subject": "README.tsearch.diff for CVS" }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nOleg Bartunov wrote:\n> Bruce,\n> \n> please apply small patch for README.tsearch.\n> \n> I've documented space usage and using CLUSTER command\n> \n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n\nContent-Description: \n\n[ Attachment, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 29 Aug 2002 15:55:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: README.tsearch.diff for CVS" } ]
[ { "msg_contents": "On Thu, Aug 29, 2002 at 10:53:12 -0300,\n Paul Cowan <cowanpf@ednet.ns.ca> wrote:\n> I do not know if this is the right list for this question but I am at \n> the end of my rope. I have migrated a number of databases into postgres \n> and everything seems to be working fine as far as the data is \n> concerned. I am now beginning the development work using php. The problem \n> is when some of the users are trying to extract data using MS Word 2002. \n> They try to do a mail merge pulling the data from postgres. But it does \n> not seem to see the tables. I am able to make an odbc connection to the \n> database. If I make an odbc connection using MS Access and then pull the \n> data from the access query to the word mail merge it works. What I would \n> like to do is to avoid using access. Does anyone know how I can get MS \n> Word 2002 to import from a postgres table. \n\nCould this be a case problem with what word is using for table names?\n", "msg_date": "Thu, 29 Aug 2002 08:52:55 -0500", "msg_from": "Bruno Wolff III <bruno@wolff.to>", "msg_from_op": true, "msg_subject": "Re: ms word 2002" }, { "msg_contents": "I do not know if this is the right list for this question but I am at \nthe end of my rope. I have migrated a number of databases into postgres \nand everything seems to be working fine as far as the data is \nconcerned. I am now beginning the development work using php. The problem \nis when some of the users are trying to extract data using MS Word 2002. \nThey try to do a mail merge pulling the data from postgres. But it does \nnot seem to see the tables. I am able to make an odbc connection to the \ndatabase. If I make an odbc connection using MS Access and then pull the \ndata from the access query to the word mail merge it works. What I would \nlike to do is to avoid using access. Does anyone know how I can get MS \nWord 2002 to import from a postgres table. 
\n\n", "msg_date": "Thu, 29 Aug 2002 10:53:12 -0300", "msg_from": "Paul Cowan <cowanpf@ednet.ns.ca>", "msg_from_op": false, "msg_subject": "ms word 2002" } ]
[ { "msg_contents": "i think i might've stumbled across a tiny defect in the optimizer. \nunfortunately, i haven't the knowledge of the code to know where to \nbegin looking at how to address this problem.\n\nanyway, consider the following:\n\ncreate table foo(\n\tid int2\n);\n\ncreate table bar(\n\tid int2,\n\tfoo_id int2 references foo( id )\n);\n\nimagine that the tables are populated.\n\nnow, consider the query\n\nselect b.foo_id\nfrom bar b\nwhere b.id = <some id>\nand\nexists(\n\tselect *\n\tfrom foo f\n\twhere b.foo_id = f.id\n\tand b.id = <some id, as above>\n);\n\nnow consider the same query with \"select <constant>\" in place of \"select \n*\" in the EXISTS subquery.\n\nexplain analyze indicates that the constant version always runs a little \nbit faster. shouldn't the optimizer be able to determine that it isn't \nnecessary actually to read a row in the case of EXISTS? i'm assuming \nthat's where the overhead is coming into play.\n\ni realize this is minutiae in comparison to other aspects of \ndevelopment, but it is another small performance boost that could be \nadded since i imagine many people, myself included, find it more natural \nto throw in \"select *\" rather than \"select <constant>\".\n\ni didn't see this on the current lists or TODO, but if it's a dupe, i \napologize for the noise. i also apologize for not being able to patch \nit, myself!\n\n-tfo\n\n", "msg_date": "Thu, 29 Aug 2002 11:13:03 -0500", "msg_from": "\"Thomas F. O'Connell\" <tfo@monsterlabs.com>", "msg_from_op": true, "msg_subject": "the optimizer and exists" } ]
[ { "msg_contents": "Hi,\n\nI checked all the previous string handling errors and most of them were \nalready fixed by You. However there were a few left and attached patch \nshould fix the rest of them.\n\nI used StringInfo only in 2 places and both of them are inside debug \nifdefs. Only performance penalty will come from using strlen() like all \nthe other code does.\n\nI also modified some of the already patched parts by changing \nsnprintf(buf, 2 * BUFSIZE, ... style lines to\nsnprintf(buf, sizeof(buf), ... where buf is an array.\n\nThis patch also passes all regression testing:\n======================\n All 89 tests passed.\n======================\n\nPatch is in -c format as requested and also available from \nhttp://suihkari.baana.suomi.net/postgresql/patches/postgresql-CVS-2002-08-29-sprintf.patch\n\n- Jukka", "msg_date": "Fri, 30 Aug 2002 00:12:22 +0300", "msg_from": "Jukka Holappa <jukkaho@mail.student.oulu.fi>", "msg_from_op": true, "msg_subject": "[PATCH] Sprintf() patch against current CVS tree." } ]
[ { "msg_contents": "-----BEGIN PGP SIGNED MESSAGE-----\nHash: SHA1\n\nHi,\n\nI checked all the previous string handling errors and most of them were\nalready fixed by You. However there were a few left and this patch\nshould fix the rest of them.\n\nI used StringInfo only in 2 places and both of them are inside debug\nifdefs. Only performance penalty will come from using strlen() like all\nthe other code does.\n\nI also modified some of the already patched parts by changing\nsnprintf(buf, 2 * BUFSIZE, ... style lines to\nsnprintf(buf, sizeof(buf), ... where buf is an array.\n\nThis patch also passes all regression testing:\n======================\n~ All 89 tests passed.\n======================\n\nPatch is in -c format as requested and available from\nhttp://suihkari.baana.suomi.net/postgresql/patches/postgresql-CVS-2002-08-29-sprintf.patch\n\nI tried to send it as attachment, but seems like nothing went through\n(and my mailer would certainly mangle an inline patch).\n\n- - Jukka\n\n-----BEGIN PGP SIGNATURE-----\nVersion: GnuPG v1.0.7 (GNU/Linux)\nComment: Using GnuPG with Mozilla - http://enigmail.mozdev.org\n\niD8DBQE9bpIfYYWM2XTSwX0RAk0cAJ4odC77W2xnSAreVCrWGVTrkog02wCeN8rZ\npl0XmFUqMuGpBhSydp/rhpA=\n=QVKk\n-----END PGP SIGNATURE-----\n\n", "msg_date": "Fri, 30 Aug 2002 00:29:03 +0300", "msg_from": "Jukka Holappa <jukkaho@mail.student.oulu.fi>", "msg_from_op": true, "msg_subject": "[PATCH] Sprintf() patch against current CVS tree." } ]
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql-server\nChanges by:\tpetere@postgresql.org\t02/08/29 18:09:23\n\nModified files:\n\tsrc/include/port: hpux.h \n\nLog message:\n\tWorkaround for broken large file support on HP-UX\n\n", "msg_date": "Thu, 29 Aug 2002 18:09:23 -0400 (EDT)", "msg_from": "petere@postgresql.org (Peter Eisentraut - PostgreSQL)", "msg_from_op": true, "msg_subject": "pgsql-server/src/include/port hpux.h" }, { "msg_contents": "petere@postgresql.org (Peter Eisentraut - PostgreSQL) writes:\n> Modified files:\n> \tsrc/include/port: hpux.h \n> Log message:\n> \tWorkaround for broken large file support on HP-UX\n\nGood try but it didn't help. After looking more closely I've realized\nthat HP's system headers are just hopelessly broken, at least on HPUX\n10.20 (which, to be fair, is well behind the curve now). There is just\nno way to compile 64-bit support without drawing warnings in \n-Wmissing-declarations mode, because they've simply not included all\nthe declarations that should be there. 
_LARGEFILE64_SOURCE was a red\nherring --- I forgot to count underscores carefully, and I now see that\nthe declarations that _LARGEFILE64_SOURCE exposes aren't the ones that\ngcc is complaining about the lack of.\n\nWhat I'm currently thinking we should do is default largefile support to\noff in HPUX < 11.0; is there a convenient way to accomplish that in\nautoconf?\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Aug 2002 22:09:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgsql-server/src/include/port hpux.h " }, { "msg_contents": "Tom Lane writes:\n\n> What I'm currently thinking we should do is default largefile support to\n> off in HPUX < 11.0; is there a convenient way to accomplish that in\n> autoconf?\n\nSomething like this maybe (before AC_SYS_LARGEFILE):\n\ncase $host_os in hpuxZYX*)\nif test \"${enable_largefile+set}\" != set; then\nenable_largefile=no\nfi\nesac\n\nI found an HP whitepaper on the large file support and it seems they don't\nhave the problem of missing declarations.\n\nhttp://docs.hp.com/hpux/onlinedocs/os/lgfiles4.pdf\n\nNote the example in section 6.4.1. On page 37 they grep the preprocessed\nsource and somehow manage to get a declaration of __lseek64() in there.\nMaybe it's a later OS release.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Mon, 2 Sep 2002 20:02:23 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/src/include/port hpux.h " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I found an HP whitepaper on the large file support and it seems they don't\n> have the problem of missing declarations.\n> http://docs.hp.com/hpux/onlinedocs/os/lgfiles4.pdf\n> Note the example in section 6.4.1. 
On page 37 they grep the preprocessed\n> source and somehow manage to get a declaration of __lseek64() in there.\n\nYeah, but you'll notice they *have to* declare __lseek64(), 'cause the\ncompiler will otherwise assume it returns int. I've been through these\nfiles now and it seems that they've very carefully included only the\nminimal declarations they absolutely had to. What's really annoying\nis that the prototypes are there --- but #ifdef'd out of sight:\n\n# if defined(_FILE64)\n# ifndef __cplusplus\n extern off_t __lseek64();\n# ifdef __STDC_EXT__\n static truncate(a,b) const char *a; off_t b; { return __truncate64(a,b); }\n# else\n static truncate(a,b) off_t b; { return __truncate64(a,b); }\n# endif /* __STDC_EXT__ */\n static int prealloc(a,b) off_t b; \t { return __prealloc64(a,b); }\n static int lockf(a,b,c) off_t c; \t\t { return __lockf64(a,b,c); }\n static int ftruncate(a,b) off_t b;\t \t { return __ftruncate64(a,b); }\n static off_t lseek(a,b,c) off_t b;\t\t { return __lseek64(a,b,c); }\n# else /* __cplusplus */\n extern off_t __lseek64(int, off_t, int);\n extern int __truncate64(const char *, off_t);\n extern int __prealloc64(int, off_t);\n extern int __lockf64(int, int, off_t);\n extern int __ftruncate64(int, off_t);\n inline int truncate(const char *a, off_t b) { return __truncate64(a,b); }\n inline int prealloc(int a, off_t b) { return __prealloc64(a,b); }\n inline int lockf(int a, int b, off_t c) { return __lockf64(a,b,c); }\n inline int ftruncate(int a, off_t b) { return __ftruncate64(a,b); }\n inline off_t lseek(int a, off_t b, int c) { return __lseek64(a,b,c); }\n# endif /* __cplusplus */\n# endif /* _FILE64 */\n\nThe bottom line is that large file support does work on HPUX 10.20, but\nit will generate a ton of warning messages if you build using gcc and\n-Wmissing-declarations.\n\nI'm now thinking that I will just edit my local copies of the system\nheaders to add the missing declarations so that I don't see these\nwarnings. 
Turning off largefile support is probably too high a price\nfor users to pay just so Tom Lane doesn't have to see warnings ;-)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 12:32:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/src/include/port hpux.h " } ]
[ { "msg_contents": "With the recent talk of RULE regression failures, I thought I'd bring back\nup that I _always_ have a rule failure on Freebsd/alpha.\n\nThe files are attached...\n\nChris", "msg_date": "Fri, 30 Aug 2002 10:01:53 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "RULE regression test failure" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> With the recent talk of RULE regression failures, I thought I'd bring back\n> up that I _always_ have a rule failure on Freebsd/alpha.\n\nHm, what do you get from\n\texplain SELECT * FROM shoe_ready WHERE total_avail >= 2;\nin the regression database?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 02:30:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RULE regression test failure " }, { "msg_contents": "Ummmm...how do I make the regression database!? Do I have to do\ninstallcheck instead of check?\n\nChris\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, 30 August 2002 2:31 PM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] RULE regression test failure\n>\n>\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > With the recent talk of RULE regression failures, I thought I'd\n> bring back\n> > up that I _always_ have a rule failure on Freebsd/alpha.\n>\n> Hm, what do you get from\n> \texplain SELECT * FROM shoe_ready WHERE total_avail >= 2;\n> in the regression database?\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Fri, 30 Aug 2002 14:32:55 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: RULE regression test failure " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Ummmm...how do I make the regression database!? 
Do I have to do\n> installcheck instead of check?\n\nThat's the easiest way; or you can restart the temp postmaster that\nthe regression script starts and kills.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 02:34:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RULE regression test failure " }, { "msg_contents": "On Fri, 30 Aug 2002, Christopher Kings-Lynne wrote:\n\n> Ummmm...how do I make the regression database!? Do I have to do\n> installcheck instead of check?\n\nyes\n\nDatabase is 'regression'\n\ng\n\n", "msg_date": "Fri, 30 Aug 2002 16:37:53 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: RULE regression test failure " }, { "msg_contents": "An aside.\n\nIn the fulltextindex code I'm trying to figure out what's breaking the\nattached code segment.\n\nBasically the data->vl_len line causes a segfault on the second time thru\nthe while loop. I can't figure it out. I can't write to the value, but\nwhy? 
Basically with a word like 'john', it is inserting 'hn', then 'ohn'\nand then 'john' into the database.\n\nThanks for any help for me getting this in for the beta!\n\nChris\n-------------------------------------\n\nstruct varlena *data;\nchar *word = NULL;\nchar *cur_pos = NULL;\nint cur_pos_length = 0;\n\ndata = (struct varlena *) palloc(column_length);\n\nwhile(cur_pos > word)\n{\n\tcur_pos_length = strlen(cur_pos);\n\t/* Line below causes seg fault on SECOND iteration */\n\tdata->vl_len = cur_pos_length + sizeof(int32);\n\tmemcpy(VARDATA(data), cur_pos, cur_pos_length);\n\tvalues[0] = PointerGetDatum(data);\n\tvalues[1] = 0;\n\tvalues[2] = oid;\n\n\tret = SPI_execp(*(plan->splan), values, NULL, 0);\n\tif(ret != SPI_OK_INSERT)\n\t\telog(ERROR, \"Full Text Indexing: error executing plan in insert\\n\");\n\n\tcur_pos--;\n}\n\n", "msg_date": "Fri, 30 Aug 2002 14:42:15 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Fulltextindex" }, { "msg_contents": "Attached.\n\nChris\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, 30 August 2002 2:31 PM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] RULE regression test failure \n> \n> \n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > With the recent talk of RULE regression failures, I thought I'd \n> bring back\n> > up that I _always_ have a rule failure on Freebsd/alpha.\n> \n> Hm, what do you get from\n> \texplain SELECT * FROM shoe_ready WHERE total_avail >= 2;\n> in the regression database?\n> \n> \t\t\tregards, tom lane\n>", "msg_date": "Fri, 30 Aug 2002 14:52:16 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: RULE regression test failure " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> struct varlena *data;\n> char *word = NULL;\n> char 
*cur_pos = NULL;\n> int cur_pos_length = 0;\n\n> data = (struct varlena *) palloc(column_length);\n\n> while(cur_pos > word)\n> {\n> \tcur_pos_length = strlen(cur_pos);\n> \t/* Line below causes seg fault on SECOND iteration */\n\nYou are not telling the whole truth here, as the above code excerpt\nwill obviously never iterate the WHILE even once. \"NULL > NULL\" is\nfalse in every C I ever heard of.\n\nAlso, how much is column_length and how does it relate to the amount\nof data being copied into *data ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 02:53:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fulltextindex " }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n>> Hm, what do you get from\n>> explain SELECT * FROM shoe_ready WHERE total_avail >= 2;\n>> in the regression database?\n>\n> [this plan]\n\nThat seems substantially the same plan as I see here. I guess\nthat the different output order must reflect a platform-specific\ndifference in qsort()'s treatment of equal keys.\n\nProbably the best answer is to add \"ORDER BY shoename\" to the test\nquery to eliminate the platform dependency. Any objections out there?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 03:00:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: RULE regression test failure " }, { "msg_contents": "> That seems substantially the same plan as I see here. I guess\n> that the different output order must reflect a platform-specific\n> difference in qsort()'s treatment of equal keys.\n> \n> Probably the best answer is to add \"ORDER BY shoename\" to the test\n> query to eliminate the platform dependency. 
Any objections out there?\n\nNone here.\n\nChris\n\n", "msg_date": "Fri, 30 Aug 2002 15:02:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: RULE regression test failure " }, { "msg_contents": "OK, I was probably a little aggressive in my pruning...BTW, this code was\nnot written by me...\n\n--------------------------------\n\nstruct varlena *data;\nchar *word = \"john\";\nchar *cur_pos = NULL;\nint cur_pos_length = 0;\n\ndata = (struct varlena *) palloc(VARHDRSZ + column_length + 1);\nword_length = strlen(word);\ncur_pos = &word[word_length - 2];\n\nwhile(cur_pos > word)\n{\n\tcur_pos_length = strlen(cur_pos);\n\t/* Line below causes seg fault on SECOND iteration */\n\tdata->vl_len = cur_pos_length + sizeof(int32);\n\tmemcpy(VARDATA(data), cur_pos, cur_pos_length);\n\tvalues[0] = PointerGetDatum(data);\n\tvalues[1] = 0;\n\tvalues[2] = oid;\n\n\tret = SPI_execp(*(plan->splan), values, NULL, 0);\n\tif(ret != SPI_OK_INSERT)\n\t\telog(ERROR, \"Full Text Indexing: error executing plan in insert\\n\");\n\n\tcur_pos--;\n}\n\n> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us]\n> Sent: Friday, 30 August 2002 2:53 PM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: Fulltextindex\n>\n>\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > struct varlena *data;\n> > char *word = NULL;\n> > char *cur_pos = NULL;\n> > int cur_pos_length = 0;\n>\n> > data = (struct varlena *) palloc(column_length);\n>\n> > while(cur_pos > word)\n> > {\n> > \tcur_pos_length = strlen(cur_pos);\n> > \t/* Line below causes seg fault on SECOND iteration */\n>\n> You are not telling the whole truth here, as the above code excerpt\n> will obviously never iterate the WHILE even once. 
\"NULL > NULL\" is\n> false in every C I ever heard of.\n>\n> Also, how much is column_length and how does it relate to the amount\n> of data being copied into *data ?\n>\n> \t\t\tregards, tom lane\n>\n\n", "msg_date": "Fri, 30 Aug 2002 15:08:54 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Fulltextindex " }, { "msg_contents": "\nOn Fri, 30 Aug 2002, Christopher Kings-Lynne wrote:\n> \n> --------------------------------\n> \n> struct varlena *data;\n> char *word = \"john\";\n> char *cur_pos = NULL;\n> int cur_pos_length = 0;\n> \n> data = (struct varlena *) palloc(VARHDRSZ + column_length + 1);\n> word_length = strlen(word);\n> cur_pos = &word[word_length - 2];\n> \n> while(cur_pos > word)\n> {\n> \tcur_pos_length = strlen(cur_pos);\n> \t/* Line below causes seg fault on SECOND iteration */\n> \tdata->vl_len = cur_pos_length + sizeof(int32);\n> \tmemcpy(VARDATA(data), cur_pos, cur_pos_length);\n> \tvalues[0] = PointerGetDatum(data);\n> \tvalues[1] = 0;\n> \tvalues[2] = oid;\n> \n> \tret = SPI_execp(*(plan->splan), values, NULL, 0);\n> \tif(ret != SPI_OK_INSERT)\n> \t\telog(ERROR, \"Full Text Indexing: error executing plan in insert\\n\");\n> \n> \tcur_pos--;\n> }\n> \n\nThat would imply the SPI_execp call is trashing the value of data. Have you\nconfirmed that? (Sometimes it helps to confirm exactly where a pointer is\ngetting hammered.)\n\ncolumn_length is something sensible like word_length I presume.\n\nThat sizeof(int32) should really be VARHDRSZ imo, but I can't see how that's\nbreaking it.\n\nDisclaimer: I have no idea what I'm doing here.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Fri, 30 Aug 2002 12:22:01 +0100 (BST)", "msg_from": "\"Nigel J. 
Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Fulltextindex" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> struct varlena *data;\n> char *word = \"john\";\n> char *cur_pos = NULL;\n> int cur_pos_length = 0;\n\n> data = (struct varlena *) palloc(VARHDRSZ + column_length + 1);\n> word_length = strlen(word);\n> cur_pos = &word[word_length - 2];\n\n> while(cur_pos > word)\n> {\n> \tcur_pos_length = strlen(cur_pos);\n> \t/* Line below causes seg fault on SECOND iteration */\n> \tdata->vl_len = cur_pos_length + sizeof(int32);\n> \tmemcpy(VARDATA(data), cur_pos, cur_pos_length);\n> \tvalues[0] = PointerGetDatum(data);\n> \tvalues[1] = 0;\n> \tvalues[2] = oid;\n\n> \tret = SPI_execp(*(plan->splan), values, NULL, 0);\n> \tif(ret != SPI_OK_INSERT)\n> \t\telog(ERROR, \"Full Text Indexing: error executing plan in insert\\n\");\n\n> \tcur_pos--;\n> }\n\nAre you sure it's actually segfaulting *at* the store into data->vl_len?\nThis seems hard to believe, if data is a local variable. It seems\npossible that the storage data is pointing to gets freed during\nSPI_execp, but that would just mean you'd be scribbling on memory that\ndoesn't belong to you --- which might cause a crash later, but surely\nnot at that line.\n\nIt would be worth looking to see which context is active when you do the\npalloc() for data, and then watch to see if anything does a\nMemoryContextReset on it. 
(If you are running with asserts enabled,\nan even simpler test is to look and see if data->vl_len gets changed\nunderneath you.)\n\nAlso, I'm still wondering if column_length is guaranteed to be longer\nthan word_length.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 09:44:03 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Fulltextindex " }, { "msg_contents": "OK, patch attached that adds ORDER BY to the problem regression query.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> > That seems substantially the same plan as I see here. I guess\n> > that the different output order must reflect a platform-specific\n> > difference in qsort()'s treatment of equal keys.\n> > \n> > Probably the best answer is to add \"ORDER BY shoename\" to the test\n> > query to eliminate the platform dependency. Any objections out there?\n> \n> None here.\n> \n> Chris\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: src/test/regress/expected/rules.out\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/test/regress/expected/rules.out,v\nretrieving revision 1.62\ndiff -c -c -r1.62 rules.out\n*** src/test/regress/expected/rules.out\t2 Sep 2002 02:13:02 -0000\t1.62\n--- src/test/regress/expected/rules.out\t2 Sep 2002 04:47:26 -0000\n***************\n*** 1002,1008 ****\n sl8 | 1 | brown | 40 | inch | 101.6\n (8 rows)\n \n! 
SELECT * FROM shoe_ready WHERE total_avail >= 2;\n shoename | sh_avail | sl_name | sl_avail | total_avail \n ------------+----------+------------+----------+-------------\n sh1 | 2 | sl1 | 5 | 2\n--- 1002,1008 ----\n sl8 | 1 | brown | 40 | inch | 101.6\n (8 rows)\n \n! SELECT * FROM shoe_ready WHERE total_avail >= 2 ORDER BY 1;\n shoename | sh_avail | sl_name | sl_avail | total_avail \n ------------+----------+------------+----------+-------------\n sh1 | 2 | sl1 | 5 | 2\nIndex: src/test/regress/sql/rules.sql\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/test/regress/sql/rules.sql,v\nretrieving revision 1.21\ndiff -c -c -r1.21 rules.sql\n*** src/test/regress/sql/rules.sql\t2 Sep 2002 02:13:02 -0000\t1.21\n--- src/test/regress/sql/rules.sql\t2 Sep 2002 04:47:29 -0000\n***************\n*** 585,591 ****\n \n -- SELECTs in doc\n SELECT * FROM shoelace ORDER BY sl_name;\n! SELECT * FROM shoe_ready WHERE total_avail >= 2;\n \n CREATE TABLE shoelace_log (\n sl_name char(10), -- shoelace changed\n--- 585,591 ----\n \n -- SELECTs in doc\n SELECT * FROM shoelace ORDER BY sl_name;\n! SELECT * FROM shoe_ready WHERE total_avail >= 2 ORDER BY 1;\n \n CREATE TABLE shoelace_log (\n sl_name char(10), -- shoelace changed", "msg_date": "Mon, 2 Sep 2002 01:20:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: RULE regression test failure" }, { "msg_contents": "All regression tests now pass perfectly for me. 
Thanks.\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Monday, 2 September 2002 1:21 PM\n> To: Christopher Kings-Lynne\n> Cc: Tom Lane; Hackers\n> Subject: Re: [HACKERS] RULE regression test failure\n>\n>\n>\n> OK, patch attached that adds ORDER BY to the problem regression query.\n>\n> ------------------------------------------------------------------\n> ---------\n>\n> Christopher Kings-Lynne wrote:\n> > > That seems substantially the same plan as I see here. I guess\n> > > that the different output order must reflect a platform-specific\n> > > difference in qsort()'s treatment of equal keys.\n> > >\n> > > Probably the best answer is to add \"ORDER BY shoename\" to the test\n> > > query to eliminate the platform dependency. Any objections out there?\n> >\n> > None here.\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square,\n> Pennsylvania 19073\n>\n\n", "msg_date": "Mon, 2 Sep 2002 14:20:59 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: RULE regression test failure" } ]
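The suffix walk Christopher describes in the fulltextindex thread above (a word like 'john' producing 'hn', then 'ohn', then 'john') can be reproduced outside the backend. A minimal stand-alone sketch in plain C, with no palloc/SPI; `collect_suffixes` is a hypothetical helper written for illustration, not the contrib code:

```c
#include <assert.h>
#include <string.h>

/*
 * collect_suffixes: hypothetical stand-in for the fulltextindex
 * trigger's inner loop.  It starts two characters from the end of
 * the word and steps backwards one character at a time, so "john"
 * yields "hn", "ohn" and finally the whole word, matching the
 * insertion order described in the thread.  Returns the number of
 * suffixes stored in out[] (at most max).
 */
static int
collect_suffixes(const char *word, const char **out, int max)
{
    int len = (int) strlen(word);
    int n = 0;
    int i;

    /* walk back to index 0 so the full word itself is included */
    for (i = len - 2; i >= 0 && n < max; i--)
        out[n++] = &word[i];
    return n;
}
```

Using an index instead of a raw pointer also avoids the `cur_pos > word` comparison that in the original loop decides whether the full word is ever emitted.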
[ { "msg_contents": "I'm using the current CVS (as of ~1930 EDT, 29AUG02) on RedHat's latest\nbeta (null). I find that I need to use the -U option when trying to use\npsql and the new PGPASSWORDFILE variable.\n\nHere's what I have in my ~/.pgpw file (pointed to by PGPASSWORDFILE):\n\nlocalhost:*:az_audit:gar:test\n\nThus my Linux userid is 'gar', so it should work, and indeed the error\nmessage in the server log is:\n\nAug 29 21:02:10 tb02 postgres[18440]: [1] LOG: connection received: host=127.0.0.1 port=1084\nAug 29 21:02:10 tb02 postgres[18440]: [2] FATAL: Password authentication failed for user \"gar\"\n\nWhich is odd, because psql clearly knows my userid is 'gar', and\ntransmits it to the backend correctly.\n\nIf I add the '-U gar', then all is well.\n\n\nStepping through psql with gdb, I see that in the case where I don't set\n-U, the returned password (from PasswordFromFile()) is garbled:\n\n(gdb) print conn->pgpass \n$11 = 0x806d228 \"test�\\021B\"\n\nWhereas when I set '-U', the returned password is fine!\n\n(gdb) print conn->pgpass\n$15 = 0x806cf08 \"test\"\n\n\nIt appears that the problem is in PasswordFromFile() in fe_connect.c, but\nI'm not sure, as gdb insists that 't' and 'ret' are not in the current\nscope when I get to the end of the function. :-(\n\nBut the behaviour is consistent.\n\nThanks,\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n", "msg_date": "Thu, 29 Aug 2002 22:33:39 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "[7.3devl] Using PGPASSWORDFILE with psql requires -U option?" }, { "msg_contents": "Gordon Runkle dijo: \n\n> I'm using the current CVS (as of ~1930 EDT, 29AUG02) on RedHat's latest\n> beta (null). 
I find that I need to use the -U option when trying to use\n> psql and the new PGPASSWORDFILE variable.\n\nOk, in private email with Gordon I discovered that I missed by one.\nPlease apply the following. Thanks for the report.\n\nIndex: src/interfaces/libpq/fe-connect.c\n===================================================================\nRCS file: /projects/cvsroot/pgsql-server/src/interfaces/libpq/fe-connect.c,v\nretrieving revision 1.199\ndiff -c -r1.199 fe-connect.c\n*** src/interfaces/libpq/fe-connect.c\t2002/08/29 23:06:32\t1.199\n--- src/interfaces/libpq/fe-connect.c\t2002/08/30 03:52:40\n***************\n*** 2953,2960 ****\n \t\t\t\t(t = pwdfMatchesString(t, dbname)) == NULL ||\n \t\t\t\t(t = pwdfMatchesString(t, username)) == NULL)\n \t\t\tcontinue;\n! \t\tret=(char *)malloc(sizeof(char)*strlen(t));\n! \t\tstrncpy(ret, t, strlen(t));\n \t\tfclose(fp);\n \t\treturn ret;\n \t}\n--- 2953,2960 ----\n \t\t\t\t(t = pwdfMatchesString(t, dbname)) == NULL ||\n \t\t\t\t(t = pwdfMatchesString(t, username)) == NULL)\n \t\t\tcontinue;\n! \t\tret=(char *)malloc(sizeof(char)*(strlen(t)+1));\n! \t\tstrncpy(ret, t, strlen(t)+1);\n \t\tfclose(fp);\n \t\treturn ret;\n \t}\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"La felicidad no es mañana. La felicidad es ahora\"\n\n", "msg_date": "Thu, 29 Aug 2002 23:53:02 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: [7.3devl] Using PGPASSWORDFILE with psql requires -U" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> ! \t\tret=(char *)malloc(sizeof(char)*strlen(t));\n> ! \t\tstrncpy(ret, t, strlen(t));\n>\n> ! \t\tret=(char *)malloc(sizeof(char)*(strlen(t)+1));\n> ! 
\t\tstrncpy(ret, t, strlen(t)+1);\n\nWhat have you got against strdup() ?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 00:07:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [7.3devl] Using PGPASSWORDFILE with psql requires -U " }, { "msg_contents": "\nTom has applied a patch to fix this.\n\n---------------------------------------------------------------------------\n\nAlvaro Herrera wrote:\n> Gordon Runkle dijo: \n> \n> > I'm using the current CVS (as of ~1930 EDT, 29AUG02) on RedHat's latest\n> > beta (null). I find that I need to use the -U option when trying to use\n> > psql and the new PGPASSWORDFILE variable.\n> \n> Ok, in private email with Gordon I discovered that I missed by one.\n> Please apply the following. Thanks for the report.\n> \n> Index: src/interfaces/libpq/fe-connect.c\n> ===================================================================\n> RCS file: /projects/cvsroot/pgsql-server/src/interfaces/libpq/fe-connect.c,v\n> retrieving revision 1.199\n> diff -c -r1.199 fe-connect.c\n> *** src/interfaces/libpq/fe-connect.c\t2002/08/29 23:06:32\t1.199\n> --- src/interfaces/libpq/fe-connect.c\t2002/08/30 03:52:40\n> ***************\n> *** 2953,2960 ****\n> \t\t\t\t(t = pwdfMatchesString(t, dbname)) == NULL ||\n> \t\t\t\t(t = pwdfMatchesString(t, username)) == NULL)\n> \t\t\tcontinue;\n> ! \t\tret=(char *)malloc(sizeof(char)*strlen(t));\n> ! \t\tstrncpy(ret, t, strlen(t));\n> \t\tfclose(fp);\n> \t\treturn ret;\n> \t}\n> --- 2953,2960 ----\n> \t\t\t\t(t = pwdfMatchesString(t, dbname)) == NULL ||\n> \t\t\t\t(t = pwdfMatchesString(t, username)) == NULL)\n> \t\t\tcontinue;\n> ! \t\tret=(char *)malloc(sizeof(char)*(strlen(t)+1));\n> ! \t\tstrncpy(ret, t, strlen(t)+1);\n> \t\tfclose(fp);\n> \t\treturn ret;\n> \t}\n> \n> -- \n> Alvaro Herrera (<alvherre[a]atentus.com>)\n> \"La felicidad no es ma?ana. 
La felicidad es ahora\"\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 Aug 2002 11:12:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [7.3devl] Using PGPASSWORDFILE with psql requires -U" } ]
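Alvaro's one-byte miss above is the classic off-by-one: `malloc(sizeof(char)*strlen(t))` leaves no room for the terminating NUL, so whatever byte happens to follow the buffer leaks into the password, which is exactly the trailing garbage gdb showed. A minimal stand-alone illustration (`copy_password` is a hypothetical helper, not the libpq code; as Tom points out, `strdup()` already does this in one call):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/*
 * copy_password: allocate strlen(t) + 1 bytes so there is room for
 * the terminating NUL, then copy the NUL along with the characters.
 * The buggy version allocated and copied only strlen(t) bytes,
 * leaving the result unterminated.
 */
static char *
copy_password(const char *t)
{
    char *ret = malloc(strlen(t) + 1);

    if (ret != NULL)
        strcpy(ret, t);     /* strcpy copies the terminating NUL too */
    return ret;
}
```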
[ { "msg_contents": "There is a TODO item:\n\n\t* Allow logging of query durations\n\nCurrently there is no easy way to get a list of query durations in the\nserver log file. My idea is to add query duration to the end of the\nquery string for 'debug_print_query'. My only problem is that to print\nthe duration, I would have to print the query _after_ it is executed,\nrather than before. This may make it difficult to look at the server\nlogs to see what queries are running, plus if the query errors, I have\nto still print it, I assume, though actually Gavin's new GUC option will\nadd printing of error queries, so I may be OK to just print the\nsuccessful ones.\n\nI imagine this timing would be used by administrators to find out which\nqueries where slow.\n\nComments?\n\nAlso, looking at the GUC options:\n\n\t#debug_print_query = false\n\t#debug_print_parse = false\n\t#debug_print_rewritten = false\n\t#debug_print_plan = false\n\t#debug_pretty_print = false\n\nI notice Peter is correct that debug_print_query really is a log_*\noption, rather than a debug option. The others are for server\ndebugging, while debug_print_query is more of a log queries option. \nShould we rename it and support both log_print_query and\ndebug_print_query for one release?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 29 Aug 2002 23:27:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Reporting query duration" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> There is a TODO item:\n> \t* Allow logging of query durations\n\n> Currently there is no easy way to get a list of query durations in the\n> server log file. My idea is to add query duration to the end of the\n> query string for 'debug_print_query'. 
My only problem is that to print\n> the duration, I would have to print the query _after_ it is executed,\n> rather than before.\n\nWell, that's what makes it a bad idea, eh?\n\nI think the log entries should be separate: print the query text when\nyou start, print the duration when you're done. A little bit of\npostprocessing can reassemble the two entries.\n\n> Should we rename it and support both log_print_query and\n> debug_print_query for one release?\n\nIf we're gonna rename config variables, let's just rename them.\nPeople don't try to pipe their old postgresql.conf files into psql,\nso I don't think there's a really good argument for supporting old\nvariable names for a long time.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 29 Aug 2002 23:51:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Reporting query duration " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > There is a TODO item:\n> > \t* Allow logging of query durations\n> \n> > Currently there is no easy way to get a list of query durations in the\n> > server log file. My idea is to add query duration to the end of the\n> > query string for 'debug_print_query'. My only problem is that to print\n> > the duration, I would have to print the query _after_ it is executed,\n> > rather than before.\n> \n> Well, that's what makes it a bad idea, eh?\n> \n> I think the log entries should be separate: print the query text when\n> you start, print the duration when you're done. A little bit of\n> postprocessing can reassemble the two entries.\n\nHow would someone reassemble them? You would have to have the pid, I\nassume. Do we auto-enable pid output when we enable duration? Yuck.\nMaybe print the pid just for the query and timing lines if pid print\nisn't enabled?\n\nI would think printing it out together at the end would work. 
We\nalready have several ways to see running queries.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 Aug 2002 00:44:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reporting query duration" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I think the log entries should be separate: print the query text when\n>> you start, print the duration when you're done. A little bit of\n>> postprocessing can reassemble the two entries.\n\n> How would someone reassemble them? You would have to have the pid, I\n> assume.\n\nSure.\n\n> Do we auto-enable pid output when we enable duration? Yuck.\n\nNo, you expect the user to select the options he needs. Let's not\nover-engineer a perfectly simple feature.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 00:47:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Reporting query duration " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I think the log entries should be separate: print the query text when\n> >> you start, print the duration when you're done. A little bit of\n> >> postprocessing can reassemble the two entries.\n> \n> > How would someone reassemble them? You would have to have the pid, I\n> > assume.\n> \n> Sure.\n> \n> > Do we auto-enable pid output when we enable duration? Yuck.\n> \n> No, you expect the user to select the options he needs. 
Let's not\n> over-engineer a perfectly simple feature.\n\nOK, so I will rename debug_print_query to log_print_query, and Gavin's\nnew print query on error will also be a log_*.\n\nI will add a new GUC variable to print the query duration, and recommend\nin the docs that log_pid be enabled at the same time so you can match\nthe duration with the query.\n\nOriginally, I wanted to make the time just print whenever you enabled\nprint_query, but with them on separate lines, it should be a separate\nGUC variable.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 Aug 2002 12:14:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Reporting query duration" } ]
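Measuring the duration discussed above only requires a timestamp pair taken around query execution; the difference is then logged (alongside the PID, so the duration line can be matched to its query line). A rough sketch of the arithmetic, with illustrative names rather than the actual backend implementation:

```c
#include <assert.h>
#include <sys/time.h>

/*
 * elapsed_msec: milliseconds between two gettimeofday() readings,
 * e.g. one taken just before the query is executed and one just
 * after.  Illustrative only; not the backend's logging code.
 */
static long
elapsed_msec(struct timeval start, struct timeval stop)
{
    return (stop.tv_sec - start.tv_sec) * 1000L
        + (stop.tv_usec - start.tv_usec) / 1000L;
}
```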
[ { "msg_contents": "We have the TODO item:\n\n * Remove wal_files postgresql.conf option because WAL files are now recycled\n\nThe following patch completes this item. It also makes the WAL\ndocumentation a lot easier to understand. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: doc/src/sgml/runtime.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/runtime.sgml,v\nretrieving revision 1.128\ndiff -c -c -r1.128 runtime.sgml\n*** doc/src/sgml/runtime.sgml\t29 Aug 2002 19:53:58 -0000\t1.128\n--- doc/src/sgml/runtime.sgml\t30 Aug 2002 03:46:18 -0000\n***************\n*** 1949,1965 ****\n </varlistentry>\n \n <varlistentry>\n- <term><varname>WAL_FILES</varname> (<type>integer</type>)</term>\n- <listitem>\n- <para>\n- Number of log files that are created in advance at checkpoint\n- time. This option can only be set at server start or in the\n- \t<filename>postgresql.conf</filename> file.\n- </para>\n- </listitem>\n- </varlistentry>\n- \n- <varlistentry>\n <term><varname>WAL_SYNC_METHOD</varname> (<type>string</type>)</term>\n <listitem>\n <para>\n--- 1949,1954 ----\nIndex: doc/src/sgml/wal.sgml\n===================================================================\nRCS file: /cvsroot/pgsql-server/doc/src/sgml/wal.sgml,v\nretrieving revision 1.16\ndiff -c -c -r1.16 wal.sgml\n*** doc/src/sgml/wal.sgml\t5 Jul 2002 19:06:11 -0000\t1.16\n--- doc/src/sgml/wal.sgml\t30 Aug 2002 03:46:18 -0000\n***************\n*** 276,284 ****\n By default a new 16MB segment file is created only if more than 75% of\n the current segment has been used. 
This is inadequate if the system\n generates more than 4MB of log output between checkpoints.\n- One can instruct the server to pre-create up to 64 log segments\n- at checkpoint time by modifying the <varname>WAL_FILES</varname>\n- configuration parameter.\n </para>\n \n <para>\n--- 276,281 ----\n***************\n*** 306,325 ****\n \n <para>\n The number of 16MB segment files will always be at least\n! <varname>WAL_FILES</varname> + 1, and will normally not exceed\n! <varname>WAL_FILES</varname> + MAX(<varname>WAL_FILES</varname>,\n! <varname>CHECKPOINT_SEGMENTS</varname>) + 1. This may be used to\n! estimate space requirements for WAL. Ordinarily, when an old log\n! segment files are no longer needed, they are recycled (renamed to\n! become the next sequential future segments). If, due to a short-term\n! peak of log output rate, there are more than\n! <varname>WAL_FILES</varname> + MAX(<varname>WAL_FILES</varname>,\n! <varname>CHECKPOINT_SEGMENTS</varname>) + 1 segment files, then\n! unneeded segment files will be deleted instead of recycled until the\n! system gets back under this limit. (If this happens on a regular\n! basis, <varname>WAL_FILES</varname> should be increased to avoid it.\n! Deleting log segments that will only have to be created again later\n! is expensive and pointless.)\n </para>\n \n <para>\n--- 303,316 ----\n \n <para>\n The number of 16MB segment files will always be at least\n! 1, and will normally not exceed <varname>CHECKPOINT_SEGMENTS</varname>)\n! + 1. This may be used to estimate space requirements for WAL. \n! Ordinarily, when old log segment files are no longer needed, \n! they are recycled (renamed to become the next sequential future \n! segments). If, due to a short-term peak of log output rate, there \n! are more than <varname>CHECKPOINT_SEGMENTS</varname>) + 1 segment files, \n! the unneeded segment files will be deleted instead of recycled until the\n! 
system gets back under this limit.\n </para>\n \n <para>\nIndex: src/backend/access/transam/xlog.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/access/transam/xlog.c,v\nretrieving revision 1.102\ndiff -c -c -r1.102 xlog.c\n*** src/backend/access/transam/xlog.c\t17 Aug 2002 15:12:06 -0000\t1.102\n--- src/backend/access/transam/xlog.c\t30 Aug 2002 03:46:21 -0000\n***************\n*** 87,93 ****\n /* User-settable parameters */\n int\t\t\tCheckPointSegments = 3;\n int\t\t\tXLOGbuffers = 8;\n- int\t\t\tXLOGfiles = 0;\t\t/* # of files to preallocate during ckpt */\n int\t\t\tXLOG_DEBUG = 0;\n char\t *XLOG_sync_method = NULL;\n const char\tXLOG_sync_method_default[] = DEFAULT_SYNC_METHOD_STR;\n--- 87,92 ----\n***************\n*** 97,103 ****\n /*\n * XLOGfileslop is used in the code as the allowed \"fuzz\" in the number of\n * preallocated XLOG segments --- we try to have at least XLOGfiles advance\n! * segments but no more than XLOGfiles+XLOGfileslop segments. This could\n * be made a separate GUC variable, but at present I think it's sufficient\n * to hardwire it as 2*CheckPointSegments+1. Under normal conditions, a\n * checkpoint will free no more than 2*CheckPointSegments log segments, and\n--- 96,102 ----\n /*\n * XLOGfileslop is used in the code as the allowed \"fuzz\" in the number of\n * preallocated XLOG segments --- we try to have at least XLOGfiles advance\n! * segments but no more than XLOGfileslop segments. This could\n * be made a separate GUC variable, but at present I think it's sufficient\n * to hardwire it as 2*CheckPointSegments+1. Under normal conditions, a\n * checkpoint will free no more than 2*CheckPointSegments log segments, and\n***************\n*** 1422,1428 ****\n \t * ours to pre-create a future log segment.\n \t */\n \tif (!InstallXLogFileSegment(log, seg, tmppath,\n! 
\t\t\t\t\t\t\t\t*use_existent, XLOGfiles + XLOGfileslop,\n \t\t\t\t\t\t\t\tuse_lock))\n \t{\n \t\t/* No need for any more future segments... */\n--- 1421,1427 ----\n \t * ours to pre-create a future log segment.\n \t */\n \tif (!InstallXLogFileSegment(log, seg, tmppath,\n! \t\t\t\t\t\t\t\t*use_existent, XLOGfileslop,\n \t\t\t\t\t\t\t\tuse_lock))\n \t{\n \t\t/* No need for any more future segments... */\n***************\n*** 1568,1587 ****\n \tuint32\t\t_logSeg;\n \tint\t\t\tlf;\n \tbool\t\tuse_existent;\n- \tint\t\t\ti;\n \n \tXLByteToPrevSeg(endptr, _logId, _logSeg);\n! \tif (XLOGfiles > 0)\n! \t{\n! \t\tfor (i = 1; i <= XLOGfiles; i++)\n! \t\t{\n! \t\t\tNextLogSeg(_logId, _logSeg);\n! \t\t\tuse_existent = true;\n! \t\t\tlf = XLogFileInit(_logId, _logSeg, &use_existent, true);\n! \t\t\tclose(lf);\n! \t\t}\n! \t}\n! \telse if ((endptr.xrecoff - 1) % XLogSegSize >=\n \t\t\t (uint32) (0.75 * XLogSegSize))\n \t{\n \t\tNextLogSeg(_logId, _logSeg);\n--- 1567,1575 ----\n \tuint32\t\t_logSeg;\n \tint\t\t\tlf;\n \tbool\t\tuse_existent;\n \n \tXLByteToPrevSeg(endptr, _logId, _logSeg);\n! \tif ((endptr.xrecoff - 1) % XLogSegSize >=\n \t\t\t (uint32) (0.75 * XLogSegSize))\n \t{\n \t\tNextLogSeg(_logId, _logSeg);\n***************\n*** 1635,1645 ****\n \t\t\t\t/*\n \t\t\t\t * Before deleting the file, see if it can be recycled as\n \t\t\t\t * a future log segment. We allow recycling segments up\n! \t\t\t\t * to XLOGfiles + XLOGfileslop segments beyond the current\n \t\t\t\t * XLOG location.\n \t\t\t\t */\n \t\t\t\tif (InstallXLogFileSegment(endlogId, endlogSeg, path,\n! \t\t\t\t\t\t\t\t\t\t true, XLOGfiles + XLOGfileslop,\n \t\t\t\t\t\t\t\t\t\t true))\n \t\t\t\t{\n \t\t\t\t\telog(LOG, \"recycled transaction log file %s\",\n--- 1623,1633 ----\n \t\t\t\t/*\n \t\t\t\t * Before deleting the file, see if it can be recycled as\n \t\t\t\t * a future log segment. We allow recycling segments up\n! 
\t\t\t\t * to XLOGfileslop segments beyond the current\n \t\t\t\t * XLOG location.\n \t\t\t\t */\n \t\t\t\tif (InstallXLogFileSegment(endlogId, endlogSeg, path,\n! \t\t\t\t\t\t\t\t\t\t true, XLOGfileslop,\n \t\t\t\t\t\t\t\t\t\t true))\n \t\t\t\t{\n \t\t\t\t\telog(LOG, \"recycled transaction log file %s\",\nIndex: src/backend/utils/misc/guc.c\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/backend/utils/misc/guc.c,v\nretrieving revision 1.87\ndiff -c -c -r1.87 guc.c\n*** src/backend/utils/misc/guc.c\t29 Aug 2002 21:02:12 -0000\t1.87\n--- src/backend/utils/misc/guc.c\t30 Aug 2002 03:46:23 -0000\n***************\n*** 641,651 ****\n \t},\n \n \t{\n- \t\t{ \"wal_files\", PGC_SIGHUP }, &XLOGfiles,\n- \t\t0, 0, 64, NULL, NULL\n- \t},\n- \n- \t{\n \t\t{ \"wal_debug\", PGC_SUSET }, &XLOG_DEBUG,\n \t\t0, 0, 16, NULL, NULL\n \t},\n--- 641,646 ----\nIndex: src/include/access/xlog.h\n===================================================================\nRCS file: /cvsroot/pgsql-server/src/include/access/xlog.h,v\nretrieving revision 1.35\ndiff -c -c -r1.35 xlog.h\n*** src/include/access/xlog.h\t17 Aug 2002 15:12:07 -0000\t1.35\n--- src/include/access/xlog.h\t30 Aug 2002 03:46:23 -0000\n***************\n*** 185,191 ****\n /* these variables are GUC parameters related to XLOG */\n extern int\tCheckPointSegments;\n extern int\tXLOGbuffers;\n- extern int\tXLOGfiles;\n extern int\tXLOG_DEBUG;\n extern char *XLOG_sync_method;\n extern const char XLOG_sync_method_default[];\n--- 185,190 ----", "msg_date": "Thu, 29 Aug 2002 23:47:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Remove wal_files" } ]
[ { "msg_contents": "> -----Original Message-----\n> From: Tom Lane [mailto:tgl@sss.pgh.pa.us] \n> Sent: Thursday, August 29, 2002 9:07 PM\n> To: Alvaro Herrera\n> Cc: Gordon Runkle; pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] [7.3devl] Using PGPASSWORDFILE with \n> psql requires -U \n> \n> \n> Alvaro Herrera <alvherre@atentus.com> writes:\n> > ! \t\tret=(char *)malloc(sizeof(char)*strlen(t));\n> > ! \t\tstrncpy(ret, t, strlen(t));\n> >\n> > ! \t\tret=(char *)malloc(sizeof(char)*(strlen(t)+1));\n> > ! \t\tstrncpy(ret, t, strlen(t)+1);\n> \n> What have you got against strdup() ?\n\nThe strdup() function is non-standard, and need not exist in a C\nimplementation.\n", "msg_date": "Thu, 29 Aug 2002 21:12:28 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: [7.3devl] Using PGPASSWORDFILE with psql requires -U " }, { "msg_contents": "Dann Corbit dijo: \n\n> > Alvaro Herrera <alvherre@atentus.com> writes:\n> > > ! \t\tret=(char *)malloc(sizeof(char)*strlen(t));\n> > > ! \t\tstrncpy(ret, t, strlen(t));\n> > >\n> > > ! \t\tret=(char *)malloc(sizeof(char)*(strlen(t)+1));\n> > > ! \t\tstrncpy(ret, t, strlen(t)+1);\n> > \n> > What have you got against strdup() ?\n> \n> The strdup() function is non-standard, and need not exist in a C\n> implementation.\n\nActually I have nothing against strdup(), and I think I have seen it in\nPg source. 
I just didn't think about it ;-)\n\nFeel free to change it if you want...\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nOne man's impedance mismatch is another man's layer of abstraction.\n(Lincoln Yeoh)\n\n", "msg_date": "Fri, 30 Aug 2002 00:17:09 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: [7.3devl] Using PGPASSWORDFILE with psql requires -U" }, { "msg_contents": "\"Dann Corbit\" <DCorbit@connx.com> writes:\n>> What have you got against strdup() ?\n\n> The strdup() function is non-standard, and need not exist in a C\n> implementation.\n\nBut it *does* exist in all Postgres implementations. This is what\nwe carry around a port/ directory for ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 00:25:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [7.3devl] Using PGPASSWORDFILE with psql requires -U " } ]
[ { "msg_contents": "Is it ok if Florian and I submit the improvements to the fulltextindex\ncontrib during beta?\n\nChris\n\n", "msg_date": "Fri, 30 Aug 2002 12:46:05 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "contrib features during beta period" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Is it ok if Florian and I submit the improvements to the fulltextindex\n> contrib during beta?\n\nWhat improvements are we talking about here? FTI is sufficiently widely\ndepended on that I think it ought to follow the same quality standard\nas the main backend ... viz, \"no new features during beta\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 00:49:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib features during beta period " }, { "msg_contents": "Well basically we have it all done, except neither of us has time to test at\nthe moment. Florian is free after sept 1st. Basically it adds full word\nmatch support, stop words but keeps full backward compatibility. 
We are\ngetting a segfault in the new function (fti2) but the old is still working\nfine.\n\nI've also updated the docs and added a new WARNING file that strongly\ndiscourages the use of contrib/fulltextindex in favour of contrib/tsearch.\n\nAlso, there's regression tests, etc.\n\nI personally have switched to tsearch now exclusively but I promised Florian\nI'd review his stuff so I want to get it in.\n\nMaybe I'll try to figure out the bug today or tomorrow and commit it if I\ncan...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Friday, 30 August 2002 12:49 PM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] contrib features during beta period\n>\n>\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Is it ok if Florian and I submit the improvements to the fulltextindex\n> > contrib during beta?\n>\n> What improvements are we talking about here? FTI is sufficiently widely\n> depended on that I think it ought to follow the same quality standard\n> as the main backend ... viz, \"no new features during beta\".\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Fri, 30 Aug 2002 12:54:38 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: contrib features during beta period " }, { "msg_contents": "On Friday 30 August 2002 12:49 am, Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > Is it ok if Florian and I submit the improvements to the fulltextindex\n> > contrib during beta?\n\n> What improvements are we talking about here? 
FTI is sufficiently widely\n> depended on that I think it ought to follow the same quality standard\n> as the main backend ... viz, \"no new features during beta\".\n\nDoes this mean we should be looking for a way to integrate it into the main \nbackend at this point? Isn't that what contrib is for?\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 30 Aug 2002 09:14:35 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: contrib features during beta period" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> On Friday 30 August 2002 12:49 am, Tom Lane wrote:\n>> What improvements are we talking about here? FTI is sufficiently widely\n>> depended on that I think it ought to follow the same quality standard\n>> as the main backend ... viz, \"no new features during beta\".\n\n> Does this mean we should be looking for a way to integrate it into the main \n> backend at this point? Isn't that what contrib is for?\n\nWell, given that Chris also thinks that people should migrate to\ntsearch, I'd guess that fulltextindex would not be the one to integrate.\nBut yeah, in the long run I'd like to see one of these packages become\nmainstream, just because full-text search is such a widely used feature.\n\nThe more global point here is that during beta we need everyone to be\ndoing testing, bug fixing, and documentation, not new features ---\neven if they're in contrib. 
Goodness knows there's not very much\norganization to the Postgres project, but let's try to respect what\nlittle process we do have...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 09:29:28 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib features during beta period " }, { "msg_contents": "On Friday 30 August 2002 09:29 am, Tom Lane wrote:\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n> > Does this mean we should be looking for a way to integrate [FTI] into the\n> > main backend at this point? Isn't that what contrib is for?\n\n> Well, given that Chris also thinks that people should migrate to\n> tsearch, I'd guess that fulltextindex would not be the one to integrate.\n> But yeah, in the long run I'd like to see one of these packages become\n> mainstream, just because full-text search is such a widely used feature.\n\nAgreed. We don't really have guidelines for this process; any idea what was \nthe last module moved over like that? There may be more modules to consider \n-- earthdistance, for instance, which has been with us for a long time.\n\n> The more global point here is that during beta we need everyone to be\n> doing testing, bug fixing, and documentation, not new features ---\n> even if they're in contrib. Goodness knows there's not very much\n> organization to the Postgres project, but let's try to respect what\n> little process we do have...\n\nABSOLUTELY CORRECT. :-)\n-- \nLamar Owen\nWGCR Internet Radio\n1 Peter 4:11\n", "msg_date": "Fri, 30 Aug 2002 10:35:39 -0400", "msg_from": "Lamar Owen <lamar.owen@wgcr.org>", "msg_from_op": false, "msg_subject": "Re: contrib features during beta period" }, { "msg_contents": "Lamar Owen <lamar.owen@wgcr.org> writes:\n> On Friday 30 August 2002 09:29 am, Tom Lane wrote:\n>> But yeah, in the long run I'd like to see one of these packages become\n>> mainstream, just because full-text search is such a widely used feature.\n\n> Agreed. 
We don't really have guidelines for this process; any idea what was \n> the last module moved over like that?\n\nIt hasn't happened often, but I recall that the int8 code started life\nas a contrib module. Let's see, looking at CVS I see\n\tbit\n\tdateformat\n\tdatetime\n\tint8\n\tip_and_mac\n\tmetaphone\n\tsequence\n\tsoundex\n\tstatmath\n\tunixdate\nas contrib subdirectories that aren't there anymore in current sources,\nand that I think got merged, not dropped.\n\n> There may be more modules to consider \n> -- earthdistance, for instance, which has been with us for a long time.\n\nActually, that seems to have just disappeared completely. What the heck?\nMarc, you're not physically removing stuff from CVS when you move it to\ngborg, are you?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 11:08:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib features during beta period " }, { "msg_contents": "On Fri, 30 Aug 2002, Tom Lane wrote:\n\n> Lamar Owen <lamar.owen@wgcr.org> writes:\n\n> > There may be more modules to consider \n> > -- earthdistance, for instance, which has been with us for a long time.\n> \n> Actually, that seems to have just disappeared completely. What the heck?\n> Marc, you're not physically removing stuff from CVS when you move it to\n> gborg, are you?\n\nI think Marc put earthdistance in a separate CVS module, i.e. separate\nfrom pgsql-server but included in the pgsql meta-module.\n\n-- \nAlvaro Herrera (<alvherre[@]dcc.uchile.cl>)\n\"Estoy de acuerdo contigo en que la verdad absoluta no existe...\nEl problema es que tu estás mintiendo y la mentira sí existe\" (G. 
Lama)\n\n", "msg_date": "Fri, 30 Aug 2002 11:48:22 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: contrib features during beta period " }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> On Fri, 30 Aug 2002, Tom Lane wrote:\n> -- earthdistance, for instance, which has been with us for a long time.\n>> \n>> Actually, that seems to have just disappeared completely. What the heck?\n>> Marc, you're not physically removing stuff from CVS when you move it to\n>> gborg, are you?\n\n> I think Marc put earthdistance in a separate CVS module, i.e. separate\n> from pgsql-server but included in the pgsql meta-module.\n\nHmm, you're right. It appears that that setup does not actually work\nvery well. I tried rm -rf contrib/earthdistance, after which I find\nthat \"cvs update\" does not bring back the earthdistance subdirectory;\nit seems I have to do a full checkout to get back in sync. I suppose\nthis is a CVS bug. And it's one that worries me a lot more than the\nannoyance that cvsweb won't show a merged view. I wonder whether cvs\nupdate would even notice changes inside the earthdistance module?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 12:06:25 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: contrib features during beta period " } ]
[ { "msg_contents": "I proved that you can reclaim on disk space after a DROP COLUMN with toast\ntables:\n\ntest=# create table toast_test(a text, b text);\nCREATE TABLE\ntest=# insert into toast_test values (repeat('XXXXXXXXXX', 1000000),\nrepeat('XXXXXXXXXX', 1000000));\nINSERT 246368 1\ntest=# insert into toast_test values (repeat('XXXXXXXXXX', 1000000),\nrepeat('XXXXXXXXXX', 1000000));\nINSERT 246371 1\n\nGives:\n\n-rw------- 1 chriskl users 8192 Aug 30 15:46 246363\n-rw------- 1 chriskl users 475136 Aug 30 15:47 246365\n-rw------- 1 chriskl users 16384 Aug 30 15:46 246367\n\ntest=# alter table toast_test drop a;\nALTER TABLE\ntest=# update toast_test set b = b;\nUPDATE 2\n\nGives:\n\n-rw------- 1 chriskl users 8192 Aug 30 15:46 246363\n-rw------- 1 chriskl users 475136 Aug 30 15:48 246365\n-rw------- 1 chriskl users 16384 Aug 30 15:46 246367\n\ntest=# vacuum full toast_test;\nVACUUM\ntest=# checkpoint;\nCHECKPOINT\n\nGives:\n\n-rw------- 1 chriskl users 8192 Aug 30 15:48 246363\n-rw------- 1 chriskl users 237568 Aug 30 15:48 246365\n-rw------- 1 chriskl users 16384 Aug 30 15:48 246367\n\nSeems to halve the space used which is what you'd expect.\n\nChris\n\n", "msg_date": "Fri, 30 Aug 2002 15:52:15 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "DROP COLUMN & TOASTED DATA" } ]
[ { "msg_contents": "Please correct me if I've got this wrong, but it appears from the SRF\nAPI, that a SRF cannot readily refer to the TupleDesc to which it is\nexpected to conform (i.e. the TupleDesc derived from the FROM clause of\nan original SELECT statement) because that is held in the executor state\nand not copied or linked into the function context.\n\nThe reason I'm interested (and this might be a crazy idea) is that a\nfunction might choose to adapt its output based on what it is asked for.\ni.e. the attribute names and types which it is asked to provide might\nhave some significance to the function. \n\nThe application in this case is the querying of an XML document (this\nrelates to the contrib/xml XPath functions) where you might want a\nfunction which gives you a \"virtual view\" of the document. In order to\ndo so, you specify a query such as:\n\nSELECT * FROM xmlquery_func('some text here') AS xq(document_author\ntext, document_publisher text, document_date text);\n\n(this would likely be part of a subquery or join in practice.)\n\nThe function would see the requested attribute \"document_author\" and\ntranslate that to '//document/author/text()' for use by the XPath\nengine. This avoids having to have a function with varying arguments\n-instead you have a 'virtual table' that generates only the attributes\nrequested.\n\nDoes this sound completely crazy?\n\nRegards\n\nJohn\n\n-- \nJohn Gray\t\nAzuli IT\t\nwww.azuli.co.uk\t\n\n\n", "msg_date": "30 Aug 2002 12:45:54 +0100", "msg_from": "John Gray <jgray@azuli.co.uk>", "msg_from_op": true, "msg_subject": "Accessing original TupleDesc from SRF" }, { "msg_contents": "John Gray <jgray@azuli.co.uk> writes:\n> Please correct me if I've got this wrong, but it appears from the SRF\n> API, that a SRF cannot readily refer to the TupleDesc to which it is\n> expected to conform (i.e. 
the TupleDesc derived from the FROM clause of\n> an original SELECT statement) because that is held in the executor state\n> and not copied or linked into the function context.\n\n> The reason I'm interested (and this might be a crazy idea) is that a\n> function might choose to adapt its output based on what it is asked for.\n\nSeems like a cool idea.\n\nWe could fairly readily add a field to ReturnSetInfo, but that would\nonly be available to functions defined as returning a set. That'd\nprobably cover most useful cases but it still seems a bit unclean.\n\nI suppose that ExecMakeTableFunctionResult could be changed to *always*\npass ReturnSetInfo, even if it's not expecting the function to return\na set. That seems even less clean; but it would work, at least in the\ncurrent implementation.\n\nAnyone have a better idea?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 10:21:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF " }, { "msg_contents": "John Gray wrote:\n> Please correct me if I've got this wrong, but it appears from the SRF\n> API, that a SRF cannot readily refer to the TupleDesc to which it is\n> expected to conform (i.e. the TupleDesc derived from the FROM clause of\n> an original SELECT statement) because that is held in the executor state\n> and not copied or linked into the function context.\n> \n\n[snip]\n\n> \n> Does this sound completely crazy?\n> \n\nNot crazy at all. I asked the same question a few days ago:\nhttp://archives.postgresql.org/pgsql-hackers/2002-08/msg01914.php\n\nTom suggested a workaround for my purpose, but I do agree that this is \nneeded in the long run. I looked at it briefly, but there was no easy \nanswer I could spot. 
I'll take another look today.\n\nJoe\n\n\n", "msg_date": "Fri, 30 Aug 2002 07:46:10 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF" }, { "msg_contents": "Tom Lane wrote:\n> John Gray <jgray@azuli.co.uk> writes:\n> \n>>Please correct me if I've got this wrong, but it appears from the SRF\n>>API, that a SRF cannot readily refer to the TupleDesc to which it is\n>>expected to conform (i.e. the TupleDesc derived from the FROM clause of\n>>an original SELECT statement) because that is held in the executor state\n>>and not copied or linked into the function context.\n> \n> \n>>The reason I'm interested (and this might be a crazy idea) is that a\n>>function might choose to adapt its output based on what it is asked for.\n> \n> \n> Seems like a cool idea.\n> \n> We could fairly readily add a field to ReturnSetInfo, but that would\n> only be available to functions defined as returning a set. That'd\n> probably cover most useful cases but it still seems a bit unclean.\n\nI thought about that, but had the same concern.\n\n\n> I suppose that ExecMakeTableFunctionResult could be changed to *always*\n> pass ReturnSetInfo, even if it's not expecting the function to return\n> a set. That seems even less clean; but it would work, at least in the\n> current implementation.\n\nHmmm. I hadn't thought about this possibility. Why is it unclean? Are \nthere places where the lack of ReturnSetInfo is used to indicate that \nthe function does not return a set? Doesn't seem like there should be.\n\n> Anyone have a better idea?\n\nI was looking to see if it could be added to FunctionCallInfoData, but \nyou might find that more unclean still ;-).\n\nActually, I left off trying to figure out how to pass the columndef to \nExecMakeFunctionResult in the first place. It wasn't obvious to me, and \nsince you offered an easy alternative solution I stopped trying. Any \nsuggestions? 
Preference of extending FunctionCallInfoData or \nReturnSetInfo? I'd really like to do this for 7.3.\n\nJoe\n\n", "msg_date": "Fri, 30 Aug 2002 08:04:52 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> John Gray wrote:\n>> Does this sound completely crazy?\n\n> Not crazy at all. I asked the same question a few days ago:\n> http://archives.postgresql.org/pgsql-hackers/2002-08/msg01914.php\n\nI've been thinking more about this, and wondering if we should not\nonly make the tupdesc available but rely more heavily on it than we\ndo. Most of the C-coded functions do fairly substantial pushups to\nconstruct tupdescs that are just going to duplicate what\nnodeFunctionscan already has in its back pocket. They could save some\ntime by just picking that up and using it.\n\nOn the other hand, your experience yesterday with debugging a mismatched\nfunction declaration suggests that it's still a good idea to make the\nfunctions build the tupdesc they think they are returning.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 11:16:38 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF " }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n>> I suppose that ExecMakeTableFunctionResult could be changed to *always*\n>> pass ReturnSetInfo, even if it's not expecting the function to return\n>> a set. That seems even less clean; but it would work, at least in the\n>> current implementation.\n\n> Hmmm. I hadn't thought about this possibility. Why is it unclean? Are \n> there places where the lack of ReturnSetInfo is used to indicate that \n> the function does not return a set? Doesn't seem like there should be.\n\nProbably not. 
If the function itself doesn't know whether it should\nreturn a set, it can always look at the FmgrInfo struct to find out.\n\n> Actually, I left off trying to figure out how to pass the columndef to \n> ExecMakeFunctionResult in the first place.\n\nThat was hard yesterday, but it's easy today because nodeFunctionscan\nisn't using ExecEvalExpr anymore --- we'd only have to add one more\nparameter to ExecMakeTableFunctionResult and we're there.\n\n> Preference of extending FunctionCallInfoData or ReturnSetInfo?\n\nDefinitely ReturnSetInfo. If we put it in FunctionCallInfoData then\nthat's an extra word to zero for *every* fmgr function call, not only\ntable functions.\n\nOne thing to notice is that a table function that's depending on this\ninfo being available will have to punt if it's invoked via\nExecMakeFunctionResult (ie, it's being called in a SELECT targetlist).\nThat doesn't bother me too much, but maybe others will see it\ndifferently.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 11:26:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF " }, { "msg_contents": "Tom Lane wrote:\n> I've been thinking more about this, and wondering if we should not\n> only make the tupdesc available but rely more heavily on it than we\n> do. Most of the C-coded functions do fairly substantial pushups to\n> construct tupdescs that are just going to duplicate what\n> nodeFunctionscan already has in its back pocket. 
They could save some\n> time by just picking that up and using it.\n> \n> On the other hand, your experience yesterday with debugging a mismatched\n> function declaration suggests that it's still a good idea to make the\n> functions build the tupdesc they think they are returning.\n\nIn a function which *can* know what the tupledec should look like based \non independent information (contrib/tablefunc.c:crosstab), or based on a \npriori knowledge (guc.c:show_all_settings), then the passed in tupdesc \ncould be used by the function to validate that it has been acceptably \ndeclared (for named types) or called (for anonymous types).\n\nBut it is also interesting to let the function try to adapt to the way \nin which it has been called, and punt if it can't deal with it. And in \nsome cases, like John's example, there *is* no other way.\n\nJoe\n\n\n", "msg_date": "Fri, 30 Aug 2002 08:33:40 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF" }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>Actually, I left off trying to figure out how to pass the columndef to \n>>ExecMakeFunctionResult in the first place.\n> \n> That was hard yesterday, but it's easy today because nodeFunctionscan\n> isn't using ExecEvalExpr anymore --- we'd only have to add one more\n> parameter to ExecMakeTableFunctionResult and we're there.\n\nI didn't even realize you had changed that! Things move quickly around \nhere ;-). I'll take a look this morning.\n\n\n>>Preference of extending FunctionCallInfoData or ReturnSetInfo?\n> \n> Definitely ReturnSetInfo. 
If we put it in FunctionCallInfoData then\n> that's an extra word to zero for *every* fmgr function call, not only\n> table functions.\n\nOK.\n\n\n> One thing to notice is that a table function that's depending on this\n> info being available will have to punt if it's invoked via\n> ExecMakeFunctionResult (ie, it's being called in a SELECT targetlist).\n> That doesn't bother me too much, but maybe others will see it\n> differently.\n\nIt's an important safety tip, but it doesn't bother me either. I think \nyou have suggested before that SRFs in SELECT targetlists should be \ndeprecated/removed. I'd take that one step further and say that SELECT \ntargetlists should only allow a scalar result, but obviously there are \nsome backwards compatibility issues there.\n\nJoe\n\n\n", "msg_date": "Fri, 30 Aug 2002 08:45:07 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n>> On the other hand, your experience yesterday with debugging a mismatched\n>> function declaration suggests that it's still a good idea to make the\n>> functions build the tupdesc they think they are returning.\n\n> In a function which *can* know what the tupledec should look like based \n> on independent information (contrib/tablefunc.c:crosstab), or based on a \n> priori knowledge (guc.c:show_all_settings), then the passed in tupdesc \n> could be used by the function to validate that it has been acceptably \n> declared (for named types) or called (for anonymous types).\n\nYeah, I had also considered the idea of pushing the responsibility of\nverifying the tupdesc matching out to the function (ie, nodeFunctionscan\nwouldn't call tupdesc_mismatch anymore, but the function could).\n\nI think this is a bad idea on balance though; it would save few cycles\nand probably create lots more debugging headaches like the one you had.\n\n\t\t\tregards, tom lane\n", 
"msg_date": "Fri, 30 Aug 2002 11:49:57 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>Preference of extending FunctionCallInfoData or ReturnSetInfo?\n> \n> Definitely ReturnSetInfo. If we put it in FunctionCallInfoData then\n> that's an extra word to zero for *every* fmgr function call, not only\n> table functions.\n\nAttached adds:\n + TupleDesc queryDesc; /* descriptor for planned query */\nto ReturnSetInfo, and populates ReturnSetInfo for every function call to \n ExecMakeTableFunctionResult, not just when fn_retset.\n\n> One thing to notice is that a table function that's depending on this\n> info being available will have to punt if it's invoked via\n> ExecMakeFunctionResult (ie, it's being called in a SELECT targetlist).\n> That doesn't bother me too much, but maybe others will see it\n> differently.\n\nI haven't done it yet, but I suppose this should be documented in \nxfunc.sgml. With this patch the behavior of a function called through \nExecMakeFunctionResult will be:\n\nif (fn_retset)\n ReturnSetInfo is populated but queryDesc is set to NULL\n\nif (!fn_retset)\n ReturnSetInfo is NULL\n\nIf there are no objections, please apply.\n\nThanks,\n\nJoe", "msg_date": "Fri, 30 Aug 2002 13:48:33 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Attached adds:\n> + TupleDesc queryDesc; /* descriptor for planned query */\n> to ReturnSetInfo, and populates ReturnSetInfo for every function call to \n> ExecMakeTableFunctionResult, not just when fn_retset.\n\nI thought \"expectedDesc\" was a more sensible choice of name, so I made\nit that. 
Otherwise, patch applied.\n\n> I haven't done it yet, but I suppose this should be documented in \n> xfunc.sgml.\n\nActually, most of what's in src/backend/utils/fmgr/README should be\ntransposed into xfunc.sgml someday.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 20:01:51 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>Attached adds:\n>> + TupleDesc queryDesc; /* descriptor for planned query */\n>>to ReturnSetInfo, and populates ReturnSetInfo for every function call to \n>> ExecMakeTableFunctionResult, not just when fn_retset.\n> \n> I thought \"expectedDesc\" was a more sensible choice of name, so I made\n> it that. Otherwise, patch applied.\n\n\nI was trying to actually use this new feature today, and ran into a \nlittle bug in nodeFunctionscan.c that prevented it from actually working.\n\nFor anonymous composite types, ExecInitFunctionScan builds the tuple \ndescription using:\n tupdesc = BuildDescForRelation(coldeflist);\n\nBut BuildDescForRelation leaves initializes the tupdesc like this:\n desc = CreateTemplateTupleDesc(natts, UNDEFOID);\n\nThe UNDEFOID later causes an assertion failure in heap_formtuple when \nyou try to use the tupdesc to build a tuple.\n\nAttached is a very small patch to fix.\n\n>>I haven't done it yet, but I suppose this should be documented in \n>>xfunc.sgml.\n> Actually, most of what's in src/backend/utils/fmgr/README should be\n> transposed into xfunc.sgml someday.\n\nAfter beta starts I'll work on migrating this to the sgml docs.\n\nJoe", "msg_date": "Sat, 31 Aug 2002 11:38:43 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> But BuildDescForRelation leaves initializes the tupdesc like this:\n> desc = 
CreateTemplateTupleDesc(natts, UNDEFOID);\n\n> The UNDEFOID later causes an assertion failure in heap_formtuple when \n> you try to use the tupdesc to build a tuple.\n\nSo far, I haven't seen any reason to think that that three-way has-OID\nstuff accomplishes anything but causing trouble ... but I've applied\nthis patch for the moment. I hope to get around to reviewing the\nHeapTupleHeader hacks later in the weekend.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Aug 2002 15:14:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Accessing original TupleDesc from SRF " } ]
[ { "msg_contents": "\n> Please correct me if I've got this wrong, but it appears from the SRF\n> API, that a SRF cannot readily refer to the TupleDesc to which it is\n> expected to conform (i.e. the TupleDesc derived from the FROM \n> clause of\n> an original SELECT statement) because that is held in the \n> executor state\n> and not copied or linked into the function context.\n> \n> The reason I'm interested (and this might be a crazy idea) is that a\n> function might choose to adapt its output based on what it is \n> asked for.\n> i.e. the attribute names and types which it is asked to provide might\n> have some significance to the function. \n> \n> The application in this case is the querying of an XML document (this\n> relates to the contrib/xml XPath functions) where you might want a\n> function which gives you a \"virtual view\" of the document. In order to\n> do so, you specify a query such as:\n> \n> SELECT * FROM xmlquery_func('some text here') AS xq(document_author\n> text, document_publisher text, document_date text);\n> \n> (this would likely be part of a subquery or join in practice.)\n> \n> The function would see the requested attribute \"document_author\" and\n> translate that to '//document/author/text()' for use by the XPath\n> engine. This avoids having to have a function with varying arguments\n> -instead you have a 'virtual table' that generates only the attributes\n> requested.\n> \n> Does this sound completely crazy?\n\nNope, sounds really useful.\n\nAndreas\n", "msg_date": "Fri, 30 Aug 2002 13:55:37 +0200", "msg_from": "\"Zeugswetter Andreas SB SD\" <ZeugswetterA@spardat.at>", "msg_from_op": true, "msg_subject": "Re: Accessing original TupleDesc from SRF" } ]
[ { "msg_contents": "Hello everybody,\n\nThere is an open question we need broad opinion on.\n\nCurrently pgaccess stores its own data in the database it works with.\nSome people do not like that. To store it elsewhere invokes a number of\nissues such as:\n\n- where is this somewhere\n- converting form all versions to the new\n- etc.\n\nWhat do people think about this. Is it so bad that the own data is\nstored in the database pgaccess works with?\n\nIavor\n\n--\nwww.pgaccess.org\n\n", "msg_date": "Fri, 30 Aug 2002 17:44:12 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "pgaccess - where to store the own data" }, { "msg_contents": "Iavor Raytchev wrote:\n> Hello everybody,\n> \n> There is an open question we need broad opinion on.\n> \n> Currently pgaccess stores its own data in the database it works with.\n> Some people do not like that. To store it elsewhere invokes a number of\n> issues such as:\n> \n> - where is this somewhere\n> - converting form all versions to the new\n> - etc.\n> \n> What do people think about this. Is it so bad that the own data is\n> stored in the database pgaccess works with?\n> \n\nI don't particularly like it. Oracle deals with this by having a \ndatabase unto itself as a management repository (Oracle Enterprise \nManager, OEM, I believe). You register the database you want to manage \nwith the repository, and the metadata is kept there instead of in each \nmanaged database.\n\nJoe\n\n", "msg_date": "Fri, 30 Aug 2002 08:51:44 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: pgaccess - where to store the own data" }, { "msg_contents": "\n>> Iavor Raytchev wrote:\n>> > Hello everybody,\n>> >\n>> > There is an open question we need broad opinion on.\n>> >\n>> > Currently pgaccess stores its own data in the database it works\n>> > with. Some people do not like that. 
To store it elsewhere invokes\n>> > a number of issues such as:\n>> >\n>> > - where is this somewhere\n>> > - converting form all versions to the new\n>> > - etc.\n>> >\n>> > What do people think about this. Is it so bad that the own data\n>> > is stored in the database pgaccess works with?\n>>\n>> I don't particularly like it. Oracle deals with this by having a\n>> database unto itself as a management repository (Oracle Enterprise\n>> Manager, OEM, I believe). You register the database you want to\n>> manage with the repository, and the metadata is kept there instead\n>> of in each managed database.\n\n>> Joe\n\nI would agree that pgaccess's metadata should not necessarily be stored \nin with <all> of the rest of the data being used by a pgaccess \napplication. However, having a central repository as described above \nmay make it difficult to distribute an application without providing \nsome capabilities to distribute/manage a portion of the central \nrepository - which could be ugly for the developer and an end user.\n\n From my experiences using m$access to augment existing applications, I \nwould think that at least two sets of data would need to be handled by \npgaccess - some in an existing database, and some in the pgaccess \napplication. Hence, the structure of m$access with it's 'linked' and \nlocal tables in the application database itself. For self-contained \napplications, no linked tables would be used, and the existing format \nis fine for distributing an application. But, a major strength of \nm$access is it's ability to use data from multiple sources (databases), \nwhile the application database uses them transparently. \n\nIn any case, where there are multiple users (say > 3 people) the data \nis usually separated from the application metadata anyway for \nmaintenance purposes. 
That way it is not necessary to do live changes \nor to pass large data laden databases about for an application \nmodification.\n\nHence, I would vote to retain the existing method, and put the effort \ninto the ability to open multiple 'other' databases on a table by table \nbasis.\n\nRegards,\n\nterry\n", "msg_date": "Fri, 30 Aug 2002 15:12:32 -0400", "msg_from": "terry <tg5027@citlink.net>", "msg_from_op": false, "msg_subject": "Re: pgaccess - where to store the own data" }, { "msg_contents": "> > What do people think about this. Is it so bad that the own data is\n> > stored in the database pgaccess works with?\n> >\n>\n> I don't particularly like it. Oracle deals with this by having a\n> database unto itself as a management repository (Oracle Enterprise\n> Manager, OEM, I believe). You register the database you want to manage\n> with the repository, and the metadata is kept there instead of in each\n> managed database.\n\nThese days you could create a schema in the database you're managing as\nwell...\n\nChris\n\n\n", "msg_date": "Sat, 31 Aug 2002 20:38:03 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgaccess - where to store the own data" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Iavor Raytchev [mailto:iavor.raytchev@verysmall.org] \n> Sent: 30 August 2002 16:44\n> To: pgsql-hackers; pgsql-interfaces\n> Subject: [HACKERS] pgaccess - where to store the own data\n> \n> \n> Hello everybody,\n> \n> There is an open question we need broad opinion on.\n> \n> Currently pgaccess stores its own data in the database it \n> works with. Some people do not like that. To store it \n> elsewhere invokes a number of issues such as:\n> \n> - where is this somewhere\n> - converting form all versions to the new\n> - etc.\n> \n> What do people think about this. Is it so bad that the own \n> data is stored in the database pgaccess works with?\n\nI had the same trouble with pgAdmin, especially with pgAdmin I which had\na whole host of objects server-side. I also found that people didn't\nlike it, but where else do you store the data? \n\npgAdmin II no longer uses such tables, but to get over the problem as\nbest I could, I added a cleanup option to pgAdmin I that removed all\nserver side objects in one go.\n\nRegards, Dave.\n", "msg_date": "Fri, 30 Aug 2002 16:50:52 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pgaccess - where to store the own data" }, { "msg_contents": "> > What do people think about this. Is it so bad that the own \n> > data is stored in the database pgaccess works with?\n> \n> pgAdmin II no longer uses such tables, but to get over the problem as\n> best I could, I added a cleanup option to pgAdmin I that removed all\n> server side objects in one go.\n\nWhat does pgAdmin II do instead? Or, how did you solve the problem?\n\nAlso, just to put my two cents in, I and others I have worked with \ndon't like admin tools mucking up the databases we're working on. 
So, I\nthink it's a good idea to find some solution.\n\nOne thought is to use a completely separate database, but also allow it\nto be stored in the current database if the user wants it too. This\nalso solves the case of a user that can't create a new database for his\nadmin tool (permissions etc...). Also, it might be cleaner now that we\nhave schemea support to create one pgadmin, or pgaccess schemea in the\ndatabase, that handled all the others.\n\n", "msg_date": "30 Aug 2002 13:59:22 -0400", "msg_from": "\"Matthew T. OConnor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgaccess - where to store the own data" }, { "msg_contents": "Matthew T. OConnor wrote:\n> One thought is to use a completely separate database, but also allow it\n> to be stored in the current database if the user wants it too. This\n> also solves the case of a user that can't create a new database for his\n> admin tool (permissions etc...). Also, it might be cleaner now that we\n> have schemea support to create one pgadmin, or pgaccess schemea in the\n> database, that handled all the others.\n> \n\nAs someone else mentioned (I think), even using a separate schema is not \nalways an acceptable option. If you are using a \"packaged\" application \n(whether commercial or open source), you usually don't want *any* \nchanges to the vendor provided database. Particularly with commercial \nsoftware, that can mean loss of, or problems with, technical support, or \nproblems when upgrading.\n\nJoe\n\n", "msg_date": "Fri, 30 Aug 2002 11:12:28 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgaccess - where to store the own data" }, { "msg_contents": "> As someone else mentioned (I think), even using a separate schema is not\n> always an acceptable option. If you are using a \"packaged\" application\n> (whether commercial or open source), you usually don't want *any*\n> changes to the vendor provided database. 
Particularly with commercial\n> software, that can mean loss of, or problems with, technical support, or\n> problems when upgrading.\n\nAgreed, but if the information is to be stored using the database server at \nall, then I think this option should be left in since some users probably \ndon't mind the clutter, and will not be allowed to create a new database or \nschemea.\n", "msg_date": "Fri, 30 Aug 2002 14:43:38 -0400", "msg_from": "\"Matthew T. OConnor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgaccess - where to store the own data" }, { "msg_contents": "On Wednesday 04 September 2002 17:07, Brett Schwarz wrote:\n> But in relational theory, doesn't relations refer to what we commonly\n> refer to as tables? I think this would be confusing as well. However, I\n> do agree that it would be cool to have it automatically generated...\n\nIn MS Access they can be a combination of Queries and Tables.\n\nNot knowing any better, if it a relationship of any kind, then call it a \nrelation. At least us unwashed from the MS Access world would get\na better handle on it. ( if that is what they really are )\nJohn\n\n\n>\n> On Wed, 2002-09-04 at 10:00, Iavor Raytchev wrote:\n> > Ross and Iavor:\n> > > > > BTW, has the 'schema' tab been renamed yet? With actual\n> > > > > schema\n> > > > > in 7.3, that'll get confusing.\n> > > >\n> > > > Not renamed yet.\n> > >\n> > > In which case, we need to come up with a different name. How\n> > > does\n> > > \"diagrams\" strike you all?\n> >\n> > Hm... In MS Access it is called 'Relations' which sounds kind of\n> > correct. Basically now we just display them, so 'Diagrams' could be\n> > correct for us for now. In MS Access the relations are actually built\n> > there. 
That's what I would like us to do - use the current 'Schema' tab\n> > (they are not tabs anymore in the new interface) and make it able to\n> > build relations (represented in the code with referential integrity).\n> > Then 'Diagrams' would not fit, but 'Relations'. Also 'References'.\n> >\n> > Iavor\n\n-- \nJohn Turner\nJCI Inc.\nhttp://home.ntelos.net/~JLT\n\"Just because you do not know the answer\ndoes not mean that someone else does\"\nStephen J. Gould, {rip}\n", "msg_date": "Wed, 4 Sep 2002 13:54:19 +0000", "msg_from": "\"John L. Turner\" <jlt@wvinter.net>", "msg_from_op": false, "msg_subject": "Re: [pgaccess-developers] the current 'schema' tab - renaming ideas" }, { "msg_contents": "On Fri, Aug 30, 2002 at 02:43:38PM -0400, Matthew T. OConnor wrote:\n> > As someone else mentioned (I think), even using a separate schema is not\n> > always an acceptable option. If you are using a \"packaged\" application\n> > (whether commercial or open source), you usually don't want *any*\n> > changes to the vendor provided database. Particularly with commercial\n> > software, that can mean loss of, or problems with, technical support, or\n> > problems when upgrading.\n> \n> Agreed, but if the information is to be stored using the database server at \n> all, then I think this option should be left in since some users probably \n> don't mind the clutter, and will not be allowed to create a new database or \n> schemea.\n\nI'm a bit late on this discussion, but I, for one, have liked having\nsome of the pgaccess info stored with the database. That way, no matter\nwhat machine I connect to the DB from, I get the same set of functions,\nqueries, and schema-documents.\n\nBTW, has the 'schema' tab been renamed yet? With actual schema in 7.3,\nthat'll get confusing.\n\nRoss\n", "msg_date": "Wed, 4 Sep 2002 10:19:49 -0500", "msg_from": "\"Ross J. 
Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: pgaccess - where to store the own data" }, { "msg_contents": "\nRoss wrote:\n\n> I'm a bit late on this discussion, but I, for one, have liked\n> having\n> some of the pgaccess info stored with the database. That way,\n> no matter\n> what machine I connect to the DB from, I get the same set of\n> functions,\n> queries, and schema-documents.\n\nVery much true.\n\nA wiki page has been started on that topic - feel free to contribute to\nthe methods and their pros and cons, as well to the proposed final\napproach.\n\nhttp://www.pgaccess.org/index.php?page=WhereToStoreThePgAccessOwnData\n\n> BTW, has the 'schema' tab been renamed yet? With actual schema\n> in 7.3,\n> that'll get confusing.\n\nNot renamed yet.\n\n", "msg_date": "Wed, 4 Sep 2002 18:24:10 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": false, "msg_subject": "Re: pgaccess - where to store the own data" }, { "msg_contents": "On Wed, Sep 04, 2002 at 06:24:10PM +0200, Iavor Raytchev wrote:\n \n> A wiki page has been started on that topic - feel free to contribute to\n> the methods and their pros and cons, as well to the proposed final\n> approach.\n \n> http://www.pgaccess.org/index.php?page=WhereToStoreThePgAccessOwnData\n\nI'll take a look.\n\n> > BTW, has the 'schema' tab been renamed yet? With actual schema\n> > in 7.3, that'll get confusing.\n \n> Not renamed yet.\n\nIn which case, we need to come up with a different name. How does\n\"diagrams\" strike you all?\n\nRoss (I removed HACKERS from the CC)\n", "msg_date": "Wed, 4 Sep 2002 11:45:58 -0500", "msg_from": "\"Ross J. Reedstrom\" <reedstrm@rice.edu>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgaccess - where to store the own data" }, { "msg_contents": "Diagrams makes sense to me...\n\nOn Wed, 2002-09-04 at 09:45, Ross J. 
Reedstrom wrote:\n> On Wed, Sep 04, 2002 at 06:24:10PM +0200, Iavor Raytchev wrote:\n> \n> > A wiki page has been started on that topic - feel free to contribute to\n> > the methods and their pros and cons, as well to the proposed final\n> > approach.\n> \n> > http://www.pgaccess.org/index.php?page=WhereToStoreThePgAccessOwnData\n> \n> I'll take a look.\n> \n> > > BTW, has the 'schema' tab been renamed yet? With actual schema\n> > > in 7.3, that'll get confusing.\n> \n> > Not renamed yet.\n> \n> In which case, we need to come up with a different name. How does\n> \"diagrams\" strike you all?\n> \n> Ross (I removed HACKERS from the CC)\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n-- \nBrett Schwarz\nbrett_schwarz AT yahoo.com\n\n", "msg_date": "04 Sep 2002 09:57:09 -0700", "msg_from": "Brett Schwarz <brett_schwarz@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [HACKERS] pgaccess - where to store the own data" }, { "msg_contents": "Ross and Iavor:\n\n> > > BTW, has the 'schema' tab been renamed yet? With actual\n> > > schema\n> > > in 7.3, that'll get confusing.\n>\n> > Not renamed yet.\n>\n> In which case, we need to come up with a different name. How\n> does\n> \"diagrams\" strike you all?\n\nHm... In MS Access it is called 'Relations' which sounds kind of\ncorrect. Basically now we just display them, so 'Diagrams' could be\ncorrect for us for now. In MS Access the relations are actually built\nthere. That's what I would like us to do - use the current 'Schema' tab\n(they are not tabs anymore in the new interface) and make it able to\nbuild relations (represented in the code with referential integrity).\nThen 'Diagrams' would not fit, but 'Relations'. 
Also 'References'.\n\nIavor\n\n", "msg_date": "Wed, 4 Sep 2002 19:00:29 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": false, "msg_subject": "the current 'schema' tab - renaming ideas" }, { "msg_contents": "But in relational theory, doesn't relations refer to what we commonly\nrefer to as tables? I think this would be confusing as well. However, I\ndo agree that it would be cool to have it automatically generated...\n\nOn Wed, 2002-09-04 at 10:00, Iavor Raytchev wrote:\n> Ross and Iavor:\n> \n> > > > BTW, has the 'schema' tab been renamed yet? With actual\n> > > > schema\n> > > > in 7.3, that'll get confusing.\n> >\n> > > Not renamed yet.\n> >\n> > In which case, we need to come up with a different name. How\n> > does\n> > \"diagrams\" strike you all?\n> \n> Hm... In MS Access it is called 'Relations' which sounds kind of\n> correct. Basically now we just display them, so 'Diagrams' could be\n> correct for us for now. In MS Access the relations are actually built\n> there. That's what I would like us to do - use the current 'Schema' tab\n> (they are not tabs anymore in the new interface) and make it able to\n> build relations (represented in the code with referential integrity).\n> Then 'Diagrams' would not fit, but 'Relations'. Also 'References'.\n> \n> Iavor\n-- \nBrett Schwarz\nbrett_schwarz AT yahoo.com\n\n", "msg_date": "04 Sep 2002 10:07:59 -0700", "msg_from": "Brett Schwarz <brett_schwarz@yahoo.com>", "msg_from_op": false, "msg_subject": "Re: [pgaccess-developers] the current 'schema' tab - renaming ideas" }, { "msg_contents": "\nBrett:\n\n> But in relational theory, doesn't relations refer to what we commonly\n> refer to as tables? I think this would be confusing as well.\n> However, I\n> do agree that it would be cool to have it automatically generated...\n\nI am not an expert in relational theory... If it is so - then that's bad\nname. 
But if the native speakers and relational theory experts (joke :)\nagree that what we have now as 'Schema' can be called 'Diagrams' and\nused for creating different documentations, etc., and what is left is\nthe referential integrity (looking at it and visual management) than we\ncan call the new thing just 'Visual referential integrity manager' :)))\nif not too long...\n\n", "msg_date": "Wed, 4 Sep 2002 19:14:34 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": false, "msg_subject": "Re: [pgaccess-developers] the current 'schema' tab - renaming ideas" }, { "msg_contents": "\n\n>> Hm... In MS Access it is called 'Relations' which sounds kind of\n>> correct. \n\nActually, the window is called \"Relationships\", not \"Relations\"....\n\n-- \nterry\n", "msg_date": "Wed, 4 Sep 2002 14:36:25 -0400", "msg_from": "terry <tg5027@citlink.net>", "msg_from_op": false, "msg_subject": "Re: the current 'schema' tab - renaming ideas" } ]
[ { "msg_contents": "I just sync'd up and am getting:\n\nmake[4]: Leaving directory `/opt/src/pgsql/src/port'\nmake[3]: PERL@: Command not found\nmake[3]: *** [sql_help.h] Error 127\nmake[3]: Leaving directory `/opt/src/pgsql/src/bin/psql'\nmake[2]: *** [all] Error 2\nmake[2]: Leaving directory `/opt/src/pgsql/src/bin'\nmake[1]: *** [all] Error 2\nmake[1]: Leaving directory `/opt/src/pgsql/src'\nmake: *** [all] Error 2\n\nAny ideas?\n\nJoe\n\n", "msg_date": "Fri, 30 Aug 2002 08:56:47 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "make failure on cvs tip" }, { "msg_contents": "\nI am seeing the same failure ... looking at it...\n\n---------------------------------------------------------------------------\n\nJoe Conway wrote:\n> I just sync'd up and am getting:\n> \n> make[4]: Leaving directory `/opt/src/pgsql/src/port'\n> make[3]: PERL@: Command not found\n> make[3]: *** [sql_help.h] Error 127\n> make[3]: Leaving directory `/opt/src/pgsql/src/bin/psql'\n> make[2]: *** [all] Error 2\n> make[2]: Leaving directory `/opt/src/pgsql/src/bin'\n> make[1]: *** [all] Error 2\n> make[1]: Leaving directory `/opt/src/pgsql/src'\n> make: *** [all] Error 2\n> \n> Any ideas?\n> \n> Joe\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 Aug 2002 12:18:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: make failure on cvs tip" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I just sync'd up and am getting:\n> make[4]: Leaving directory `/opt/src/pgsql/src/port'\n> make[3]: PERL@: Command not found\n\nMarc was overenthusiastic about removing perl support from configure.\nI'll put it back.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 12:23:31 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: make failure on cvs tip " }, { "msg_contents": "\nOK, fixed. Marc removed the perl tests because he was removing\ninterfaces/perl5, but we still need it for pl/perl and psql's use of\nperl for the help files. \n\nPerhaps it can be paired down now by someone who understands the perl\nbuilds but at this point I just put the configure.in part back, and ran\nautoconf.\n\n---------------------------------------------------------------------------\n\nJoe Conway wrote:\n> I just sync'd up and am getting:\n> \n> make[4]: Leaving directory `/opt/src/pgsql/src/port'\n> make[3]: PERL@: Command not found\n> make[3]: *** [sql_help.h] Error 127\n> make[3]: Leaving directory `/opt/src/pgsql/src/bin/psql'\n> make[2]: *** [all] Error 2\n> make[2]: Leaving directory `/opt/src/pgsql/src/bin'\n> make[1]: *** [all] Error 2\n> make[1]: Leaving directory `/opt/src/pgsql/src'\n> make: *** [all] Error 2\n> \n> Any ideas?\n> \n> Joe\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 Aug 2002 12:24:41 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: make failure on cvs tip" }, { "msg_contents": "Bruce Momjian wrote:\n> OK, fixed. Marc removed the perl tests because he was removing\n> interfaces/perl5, but we still need it for pl/perl and psql's use of\n> perl for the help files. \n> \n> Perhaps it can be paired down now by someone who understands the perl\n> builds but at this point I just put the configure.in part back, and ran\n> autoconf.\n> \n\nThat did it. Thanks!\n\nJoe\n\n\n\n", "msg_date": "Fri, 30 Aug 2002 09:25:51 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": true, "msg_subject": "Re: make failure on cvs tip" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Perhaps it can be paired down now by someone who understands the perl\n> builds but at this point I just put the configure.in part back, and ran\n> autoconf.\n\nI think all that ought to be done is change the description of the\n--with-perl option to mention only plperl, and not the interface...\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 13:04:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: make failure on cvs tip " }, { "msg_contents": "\nDone.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Perhaps it can be paired down now by someone who understands the perl\n> > builds but at this point I just put the configure.in part back, and ran\n> > autoconf.\n> \n> I think all that ought to be done is change the description of the\n> --with-perl option to mention only plperl, and not the interface...\n> \n> \t\t\tregards, tom lane\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard 
drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 Aug 2002 13:14:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: make failure on cvs tip" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Joe Conway [mailto:mail@joeconway.com] \n> Sent: 30 August 2002 16:52\n> To: Iavor Raytchev\n> Cc: pgsql-hackers; pgsql-interfaces\n> Subject: Re: [HACKERS] [INTERFACES] pgaccess - where to store \n> the own data\n> \n> \n> Iavor Raytchev wrote:\n> > Hello everybody,\n> > \n> > There is an open question we need broad opinion on.\n> > \n> > Currently pgaccess stores its own data in the database it \n> works with. \n> > Some people do not like that. To store it elsewhere invokes \n> a number \n> > of issues such as:\n> > \n> > - where is this somewhere\n> > - converting form all versions to the new\n> > - etc.\n> > \n> > What do people think about this. Is it so bad that the own data is \n> > stored in the database pgaccess works with?\n> > \n> \n> I don't particularly like it. Oracle deals with this by having a \n> database unto itself as a management repository (Oracle Enterprise \n> Manager, OEM, I believe). You register the database you want \n> to manage \n> with the repository, and the metadata is kept there instead \n> of in each \n> managed database.\n\nI thought of using that approach with pgAdmin, but how do you ensure\nthat new users use the correct database as the repository, or, if you\nhard coded it to use a 'pgaccess' database (for example), how do you\ndeal with security etc on a shared system such as might be run by an\nisp?\n\nOf course, in 7.3 you could just create a pgaccess schema in each\ndatabase...\n\nRegards, Dave.\n", "msg_date": "Fri, 30 Aug 2002 16:58:35 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pgaccess - where to store the own data" }, { "msg_contents": "\n>> how do you deal with security etc on a shared system such as might\n>> be run by an isp?\n\nHmmm. 
Does this indicate that pgaccess maybe needs to be forked into a \ntrue admin tool and a true end-user/application developer tool?\n\n>> Of course, in 7.3 you could just create a pgaccess schema in each\n>> database...\n\nThat sounds good, but I'ld still like the ability to access tables from \nmultiple databases too.\n\nterry\n", "msg_date": "Tue, 3 Sep 2002 09:54:01 -0400", "msg_from": "terry <tg5027@citlink.net>", "msg_from_op": false, "msg_subject": "Re: pgaccess - where to store the own data" } ]
[ { "msg_contents": "Greets all,\nWhile attempting to clean up some memory leaks, I have encountered some \ndifficulties. In the code for PQclear() we have the check:\n\n if (!res)\n\treturn;\n\nThe problem arises when the result object pointer you are passing to clear \ncontains neither NULL nor a valid result object address, but a junk pointer.\nPQclear() seg faults when the address is outside of the data segment.\n(libpq bug?)\n \nMy question is, how does one determine when a PGresult* contains the address \nof a valid result object? Rewriting the calling code is not an option sadly.\nWhat I would like to be able to do is something like this:\n\n\tif ( result_is_valid( res ) )\n\t{\n\t\tPQclear( res );\n\t}\n\nThanks in advance for any help/suggestions.\n -Wade\n", "msg_date": "Fri, 30 Aug 2002 09:56:58 -0700", "msg_from": "wade <wade@wavefire.com>", "msg_from_op": true, "msg_subject": "(libpq) PQclear on questionable result pointer." }, { "msg_contents": "wade <wade@wavefire.com> writes:\n> The problem arises when the result object pointer you are passing to clear \n> contains neither NULL nor a valid result object address, but a junk pointer.\n> PQclear() seg faults when the address is outside of the data segment.\n> (libpq bug?)\n\nNo, that's a bug in *your* code. Passing a bogus pointer to any library\nanywhere will make it segfault, because there's no reasonable way to\nverify a pointer.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 13:01:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: (libpq) PQclear on questionable result pointer. " } ]
[ { "msg_contents": "\nHI all,\n\nSorry to interrupt your busy list.\n\nI was wondering if you could recomend a good source code db/indexer that\ncould be used to search through the postgresql code?\n\nThanks,\n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n", "msg_date": "Fri, 30 Aug 2002 11:57:17 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "source code indexer" }, { "msg_contents": "Laurette Cisneros wrote:\n> HI all,\n> \n> Sorry to interrupt your busy list.\n> \n> I was wondering if you could recomend a good source code db/indexer that\n> could be used to search through the postgresql code?\n\nI think the real pros use grep and emacs ;-)\n\nBut for us mere mortals, I find LXR very useful. I have set one up for \nmy own use -- it gets rebuilt from cvs nightly. If you are interested see:\n\n https://www.joeconway.com/lxr.pgsql/\n\nuse login name \"lxr\" and password \"pglxr\" (without the quotes)\n\nHTH,\n\nJoe\n\n", "msg_date": "Fri, 30 Aug 2002 12:16:14 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: source code indexer" }, { "msg_contents": "Laurette Cisneros dijo: \n\nHi,\n\n> I was wondering if you could recomend a good source code db/indexer that\n> could be used to search through the postgresql code?\n\nSome people here use something called glimpse AFAIR. I don't even know\nit.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nOfficer Krupke, what are we to do?\nGee, officer Krupke, Krup you! (West Side Story, \"Gee, Officer Krupke\")\n\n", "msg_date": "Fri, 30 Aug 2002 15:16:25 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: source code indexer" }, { "msg_contents": "Ah. Great! 
I had download lxr and was starting to dig in to insatall it\nand thought I would check with the pgers to see what they recommended.\nGlad to see someone has done this.\n\nThanks,\n\nL.\nOn Fri, 30 Aug 2002, Joe Conway wrote:\n\n> Laurette Cisneros wrote:\n> > HI all,\n> > \n> > Sorry to interrupt your busy list.\n> > \n> > I was wondering if you could recomend a good source code db/indexer that\n> > could be used to search through the postgresql code?\n> \n> I think the real pros use grep and emacs ;-)\n> \n> But for us mere mortals, I find LXR very useful. I have set one up for \n> my own use -- it gets rebuilt from cvs nightly. If you are interested see:\n> \n> https://www.joeconway.com/lxr.pgsql/\n> \n> use login name \"lxr\" and password \"pglxr\" (without the quotes)\n> \n> HTH,\n> \n> Joe\n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n", "msg_date": "Fri, 30 Aug 2002 12:21:30 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: source code indexer" }, { "msg_contents": "On Fri, 30 Aug 2002 11:57:17 -0700 (PDT), Laurette Cisneros\n<laurette@nextbus.com> wrote:\n>I was wondering if you could recomend a good source code db/indexer that\n>could be used to search through the postgresql code?\n\nI use Source Navigator v5.1 http://sourceforge.net/projects/sourcenav/\n\nServus\n Manfred\n", "msg_date": "Fri, 30 Aug 2002 21:28:18 +0200", "msg_from": "Manfred Koizar <mkoi-pg@aon.at>", "msg_from_op": false, "msg_subject": "Re: source code indexer" }, { "msg_contents": "On Fri, 30 Aug 2002, Laurette Cisneros wrote:\n\n> \n> HI all,\n> \n> Sorry to interrupt your busy list.\n> \n> I was wondering if you could recomend a good source code db/indexer that\n> could be used to search through the postgresql code?\n\nI think I must be one of those 'old school' types. 
I use\n\n\tfind <somedir> <some spec.> | xargs grep\n\noften followed by tags in Emacs.\n\nIt isn't perfect but then I'm not either.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n", "msg_date": "Fri, 30 Aug 2002 23:47:37 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: source code indexer" }, { "msg_contents": "Nigel J. Andrews wrote:\n> On Fri, 30 Aug 2002, Laurette Cisneros wrote:\n> \n> > \n> > HI all,\n> > \n> > Sorry to interrupt your busy list.\n> > \n> > I was wondering if you could recommend a good source code db/indexer that\n> > could be used to search through the postgresql code?\n> \n> I think I must be one of those 'old school' types. I use\n> \n> \tfind <somedir> <some spec.> | xargs grep\n> \n> often followed by tags in Emacs.\n> \n> It isn't perfect but then I'm not either.\n\nI use a commercial editor called Crisp, which is unfortunately a\ncommercial product. It runs on almost any platform:\n\n\tftp://207.106.42.251/pub/crisp.gif\n\nI have found several editor features a great help in PostgreSQL\ndevelopment:\n\n\tprogrammable macro language\n\tkeyboard record/playback\n\tcolorization\n\tlist of functions in the file\n\ttags jump to function definition\n\tcross-reference listings\n\nThey are not a big deal when you are making localized changes. In fact,\nI just used a character-mode editor for those, but when I have to\nanalyze the code or make massive changes, those features make it easier.\n\nThe screenshot I listed has the functions listed on the left, the\ncross-reference information at the bottom, and a colorized main editor\nwindow.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 31 Aug 2002 14:55:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: source code indexer" }, { "msg_contents": "On Fri, 30 Aug 2002, Nigel J. Andrews wrote:\n\n> I think I must be one of those 'old school' types. I use\n>\n> \tfind <somedir> <some spec.> | xargs grep\n>\n> often followed by tags in Emacs.\n\nYou might find that Gnu id-tools is a much faster way of\ndoing this, especially for large amounts of source code.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 2 Sep 2002 13:52:10 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: source code indexer" }, { "msg_contents": "Thanks to everyone who made suggestions.\n \nI have found Source Navigator to be very close and useful for what I was \nlooking for!\n \nThanks again,\n \nL.\nOn Fri, 30 Aug 2002, Manfred Koizar wrote:\n\n> On Fri, 30 Aug 2002 11:57:17 -0700 (PDT), Laurette Cisneros\n> <laurette@nextbus.com> wrote:\n> >I was wondering if you could recomend a good source code db/indexer that\n> >could be used to search through the postgresql code?\n> \n> I use Source Navigator v5.1 http://sourceforge.net/projects/sourcenav/\n> \n> Servus\n> Manfred\n> \n\n-- \nLaurette Cisneros\nThe Database Group\n(510) 420-3137\nNextBus Information Systems, Inc.\nwww.nextbus.com\n----------------------------------\nA wiki we will go...\n\n", "msg_date": "Tue, 3 Sep 2002 16:33:04 -0700 (PDT)", "msg_from": "Laurette Cisneros <laurette@nextbus.com>", "msg_from_op": true, "msg_subject": "Re: source code indexer" } ]
[ { "msg_contents": "Grep just to find things.\n\nDoxygen to see what is going on in exquisite detail.\n", "msg_date": "Fri, 30 Aug 2002 12:23:13 -0700", "msg_from": "\"Dann Corbit\" <DCorbit@connx.com>", "msg_from_op": true, "msg_subject": "Re: source code indexer" } ]
[ { "msg_contents": "Using current CVS sources (as of 1500 EDT), 'make check' fails at the\ndatabase initialization step.\n\nThis box is running RH 7.3 with all current RH updates, and has\nsuccessfully built Pg 7.2.1 and 7.2.2.\n\nHere's how I'm configuring it (same as I'm doing under the RH Null Beta,\nwhich works fine):\n\n./configure --prefix=/opt/postgresql --with-java --with-python --with-openssl --enable-syslog --enable-debug --enable-cassert\n--enable-depend\n\nHere's what src/test/regress/log/initdb.log says:\n\nRunning with noclean mode on. Mistakes will not be cleaned up.\n/home/gar/src/pgsql/src/test/regress/./tmp_check/install//opt/postgresql/bin/pg_encoding: relocation error: /home/gar/src/pgsql/src/test/regress/./tmp_check/install//opt/postgresql/bin/pg_encoding: undefined symbol: pg_char_to_encoding\ninitdb: pg_encoding failed\n\nPerhaps you did not configure PostgreSQL for multibyte support or\nthe program was not successfully installed.\n\n\nMy understanding is that --enable-multibyte is now the default on 7.3,\nand indeed it works fine on RH Null.\n\nI also did a diff of the config.log on both boxes, and didn't see\nany major differences (little things like not having the SGML/Docbook\nstuff on this box).\n\nIf the config.log files would be useful, I can send them.\n\nThanks,\n\nGordon.\n\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n", "msg_date": "Fri, 30 Aug 2002 15:43:03 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "[7.3-devl] initdb fails on RH 7.3" }, { "msg_contents": "[This is an email copy of a Usenet post to \"comp.databases.postgresql.hackers\"]\n\nI just checked out another CVS snapshot onto a RH 7.2 box, and 'make\ncheck' can successfully do the initdb.\n\nI updated the source on the RH 7.3 box, and still get the initdb failure.\n\nI updated the source on the RH Null box, and 'make check' 
can\nsuccessfully do the initdb.\n\nAnyone else having issues on RH 7.3?\n\nG.\n\nOn Fri, 30 Aug 2002 15:43:03 -0400, Gordon Runkle wrote:\n\n> Using current CVS sources (as of 1500 EDT), 'make check' fails at the\n> database initialization step.\n> \n> This box is running RH 7.3 with all current RH updates, and has\n> successfully built Pg 7.2.1 and 7.2.2.\n> \n> Here's how I'm configuring it (same as I'm doing under the RH Null Beta,\n> which works fine):\n> \n> ./configure --prefix=/opt/postgresql --with-java --with-python\n> --with-openssl --enable-syslog --enable-debug --enable-cassert\n> --enable-depend\n> \n> Here's what src/test/regress/log/initdb.log says:\n> \n> Running with noclean mode on. Mistakes will not be cleaned up.\n> /home/gar/src/pgsql/src/test/regress/./tmp_check/install//opt/postgresql/bin/pg_encoding:\n> relocation error:\n> /home/gar/src/pgsql/src/test/regress/./tmp_check/install//opt/postgresql/bin/pg_encoding:\n> undefined symbol: pg_char_to_encoding initdb: pg_encoding failed\n> \n> Perhaps you did not configure PostgreSQL for multibyte support or the\n> program was not successfully installed.\n> \n> \n> My understanding is that --enable-multibyte is now the default on 7.3,\n> and indeed it works fine on RH Null.\n> \n> I also did a diff of the config.log on both boxes, and didn't see any\n> major differences (little things like not having the SGML/Docbook stuff\n> on this box).\n> \n> If the config.log files would be useful, I can send them.\n> \n> Thanks,\n> \n> Gordon.\n \n\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n", "msg_date": "Fri, 30 Aug 2002 19:59:51 -0400", "msg_from": "\"Gordon Runkle\" <gar@integrated-dynamics.com>", "msg_from_op": false, "msg_subject": "Re: [7.3-devl] initdb fails on RH 7.3" }, { "msg_contents": "I just checked out another CVS snapshot onto a RH 7.2 box, and 'make\ncheck' can successfully do the initdb.\n\nI updated 
the source on the RH 7.3 box, and still get the initdb failure.\n\nI updated the source on the RH Null box, and 'make check' can\nsuccessfully do the initdb.\n\nAnyone else having issues on RH 7.3?\n\nG.\n\nOn Fri, 30 Aug 2002 15:43:03 -0400, Gordon Runkle wrote:\n\n> Using current CVS sources (as of 1500 EDT), 'make check' fails at the\n> database initialization step.\n> \n> This box is running RH 7.3 with all current RH updates, and has\n> successfully built Pg 7.2.1 and 7.2.2.\n> \n> Here's how I'm configuring it (same as I'm doing under the RH Null Beta,\n> which works fine):\n> \n> ./configure --prefix=/opt/postgresql --with-java --with-python\n> --with-openssl --enable-syslog --enable-debug --enable-cassert\n> --enable-depend\n> \n> Here's what src/test/regress/log/initdb.log says:\n> \n> Running with noclean mode on. Mistakes will not be cleaned up.\n> /home/gar/src/pgsql/src/test/regress/./tmp_check/install//opt/postgresql/bin/pg_encoding:\n> relocation error:\n> /home/gar/src/pgsql/src/test/regress/./tmp_check/install//opt/postgresql/bin/pg_encoding:\n> undefined symbol: pg_char_to_encoding initdb: pg_encoding failed\n> \n> Perhaps you did not configure PostgreSQL for multibyte support or the\n> program was not successfully installed.\n> \n> \n> My understanding is that --enable-multibyte is now the default on 7.3,\n> and indeed it works fine on RH Null.\n> \n> I also did a diff of the config.log on both boxes, and didn't see any\n> major differences (little things like not having the SGML/Docbook stuff\n> on this box).\n> \n> If the config.log files would be useful, I can send them.\n> \n> Thanks,\n> \n> Gordon.\n \n\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n", "msg_date": "Fri, 30 Aug 2002 19:59:53 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "Re: [7.3-devl] initdb fails on RH 7.3" }, { "msg_contents": 
"Gordon Runkle <gar@integrated-dynamics.com> writes:\n> Running with noclean mode on. Mistakes will not be cleaned up.\n> /home/gar/src/pgsql/src/test/regress/./tmp_check/install//opt/postgresql/bin/pg_encoding: relocation error: /home/gar/src/pgsql/src/test/regress/./tmp_check/install//opt/postgresql/bin/pg_encoding: undefined symbol: pg_char_to_encoding\n> initdb: pg_encoding failed\n\nI think the dynamic linker is picking up a non-multibyte-enabled version\nof libpq.so. Check your ldconfig setup.\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 20:15:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [7.3-devl] initdb fails on RH 7.3 " }, { "msg_contents": "Thanks, Tom,\n\nMy /etc/ld.so.conf didn't have /opt/postgresql/lib in it, yet when I\nrenamed /opt/postgresql (which was v7.2.2) to something else, the initdb\nsucceeded. I'm not sure why it went looking up there...\n\nThanks again,\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n", "msg_date": "30 Aug 2002 20:48:17 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "Re: [7.3-devl] initdb fails on RH 7.3" } ]
[ { "msg_contents": "The alter_table regression test is now failing for me (RH Null).\n\nIt appears that the problem is that the backend is giving back a\ndifferent error message than expected when dropping a column from a\nnon-existent table:\n\n-- try altering non-existent table, should fail\nalter table foo drop column bar;\nERROR: Relation \"foo\" has no column \"bar\"\n\n\nWhat is expected is:\n\n-- try altering non-existent table, should fail\nalter table foo drop column bar;\nERROR: Relation \"foo\" does not exist\n\n\nIt seems to me that the expected behaviour is best, FWIW...\n\nThanks,\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n", "msg_date": "Fri, 30 Aug 2002 15:52:52 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "[7.3-devl] alter_table test" }, { "msg_contents": "Gordon Runkle <gar@integrated-dynamics.com> writes:\n> The alter_table regression test is now failing for me (RH Null).\n> It appears that the problem is that the backend is giving back a\n> different error message than expected when dropping a column from a\n> non-existent table:\n\n> -- try altering non-existent table, should fail\n> alter table foo drop column bar;\n> ERROR: Relation \"foo\" has no column \"bar\"\n\nHmm, I don't get that here. In CVS tip the regression tests pass,\nand a manual trial gives:\n\ntest73=# alter table foo drop column bar;\nERROR: Relation \"foo\" does not exist\n\nWould you try a full rebuild and see if things are still flaky?\n\n\t\t\tregards, tom lane\n", "msg_date": "Fri, 30 Aug 2002 20:57:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: [7.3-devl] alter_table test " }, { "msg_contents": "Looks good now, all three environments (RH 7.2, RH 7.3, RH Null).\n\nG.\n\nOn Fri, 2002-08-30 at 20:57, Tom Lane wrote:\n> Hmm, I don't get that here. 
In CVS tip the regression tests pass,\n> and a manual trial gives:\n> \n> test73=# alter table foo drop column bar;\n> ERROR: Relation \"foo\" does not exist\n> \n> Would you try a full rebuild and see if things are still flaky?\n> \n> \t\t\tregards, tom lane\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n", "msg_date": "30 Aug 2002 21:26:01 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "Re: [7.3-devl] alter_table test" } ]
[ { "msg_contents": "\n\n> -----Original Message-----\n> From: Matthew T. OConnor [mailto:matthew@zeut.net] \n> Sent: 30 August 2002 18:59\n> To: Dave Page\n> Cc: Iavor Raytchev; pgsql-hackers; pgsql-interfaces\n> Subject: Re: [HACKERS] pgaccess - where to store the own data\n> \n> \n> > > What do people think about this. Is it so bad that the own\n> > > data is stored in the database pgaccess works with?\n> > \n> > pgAdmin II no longer uses such tables, but to get over the \n> problem as \n> > best I could, I added a cleanup option to pgAdmin I that \n> removed all \n> > server side objects in one go.\n> \n> What does pgAdmin II do instead? Or, how did you solve the problem?\n\npgAdmin II 1.2.0 optionally used one table for it's revision control\nfeature. This has been removed in the latest code 'cos I was never\ntotally happy with it, and no-one admitted to using it when I quizzed\nthe lists.\n\nThe other objects (views, functions and tables) have been removed either\nbecause pgAdmin II is far cleverer in the way it caches things than\npgAdmin I was and can get away with 2 queries instead of one or a more\ncomplex one if required, or, because features such as monitoring of\nsequence values and tables sizes were dropped in the great rewrite.\n\nIt's also worth noting, that pgAdmin and pgAccess have different aims.\nWhilst pgAccess aims to provide application bulding and reporting\nfacilities (like Access) which naturally require a centralised data\nstore, pgAdmin is intended as a pure Admin tool aiming to fully support\nall PostgreSQL object types.\n\nRegards, Dave.\n", "msg_date": "Fri, 30 Aug 2002 21:02:51 +0100", "msg_from": "\"Dave Page\" <dpage@vale-housing.co.uk>", "msg_from_op": true, "msg_subject": "Re: [HACKERS] pgaccess - where to store the own data" } ]
[ { "msg_contents": "Hi, I couldn't get any good answers off the ADMIN list, can you help me?\n\nI haven't been able to finding information on this, or at least I\nhaven't known the right keywords to search for.\n\nWe are trying to make a fully contained, CD-runable version of postgres\nfor advocacy purposes.\nThe only problem we are really having is the locking of the database files\nin the PGDATA folder. Since the PGDATA folder is going to be on the CD\n(i.e. read-only) normal setup prevents us from doing this. We indeed, only\nwant to read data and not write.\n\nCan you direct me to either other resources to read or at least point me to\nthe settings/functionality that I need to learn more about? Or better yet\ncould you give me some guidance on how to get around this. The simpler the\nsolution the better (i.e. postmaster\noptions, environment variables, etc.).\n\nIs it possible somehow?\n\nTyler\n\np.s. We are testing this on a windows beta but I hope that doesn't make a\ndifference.\n\n\n", "msg_date": "Fri, 30 Aug 2002 13:47:00 -0700", "msg_from": "\"Tyler Mitchell\" <TMitchell@lignum.com>", "msg_from_op": true, "msg_subject": "Running postgres on a read-only file system" }, { "msg_contents": "Tyler Mitchell wrote:\n> Hi, I couldn't get any good answers off the ADMIN list, can you help me?\n> \n\nOr at least not one that you liked, huh ;-)\n\nYou won't get a more authoritative answer than you've already gotten.\n\nJoe\n\n", "msg_date": "Fri, 30 Aug 2002 13:57:13 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Running postgres on a read-only file system" } ]
[ { "msg_contents": "\n>Tyler Mitchell wrote:\n>> Hi, I couldn't get any good answers off the ADMIN list, can you help me?\n>>\n\n>Or at least not one that you liked, huh ;-)\n\nOoops \"good\" is a relative term. I should phrase it \"I couldn't get any\nanswers that directly met my application needs\" :)\n\nI know that I need to at least get some more understanding on the process\nthat takes place. But like I said, I don't even really know where to start\nand couldn't find anything in the docs (at least using the keywords that\ncame to mind) to help explain it to me.\n\n>You won't get a more authoritative answer than you've already gotten.\n\nA guy's gotta try :(\n\n\n\n", "msg_date": "Fri, 30 Aug 2002 14:08:59 -0700", "msg_from": "\"Tyler Mitchell\" <TMitchell@lignum.com>", "msg_from_op": true, "msg_subject": "Re: Running postgres on a read-only file system" }, { "msg_contents": "On Fri, Aug 30, 2002 at 02:08:59PM -0700, Tyler Mitchell wrote:\n> \n> I know that I need to at least get some more understanding on the process\n> that takes place. \n\nThe problem is that PostgreSQL doesn't have a \"read only\" mode. So\nyou can't really do it this way.\n\nIs there a way to make a RAMDISK on Win32? If so, Tom Lane's\nsuggestion is probably the best one. Set up a RAMDISK, put your data\ndirectory there, and presto. Of course, that means you need enough\nphysical memory to hold the database, which might cause problems.\n\nWhat about using the CD-ROM to copy a version of the database onto\nthe hard drive? You could delete it when your application shuts\ndown, I guess; you'd still need that much free space for your db,\nthough.\n\nA\n\n-- \n----\nAndrew Sullivan 204-4141 Yonge Street\nLiberty RMS Toronto, Ontario Canada\n<andrew@libertyrms.info> M2P 2A8\n +1 416 646 3304 x110\n\n", "msg_date": "Fri, 30 Aug 2002 17:18:37 -0400", "msg_from": "Andrew Sullivan <andrew@libertyrms.info>", "msg_from_op": false, "msg_subject": "Re: Running postgres on a read-only file system" } ]
[ { "msg_contents": "\n>On Fri, Aug 30, 2002 at 02:08:59PM -0700, Tyler Mitchell wrote:\n>>\n>> I know that I need to at least get some more understanding on the\nprocess\n>> that takes place.\n\n>The problem is that PostgreSQL doesn't have a \"read only\" mode. So\n>you can't really do it this way.\n\nOkay, that answers one of my questions, thanks Andrew. Is this something\nthat others may be interested in? Is it realistic to ask that it be added\nto the TODO list?\nWhat kind of writes occur normally, how does file locking work. Could you\ndirect me to other resources on this for postgresql?\n\n>\n>Is there a way to make a RAMDISK on Win32? If so, Tom Lane's\n>suggestion is probably the best one. Set up a RAMDISK, put your data\n>directory there, and presto. Of course, that means you need enough\n>physical memory to hold the database, which might cause problems.\n>\n>What about using the CD-ROM to copy a version of the database onto\n>the hard drive? You could delete it when your application shuts\n>down, I guess; you'd still need that much free space for your db,\n>though.\n\nYes, both good ideas, we've been kicking these around. But we just wanted\nto exhaust the possibilities before we \"give in\" :)\n\nOne more idea, is it possible to \"fake\" a read-write file system. I.e.\nsupply the files that postgresql will be looking for? (I know it's a\nstretch, but hey, this IS the \"hackers\" list) :)\n\nThanks guys.\n\n\n", "msg_date": "Fri, 30 Aug 2002 14:34:00 -0700", "msg_from": "\"Tyler Mitchell\" <TMitchell@lignum.com>", "msg_from_op": true, "msg_subject": "Re: Running postgres on a read-only file system" }, { "msg_contents": "On Fri, 2002-08-30 at 16:34, Tyler Mitchell wrote:\n> \n> >On Fri, Aug 30, 2002 at 02:08:59PM -0700, Tyler Mitchell wrote:\n> >>\n> >> I know that I need to at least get some more understanding on the\n> process\n> >> that takes place.\n> \n> >The problem is that PostgreSQL doesn't have a \"read only\" mode. 
So\n> >you can't really do it this way.\n> \n> Okay, that answers one of my questions, thanks Andrew. Is this something\n> that others may be interested in? Is it realistic to ask that it be added\n> to the TODO list?\n> What kind of writes occur normally, how does file locking work. Could you\n> direct me to other resources on this for postgresql?\n> \n> >\n> >Is there a way to make a RAMDISK on Win32? If so, Tom Lane's\n> >suggestion is probably the best one. Set up a RAMDISK, put your data\n> >directory there, and presto. Of course, that means you need enough\n> >physical memory to hold the database, which might cause problems.\n> >\n> >What about using the CD-ROM to copy a version of the database onto\n> >the hard drive? You could delete it when your application shuts\n> >down, I guess; you'd still need that much free space for your db,\n> >though.\n> \n> Yes, both good ideas, we've been kicking these around. But we just wanted\n> to exhaust the possibilities before we \"give in\" :)\n> \n> One more idea, is it possible to \"fake\" a read-write file system. I.e.\n> supply the files that postgresql will be looking for? (I know it's a\n> stretch, but hey, this IS the \"hackers\" list) :)\nThe problem is every query wants to write the clog files.....\n\n\n\n\n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "30 Aug 2002 16:37:22 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Running postgres on a read-only file system" }, { "msg_contents": "\n> One more idea, is it possible to \"fake\" a read-write file system. I.e.\n> supply the files that postgresql will be looking for? (I know it's a\n> stretch, but hey, this IS the \"hackers\" list) :)\n\nOne of the tricks I use for diskless systems is to mount a ramdrive in a\nunion mount with a read only nfs mount. 
This allows filewrites (to the\nram drive) but old originals are retrieved from the ramdrive.\n\nThis is done on FreeBSD, but is effective enough for getting a fully\nfunctioning system (yes, Postgresql included). Takes quite a bit of ram\nthough.\n\n\nPerhaps there is a toolkit for windows that can do similar union mounts?\n\n", "msg_date": "30 Aug 2002 17:43:29 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Running postgres on a read-only file system" } ]
[ { "msg_contents": "I know this one has been a pain, but I'm getting regression failures on:\n\n abstime ... FAILED\n tinterval ... FAILED\ntest horology ... FAILED\n\nunder RedHat 7.3 and RedHat Null Beta.\n\nThanks,\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n", "msg_date": "30 Aug 2002 21:30:10 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "[7.3-devl] Timezones on RH 7.3 and Null" }, { "msg_contents": "Gordon Runkle wrote:\n> I know this one has been a pain, but I'm getting regression failures on:\n> \n> abstime ... FAILED\n> tinterval ... FAILED\n> test horology ... FAILED\n> \n> under RedHat 7.3 and RedHat Null Beta.\n\nThat's due to a glibc change and is expected, if not desired. Complain \nto Red Hat. For more info, see previous threads on HACKERS, notably this \none:\n\nhttp://archives.postgresql.org/pgsql-hackers/2002-05/msg00740.php\n\nJoe\n\n", "msg_date": "Fri, 30 Aug 2002 18:45:06 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [7.3-devl] Timezones on RH 7.3 and Null" }, { "msg_contents": "On Fri, 2002-08-30 at 21:45, Joe Conway wrote:\n> That's due to a glibc change and is expected, if not desired. Complain \n> to Red Hat. For more info, see previous threads on HACKERS, notably this \n> one:\n> \n> http://archives.postgresql.org/pgsql-hackers/2002-05/msg00740.php\n\nYeah, I remember that. 
The impression I had from the whole thing was\nthat, yeah, it's a glibc issue, but it still has to be fixed.\n\nI guess I misunderstood?\n\nGordon\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n", "msg_date": "30 Aug 2002 22:10:24 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "Re: [7.3-devl] Timezones on RH 7.3 and Null" }, { "msg_contents": "Gordon Runkle wrote:\n> On Fri, 2002-08-30 at 21:45, Joe Conway wrote:\n> \n>>That's due to a glibc change and is expected, if not desired. Complain \n>>to Red Hat. For more info, see previous threads on HACKERS, notably this \n>>one:\n>>\n>>http://archives.postgresql.org/pgsql-hackers/2002-05/msg00740.php\n> \n> \n> Yeah, I remember that. The impression I had from the whole thing was\n> that, yeah, it's a glibc issue, but it still has to be fixed.\n> \n> I guess I misunderstood?\n> \n\nWell a \"real\" fix sounded like a lot of work, and no one had the right \ncombination of time/desire/knowledge/skill to go implement it. The \n\"workaround\" fix was discussed in this more recent thread:\n\nhttp://archives.postgresql.org/pgsql-hackers/2002-08/msg01233.php\n\nIt still isn't clear to me exactly what needs to be done to implement \nthe workaround, and since I don't really *need* dates before 1970 for my \nown purposes (presently at least), I haven't tried to figure it out in \nfavor of other priorities.\n\nBut I'm sure a fix would be enthusiastically greeted on the PATCHES list ;-)\n\nJoe\n\n\n\n", "msg_date": "Fri, 30 Aug 2002 19:24:09 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: [7.3-devl] Timezones on RH 7.3 and Null" }, { "msg_contents": "On Fri, 2002-08-30 at 22:24, Joe Conway wrote:\n> Well a \"real\" fix sounded like a lot of work, and no one had the right \n> combination of time/desire/knowledge/skill to go implement it. 
The \n> \"workaround\" fix was discussed in this more recent thread:\n> \n> http://archives.postgresql.org/pgsql-hackers/2002-08/msg01233.php\n> \n> It still isn't clear to me exactly what needs to be done to implement \n> the workaround, and since I don't really *need* dates before 1970 for my \n> own purposes (presently at least), I haven't tried to figure it out in \n> favor of other priorities.\n\nThanks, that's informative. I don't really understand the workaround\neither...\n\n> But I'm sure a fix would be enthusiastically greeted on the PATCHES list ;-)\n\nProbably not if I hacked away at it...at \"Beta - 1\" =:-o\n\nG.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n", "msg_date": "31 Aug 2002 14:57:28 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "Re: [7.3-devl] Timezones on RH 7.3 and Null" } ]
[ { "msg_contents": "I have moved my main computer to my new house. The DNS is still being\nupdated, but the new IP address is:\n\n\t207.106.42.251\n\nVince, would you update the developer's page and change\n'candle.pha.pa.us' to this fixed IP address. It may take a few days for\nthe IP to completely propogate.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Fri, 30 Aug 2002 21:42:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "DNS change for candle.pha.pa.us" }, { "msg_contents": "On Fri, 30 Aug 2002, Bruce Momjian wrote:\n\n> I have moved my main computer to my new house. The DNS is still being\n> updated, but the new IP address is:\n>\n> \t207.106.42.251\n>\n> Vince, would you update the developer's page and change\n> 'candle.pha.pa.us' to this fixed IP address. It may take a few days for\n> the IP to completely propogate.\n\nLooks like someone already got it. I just got back into town and am\ncatching up.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 2 Sep 2002 18:31:36 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: DNS change for candle.pha.pa.us" }, { "msg_contents": "Vince Vielhaber wrote:\n> On Fri, 30 Aug 2002, Bruce Momjian wrote:\n> \n> > I have moved my main computer to my new house. 
The DNS is still being\n> > updated, but the new IP address is:\n> >\n> > \t207.106.42.251\n> >\n> > Vince, would you update the developer's page and change\n> > 'candle.pha.pa.us' to this fixed IP address. It may take a few days for\n> > the IP to completely propogate.\n> \n> Looks like someone already got it. I just got back into town and am\n> catching up.\n\nNo problem. Marc got it. I think we can revert it back in a day or two.\nIt takes 2-3 days to propogate and I am getting my email through the\nnormal IP now so it must be pretty much resolved. Marc cleared up a\nmisunderstanding I had of whether the IP ported for me by .US was a\nproblem if it was wrong but if my NS's had it right.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 21:05:46 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: DNS change for candle.pha.pa.us" }, { "msg_contents": "On Mon, 2 Sep 2002, Bruce Momjian wrote:\n\n> Vince Vielhaber wrote:\n> > On Fri, 30 Aug 2002, Bruce Momjian wrote:\n> >\n> > > I have moved my main computer to my new house. The DNS is still being\n> > > updated, but the new IP address is:\n> > >\n> > > \t207.106.42.251\n> > >\n> > > Vince, would you update the developer's page and change\n> > > 'candle.pha.pa.us' to this fixed IP address. It may take a few days for\n> > > the IP to completely propogate.\n> >\n> > Looks like someone already got it. I just got back into town and am\n> > catching up.\n>\n> No problem. Marc got it. I think we can revert it back in a day or two.\n> It takes 2-3 days to propogate and I am getting my email through the\n> normal IP now so it must be pretty much resolved. 
Marc cleared up a\n> misunderstanding I had of whether the IP ported for me by .US was a\n> problem if it was wrong but if my NS's had it right.\n\nWe can always put in a dns entry for something like candle.postgresql.org\nand point it to whatever you have. That way if your isp goes and changes\nthings again we can make the quick change and all is done.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Mon, 2 Sep 2002 22:40:46 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: DNS change for candle.pha.pa.us" }, { "msg_contents": "Vince Vielhaber wrote:\n> On Mon, 2 Sep 2002, Bruce Momjian wrote:\n> \n> > Vince Vielhaber wrote:\n> > > On Fri, 30 Aug 2002, Bruce Momjian wrote:\n> > >\n> > > > I have moved my main computer to my new house. The DNS is still being\n> > > > updated, but the new IP address is:\n> > > >\n> > > > \t207.106.42.251\n> > > >\n> > > > Vince, would you update the developer's page and change\n> > > > 'candle.pha.pa.us' to this fixed IP address. It may take a few days for\n> > > > the IP to completely propogate.\n> > >\n> > > Looks like someone already got it. I just got back into town and am\n> > > catching up.\n> >\n> > No problem. Marc got it. I think we can revert it back in a day or two.\n> > It takes 2-3 days to propogate and I am getting my email through the\n> > normal IP now so it must be pretty much resolved. 
Marc cleared up a\n> > misunderstanding I had of whether the IP ported for me by .US was a\n> > problem if it was wrong but if my NS's had it right.\n> \n> We can always put in a dns entry for something like candle.postgresql.org\n> and point it to whatever you have. That way if your isp goes and changes\n> things again we can make the quick change and all is done.\n\nNo, that's OK. I have things pointing to my machine like the open\npatches list and stuff. It turns out that my ISP is all I need to\nchange, and .US can do it whenever they get to it. I thought I was more\nvulnerable than I actually was.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 22:43:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: DNS change for candle.pha.pa.us" }, { "msg_contents": "Bruce Momjian wrote:\n> No, that's OK. I have things pointing to my machine like the open\n> patches list and stuff. It turns out that my ISP is all I need to\n> change, and .US can do it whenever they get to it. I thought I was more\n> vulnerable than I actually was.\n\nActually, now that I think of it, maybe it would be cleaner looking to\nhave a postgresql.org domain on those things so they look more normal\nthan pointing to my local name. What do others think?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 22:46:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: DNS change for candle.pha.pa.us" } ]
[ { "msg_contents": "on RH 7.2, 7.3, and Null Beta.\n\nHere's the important part of the diff of expected (<) and results (>):\n\n6a7\n> ERROR: Function iso8859_1_to_utf8 does not exist\n11c12\n< ERROR: conversion name \"myconv\" already exists\n---\n> ERROR: Function iso8859_1_to_utf8 does not exist\n15a17\n> ERROR: Function iso8859_1_to_utf8 does not exist\n20c22\n< ERROR: default conversion for LATIN1 to UNICODE already exists\n---\n> ERROR: Function iso8859_1_to_utf8 does not exist\n\n\nI don't understand how this part works; but it almost looks like\nsrc/backend/utils/mb/conversion_procs/conversion_create.sql isn't being\nrun?\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n", "msg_date": "30 Aug 2002 22:15:01 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "[7.3-devl] conversion test failing" } ]
[ { "msg_contents": "Hi all,\n Just briefly describe my problem.\n I have two tables.\ncreate table A(\n a1 serial primary key,\n a2 varchars(10)\n);\ncreate table B(\n b1 integer primary key,\n b2 Integer,\n foreign key(b2) references a(a1)\n)\ninsert into A values('123'); \nselect a1 from A where a2='123'\n>--\n>a1 \n>--\n>1\n>--\ninsert into B values (1,1);\nERROR!! referential integrity violation - key referenced from B not found in A.\n\nbut in table A , if I change it the PK to integer, everything would be fine.\n\nany idea?\n\nthanks a lot!\n", "msg_date": "31 Aug 2002 05:28:53 -0700", "msg_from": "leozc@cse.unsw.edu.au (Zhicong Leo Liang)", "msg_from_op": true, "msg_subject": "serial type as foreign key referential integrity violation" }, { "msg_contents": "On 31 Aug 2002 at 5:28, Zhicong Leo Liang wrote:\n\n> Hi all,\n> Just briefly describe my problem.\n> I have two tables.\n> create table A(\n> a1 serial primary key,\n> a2 varchars(10)\n\nthat should be varchar..\n\n> );\n> create table B(\n> b1 integer primary key,\n> b2 Integer,\n> foreign key(b2) references a(a1)\n> )\n> insert into A values('123'); \n> select a1 from A where a2='123'\n> >--\n> >a1 \n> >--\n> >1\n> >--\n> insert into B values (1,1);\n> ERROR!! referential integrity violation - key referenced from B not found in A.\n\nthis works.. I guess it's matter of writing a bit cleaner sql if nothing else. \nI am using postgresql-7.2-12mdk with mandrake8.2. \n\nI don't know which approach is better or correct, yours or mine. 
But this \nsolves your problems at least..\n\ntest=# select * from a;\n a1 | a2\n-----+----\n 123 |\n(1 row)\n\ntest=# insert into A(a2) values('123');\nINSERT 4863345 1\ntest=# select * from a;\n a1 | a2\n-----+-----\n 123 |\n 1 | 123\n(2 rows)\n\ntest=# insert into b(b1,b2) values(1,1);\nINSERT 4863346 1\ntest=# select * from a;\n a1 | a2\n-----+-----\n 123 |\n 1 | 123\n(2 rows)\n\ntest=# select * from b;\n b1 | b2\n----+----\n 1 | 1\n(1 row)\n\ntest=#\n\nBye\n Shridhar\n\n--\nConcept, n.:\tAny \"idea\" for which an outside consultant billed you more than\t\n$25,000.\n\n", "msg_date": "Tue, 03 Sep 2002 19:21:32 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: serial type as foreign key referential integrity violation" }, { "msg_contents": "On 31 Aug 2002, Zhicong Leo Liang wrote:\n\n> Hi all,\n> Just briefly describe my problem.\n> I have two tables.\n> create table A(\n> a1 serial primary key,\n> a2 varchars(10)\n> );\n> create table B(\n> b1 integer primary key,\n> b2 Integer,\n> foreign key(b2) references a(a1)\n> )\n> insert into A values('123');\n> select a1 from A where a2='123'\n> >--\n> >a1\n> >--\n> >1\n> >--\n\nDid you actually do that sequence and get that result?\nBecause you shouldn't. That should have put a 123 in a1 and\na NULL in a2.\nPerhaps you meant insert into a(a2) values('123');\n\n> insert into B values (1,1);\n> ERROR!! referential integrity violation - key referenced from B not found in A.\n\nIn any case doing the above (with correction) and the insert\nworks fine for me. We'll need more info.\n\n\n", "msg_date": "Tue, 3 Sep 2002 08:44:27 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: serial type as foreign key referential integrity" } ]
[ { "msg_contents": "Hello everybody,\n\nThis e-mail contains pgaccess strategic information.\n\nBest,\n\nIavor\n\n--\nwww.pgaccess.org\n\n\nPGACCESS STRATEGY IN THE EVE OF POSTGRESQL 7.3\n\n-------------------------\n1. pgaccess 0.98.8 beta 1\n-------------------------\n\nAs the new policy of PostgreSQL is to keep away everything that is not\naway*, the things that are kept away should increase their\naggressiveness in order to keep the proximity to the main object.\n\nThat is why the pgaccess development team decided to release the\npgaccess 0.98.8 beta 1 on the day when the PostgreSQL 7.3 beta is\nexpected to be released - Sunday, September the 1st.\n\nAnd to keep its releases up to the day with all PostgreSQL releases from\nnow on.\n\nWe believe this is the only way the growing pgaccess users community\nwill have reliable means to stay tuned and be up to date - having based\ncritical business applications on pgaccess and having trust to base\nmore.\n\n* Healthy.\n\n---------------\n2. One location\n---------------\n\nThere is and there is going to be one main location to draw resources\nabout pgaccess from and this is -\n\nwww.pgaccess.org\n\nThis is the place where the pgaccess 0.98.8 beta 1 will be made\navailable tomorrow.\n\nWe do not believe in losing focus, and we are not going to support other\nlocations with pgaccess related information.\n\nWe are working on building pgaccess advocacy partnerships, but these are\nhigh level strategic partnerships and not information repositories.\n\nwww.pgaccess.org is powered by conceptual wiki technology that, we\nbelieve, is the only way to advance pgaccess in this time of dynamic\nshifting of technologies and is the best demonstration of openness known\nto the web.\n\nTo those who would say that this will result in chaos I would say that\nour management is by spirit of leadership and strong vision and not by\ndead rules and dead technology.\n\n-----------------\n3. 
One dedication\n-----------------\n\npgaccess has one focus and this is - becoming the best GUI and\napplication building environment for PostgreSQL.\n\nIf somebody thinks differently - it is time to set your clock right.\n\n----------\n4. Details\n----------\n\npgaccess 0.98.8 is the first major release after 0.98.7 - more than 18\nmonths old by now, and is a result of the nice collaboration between\nseveral individuals that started out of the blue about April 2002.\n\npgaccess 0.98.8 comes with new interface where connections to multiple\nPostgreSQL databases will be possible.\n\nStill pure Tcl/Tk, pgaccess has advanced and now it works on more\nplatforms than before.\n\nPure Tcl/Tk interface is also made possible by discovering a dedicated\nTcl/Tk icons library.\n\nSo far the reports are that pgaccess works fine with PostgreSQL 7.3\n\nDuring the months till the 7.3 final release in October/November\nstrategic features will be added to pgaccess.\n\nThe new project - pgaccess BusinessExchange also takes off with at least\nthree known projects where the pgaccess code base is being used or about\nto be used as a base for serious useful applications.\n\n-----------------------\n5. Expansion is welcome\n-----------------------\n\nAs the vision behind pgaccess is to make it the best it can be (Bruce) -\nwe still do not think we have enough.\n\nBusiness application ideas (check the BusinessExchange on\nwww.pgaccess.org) and development effort (especially strategic,\ndedicated) are welcome.\n\nIf you feel like doing it, join developers@pgaccess.org and we will find\na way to help you join us.\n\n-------------------\n6. 
Do not forget...\n-------------------\n\nIn the eve of the big success I would like to bring once again one name\nin focus - Constantin Teodorescu, who built the first (original)\npgaccess and gave us the tool to find great enjoyment in.\n\nTeo, we want to thank you, your place on the pgaccess table will be\nalways kept!\n\nYours for pgaccess,\n\nIavor Raytchev\npgaccess strategic development team\n\n--\nwww.pgaccess.org\n\n", "msg_date": "Sat, 31 Aug 2002 19:55:49 +0200", "msg_from": "\"Iavor Raytchev\" <iavor.raytchev@verysmall.org>", "msg_from_op": true, "msg_subject": "pgaccess 0.98.8 beta 1 - the show starts" }, { "msg_contents": "Iavor Raytchev wrote:\n> That is why the pgaccess development team decided to release the\n> pgaccess 0.98.8 beta 1 on the day when the PostgreSQL 7.3 beta is\n> expected to be released - Sunday, September the 1st.\n\nWhile beta for PostgreSQL officially begins on Sunday, I don't think we\nwill have a PostgreSQL beta1 ready for distribution until perhaps\nWednesday of that week.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 31 Aug 2002 15:35:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [pgaccess-developers] pgaccess 0.98.8 beta 1 - the show starts" }, { "msg_contents": "----- Original Message -----\nFrom: \"Iavor Raytchev\" <iavor.raytchev@verysmall.org>\nTo: \"pgaccess - developers\" <developers@pgaccess.org>; \"pgaccess - users\"\n<users@pgaccess.org>; \"pgsql-interfaces\" <pgsql-interfaces@postgresql.org>;\n\"pgsql-hackers\" <pgsql-hackers@postgresql.org>\nSent: Saturday, August 31, 2002 8:55 PM\nSubject: [INTERFACES] pgaccess 0.98.8 beta 1 - the show starts\n\n\n> In the eve of the big success I would like to bring once again one name\n> in focus - Constantin Teodorescu, who built the first (original)\n> pgaccess and gave us the tool to find great enjoinment in.\n>\n> Teo, we want to thank you, your place on the pgaccess table will be\n> always kept!\n\nThanks a lot Iavor, and thanks to all of you who are pushing further the\ndevelopment of PgAccess.\nI am sad because I don't have any free time in order to get a hand of help\nto your efforts but I hope that in a couple of months I will be able to make\nsomething new for PgAccess 1.0 release.\n\nThanks again,\nTeo\n\n\n", "msg_date": "Mon, 2 Sep 2002 18:25:04 +0300", "msg_from": "\"Teo\" <teo@flex.ro>", "msg_from_op": false, "msg_subject": "Re: [INTERFACES] pgaccess 0.98.8 beta 1 - the show starts" } ]
[ { "msg_contents": "This is pretty chump and easy to get around, but it took me a sec to\nfigure this out. Anyway, the short and skinny being that with the new\nAutoCommit GUC turned off, create(lang|db) won't work until you bail\nout of the transaction. A quick hack would be to insert an \"ABORT;\"\nin each of the CLI tools. Anyone have any thoughts on the best way to\nsolve this? -sc\n\n-- \nSean Chittenden", "msg_date": "Sat, 31 Aug 2002 13:26:18 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "AutoCommit GUC breaks CLI tools..." }, { "msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> This is pretty chump and easy to get around, but it took me a sec to\n> figure this out. Anyway, the short and skinny being that with the new\n> AutoCommit GUC turned off, create(lang|db) won't work until you bail\n> out of the transaction. A quick hack would be to insert an \"ABORT;\"\n> in each of the CLI tools. Anyone have any thoughts on the best way to\n> solve this? -sc\n\nWell, like I was saying earlier, I think there is a *lot* of client-side\ncode that is not ready for this. I would not have thrown in the\nautocommit backend feature at all, except that we need it to run the\nNIST SQL-spec-compliance tests.\n\nThe answer for createlang is probably to throw a BEGIN and a COMMIT\naround each command --- that should make it work in either autocommit\non or off modes. We'll have to see how well that scales to other stuff.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 01:36:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: AutoCommit GUC breaks CLI tools... 
" }, { "msg_contents": "\nHow about adding 'set autocommit=on' to the top of each script?\n\n---------------------------------------------------------------------------\n\nSean Chittenden wrote:\n-- Start of PGP signed section.\n> This is pretty chump and easy to get around, but it took me a sec to\n> figure this out. Anyway, the short and skinny being that with the new\n> AutoCommit GUC turned off, create(lang|db) won't work until you bail\n> out of the transaction. A quick hack would be to insert an \"ABORT;\"\n> in each of the CLI tools. Anyone have any thoughts on the best way to\n> solve this? -sc\n> \n> -- \n> Sean Chittenden\n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 01:37:23 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: AutoCommit GUC breaks CLI tools..." } ]
[ { "msg_contents": "\nAs someone's suggestion, we are going to continue accepting patches\nthrough Sunday night, EDT, which will give us Monday to make sure all\nthe patches are in. I will have the HISTORY/release.sgml ready by then.\n\nAt that point, we can collect any other open items, like doc updates,\nand start to set a date to package beta1 for our users.\n\nI assume beta1 tarball will be packaged sometime later in the week. \nMarc, is that the schedule you had in mind?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 31 Aug 2002 16:45:24 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "7.3 beta schedule" }, { "msg_contents": "On Sat, 31 Aug 2002, Bruce Momjian wrote:\n\n>\n> As someone's suggestion, we are going to continue accepting patches\n> through Sunday night, EDT, which will give us Monday to make sure all\n> the patches are in. I will have the HISTORY/release.sgml ready by then.\n>\n> At that point, we can collect any other open items, like doc updates,\n> and start to set a date to package beta1 for our users.\n>\n> I assume beta1 tarball will be packaged sometime later in the week.\n> Marc, is that the schedule you had in mind?\n\nYup, I believe that it is the case that both the US and Canada \"celebrate\"\nlabour day on Monday, correct? If so, then let's put the freeze on\neffective 8:30amADT on Tuesday morning, at which point I'll put a tag in\nplace, and will package Beta1 up for the initial round of testing ...\n\n\n", "msg_date": "Sat, 31 Aug 2002 18:03:10 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: 7.3 beta schedule" } ]
[ { "msg_contents": "Just a reminder that once all the patches are in, I need to run\npgindent.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 31 Aug 2002 20:52:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "pgindent" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Just a reminder that once all the patches are in, I need to run\n> pgindent.\n\nDo you want to do it before beta starts? I think in the past we\nwaited till second or third beta release ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Aug 2002 21:21:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: pgindent " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Just a reminder that once all the patches are in, I need to run\n> > pgindent.\n> \n> Do you want to do it before beta starts? I think in the past we\n> waited till second or third beta release ...\n\nYes, I would like to do it before beta1 is wrapped. I can't do it\nbefore then.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 31 Aug 2002 21:31:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: pgindent" } ]
[ { "msg_contents": "Although it seems no one who's thought about it is very happy with the\ncurrent state of play for implicit casts, we have run out of time to\nimplement any real solution for 7.3.\n\nAfter looking through the archives, it seems that the only serious\nstep backwards from 7.2 behavior is Barry Lind's example:\n\n> create table test (cola bigint);\n> update test set cola = 10000000000;\n> ERROR: column \"cola\" is of type 'bigint' but expression is of type \n> 'double precision'\n\nwhich fails because the constant is initially typed as float8. To patch\nthis up, I have marked the float8->int8 cast pathway as an implicitly\ninvokable cast. This is not desirable, but it's no worse than what 7.2\nwould do.\n\nI would still like to see us reduce the number of implicit cast\npathways, but that will have to wait for 7.4 now, since there's no\nmore time for discussion about the behavior.\n\n\nIt does seem that in at least a few places, current sources behave better\nthan 7.2 did; for instance the example mentioned in TODO works:\n\n\to SELECT col FROM tab WHERE numeric_col = 10.1 fails, requires quotes\n\nregression=# explain select * FROM tab WHERE numeric_col = 10.1;\n QUERY PLAN\n-----------------------------------------------------\n Seq Scan on tab (cost=0.00..22.50 rows=5 width=32)\n Filter: (numeric_col = 10.1::numeric)\n(2 rows)\n\nHowever the behavior with bigint columns is no better than before.\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Aug 2002 21:18:08 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Current status: implicit-coercion issues for 7.3" }, { "msg_contents": "Tom Lane wrote:\n> It does seem that in at least a few places, current sources behave better\n> than 7.2 did; for instance the example mentioned in TODO works:\n> \n> \to SELECT col FROM tab WHERE numeric_col = 10.1 fails, requires quotes\n> \n> regression=# explain select * FROM tab WHERE numeric_col = 10.1;\n> QUERY PLAN\n> 
-----------------------------------------------------\n> Seq Scan on tab (cost=0.00..22.50 rows=5 width=32)\n> Filter: (numeric_col = 10.1::numeric)\n> (2 rows)\n\nTODO updated.\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 31 Aug 2002 21:32:09 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Current status: implicit-coercion issues for 7.3" } ]
[ { "msg_contents": "It says here that CREATE CAST insists the cast function be immutable.\nThis seems wrong to me, in view of the fact that we have numerous\nbuilt-in casts that don't adhere to that rule --- for example,\ntimestamptz(date) is not immutable because it depends on the timezone\nsetting.\n\nPerhaps there's a case for prohibiting volatile casts (as opposed to\nstable ones), but I don't really see it. I'd prefer to just remove\nthis restriction. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Sat, 31 Aug 2002 22:42:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "CREATE CAST requires immutable cast function?" }, { "msg_contents": "Tom Lane wrote:\n> Perhaps there's a case for prohibiting volatile casts (as opposed to\n> stable ones), but I don't really see it. I'd prefer to just remove\n> this restriction. Comments?\n\nVolatile casts can blow up. I am sure that is the reasoning. ;-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 31 Aug 2002 22:45:00 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: CREATE CAST requires immutable cast function?" }, { "msg_contents": "Tom Lane writes:\n\n> Perhaps there's a case for prohibiting volatile casts (as opposed to\n> stable ones), but I don't really see it. I'd prefer to just remove\n> this restriction. Comments?\n\nI'm not wedded to it, I just modelled it after the SQL standard, but\nevidently the volatility levels are different in detail. I would disallow\nvolatile casts in any case. 
There ought to be a minimal behavioral\ncontract between creators and users of types.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 3 Sep 2002 20:53:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: CREATE CAST requires immutable cast function?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> I'm not wedded to it, I just modelled it after the SQL standard, but\n> evidently the volatility levels are different in detail. I would disallow\n> volatile casts in any case. There ought to be a minimal behavioral\n> contract between creators and users of types.\n\nShrug ... ISTM the behavior of a type is whatever the type creator says\nit should be. Whether a volatile cast is a good idea is dubious\n(I can't think of any good examples of one offhand) but I don't see the\nargument for having the system restrict the type creator's choices.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 16:09:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: CREATE CAST requires immutable cast function? " } ]
[ { "msg_contents": "I am trying to find when WAL log files are rotated. The message is:\n\n 2002-02-11 21:18:13 DEBUG: recycled transaction log file 0000000000000005\n\nand it is printed in MoveOfflineLogs(), and MoveOfflineLogs() is only\ncalled by CreateCheckPoint(), but I can't see where CreateCheckPoint()\nis called in normal operation. I see it called by CHECKPOINT, and on\nstartup and shutdown, and from bootstrap, but where is it called during\nnormal backend operation.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sat, 31 Aug 2002 23:27:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Help with finding checkpoint code" }, { "msg_contents": "On Sat, 31 Aug 2002 23:27:08 -0400 (EDT)\nBruce Momjian <pgman@candle.pha.pa.us> wrote:\n\n> I am trying to find when WAL log files are rotated. The message is:\n> \n> 2002-02-11 21:18:13 DEBUG: recycled transaction log file 0000000000000005\n> \n> and it is printed in MoveOfflineLogs(), and MoveOfflineLogs() is only\n> called by CreateCheckPoint(), but I can't see where CreateCheckPoint()\n> is called in normal operation. I see it called by CHECKPOINT, and on\n> startup and shutdown, and from bootstrap, but where is it called during\n> normal backend operation.\n\nI see it on TruncateCLOG(), src/backend/access/transam/clog.c; that is\ncalled by vacuum code. 
Also on CheckPoinDataBase(), macro in\nsrc/backend/postmaster/postmaster.c (this is called on a periodic basis\nAFAIU)\n\nHTH\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\nOne man's impedance mismatch is another man's layer of abstraction.\n(Lincoln Yeoh)\n", "msg_date": "Sat, 31 Aug 2002 23:51:04 -0400", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: Help with finding checkpoint code" }, { "msg_contents": "It is called by a special child process of the postmaster after a\nsignal. Search for PMSIGNAL_DO_CHECKPOINT in xlog.c and in postmaster.c.\nThe checkpoint process gets started out of sigusr1_handler().\n\n\nOn Sat, 2002-08-31 at 23:27, Bruce Momjian wrote:\n> I am trying to find when WAL log files are rotated. The message is:\n> \n> 2002-02-11 21:18:13 DEBUG: recycled transaction log file 0000000000000005\n> \n> and it is printed in MoveOfflineLogs(), and MoveOfflineLogs() is only\n> called by CreateCheckPoint(), but I can't see where CreateCheckPoint()\n> is called in normal operation. I see it called by CHECKPOINT, and on\n> startup and shutdown, and from bootstrap, but where is it called during\n> normal backend operation.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n-- \nJ. R. Nield\njrnield@usol.com\n\n\n\n", "msg_date": "31 Aug 2002 23:56:12 -0400", "msg_from": "\"J. R. Nield\" <jrnield@usol.com>", "msg_from_op": false, "msg_subject": "Re: Help with finding checkpoint code" }, { "msg_contents": "\nThanks, got it.\n\n---------------------------------------------------------------------------\n\nJ. R. 
Nield wrote:\n> It is called by a special child process of the postmaster after a\n> signal. Search for PMSIGNAL_DO_CHECKPOINT in xlog.c and in postmaster.c.\n> The checkpoint process gets started out of sigusr1_handler().\n> \n> \n> On Sat, 2002-08-31 at 23:27, Bruce Momjian wrote:\n> > I am trying to find when WAL log files are rotated. The message is:\n> > \n> > 2002-02-11 21:18:13 DEBUG: recycled transaction log file 0000000000000005\n> > \n> > and it is printed in MoveOfflineLogs(), and MoveOfflineLogs() is only\n> > called by CreateCheckPoint(), but I can't see where CreateCheckPoint()\n> > is called in normal operation. I see it called by CHECKPOINT, and on\n> > startup and shutdown, and from bootstrap, but where is it called during\n> > normal backend operation.\n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > \n> -- \n> J. R. Nield\n> jrnield@usol.com\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 1 Sep 2002 18:36:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Help with finding checkpoint code" } ]
[ { "msg_contents": "Hi,\n\nany idea what we do with ecpg for the upcoming freeze? Do we use the\nbeta version of bison? If so, do we merge the branch back into main?\n\nI just synced the parsers and found one more problem. The embedded SQL\ncommands EXECUTE and PREPARE do collide with the backend commands. No\nidea so far how to fix this.\n\nMichael\n-- \nMichael Meskes\nMichael@Fam-Meskes.De\nGo SF 49ers! Go Rhein Fire!\nUse Debian GNU/Linux! Use PostgreSQL!\n", "msg_date": "Sun, 1 Sep 2002 11:30:14 +0200", "msg_from": "Michael Meskes <meskes@postgresql.org>", "msg_from_op": true, "msg_subject": "ecpg and freeze" }, { "msg_contents": "\n\nWhen is the beta freeze?\n\nI've just started looking at a ToDo list item and hope it won't take too\nlong. However, I've got other things to do and this is the first I've looked in\nthis area. An idea about time left to complete it would be good so I decide if\nI'm wasting my time and effort at the moment.\n\n\n-- \nNigel J. Andrews\nDirector\n\n---\nLogictree Systems Limited\nComputer Consultants\n\n", "msg_date": "Sun, 1 Sep 2002 18:52:30 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Impending freeze" }, { "msg_contents": "On Sun, 1 Sep 2002, Nigel J. Andrews wrote:\n\n> \n> \n> When is the beta freeze?\n\nToday.\n\nGavin\n\n", "msg_date": "Mon, 2 Sep 2002 03:58:55 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "\nOn Mon, 2 Sep 2002, Gavin Sherry wrote:\n\n> On Sun, 1 Sep 2002, Nigel J. Andrews wrote:\n> \n> > When is the beta freeze?\n> \n> Today.\n> \n\nOops, my fault for being imprecise.\n\nI was wondering what time of day with timezone. Someone suggested end of today\nbut that means different times to different people.\n\n\n--\nNigel Andrews\n\n\n", "msg_date": "Sun, 1 Sep 2002 19:17:54 +0100 (BST)", "msg_from": "\"Nigel J. 
Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "Nigel J. Andrews wrote:\n> Oops, my fault for being imprecise.\n> \n> I was wondering what time of day with timezone. Someone suggested end of today\n> but that means different times to different people.\n\nBruce's message said end of day, EDT.\n\nJoe\n\n", "msg_date": "Sun, 01 Sep 2002 11:48:15 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "\n> Oops, my fault for being imprecise.\n> \n> I was wondering what time of day with timezone. Someone suggested end of today\n> but that means different times to different people.\n\nThis is what Marc said yesterday:\n\nYup, I believe that it is the case that both the US and Canada\n\"celebrate\"\nlabour day on Monday, correct? If so, then let's put the freeze on\neffective 8:30amADT on Tuesday morning, at which point I'll put a tag in\nplace, and will package Beta1 up for the initial round of testing ...\n\n\nI don't know if that suggestion will go through or not, but I can't see\nwhy not.\n\n", "msg_date": "01 Sep 2002 14:48:57 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "On Sun, 1 Sep 2002, Nigel J. Andrews wrote:\n\n>\n> On Mon, 2 Sep 2002, Gavin Sherry wrote:\n>\n> > On Sun, 1 Sep 2002, Nigel J. Andrews wrote:\n> >\n> > > When is the beta freeze?\n> >\n> > Today.\n> >\n>\n> Oops, my fault for being imprecise.\n>\n> I was wondering what time of day with timezone. Someone suggested end of today\n> but that means different times to different people.\n\n8:30am ADT on Tuesday morning is when I'm going to freeze everything ...\nso you effectively have all day on Monday to get it in ...\n\n\n", "msg_date": "Sun, 1 Sep 2002 15:58:32 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "On Sun, 1 Sep 2002, Marc G. Fournier wrote:\n\n> On Sun, 1 Sep 2002, Nigel J. Andrews wrote:\n> \n> >\n> > On Mon, 2 Sep 2002, Gavin Sherry wrote:\n> >\n> > > On Sun, 1 Sep 2002, Nigel J. Andrews wrote:\n> > >\n> > > > When is the beta freeze?\n> > >\n> > > Today.\n> > >\n> >\n> > Oops, my fault for being imprecise.\n> >\n> > I was wondering what time of day with timezone. Someone suggested end of today\n> > but that means different times to different people.\n> \n> 8:30am ADT on Tuesday morning is when I'm going to freeze everything ...\n> so you effectively have all day on Monday to get it in ...\n\nThanks all that replied.\n\nI haven't read the messages between Friday and when I posted yet so I didn't\nknow about the Tuesday morning thing. I was not going to bother having realised\nthe silliness of trying to rush a patch into place but this sounds more\nrealistic.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Sun, 1 Sep 2002 22:20:40 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "\nI think we need the bison beta on postgresql.org, and we shipped using\nthat.\n\n\n---------------------------------------------------------------------------\n\nMichael Meskes wrote:\n> Hi,\n> \n> any idea what we do with ecpg for the upcoming freeze? Do we use the\n> beta version of bison? If so, do we merge the branch back into main?\n> \n> I just synced the parsers and found one more problem. The embedded SQL\n> commands EXECUTE and PREPARE do collide with the backend commands. No\n> idea so far how to fix this.\n> \n> Michael\n> -- \n> Michael Meskes\n> Michael@Fam-Meskes.De\n> Go SF 49ers! Go Rhein Fire!\n> Use Debian GNU/Linux! 
Use PostgreSQL!\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 1 Sep 2002 19:05:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: ecpg and freeze" }, { "msg_contents": "\nWe are definitely more organized for this beta because we are picking\nthe hour for beta start. ;-)\n\nThe issue is that we can't just close beta and roll out a beta1 tarball.\nThere will be patch questions, doc questions, and general build\nquestions that could take days to resolve before we are 100% ready for\nbeta1.\n\nNow, if you just want to roll together what we have and send out that,\nthat is fine, but there are going to be significant cleanups in beta2.\n\n---------------------------------------------------------------------------\n\nMarc G. Fournier wrote:\n> On Sun, 1 Sep 2002, Nigel J. Andrews wrote:\n> \n> >\n> > On Mon, 2 Sep 2002, Gavin Sherry wrote:\n> >\n> > > On Sun, 1 Sep 2002, Nigel J. Andrews wrote:\n> > >\n> > > > When is the beta freeze?\n> > >\n> > > Today.\n> > >\n> >\n> > Oops, my fault for being imprecise.\n> >\n> > I was wondering what time of day with timezone. 
Someone suggested end of today\n> > but that means different times to different people.\n> \n> 8:30am ADT on Tuesday morning is when I'm going to freeze everything ...\n> so you effectively have all day on Monday to get it in ...\n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 1 Sep 2002 19:12:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "On Sun, 1 Sep 2002, Bruce Momjian wrote:\n\n>\n> We are definitely more organized for this beta because we are picking\n> the hour for beta start. ;-)\n>\n> The issue is that we can't just close beta and roll out a beta1 tarball.\n> There will be patch questions, doc questions, and general build\n> questions that could take days to resolve before we are 100% ready for\n> beta1.\n>\n> Now, if you just want to roll together what we have and send out that,\n> that is fine, but there are going to be significant cleanups in beta2.\n\nThis is what I'm figuring ... if we roll and send out a beta1 right off,\nmore ppl will look at it and report bugs then if we just say 'hey, its\nfrozen, check it out' ... as I've said before, to me, beta is \"we're\ngetting ready to release, let us know what is wrong so that we can fix it\"\n... its at the RC level that we're saying we believe we have all the\nissues worked out ...\n\n\n\n\n >\n> ---------------------------------------------------------------------------\n>\n> Marc G. Fournier wrote:\n> > On Sun, 1 Sep 2002, Nigel J. 
Andrews wrote:\n> >\n> > >\n> > > On Mon, 2 Sep 2002, Gavin Sherry wrote:\n> > >\n> > > > On Sun, 1 Sep 2002, Nigel J. Andrews wrote:\n> > > >\n> > > > > When is the beta freeze?\n> > > >\n> > > > Today.\n> > > >\n> > >\n> > > Oops, my fault for being imprecise.\n> > >\n> > > I was wondering what time of day with timezone. Someone suggested end of today\n> > > but that means different times to different people.\n> >\n> > 8:30am ADT on Tuesday morning is when I'm going to freeze everything ...\n> > so you effectively have all day on Monday to get it in ...\n> >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n\n", "msg_date": "Mon, 2 Sep 2002 00:30:24 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Sun, 1 Sep 2002, Bruce Momjian wrote:\n> \n> >\n> > We are definitely more organized for this beta because we are picking\n> > the hour for beta start. ;-)\n> >\n> > The issue is that we can't just close beta and roll out a beta1 tarball.\n> > There will be patch questions, doc questions, and general build\n> > questions that could take days to resolve before we are 100% ready for\n> > beta1.\n> >\n> > Now, if you just want to roll together what we have and send out that,\n> > that is fine, but there are going to be significant cleanups in beta2.\n> \n> This is what I'm figuring ... if we roll and send out a beta1 right off,\n> more ppl will look at it and report bugs then if we just say 'hey, its\n> frozen, check it out' ... 
as I've said before, to me, beta is \"we're\n> getting ready to release, let us know what is wrong so that we can fix it\"\n> ... its at the RC level that we're saying we believe we have all the\n> issues worked out ...\n\nYes, I am just concerned we may actually adjust functionality after\nbeta1 if we haven't tied everything down, and that may be a problem.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 00:20:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "On Mon, 2 Sep 2002, Bruce Momjian wrote:\n\n> Yes, I am just concerned we may actually adjust functionality after\n> beta1 if we haven't tied everything down, and that may be a problem.\n\nWhy? Nobody should be using a beta for production anyway ... until we go\nto RC1, functionality can change if its required to fix a bug ... always\nhas been that way ...\n\n", "msg_date": "Mon, 2 Sep 2002 01:32:08 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "Marc G. Fournier wrote:\n> On Mon, 2 Sep 2002, Bruce Momjian wrote:\n> \n> > Yes, I am just concerned we may actually adjust functionality after\n> > beta1 if we haven't tied everything down, and that may be a problem.\n> \n> Why? Nobody should be using a beta for production anyway ... until we go\n> to RC1, functionality can change if its required to fix a bug ... always\n> has been that way ...\n\nOh, OK. We will package it as best was can for Tuesday.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 00:34:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "\"Marc G. Fournier\" <scrappy@hub.org> writes:\n>> Now, if you just want to roll together what we have and send out that,\n>> that is fine, but there are going to be significant cleanups in beta2.\n\n> This is what I'm figuring ... if we roll and send out a beta1 right off,\n> more ppl will look at it and report bugs then if we just say 'hey, its\n> frozen, check it out' ... as I've said before, to me, beta is \"we're\n> getting ready to release, let us know what is wrong so that we can fix it\"\n> ... its at the RC level that we're saying we believe we have all the\n> issues worked out ...\n\nMy two cents: once we ship beta1 we should try really really hard to\navoid forcing an initdb cycle before final release. We can make all\nthe portability fixes and code fixes we like, but we have to avoid\ndisk-file-contents changes and system catalog changes. If we force\nan initdb then we're penalizing beta testers who might have been foolish\nenough to load large databases into the beta version --- and yeah, there\nwere no guarantees, but will they do it again next beta cycle?\n\nSo my take is that anything that needs initdb doesn't get in after\nbeta1, unless it's a \"must fix\" bug. What have we got in the queue\nthat would require system catalog changes?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 01:18:50 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Impending freeze " }, { "msg_contents": "On Mon, 02 Sep 2002 01:20:51 -0400, Tom Lane wrote:\n> \n> My two cents: once we ship beta1 we should try really really hard to\n> avoid forcing an initdb cycle before final release. 
We can make all the\n> portability fixes and code fixes we like, but we have to avoid\n> disk-file-contents changes and system catalog changes. If we force an\n> initdb then we're penalizing beta testers who might have been foolish\n> enough to load large databases into the beta version --- and yeah, there\n> were no guarantees, but will they do it again next beta cycle?\n\nWell, speaking as a user and tester, I don't mind an initdb between beta\nreleases -- and I'm planning on loading ~8GB of data. I've been playing\nwith the CVS tip all week using a ~2GB dataset, and having to initdb\nevery other day or so. No biggie, and I'm getting to test pg_dump, as a\nbonus! ;-)\n\nI suppose some delicate souls would be put off by an initdb, or get sore\nat experiencing the reason Beta != Production, but I think the serious\nusers will prefer that you developers have a free hand to do whatever is\nnecessary to make 7.3 as good as it can be.\n\nMy US$0.02,\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n", "msg_date": "Mon, 02 Sep 2002 02:04:48 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "\nYes, I think the key issue is whether we have a better chance of\npreventing radical changes during beta if we go a few days during code\nfreeze to make sure everything is bolted down.\n\nMarc doesn't want to do that, so we won't.\n\n---------------------------------------------------------------------------\n\nGordon Runkle wrote:\n> On Mon, 02 Sep 2002 01:20:51 -0400, Tom Lane wrote:\n> > \n> > My two cents: once we ship beta1 we should try really really hard to\n> > avoid forcing an initdb cycle before final release. We can make all the\n> > portability fixes and code fixes we like, but we have to avoid\n> > disk-file-contents changes and system catalog changes. 
If we force an\n> > initdb then we're penalizing beta testers who might have been foolish\n> > enough to load large databases into the beta version --- and yeah, there\n> > were no guarantees, but will they do it again next beta cycle?\n> \n> Well, speaking as a user and tester, I don't mind an initdb between beta\n> releases -- and I'm planning on loading ~8GB of data. I've been playing\n> with the CVS tip all week using a ~2GB dataset, and having to initdb\n> every other day or so. No biggie, and I'm getting to test pg_dump, as a\n> bonus! ;-)\n> \n> I suppose some delicate souls would be put off by an initdb, or get sore\n> at experiencing the reason Beta != Production, but I think the serious\n> users will prefer that you developers have a free hand to do whatever is\n> necessary to make 7.3 as good as it can be.\n> \n> My US$0.02,\n> \n> Gordon.\n> -- \n> \"Far and away the best prize that life has to offer\n> is the chance to work hard at work worth doing.\"\n> -- Theodore Roosevelt\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 02:11:50 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Impending freeze" }, { "msg_contents": "On Mon, 2 Sep 2002, Tom Lane wrote:\n\n> So my take is that anything that needs initdb doesn't get in after\n> beta1, unless it's a \"must fix\" bug. What have we got in the queue\n> that would require system catalog changes?\n\nAgreed, but, have we ever done such when it *wasn't* required? 
I know in\nthe past it's always been with hesitation that such has been done, or at\nleast that is how I've felt it come across ...\n\n\n", "msg_date": "Mon, 2 Sep 2002 18:55:22 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: Impending freeze " } ]
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql-server\nChanges by:\tmomjian@postgresql.org\t02/09/01 19:26:06\n\nModified files:\n\tdoc/src/sgml : func.sgml runtime.sgml \n\tsrc/backend/tcop: postgres.c \n\tsrc/backend/utils/misc: guc.c postgresql.conf.sample \n\tsrc/bin/psql : tab-complete.c \n\tsrc/include/utils: guc.h \n\nLog message:\n\tAdd log_duration to GUC/postgresql.conf.\n\t\n\tRename debug_print_query to log_statement and rename show_query_stats to\n\tshow_statement_stats.\n\n", "msg_date": "Sun, 1 Sep 2002 19:26:06 -0400 (EDT)", "msg_from": "momjian@postgresql.org (Bruce Momjian - CVS)", "msg_from_op": true, "msg_subject": "pgsql-server/ oc/src/sgml/func.sgml oc/src/sgm ..." }, { "msg_contents": "Do we have log_username as an option? Would log the username that tried to\nexecute each query?\n\nWould be a useful option!\n\nChris\n\n> -----Original Message-----\n> From: pgsql-committers-owner@postgresql.org\n> [mailto:pgsql-committers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> - CVS\n> Sent: Monday, 2 September 2002 7:26 AM\n> To: pgsql-committers@postgresql.org\n> Subject: [COMMITTERS] pgsql-server/ oc/src/sgml/func.sgml oc/src/sgm ...\n>\n>\n> CVSROOT:\t/cvsroot\n> Module name:\tpgsql-server\n> Changes by:\tmomjian@postgresql.org\t02/09/01 19:26:06\n>\n> Modified files:\n> \tdoc/src/sgml : func.sgml runtime.sgml\n> \tsrc/backend/tcop: postgres.c\n> \tsrc/backend/utils/misc: guc.c postgresql.conf.sample\n> \tsrc/bin/psql : tab-complete.c\n> \tsrc/include/utils: guc.h\n>\n> Log message:\n> \tAdd log_duration to GUC/postgresql.conf.\n>\n> \tRename debug_print_query to log_statement and rename\n> show_query_stats to\n> \tshow_statement_stats.\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n>\n\n", "msg_date": "Mon, 2 Sep 2002 09:38:27 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: 
[COMMITTERS] pgsql-server/ oc/src/sgml/func.sgml oc/src/sgm ..." }, { "msg_contents": "\nNo, we don't. Right now you have to add log_connections, then link the\nquery pid back to the user connection. Not sure how else we would do it\nbecause log_duration has to do the same linking. Ideas?\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Do we have log_username as an option? Would log the username that tried to\n> execute each query?\n> \n> Would be a useful option!\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-committers-owner@postgresql.org\n> > [mailto:pgsql-committers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > - CVS\n> > Sent: Monday, 2 September 2002 7:26 AM\n> > To: pgsql-committers@postgresql.org\n> > Subject: [COMMITTERS] pgsql-server/ oc/src/sgml/func.sgml oc/src/sgm ...\n> >\n> >\n> > CVSROOT:\t/cvsroot\n> > Module name:\tpgsql-server\n> > Changes by:\tmomjian@postgresql.org\t02/09/01 19:26:06\n> >\n> > Modified files:\n> > \tdoc/src/sgml : func.sgml runtime.sgml\n> > \tsrc/backend/tcop: postgres.c\n> > \tsrc/backend/utils/misc: guc.c postgresql.conf.sample\n> > \tsrc/bin/psql : tab-complete.c\n> > \tsrc/include/utils: guc.h\n> >\n> > Log message:\n> > \tAdd log_duration to GUC/postgresql.conf.\n> >\n> > \tRename debug_print_query to log_statement and rename\n> > show_query_stats to\n> > \tshow_statement_stats.\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 4: Don't 'kill -9' the postmaster\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts 
Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 1 Sep 2002 21:43:58 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/ oc/src/sgml/func.sgml oc/src/sgm" } ]
[ { "msg_contents": "Hello All,\n\nI'm new to this list...and PostgreSQL. I've got my server all up and \nrunning...and I am porting an application that had been using MSSQL as \nits backend.\n\nWhen I use my app to browse around (i.e., SELECTS), everything is \nbehaving nicely. I am not having such luck with UPDATEs.\n\nI am using the [ODBC] CRecordset class as the base class of my access \nclasses. They are foreign variables of CRecordView-derived classes (the \nusual way of going about things). When I change data on the form and \nclick apply, I get a strange behavior:\n\n(1) The framework pops up an error message: \"Update or Delete failed.\"\n This error is dismissable by clicking \"OK\".\n\n(2) In reality, the update did NOT fail. It is changed in the back-end.\n\n\nThings to note: This all worked just fine in MSSQL...so it isn't a \nproblem in my code. My guess is that one of the two ODBC drivers is \nnon-conformant.\n\nMore details I've uncovered:\n\nThis error is being raised from execution of the following code:\n\n// This should only fail if SQLSetPos returned\n// SQL_SUCCESS_WITH_INFO explaining why\n\nif (nRetCode == SQL_SUCCESS_WITH_INFO &&\n GetRowStatus(1) != wExpectedRowStatus)\n ThrowDBException(AFX_SQL_ERROR_UPDATE_DELETE_FAILED);\n\nThis code lives in CRecordset::ExecuteSetPosUpdate(); it is invoked to \ncheck the return values after the call to:\n\nAFX_ODBC_CALL(::SQLSetPos(m_hstmt, 1, wPosOption, SQL_LOCK_NO_CHANGE));\n\n\nWhen MSSQL returns, nRetCode is set to 0 (SQL_SUCCESS). The call \nGetRowStatus(1) will just return m_rgRowStatus[0]. With MSSQL, the \nvalue of m_rgRowStatus[0] is 0 (SQL_ROW_SUCCESS) before the SQLSetPos \ncall and 2 afterward (SQL_ROW_UPDATED). Because nRetCode is not \nSQL_SUCCESS_WITH_INFO, the first part of the conditional fails and the \nexception is NOT thrown.\n\nWhen PostgreSQL returns, nRetCode is set to 1 (SQL_SUCCESS_WITH_INFO). 
\nWith PostgreSQL, the value of m_rgRowStatus[0] is 0 (SQL_ROW_SUCCESS) \nbefore the SQLSetPos call and IS STILL 0 afterward (SQL_ROW_SUCCESS). \nBecause nRetCode is SQL_SUCCESS_WITH_INFO (as opposed to plain old \nSQL_SUCCESS), the first part of the conditional is met. The second part \nis now checked. Because this is an update, the preliminary work done by \nthis function sets wExpectedRowStatus to 2 (SQL_ROW_UPDATED). As noted \nabove, PostgreSQL is leaving the value of m_rgRowStatus[0] as 0 \n(SQL_ROW_SUCCESS).\n\nBecause the MS ODBC driver is correctly setting m_rgRowStatus[0] to 2 \n(SQL_ROW_UPDATED) after the call to SQLSetPos [even though it doesn't \nmatter because it never gets checked since it only returns SQL_SUCCESS \nwithout the extra _WITH_INFO], and because it works, I assume this is an \nerror in the psqlodbc driver.\n\nMore detailed info (versions, debug dump, etc. can be found below this \nmessage). I'll be happy to provide any additional info needed.\n\nI downloaded the driver source and started looking around at it. The \nfunction that I believe is getting called [SC_pos_update()] will, if \nsuccessful, always return SQL_SUCCESS_WITH_INFO. Is that the \nappropriate behavior? Why not just return the simple SQL_SUCCESS?\n\nI have NEVER looked at the internals of ODBC before. This is starting \nto give me a headache...though it is making me think about getting \nactively involved with development of the driver [;)] Before I dig any \ndeeper...can anyone think of something simple that I may be doing wrong? 
\nPerhaps a driver setting (though I've tried most of the possible \ncombinations of things that sound like they could affect this)?\n\nAs an aside, can anyone tell me what I need to do to be able to debug \nthe driver DLL live using Visual Studio 6?\n\nTake Care,\n\n-Joe Papavero\n\n\n------------------------------------------------------------------\n\nPostgreSQL 7.2.1 on i686-pc-linux-gnu, compiled by GCC 2.96\npsqlodbc Driver version: 7.2.2\n\n\n...\n[SQLSetPos]PGAPI_SetPos fOption=2 irow=1 lock=0 currt=1\nPGAPI_SetConnectOption: entering fOption = 102 vParam = 0\nPGAPI_SetConnectOption: AUTOCOMMIT: transact_status=1, vparam=0\nPOS UPDATE 0+0 fi=14fecf0 ti=14feba0\n0 used=-6,7cb114\n1 used=7,7cb128\nupdstr=update \"ICT_Campuses\" set \"campusName\" = ? where ctid = '(0, 30)' \nand oid = 16809\nPGAPI_AllocStmt: entering...\n**** PGAPI_AllocStmt: hdbc = 20857408, stmt = 22020544\nCC_add_statement: self=20857408, stmt=22020544\n0 used=-6\n1 used=7\nPGAPI_BindParameter: entering...\nextend_parameter_bindings: entering ... self=22020644, \nparameters_allocated=0, num_params=1\nexit extend_parameter_bindings\nPGAPI_BindParamater: ipar=0, paramType=1, fCType=1, fSqlType=12, \ncbColDef=100, ibScale=-1, rgbValue=8177964, *pcbValue = 7, data_at_exec = 0\nPGAPI_ExecDirect: entering...\n**** PGAPI_ExecDirect: hstmt=22020544, statement='update \"ICT_Campuses\" \nset \"campusName\" = ? where ctid = '(0, 30)' and oid = 16809'\nPGAPI_ExecDirect: calling PGAPI_Execute...\nPGAPI_Execute: entering...\nPGAPI_Execute: clear errors...\nrecycle statement: self= 22020544\nPGAPI_Execute: copying statement params: trans_status=0, len=81, \nstmt='update \"ICT_Campuses\" set \"campusName\" = ? 
where ctid = '(0, 30)' \nand oid = 16809'\nResolveOneParam: from(fcType)=1, to(fSqlType)=12\n stmt_with_params = 'update \"ICT_Campuses\" set \"campusName\" = \n'Branch2' where ctid = '(0, 30)' and oid = 16809'\n about to begin a transaction on statement = 22020544\n it's NOT a select statement: stmt=22020544\nsend_query(): conn=20857408, query='update \"ICT_Campuses\" set \n\"campusName\" = 'Branch2' where ctid = '(0, 30)' and oid = 16809'\nsend_query: done sending query\nin QR_Constructor\nexit QR_Constructor\nread 25, global_socket_buffersize=4096\nsend_query: got id = 'C'\nsend_query: ok - 'C' - BEGIN\nsend_query: setting cmdbuffer = 'BEGIN'\nsend_query: got id = 'P'\nsend_query: got id = 'C'\nsend_query: ok - 'C' - UPDATE 1\nsend_query: setting cmdbuffer = 'UPDATE 1'\nsend_query: returning res = 22028160\nsend_query: got id = 'Z'\nPGAPI_ExecDirect: returned 0 from PGAPI_Execute\npositioned load fi=14fecf0 ti=14feba0\n...\n\n", "msg_date": "Sun, 01 Sep 2002 19:36:40 -0500", "msg_from": "\"Giuseppe R. Papavero\" <papavero@incutronics.com>", "msg_from_op": true, "msg_subject": "Possible ODBC Driver Bug (via VC++/MFC CRecordset)" } ]
[ { "msg_contents": "Right now, I am going through my email box trying to resolve any open\npatches or items. Once I am done I will let people know. Hopefully\ntomorrow we can deal with cleanups and documentation, as well as\nanything we missed.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Sun, 1 Sep 2002 22:28:34 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "7.3 status" } ]
[ { "msg_contents": "Up to now, if you created a table with\n\n\tSELECT ... INTO foo FROM ...\n\nthen the new table \"foo\" would have OIDs.\n\nAs of CVS tip I have changed this to create a table without OIDs.\nI'd have preferred not to make such a change at the last minute,\nbut the hack we had in place was quite broken. (InitPlan() was\ntrying to back-patch a decision to include OID header space into\nan already-created plan tree. This did not work in any but the\nsimplest cases.)\n\nIf anyone is really annoyed about this, we could probably find a\nsolution; but I'm not inclined to expend effort on it unless there's\nsomeone out there who's seriously unhappy. Comments?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 00:55:58 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "SELECT INTO vs. OIDs" }, { "msg_contents": "On Mon, 2 Sep 2002, Tom Lane wrote:\n\n> As of CVS tip I have changed this to create a table without OIDs.\n> ...\n> If anyone is really annoyed about this, we could probably find a\n> solution; but I'm not inclined to expend effort on it unless there's\n> someone out there who's seriously unhappy. Comments?\n\nI actually prefer the new behaviour to the old.\n\ncjs\n-- \nCurt Sampson <cjs@cynic.net> +81 90 7737 2974 http://www.netbsd.org\n Don't you know, in this new Dark Age, we're all light. --XTC\n\n", "msg_date": "Mon, 2 Sep 2002 13:57:02 +0900 (JST)", "msg_from": "Curt Sampson <cjs@cynic.net>", "msg_from_op": false, "msg_subject": "Re: SELECT INTO vs. OIDs" } ]
[ { "msg_contents": "How do I get a list of what tsearch considers a stop word?\n\neg. 'and', 'or', 'the', 'up', 'down', etc. There seem to be heaps of\nthem...!\n\nChris\n\n", "msg_date": "Mon, 2 Sep 2002 12:57:47 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "tsearch stop words" }, { "msg_contents": "Ummm...I totally don't understand the format of that array at all!\n\nstatic ESWNODE engstoptree[] = {\n {'m',L,9,126},\n {'d',L,4,71},\n {'b',L,2,40},\n {'a',F,0,14},\n {'c',0,0,62},\n {'f',L,2,79},\n {'e',0,0,75},\n {'h',0,1,90},\n {'i',F,0,108},\n {'t',L,4,177},\n {'o',L,2,135},\n {'n',0,0,131},\n {'s',0,0,156},\n\nHow do I figure out the actual word it's matching?\n\nChris\n\n\n> -----Original Message-----\n> From: Oleg Bartunov [mailto:oleg@sai.msu.su]\n> Sent: Monday, 2 September 2002 5:33 PM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] tsearch stop words\n>\n>\n> Christopher,\n>\n> current implementation is ugly, we still didn't move functionality\n> from OpenFTS to tsearch. Look at makedict subdirectory to create your\n> custom dictionary. Default list is in engstoptree[] defined\n> in dic/porter_english.dct\n>\n> On Mon, 2 Sep 2002, Christopher Kings-Lynne wrote:\n>\n> > How do I get a list of what tsearch considers a stop word?\n> >\n> > eg. 'and', 'or', 'the', 'up', 'down', etc. 
There seem to be heaps of\n> > them...!\n> >\n> > Chris\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 2: you can get off all lists at once with the unregister command\n> > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> >\n>\n> \tRegards,\n> \t\tOleg\n> _____________________________________________________________\n> Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> Sternberg Astronomical Institute, Moscow University (Russia)\n> Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> phone: +007(095)939-16-83, +007(095)939-23-83\n>\n\n", "msg_date": "Mon, 2 Sep 2002 16:49:19 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: tsearch stop words" }, { "msg_contents": "Christopher,\n\ncurrent implementation is ugly, we still didn't move functionality\nfrom OpenFTS to tsearch. Look at makedict subdirectory to create your\ncustom dictionary. Default list is in engstoptree[] defined\nin dic/porter_english.dct\n\nOn Mon, 2 Sep 2002, Christopher Kings-Lynne wrote:\n\n> How do I get a list of what tsearch considers a stop word?\n>\n> eg. 'and', 'or', 'the', 'up', 'down', etc. 
There seem to be heaps of\n> them...!\n>\n> Chris\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83\n\n", "msg_date": "Mon, 2 Sep 2002 12:32:38 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: tsearch stop words" }, { "msg_contents": "On Mon, 2 Sep 2002, Christopher Kings-Lynne wrote:\n\n> Ummm...I totally don't understand the format of that array at all!\n\nstop words in suffix tree :) I've told it's default list of stop\nwords and it's certainly far from perfect. Please find attached file\nwith stop words we used (I just found on my notebook). 
But again,\nit's better to build your own dictionary specific for you domain,\nusing makedict script.\n\nOpenFTS is much more flexible in this respect and we hope we'd be able\nto implement most features of OpenFTS in tsearch, so OpenFTS would be just\na perl wrapper.\n\n\tOleg\n\n\n>\n> static ESWNODE engstoptree[] = {\n> {'m',L,9,126},\n> {'d',L,4,71},\n> {'b',L,2,40},\n> {'a',F,0,14},\n> {'c',0,0,62},\n> {'f',L,2,79},\n> {'e',0,0,75},\n> {'h',0,1,90},\n> {'i',F,0,108},\n> {'t',L,4,177},\n> {'o',L,2,135},\n> {'n',0,0,131},\n> {'s',0,0,156},\n>\n> How do I figure out the actual word it's matching?\n>\n> Chris\n>\n>\n> > -----Original Message-----\n> > From: Oleg Bartunov [mailto:oleg@sai.msu.su]\n> > Sent: Monday, 2 September 2002 5:33 PM\n> > To: Christopher Kings-Lynne\n> > Cc: Hackers\n> > Subject: Re: [HACKERS] tsearch stop words\n> >\n> >\n> > Christopher,\n> >\n> > current implementation is ugly, we still didn't move functionality\n> > from OpenFTS to tsearch. Look at makedict subdirectory to create your\n> > custom dictionary. Default list is in engstoptree[] defined\n> > in dic/porter_english.dct\n> >\n> > On Mon, 2 Sep 2002, Christopher Kings-Lynne wrote:\n> >\n> > > How do I get a list of what tsearch considers a stop word?\n> > >\n> > > eg. 'and', 'or', 'the', 'up', 'down', etc. 
There seem to be heaps of\n> > > them...!\n> > >\n> > > Chris\n> > >\n> > >\n> > > ---------------------------(end of broadcast)---------------------------\n> > > TIP 2: you can get off all lists at once with the unregister command\n> > > (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> > >\n> >\n> > \tRegards,\n> > \t\tOleg\n> > _____________________________________________________________\n> > Oleg Bartunov, sci.researcher, hostmaster of AstroNet,\n> > Sternberg Astronomical Institute, Moscow University (Russia)\n> > Internet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\n> > phone: +007(095)939-16-83, +007(095)939-23-83\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n\tRegards,\n\t\tOleg\n_____________________________________________________________\nOleg Bartunov, sci.researcher, hostmaster of AstroNet,\nSternberg Astronomical Institute, Moscow University (Russia)\nInternet: oleg@sai.msu.su, http://www.sai.msu.su/~megera/\nphone: +007(095)939-16-83, +007(095)939-23-83", "msg_date": "Mon, 2 Sep 2002 19:31:14 +0300 (GMT)", "msg_from": "Oleg Bartunov <oleg@sai.msu.su>", "msg_from_op": false, "msg_subject": "Re: tsearch stop words" } ]
[ { "msg_contents": "We're offering a small reward for a PG hacker that can code up a\nmysqldiff-like utility for PG. For those unfamiliar with mysqldiff:\nhttp://adamspiers.org/computing/mysqldiff/\n\nCreating something similar for PG is slightly more involved (because\nof the ref. integrity issues, among others...), but it would certainly\nbe useful.\n\nTo get more details, stop by:\nhttp://www.wolfbioscience.com/pgdiff/\n\nThanks for the help!\n", "msg_date": "1 Sep 2002 22:20:07 -0700", "msg_from": "adwolf1@yahoo.com (ad wolf)", "msg_from_op": true, "msg_subject": "Wanted: pgdiff ($$$)" }, { "msg_contents": "adwolf1@yahoo.com (ad wolf) writes:\n\n> We're offering a small reward for a PG hacker that can code up a\n> mysqldiff-like utility for PG. For those unfamiliar with mysqldiff:\n> http://adamspiers.org/computing/mysqldiff/\n\nYou might want to check out the perl model Alzabo, I think it's capable of\ndoing this.\n\n-- \ngreg\n\n", "msg_date": "03 Sep 2002 12:33:24 -0400", "msg_from": "Greg Stark <gsstark@mit.edu>", "msg_from_op": false, "msg_subject": "Re: Wanted: pgdiff ($$$)" }, { "msg_contents": "> You might want to check out the perl model Alzabo, I think it's capable of\n> doing this.\n\nIt does not (yet) support foreign keys, alas.\n\nBut if anybody likes to code this, maybe they could work together with the \nAlzabo developer.\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 �ben 12.00-18.00 Email: kar@webline.dk\n2000 Frederiksberg L�rdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Tue, 3 Sep 2002 18:41:55 +0200", "msg_from": "Kaare Rasmussen <kar@kakidata.dk>", "msg_from_op": false, "msg_subject": "Re: Wanted: pgdiff ($$$)" }, { "msg_contents": "Hi all,\n\nJust a link to this from the front page of the techdocs.postgresql.org\nsite.\n\nHope it helps.\n\n:-)\n\nRegards and best wishes,\n\nJustin Clift\n\n\nGreg Stark wrote:\n> \n> adwolf1@yahoo.com (ad wolf) writes:\n> \n> > We're 
offering a small reward for a PG hacker that can code up a\n> > mysqldiff-like utility for PG. For those unfamiliar with mysqldiff:\n> > http://adamspiers.org/computing/mysqldiff/\n> \n> You might want to check out the perl model Alzabo, I think it's capable of\n> doing this.\n> \n> --\n> greg\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n\"My grandfather once told me that there are two kinds of people: those\nwho work and those who take the credit. He told me to try to be in the\nfirst group; there was less competition there.\"\n - Indira Gandhi\n", "msg_date": "Wed, 04 Sep 2002 02:53:18 +1000", "msg_from": "Justin Clift <justin@postgresql.org>", "msg_from_op": false, "msg_subject": "Re: Wanted: pgdiff ($$$)" } ]
[ { "msg_contents": "I'm still getting conversion test failures on RH 7.2, 7.3, and Null beta.\n\nMy confiugre arguments are:\n\n./configure --prefix=/opt/postgresql --with-java --with-python --with-openssl --enable-syslog --enable-debug --enable-cassert\n--enable-depend\n\nIt appears that the functions are not being loaded into the regression\ndatabase:\n\n--\n-- create user defined conversion\n--\nCREATE USER foo WITH NOCREATEDB NOCREATEUSER;\nSET SESSION AUTHORIZATION foo;\nCREATE CONVERSION myconv FOR 'LATIN1' TO 'UNICODE' FROM iso8859_1_to_utf8;\nERROR: Function iso8859_1_to_utf8 does not exist\n--\n-- cannot make same name conversion in same schema\n--\nCREATE CONVERSION myconv FOR 'LATIN1' TO 'UNICODE' FROM iso8859_1_to_utf8;\nERROR: Function iso8859_1_to_utf8 does not exist\n--\n-- create default conversion with qualified name\n--\nCREATE DEFAULT CONVERSION public.mydef FOR 'LATIN1' TO 'UNICODE' FROM iso8859_1_to_utf8;\nERROR: Function iso8859_1_to_utf8 does not exist\n--\n-- cannot make default conversion with same shcema/for_encoding/to_encoding\n--\nCREATE DEFAULT CONVERSION public.mydef2 FOR 'LATIN1' TO 'UNICODE' FROM iso8859_1_to_utf8;\nERROR: Function iso8859_1_to_utf8 does not exist\n--\n-- drop user defined conversion\n--\nDROP CONVERSION myconv;\nERROR: conversion myconv not found\nDROP CONVERSION mydef;\nERROR: conversion mydef not found\n--\n\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n", "msg_date": "Mon, 02 Sep 2002 02:26:54 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "[7.3-devl] converision test fails" }, { "msg_contents": "> I'm still getting conversion test failures on RH 7.2, 7.3, and Null beta.\n> \n> My confiugre arguments are:\n> \n> ./configure --prefix=/opt/postgresql --with-java --with-python --with-openssl --enable-syslog --enable-debug --enable-cassert\n> 
--enable-depend\n\nYou need not to specify --enable-syslog in 7.3 BTW.\n\n> It appears that the functions are not being loaded into the regression\n> database:\n\nThis happens because the path to shared objs are defined at the\ncompile time. I think you don't get the failure once you install\nPostgreSQL. However it's not convenience since the parallel regression\ntest is designed so that it could be executed without the installation\nprocess. I have committed fix for this problem. Please try it again.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 02 Sep 2002 22:35:58 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: [7.3-devl] converision test fails" }, { "msg_contents": "On Mon, 2002-09-02 at 09:35, Tatsuo Ishii wrote:\n> You need not to specify --enable-syslog in 7.3 BTW.\n\nOK, thanks.\n\n> This happens because the path to shared objs are defined at the\n> compile time. I think you don't get the failure once you install\n> PostgreSQL. However it's not convenience since the parallel regression\n> test is designed so that it could be executed without the installation\n> process. I have committed fix for this problem. Please try it again.\n\nThanks, this did the trick!\n\nGordon.\n-- \n\"Far and away the best prize that life has to offer\n is the chance to work hard at work worth doing.\"\n -- Theodore Roosevelt\n\n\n", "msg_date": "02 Sep 2002 12:29:17 -0400", "msg_from": "Gordon Runkle <gar@integrated-dynamics.com>", "msg_from_op": true, "msg_subject": "Re: [7.3-devl] converision test fails" } ]
[ { "msg_contents": "CVSROOT:\t/cvsroot\nModule name:\tpgsql-server\nChanges by:\tmomjian@postgresql.org\t02/09/02 02:27:05\n\nModified files:\n\tcontrib/fulltextindex: README.fti fti.c fti.sql.in \nAdded files:\n\tcontrib/fulltextindex: WARNING uninstall.sql \n\nLog message:\n\tIn case Florian and I don't finish his changes to this contrib before\n\tbeta, at least get this stuff in.\n\t\n\tftipatch.txt - Updates to docs and scripts. Run in the fulltextindexdir\n\tWARNING - Add to fulltextindex dir\n\tuninstall.sql - Add to fulltextindex dir\n\n", "msg_date": "Mon, 2 Sep 2002 02:27:05 -0400 (EDT)", "msg_from": "momjian@postgresql.org (Bruce Momjian - CVS)", "msg_from_op": true, "msg_subject": "pgsql-server/contrib/fulltextindex README.fti ..." }, { "msg_contents": "Thanks - what do people think of the WARNING file? Please have a read and\ncomment perhaps?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-committers-owner@postgresql.org\n> [mailto:pgsql-committers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> - CVS\n> Sent: Monday, 2 September 2002 2:27 PM\n> To: pgsql-committers@postgresql.org\n> Subject: [COMMITTERS] pgsql-server/contrib/fulltextindex README.fti ...\n>\n>\n> CVSROOT:\t/cvsroot\n> Module name:\tpgsql-server\n> Changes by:\tmomjian@postgresql.org\t02/09/02 02:27:05\n>\n> Modified files:\n> \tcontrib/fulltextindex: README.fti fti.c fti.sql.in\n> Added files:\n> \tcontrib/fulltextindex: WARNING uninstall.sql\n>\n> Log message:\n> \tIn case Florian and I don't finish his changes to this\n> contrib before\n> \tbeta, at least get this stuff in.\n>\n> \tftipatch.txt - Updates to docs and scripts. 
Run in the\n> fulltextindexdir\n> \tWARNING - Add to fulltextindex dir\n> \tuninstall.sql - Add to fulltextindex dir\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Mon, 2 Sep 2002 14:30:19 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/contrib/fulltextindex README.fti ..." }, { "msg_contents": "\nWARNING does seem like a strong word, though I can't come up with a word\nthat would make sure they read it, maybe SUGGESTION?\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Thanks - what do people think of the WARNING file? Please have a read and\n> comment perhaps?\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-committers-owner@postgresql.org\n> > [mailto:pgsql-committers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > - CVS\n> > Sent: Monday, 2 September 2002 2:27 PM\n> > To: pgsql-committers@postgresql.org\n> > Subject: [COMMITTERS] pgsql-server/contrib/fulltextindex README.fti ...\n> >\n> >\n> > CVSROOT:\t/cvsroot\n> > Module name:\tpgsql-server\n> > Changes by:\tmomjian@postgresql.org\t02/09/02 02:27:05\n> >\n> > Modified files:\n> > \tcontrib/fulltextindex: README.fti fti.c fti.sql.in\n> > Added files:\n> > \tcontrib/fulltextindex: WARNING uninstall.sql\n> >\n> > Log message:\n> > \tIn case Florian and I don't finish his changes to this\n> > contrib before\n> > \tbeta, at least get this stuff in.\n> >\n> > \tftipatch.txt - Updates to docs and scripts. 
Run in the\n> > fulltextindexdir\n> > \tWARNING - Add to fulltextindex dir\n> > \tuninstall.sql - Add to fulltextindex dir\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> >\n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 02:34:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/contrib/fulltextindex README.fti" }, { "msg_contents": "Well WARNING is kinda standard - I have seen the like in other projects.\nWell, at least this way they'll read it and then hopefully port over to\ntsearch or at least they can make an informed decision to stick with\nfulltextindex. It is pretty crummy tho - they shouldn't be using it.\n\nNOTICE might be another idea. 
But really, if the feeling is that it might\nbe removed in the future, then WARNING is correct.\n\nChris\n\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Monday, 2 September 2002 2:35 PM\n> To: Christopher Kings-Lynne\n> Cc: Hackers\n> Subject: Re: [HACKERS] [COMMITTERS] pgsql-server/contrib/fulltextindex\n> README.fti\n>\n>\n>\n> WARNING does seem like a strong word, though I can't come up with a word\n> that would make sure they read it, maybe SUGGESTION?\n>\n> ------------------------------------------------------------------\n> ---------\n>\n> Christopher Kings-Lynne wrote:\n> > Thanks - what do people think of the WARNING file? Please have\n> a read and\n> > comment perhaps?\n> >\n> > Chris\n> >\n> > > -----Original Message-----\n> > > From: pgsql-committers-owner@postgresql.org\n> > > [mailto:pgsql-committers-owner@postgresql.org]On Behalf Of\n> Bruce Momjian\n> > > - CVS\n> > > Sent: Monday, 2 September 2002 2:27 PM\n> > > To: pgsql-committers@postgresql.org\n> > > Subject: [COMMITTERS] pgsql-server/contrib/fulltextindex\n> README.fti ...\n> > >\n> > >\n> > > CVSROOT:\t/cvsroot\n> > > Module name:\tpgsql-server\n> > > Changes by:\tmomjian@postgresql.org\t02/09/02 02:27:05\n> > >\n> > > Modified files:\n> > > \tcontrib/fulltextindex: README.fti fti.c fti.sql.in\n> > > Added files:\n> > > \tcontrib/fulltextindex: WARNING uninstall.sql\n> > >\n> > > Log message:\n> > > \tIn case Florian and I don't finish his changes to this\n> > > contrib before\n> > > \tbeta, at least get this stuff in.\n> > >\n> > > \tftipatch.txt - Updates to docs and scripts. 
Run in the\n> > > fulltextindexdir\n> > > \tWARNING - Add to fulltextindex dir\n> > > \tuninstall.sql - Add to fulltextindex dir\n> > >\n> > >\n> > > ---------------------------(end of\n> broadcast)---------------------------\n> > > TIP 5: Have you checked our extensive FAQ?\n> > >\n> > > http://www.postgresql.org/users-lounge/docs/faq.html\n> > >\n> >\n> >\n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> >\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square,\n> Pennsylvania 19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Mon, 2 Sep 2002 14:37:54 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/contrib/fulltextindex README.fti" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Well WARNING is kinda standard - I have seen the like in other projects.\n> Well, at least this way they'll read it and then hopefully port over to\n> tsearch or at least they can make an informed decision to stick with\n> fulltextindex. It is pretty crummy tho - they shouldn't be using it.\n> \n> NOTICE might be another idea. But really, if the feeling is that it might\n> be removed in the future, then WARNING is correct.\n\nYes, that is a good point. I like the file, btw.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 02:40:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: [COMMITTERS] pgsql-server/contrib/fulltextindex README.fti" } ]
[ { "msg_contents": "I have finished going through my email box and applying patches from the\npatches queue. There is one patch left in the queue related to tcl\nnotification of connection failure. Tom wants to look at that.\n\nI am running tests now on the code.\n\nTomorrow, I will run pgindent and create the HISTORY file for 7.3.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 02:30:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "I am done" }, { "msg_contents": "You can probably nail some TODOs:\n\n* Add OR REPLACE clauses to non-FUNCTION object creation\n* Allow autocommit so always in a transaction block\n* Cache most recent query plan(s) (Neil) [prepare]\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> Sent: Monday, 2 September 2002 2:30 PM\n> To: PostgreSQL-development\n> Subject: [HACKERS] I am done\n> \n> \n> I have finished going through my email box and applying patches from the\n> patches queue. There is one patch left in the queue related to tcl\n> notification of connection failure. Tom wants to look at that.\n> \n> I am running tests now on the code.\n> \n> Tomorrow, I will run pgindent and create the HISTORY file for 7.3.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, \n> Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n", "msg_date": "Mon, 2 Sep 2002 14:41:29 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: I am done" }, { "msg_contents": "\nThanks. Done.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> You can probably nail some TODOs:\n> \n> * Add OR REPLACE clauses to non-FUNCTION object creation\n> * Allow autocommit so always in a transaction block\n> * Cache most recent query plan(s) (Neil) [prepare]\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: pgsql-hackers-owner@postgresql.org\n> > [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Bruce Momjian\n> > Sent: Monday, 2 September 2002 2:30 PM\n> > To: PostgreSQL-development\n> > Subject: [HACKERS] I am done\n> > \n> > \n> > I have finished going through my email box and applying patches from the\n> > patches queue. There is one patch left in the queue related to tcl\n> > notification of connection failure. Tom wants to look at that.\n> > \n> > I am running tests now on the code.\n> > \n> > Tomorrow, I will run pgindent and create the HISTORY file for 7.3.\n> > \n> > -- \n> > Bruce Momjian | http://candle.pha.pa.us\n> > pgman@candle.pha.pa.us | (610) 359-1001\n> > + If your life is a hard drive, | 13 Roberts Road\n> > + Christ can be your backup. 
| Newtown Square, \n> > Pennsylvania 19073\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> > \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 02:44:15 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> You can probably nail some TODOs:\n\n> * Allow autocommit so always in a transaction block\n\nThis isn't really done; the backend side is probably okay, but we have\na ton of client-side code that will malfunction if you try to run it in\nautocommit-off state. I'm willing to ship it that way for 7.3, but we\nshould certainly have a TODO item indicating that client libraries,\npsql, etc need work.\n\n\nOther TODO items that are done, or at least better than 7.2:\n\n* Show location of syntax error in query [yacc]\n\nThe character-position hack addresses this, though surely it's not complete.\n\n* Allow logging of query durations\n\nDidn't Bruce just commit that?\n\n* Make single-user local access permissions the default by limiting\n permissions on the socket file (Peter E)\n\nI believe we have decided *not* to do this.\n\n* Reserve last few process slots for super-user if max_connections reached\n* Add GUC parameter to print queries that generate errors\n\nBoth done, no?\n\n* Declare typein/out functions in pg_proc with a special \"C string\" data type\n* Functions returning sets do not totally work\n\nBoth done (the remaining work on sets is covered by another item)\n\n* Allow bytea to handle LIKE with non-TEXT patterns\n\nI didn't want to apply Joe's patch at 
this late hour, but I think Bruce\ndid it anyway.\n\n\to Store binary-compatible type information in the system\n\nDone, see pg_cast.\n\n\to -SELECT col FROM tab WHERE numeric_col = 10.1 fails, requires quotes\n\nThis should not be marked done; the problem is still there, just this\nparticular symptom went away.\n\n\to Ensure we have array-eq operators for every built-in array type\n\nDid that; there's even a regression test to catch the omission in\nfuture.\n\n* Allow setting database character set without multibyte enabled\n\nThis is probably irrelevant now that multibyte can't be disabled.\n\n* Have UPDATE/DELETE clean out indexes\n\nThis entry makes no sense to me; unless we abandon the entire concept of\nMVCC, this is not gonna happen.\n\n\to ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n\t [inheritance]\n\nWhile this isn't done, its urgency has dropped an awful lot now that\npg_dump knows to use COPY column lists; you don't have to worry about\ndump/restore breakage. Accordingly, I doubt we're ever gonna try to\nchange it.\n\n\to Add ALTER FUNCTION\n\nWhat is ALTER FUNCTION? How does it differ from CREATE OR REPLACE\nFUNCTION?\n\n\to -ALTER TABLE ADD PRIMARY KEY (Tom)\n\to -ALTER TABLE ADD UNIQUE (Tom)\n\nAFAIR, I didn't do either of those; I think Chris K-L gets the credit.\n\n\to ALTER TABLE ADD COLUMN column SERIAL doesn't create sequence\n\nThis is not a problem. The actual problem with adding a serial column\nis covered by the next entry:\n\to ALTER TABLE ADD COLUMN column DEFAULT should fill existing\n\t rows with DEFAULT value\n\n\to -Cluster all tables at once using pg_index.indisclustered set during\n previous CLUSTER\n\nThis is not done, unless we are going to accept Alvaro's last-minute\npatch for it; which I vote we don't. It's too big a change.\n\n\to Prevent DROP of table being referenced by our own open cursor\n\nHuh? There is no such bug that I know of.\n\n\to -Disallow missing columns in INSERT ... 
VALUES, per ANSI\n\nWhat is this, and why is it marked done?\n\n\to -Remove SET KSQO option now that OR processing is improved (Tom)\n\nI don't think I get the credit (blame?) for this one, either.\n\n* Have pg_dump use LEFT OUTER JOIN in multi-table SELECTs\n or multiple SELECTS to avoid bad system catalog entries\n\nIsn't this pretty much done?\n\n* Add config file check for $ODBCINI, $HOME/.odbc.ini, installpath/etc/odbc.ini\n\nWith ODBC out of the main distro, this isn't our problem anymore.\n\n* Fix foreign key constraints to not error on intermediate db states (Stephan)\n\nIsn't this done?\n\n* Have SERIAL generate non-colliding sequence names when we have \n auto-destruction\n\nThey should be pretty well non-colliding now. What's the gripe exactly?\n\n* Propagate column or table renaming to foreign key constraints\n\nThis is done.\n\n* Remove wal_files postgresql.conf option because WAL files are now recycled\n\nDone, no?\n\n* Improve dynamic memory allocation by introducing tuple-context memory\n allocation (Tom)\n\nUh, wasn't that done long ago?\n\n* Nested FULL OUTER JOINs don't work (Tom)\n\nFixed.\n\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 10:33:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "\n\nOn Mon, 2 Sep 2002, Tom Lane wrote:\n\n> Date: Mon, 02 Sep 2002 10:33:49 -0400\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n> To: Christopher Kings-Lynne <chriskl@familyhealth.com.au>\n> Cc: Bruce Momjian <pgman@candle.pha.pa.us>,\n> PostgreSQL-development <pgsql-hackers@postgresql.org>\n> Subject: Re: [HACKERS] I am done\n>\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > You can probably nail some TODOs:\n>\n\nMaybe add a TODO item:\n\n* fix a possibility of DoS attacks in the backend before the user is\n authenticated\n\n? 
Since I guess my patch for that is too late and I don't even know if\nit's any good approach at all.\n\n-s\n\n\n", "msg_date": "Mon, 2 Sep 2002 10:51:16 -0400 (EDT)", "msg_from": "\"Serguei A. Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Tom Lane dijo: \n\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > You can probably nail some TODOs:\n> \n\n> \to -Cluster all tables at once using pg_index.indisclustered set during\n> previous CLUSTER\n> \n> This is not done, unless we are going to accept Alvaro's last-minute\n> patch for it; which I vote we don't. It's too big a change.\n\nI agree, but there's the (ugly) clusterdb script. Perhaps replace with\nanother TODO item.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\n", "msg_date": "Mon, 2 Sep 2002 11:04:33 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Tom Lane wrote:\n> \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > You can probably nail some TODOs:\n> \n> > * Allow autocommit so always in a transaction block\n> \n> This isn't really done; the backend side is probably okay, but we have\n> a ton of client-side code that will malfunction if you try to run it in\n> autocommit-off state. I'm willing to ship it that way for 7.3, but we\n> should certainly have a TODO item indicating that client libraries,\n> psql, etc need work.\n\nMy feeling is that we have to fix this during beta.\n\nAdded to open items:\n\n\tFix client apps for autocommit = off\n\n> Other TODO items that are done, or at least better than 7.2:\n> \n> * Show location of syntax error in query [yacc]\n> \n> The character-position hack addresses this, though surely it's not complete.\n\nYep, still on the TODO.\n\n> \n> * Allow logging of query durations\n> \n> Didn't Bruce just commit that?\n\nThanks. 
Marked as done now.\n\n> * Make single-user local access permissions the default by limiting\n> permissions on the socket file (Peter E)\n> \n> I believe we have decided *not* to do this.\n\nYes, removed.\n\n> * Reserve last few process slots for super-user if max_connections reached\n> * Add GUC parameter to print queries that generate errors\n> \n> Both done, no?\n\nYep, marked, but right now there is no good way to turn off the printing\nof queries on errors. We have to address that before 7.3 final.\n\n> * Declare typein/out functions in pg_proc with a special \"C string\" data type\n> * Functions returning sets do not totally work\n> \n> Both done (the remaining work on sets is covered by another item)\n\nMarked.\n\n> \n> * Allow bytea to handle LIKE with non-TEXT patterns\n> \n> I didn't want to apply Joe's patch at this late hour, but I think Bruce\n> did it anyway.\n\nI think he made the deadline and no one objected.\n\n> \to Store binary-compatible type information in the system\n> \n> Done, see pg_cast.\n\nMarked.\n\n> \n> \to -SELECT col FROM tab WHERE numeric_col = 10.1 fails, requires quotes\n> \n> This should not be marked done; the problem is still there, just this\n> particular symptom went away.\n\nSomeone reported to me it was done. 
I have removed the item because\nthough it isn't a bug query anymore, it isn't as fixed as we would like.\nWe still have:\n\n\to Allow better handling of numeric constants, type conversion\n\nCan someone give me another example failing query?\n\n\n> \to Ensure we have array-eq operators for every built-in array type\n> \n> Did that; there's even a regression test to catch the omission in\n> future.\n\n\nMarked.\n\n> \n> * Allow setting database character set without multibyte enabled\n> \n> This is probably irrelevant now that multibyte can't be disabled.\n\nItem removed.\n\n> * Have UPDATE/DELETE clean out indexes\n> \n> This entry makes no sense to me; unless we abandon the entire concept of\n> MVCC, this is not gonna happen.\n\nGood point. Now that we have light vacuum, we are fine.\n\n> \to ALTER TABLE ADD COLUMN to inherited table put column in wrong place\n> \t [inheritance]\n> \n> While this isn't done, its urgency has dropped an awful lot now that\n> pg_dump knows to use COPY column lists; you don't have to worry about\n> dump/restore breakage. Accordingly, I doubt we're ever gonna try to\n> change it.\n\nOK, removed. Can we dump/reload the regression database now? What\nareas still need this fix?\n\n> \to Add ALTER FUNCTION\n> \n> What is ALTER FUNCTION? How does it differ from CREATE OR REPLACE\n> FUNCTION?\n\nRemoved. I think it was thought of before the CREATE OR REPLACE idea\ncame around.\n\n> \to -ALTER TABLE ADD PRIMARY KEY (Tom)\n> \to -ALTER TABLE ADD UNIQUE (Tom)\n> \n> AFAIR, I didn't do either of those; I think Chris K-L gets the credit.\n\nDone.\n\n> \to ALTER TABLE ADD COLUMN column SERIAL doesn't create sequence\n> \n> This is not a problem. 
The actual problem with adding a serial column\n> is covered by the next entry:\n> \to ALTER TABLE ADD COLUMN column DEFAULT should fill existing\n> \t rows with DEFAULT value\n\nWell, the lack of sequence for a SERIAL is an issue if only related to\nthe DEFAULT issue so I will keep it.\n\n> \to -Cluster all tables at once using pg_index.indisclustered set during\n> previous CLUSTER\n> \n> This is not done, unless we are going to accept Alvaro's last-minute\n> patch for it; which I vote we don't. It's too big a change.\n\nWell, we have clusterdb. One thing that bothers me is that we have\nclusterdb and /contrib/reindexdb going out new in 7.3, only to be\nremoved in 7.4 when we get the table scan done in the backend code ---\nnot a great API solution but we may have to live with it.\n\n> \n> \to Prevent DROP of table being referenced by our own open cursor\n> \n> Huh? There is no such bug that I know of.\n\nWell, actually, there is or was. The issue is that if you open a cursor\nin a transaction, then drop the table while the cursor is open, all\nsorts of weird things happen:\n\n\ttest=> create table test (x int);\n\tCREATE TABLE\n\ttest=> insert into test values (1);\n\tINSERT 149484 1\n\ttest=> begin;\n\tBEGIN\n\ttest=> declare xx cursor for select * from test;\n\tDECLARE CURSOR\n\ttest=> fetch xx;\n\t x \n\t---\n\t 1\n\t(1 row)\n\t\n\ttest=> drop table test;\n\tWARNING: FlushRelationBuffers(test, 0): block 0 is referenced (private 2, global 1)\n\tERROR: heap_drop_with_catalog: FlushRelationBuffers returned -2\n\n> \to -Disallow missing columns in INSERT ... VALUES, per ANSI\n> \n> What is this, and why is it marked done?\n\nWe used to allow INSERT INTO tab VALUES (...) to skip the trailing\ncolumns and automatically fill in null's. That is fixed, per ANSI.\n\n> \to -Remove SET KSQO option now that OR processing is improved (Tom)\n> \n> I don't think I get the credit (blame?) for this one, either.\n\nI think your name was on it because you were the contact for it. 
I\nthink I did the dirty work. That is how some of these got your name.\n\n> * Have pg_dump use LEFT OUTER JOIN in multi-table SELECTs\n> or multiple SELECTS to avoid bad system catalog entries\n> \n> Isn't this pretty much done?\n\nYes, I suspected it was but wasn't sure. Marked as done now.\n\n> \n> * Add config file check for $ODBCINI, $HOME/.odbc.ini, installpath/etc/odbc.ini\n> \n> With ODBC out of the main distro, this isn't our problem anymore.\n\nYep, removed the ODBC section too.\n\n> * Fix foreign key constraints to not error on intermediate db states (Stephan)\n> \n> Isn't this done?\n\nYes, I think Stephan took care of it, but wasn't sure. Marked as done.\n\n> * Have SERIAL generate non-colliding sequence names when we have \n> auto-destruction\n> \n> They should be pretty well non-colliding now. What's the gripe exactly?\n\nThe issue was that when there were name collisions, we threw an error\ninstead of trying other sequence names. We had to do that because we\nneeded the sequence name to be predictable so it could be auto-deleted. \nNow with dependency, we don't need to have it be predictable. However,\nwe still use nextval() on the sequence name, so we can't say it is\narbitrary either. Should we just remove the item?\n\n> * Propagate column or table renaming to foreign key constraints\n> \n> This is done.\n\nMarked.\n\n> \n> * Remove wal_files postgresql.conf option because WAL files are now recycled\n> \n> Done, no?\n\nYep.\n\n> \n> * Improve dynamic memory allocation by introducing tuple-context memory\n> allocation (Tom)\n> \n> Uh, wasn't that done long ago?\n\n\nYep, I think so, but I wasn't sure.\n\n> \n> * Nested FULL OUTER JOINs don't work (Tom)\n> \n> Fixed.\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 11:22:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "Serguei A. Mokhov wrote:\n> \n> \n> On Mon, 2 Sep 2002, Tom Lane wrote:\n> \n> > Date: Mon, 02 Sep 2002 10:33:49 -0400\n> > From: Tom Lane <tgl@sss.pgh.pa.us>\n> > To: Christopher Kings-Lynne <chriskl@familyhealth.com.au>\n> > Cc: Bruce Momjian <pgman@candle.pha.pa.us>,\n> > PostgreSQL-development <pgsql-hackers@postgresql.org>\n> > Subject: Re: [HACKERS] I am done\n> >\n> > \"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> > > You can probably nail some TODOs:\n> >\n> \n> Maybe add a TODO item:\n> \n> * fix a possibility of DoS attacks in the backend before the user is\n> authenticated\n> \n> ? Since I guess my patch for that is too late and I don't even know if\n> it's any good approach at all.\n\nIt is a security issue and will have to be fixed before 7.3 final, one\nway or the other.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 11:24:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> This isn't really done; the backend side is probably okay, but we have\n>> a ton of client-side code that will malfunction if you try to run it in\n>> autocommit-off state.\n\n> My feeling is that we have to fix this during beta.\n\nNo we don't.\n\nI don't think it *can* be fixed in a reasonable fashion until we have\nnotification to the client side about what the backend's transaction\nstate is; which is one of the protocol-change items for 7.4.\n\n> Added to open items:\n> \tFix client apps for autocommit = off\n\nPut it on TODO for 7.4, instead.\n\nBTW, why is there not a TODO item (or maybe a whole section) for all the\nprotocol changes we want?\n\n> Yep, marked, but right now there is no good way to turn off the printing\n> of queries on errors. We have to address that before 7.3 final.\n\nOh, didn't you put in that patch to provide a GUC level control?\n\n> Can we dump/reload the regression database now?\n\nYes.\n\n> Well, the lack of sequence for a SERIAL is an issue if only related to\n> the DEFAULT issue so I will keep it.\n\nI think you should merge the two items into one: the DEFAULT is the\nactual problem, but it'd be okay to note that it blocks adding a serial\ncolumn.\n\n\n>> o Prevent DROP of table being referenced by our own open cursor\n>> \n>> Huh? There is no such bug that I know of.\n\n> Well, actually, there is or was.\n\nOh, wait, our *own* open cursor. Okay --- I was testing it with a\ndifferent backend trying to drop the table, and of course it blocked\nwaiting to acquire exclusive lock on the table. But if it's our own\ncursor we won't block.\n\n>> o -Disallow missing columns in INSERT ... 
VALUES, per ANSI\n>> \n>> What is this, and why is it marked done?\n\n> We used to allow INSERT INTO tab VALUES (...) to skip the trailing\n> columns and automatically fill in null's. That is fixed, per ANSI.\n\nOh, I see: we compromised on being strict if you have an explicit list\nof column names. You can still omit trailing columns if you just\nsay INSERT INTO foo VALUES(...), which is what the TODO item seems to\nsay you can't do anymore.\n\n>> * Have SERIAL generate non-colliding sequence names when we have \n>> auto-destruction\n>> \n>> They should be pretty well non-colliding now. What's the gripe exactly?\n\n> The issue was that when there were name collisions, we threw an error\n> instead of trying other sequence names. We had to do that because we\n> needed the sequence name to be predictable so it could be auto-deleted. \n> Now with dependency, we don't need to have it be predictable.\n\nThe system may not need that, but I think unpredictable sequence names\nare a bad idea from the user's point of view anyway. Also, the main\nreason why there was a problem before was the failure to auto-drop the\nsequence --- so you were certain to get a collision if you dropped and\nremade the table. 
So I think this is a non-problem now, and we should\njust remove the item.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 12:10:22 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> This isn't really done; the backend side is probably okay, but we have\n> >> a ton of client-side code that will malfunction if you try to run it in\n> >> autocommit-off state.\n> \n> > My feeling is that we have to fix this during beta.\n> \n> No we don't.\n> \n> I don't think it *can* be fixed in a reasonable fashion until we have\n> notification to the client side about what the backend's transaction\n> state is; which is one of the protocol-change items for 7.4.\n\nWhy can't we just turn on auto-commit when we start the session:\n\t\n\tSET autocommit = true;\n\n> > Added to open items:\n> > \tFix client apps for autocommit = off\n> \n> Put it on TODO for 7.4, instead.\n> \n> BTW, why is there not a TODO item (or maybe a whole section) for all the\n> protocol changes we want?\n\nA few are mentioned in TODO, the others are mentioned in comments in the\ncode. I can gather them if you it would help.\n\n> > Yep, marked, but right now there is no good way to turn off the printing\n> > of queries on errors. We have to address that before 7.3 final.\n> \n> Oh, didn't you put in that patch to provide a GUC level control?\n\nYes, but what level do you set it at to turn it off? 
It goes from\nDEBUG5 all the way up to ERROR but all of those print the query on an\nerror.\n\n> \n> > Can we dump/reload the regression database now?\n> \n> Yes.\n\nOh, OK, then removed.\n\n> > Well, the lack of sequence for a SERIAL is an issue if only related to\n> > the DEFAULT issue so I will keep it.\n> \n> I think you should merge the two items into one: the DEFAULT is the\n> actual problem, but it'd be okay to note that it blocks adding a serial\n> column.\n\nOK, new text:\n\n o ALTER TABLE ADD COLUMN column DEFAULT should fill existing \n rows with DEFAULT value\n o ALTER TABLE ADD COLUMN column SERIAL doesn't create sequence because\n of the item above\n\n> >> o Prevent DROP of table being referenced by our own open cursor\n> >> \n> >> Huh? There is no such bug that I know of.\n> \n> > Well, actually, there is or was.\n> \n> Oh, wait, our *own* open cursor. Okay --- I was testing it with a\n> different backend trying to drop the table, and of course it blocked\n> waiting to acquire exclusive lock on the table. But if it's our own\n> cursor we won't block.\n\n\nDon't know how I can improve that wording. :-)\n\n> >> o -Disallow missing columns in INSERT ... VALUES, per ANSI\n> >> \n> >> What is this, and why is it marked done?\n> \n> > We used to allow INSERT INTO tab VALUES (...) to skip the trailing\n> > columns and automatically fill in null's. That is fixed, per ANSI.\n> \n> Oh, I see: we compromised on being strict if you have an explicit list\n> of column names. You can still omit trailing columns if you just\n> say INSERT INTO foo VALUES(...), which is what the TODO item seems to\n> say you can't do anymore.\n\n\nYes, that was a point not addressed in the TODO because that distinction\ndidn't exist at the time it was added. Text updated:\n\n o -Disallow missing columns in INSERT ... 
(col) VALUES, per ANSI\n\n> >> * Have SERIAL generate non-colliding sequence names when we have \n> >> auto-destruction\n> >> \n> >> They should be pretty well non-colliding now. What's the gripe exactly?\n> \n> > The issue was that when there were name collisions, we threw an error\n> > instead of trying other sequence names. We had to do that because we\n> > needed the sequence name to be predictable so it could be auto-deleted. \n> > Now with dependency, we don't need to have it be predictable.\n> \n> The system may not need that, but I think unpredictable sequence names\n> are a bad idea from the user's point of view anyway. Also, the main\n> reason why there was a problem before was the failure to auto-drop the\n> sequence --- so you were certain to get a collision if you dropped and\n> remade the table. So I think this is a non-problem now, and we should\n> just remove the item.\n\nOK, removed. Again, thanks for the review. I am working on HISTORY today.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 12:21:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> I don't think it *can* be fixed in a reasonable fashion until we have\n>> notification to the client side about what the backend's transaction\n>> state is; which is one of the protocol-change items for 7.4.\n\n> Why can't we just turn on auto-commit when we start the session:\n> \tSET autocommit = true;\n\nHow's that help? If the user turns it off again, we still break.\n\nMore to the point, we can hardly claim any level of SQL compliance if\neither libpq or psql try to prevent you from using the autocommit-off\nmode ... 
but both contain problematic code (mostly in the startup and\nlarge-object-support areas).\n\nThe real problem here is that the client code cannot know how to behave\n(ie, whether to issue BEGIN and/or COMMIT) unless it knows the current\nautocommit setting and current transaction state. And getting that info\nin any reliable fashion requires a protocol change, AFAICS.\n\nThere are some things we can tweak to make the clients less broken than\nthey are now --- for instance, all of libpq's startup-time SET commands\ncould be switched to \"BEGIN; SET ...; COMMIT;\" which will work the same\nwith or without autocommit --- but I don't think we can expect to fix\nlarge-object support, for example, without the protocol change.\n\n>> Oh, didn't you put in that patch to provide a GUC level control?\n\n> Yes, but what level do you set it at to turn it off?\n\nFATAL? PANIC?\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 12:39:36 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "\n> > * Have SERIAL generate non-colliding sequence names when we have \n> > auto-destruction\n> > \n> > They should be pretty well non-colliding now. What's the gripe exactly?\n> \n> The issue was that when there were name collisions, we threw an error\n> instead of trying other sequence names. We had to do that because we\n> needed the sequence name to be predictable so it could be auto-deleted. \n> Now with dependency, we don't need to have it be predictable. However,\n> we still use nextval() on the sequence name, so we can't say it is\n> arbitrary either. Should we just remove the item?\n\nThe names are relied on by pg_dump for setting the next value of the\nsequence. 
That is, it relies on the names being generated the exact\nsame way every time.\n\n\n", "msg_date": "02 Sep 2002 12:45:26 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: I am done" }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> I don't think it *can* be fixed in a reasonable fashion until we have\n> >> notification to the client side about what the backend's transaction\n> >> state is; which is one of the protocol-change items for 7.4.\n> \n> > Why can't we just turn on auto-commit when we start the session:\n> > \tSET autocommit = true;\n> \n> How's that help? If the user turns it off again, we still break.\n> \n> More to the point, we can hardly claim any level of SQL compliance if\n> either libpq or psql try to prevent you from using the autocommit-off\n> mode ... but both contain problematic code (mostly in the startup and\n> large-object-support areas).\n> \n> The real problem here is that the client code cannot know how to behave\n> (ie, whether to issue BEGIN and/or COMMIT) unless it knows the current\n> autocommit setting and current transaction state. And getting that info\n> in any reliable fashion requires a protocol change, AFAICS.\n> \n> There are some things we can tweak to make the clients less broken than\n> they are now --- for instance, all of libpq's startup-time SET commands\n> could be switched to \"BEGIN; SET ...; COMMIT;\" which will work the same\n> with or without autocommit --- but I don't think we can expect to fix\n> large-object support, for example, without the protocol change.\n\nI was considering the original report that createlang doesn't work. \nSurely we can do some things to fix those at least. Yes, we clearly\naren't going to get this working 100% in 7.3. 
I don't even know what to\nput on the TODO list because we don't know what cases have problems.\n\n> >> Oh, didn't you put in that patch to provide a GUC level control?\n> \n> > Yes, but what level do you set it at to turn it off?\n> \n> FATAL? PANIC?\n\nHe doesn't support those levels:\n\t\n\ttest=> set log_min_error_statement = fatal;\n\tERROR: invalid value for option 'log_min_error_statement': 'fatal'\n\ttest=> set log_min_error_statement = error;\n\tSET\n\nand in fact, the default is ERROR. I think the default has to be\nsomething higher, but even FATAL seems wrong. We have to be able to\nturn it off, and have it off by default, rather than saying it only\nhappens with fatal errors or something like that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 12:46:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "Tom Lane writes:\n\n> There are some things we can tweak to make the clients less broken than\n> they are now --- for instance, all of libpq's startup-time SET commands\n> could be switched to \"BEGIN; SET ...; COMMIT;\" which will work the same\n> with or without autocommit --- but I don't think we can expect to fix\n> large-object support, for example, without the protocol change.\n\nPerhaps one should consider removing the autocommit option. It's no use\nif it's there but everything breaks when you turn it on.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 3 Sep 2002 01:10:03 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Perhaps one should consider removing the autocommit option. 
It's no use\n> if it's there but everything breaks when you turn it on.\n\nAs far as I'm concerned, it's in there for one reason only (as far as\n7.3 goes): so that we can run the NIST SQL compliance tests. Anyone who\nwants to use it in production at this point is doing so at their own\nrisk.\n\nIn practice, as long as we fix libpq's startup SET commands, the major\nproblems will just be with large object support, which is not mainstream\nusage either; so I'm prepared to live with it for a release or so. It's\nnot like there aren't any other combinations of PG features that don't\nplay nice together.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 19:55:48 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "> \to -ALTER TABLE ADD PRIMARY KEY (Tom)\n> \to -ALTER TABLE ADD UNIQUE (Tom)\n>\n> AFAIR, I didn't do either of those; I think Chris K-L gets the credit.\n\nActually, I did ADD UNIQUE originally after lots of coding and then you went\nand made it work by changing a couple of lines in the grammar. You then got\nADD PRIMARY KEY working as well, so I had Bruce change it to you. In fact,\nyou even removed all the code I had in there that was no longer reachable :)\n\nChris\n\n", "msg_date": "Tue, 3 Sep 2002 09:36:50 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > \to -ALTER TABLE ADD PRIMARY KEY (Tom)\n> > \to -ALTER TABLE ADD UNIQUE (Tom)\n> >\n> > AFAIR, I didn't do either of those; I think Chris K-L gets the credit.\n> \n> Actually, I did ADD UNIQUE originally after lots of coding and then you went\n> and made it work by changing a couple of lines in the grammar. You then got\n> ADD PRIMARY KEY working as well, so I had Bruce change it to you. 
In fact,\n> you even removed all the code I had in there that was no longer reachable :)\n\nAre you guys competing for the modesty award? ;-)\n\nI heard Stallman is trying to win it this year. :-)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 21:40:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "On Mon, 2 Sep 2002, Bruce Momjian wrote:\n\n> > >> Oh, didn't you put in that patch to provide a GUC level control?\n> > \n> > > Yes, but what level do you set it at to turn it off?\n> > \n> > FATAL? PANIC?\n> \n> He doesn't support those levels:\n> \t\n> \ttest=> set log_min_error_statement = fatal;\n> \tERROR: invalid value for option 'log_min_error_statement': 'fatal'\n> \ttest=> set log_min_error_statement = error;\n> \tSET\n> \n> and in fact, the default is ERROR. I think the default has to be\n> something higher, but even FATAL seems wrong. We have to be able to\n> turn it off, and have it off by default, rather than saying it only\n> happens with fatal errors or something like that.\n\nOkay, my bad. From my reading of the email exchange, I thought people\nwanted this on -- always. The best solution for this, in my opinion, is to\nhave a magic value \"off\" which the error code lookup translates to some\nnumber > PANIC.\n\nSecondly, there is a flaw in the patch. I merged all the\nassign_server_min_messages() and assign_client_min_messages() code to make\nthings pretty. Perhaps I shouldn't have (since I left off FATAL and PANIC\nfrom the list, which I shouldn't have for the prior but should have for\nthe latter). 
So there are a few ways to fix it: allow both functions (+\nthe log_min_error_state function) to accept all possible error codes +\n\"off\" (which does nothing for the first two functions); pass a unique\nnumber for each function to assign_msglvl() so that we can determine\nwhether a legal error code for that GUC variable is being assigned; or,\njust have different lists.\n\nNow, the first solution is a hack, but it shouldn't actually break\nanything. The second is overkill. The third is the best way to do it but\nas we add more of these kinds of functions (log_min_parse,\nlog_min_rewritten? -- I can see a use for that) the amount of assign_ code\nwill grow linearly and be pretty similar.\n\nIdeas?\n\nGavin\n\n\n", "msg_date": "Tue, 3 Sep 2002 11:54:35 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: I am done" }, { "msg_contents": "Gavin Sherry wrote:\n> Okay, my bad. From my reading of the email exchange, I thought people\n> wanted this on -- always. The best solution for this, in my opinion, is to\n> have a magic value \"off\" which the error code lookup translates to some\n> number > PANIC.\n\nWhat do people think? I thought we needed a way to turn this off,\nespecially if the queries can be large. Because ERROR is above LOG in\nserver_min_messages, I don't think that is a way to fix it.\n\n\n> Secondly, there is a flaw in the patch. 
So there are a few ways to fix it: allow both functions (+\n> the log_min_error_state function) to accept all possible error codes +\n> \"off\" (which does nothing for the first two functions); pass a unique\n> number for each function to assign_msglvl() so that we can determine the\n> a legal error code for that GUC variable is being assigned; or, just have\n> different lists.\n\n\nI thought it was good you could merge them, but now I remember why I\ndidn't --- they take different args.\n\n\n> \n> Now, the first solution is a hack, but it shouldn't actually break\n> anything. The second is overkill. The third is the best way to do it but\n\nYou can't do the hack.\n\n> as we add more of these kinds of functions (log_min_parse,\n> log_min_rewritten? -- I can a use for that) the amount of assign_ code\n> will grow linearly and be pretty similar.\n\nI think the second, passing an arg to say whether it is server or\nclient, will do the trick, though now you need an error one too. I\nguess you have to use #define and set it, or pass a string down with the\nGUC variable and test that with strcmp.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 22:10:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "> Are you guys competing for the modesty award? ;-)\n> I heard Stallman is trying to win it this year. 
:-)\n\nHah, that's a good one.\n\nFor doing what - telling you not to call it GNU/Linux, only Linux/GNU ?\n:-)\n\n-- \nKaare Rasmussen --Linux, spil,-- Tlf: 3816 2582\nKaki Data tshirts, merchandize Fax: 3816 2501\nHowitzvej 75 Åben 12.00-18.00 Email: kar@webline.dk\n2000 Frederiksberg Lørdag 11.00-17.00 Web: www.suse.dk\n", "msg_date": "Tue, 3 Sep 2002 18:43:46 +0200", "msg_from": "Kaare Rasmussen <kar@kakidata.dk>", "msg_from_op": false, "msg_subject": "Re: I am done" }, { "msg_contents": "On Tue, 3 Sep 2002, Kaare Rasmussen wrote:\n\n> > Are you guys competing for the modesty award? ;-)\n> > I heard Stallman is trying to win it this year. :-)\n> \n> Hah, that's a good one.\n> \n> For doing what - telling you not to call it GNU/Linux, only Linux/GNU ?\n> :-)\n\n SELECT FreeProject FROM History ORDER BY FreeProject\n\n Seems quite logical : GNU/Linux ;-)\n\n-- \n\t\t\t Alexandre Dulaunoy -- http://www.foo.be/\n 3B12 DCC2 82FA 2931 2F5B 709A 09E2 CD49 44E6 CBCD --- AD993-6BONE\n\"People who fight may lose. People who do not fight have already lost.\"\n\t\t\t\t\t\t\tBertolt Brecht\n\n\n\n\n", "msg_date": "Tue, 3 Sep 2002 18:49:58 +0200 (CEST)", "msg_from": "Alexandre Dulaunoy <adulau-conos@conostix.com>", "msg_from_op": false, "msg_subject": "Re: I am done" }, { "msg_contents": "Hi all,\n\nDoes anyone else have an opinion on this? If not, I will implement it per\nBruce's commentary.\n\nGavin\n\nOn Mon, 2 Sep 2002, Bruce Momjian wrote:\n\n> Gavin Sherry wrote:\n> > Okay, my bad. From my reading of the email exchange, I thought people\n> > wanted this on -- always. The best solution for this, in my opinion, is to\n> > have a magic value \"off\" which the error code lookup translates to some\n> > number > PANIC.\n> \n> What do people think? I thought we needed a way to turn this off,\n> especially if the queries can be large. Because ERROR is above LOG in\n> server_min_messages, I don't think that is a way to fix it.\n> \n> \n> > Secondly, there is a flaw in the patch. 
I merged all the\n> > assign_server_min_messages() and assign_client_min_messages() code to make\n> > things pretty. Perhaps I shouldn't have (since I left off FATAL and PANIC\n> > from the list, which I shouldn't have for the prior but should have for\n> > the latter). So there are a few ways to fix it: allow both functions (+\n> > the log_min_error_state function) to accept all possible error codes +\n> > \"off\" (which does nothing for the first two functions); pass a unique\n> > number for each function to assign_msglvl() so that we can determine the\n> > a legal error code for that GUC variable is being assigned; or, just have\n> > different lists.\n> \n> \n> I thought it was good you could merge them, but now I remember why I\n> didn't --- they take different args.\n> \n> \n> > \n> > Now, the first solution is a hack, but it shouldn't actually break\n> > anything. The second is overkill. The third is the best way to do it but\n> \n> You can't do the hack.\n> \n> > as we add more of these kinds of functions (log_min_parse,\n> > log_min_rewritten? -- I can a use for that) the amount of assign_ code\n> > will grow linearly and be pretty similar.\n> \n> I think the second, passing an arg to say whether it is server or\n> client, will do the trick, though now you need an error one too. I\n> guess you have to use #define and set it, or pass a string down with the\n> GUC variable and test that with strcmp.\n> \n> \n\n", "msg_date": "Wed, 4 Sep 2002 16:39:39 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: I am done" }, { "msg_contents": "\nYes, I would like to know if it should be enabled by default, and\nwhether we need a way to turn it off. 
I assume, considering the size of\nsome of the queries, that we have to have a way to turn it off, and it\nis possible the admin may not want queries in the log, even if the\ngenerate errors.\n\n---------------------------------------------------------------------------\n\nGavin Sherry wrote:\n> Hi all,\n> \n> Does anyone else have an opinion on this? If not, I will implement it per\n> Bruce's commentary.\n> \n> Gavin\n> \n> On Mon, 2 Sep 2002, Bruce Momjian wrote:\n> \n> > Gavin Sherry wrote:\n> > > Okay, my bad. From my reading of the email exchange, I thought people\n> > > wanted this on -- always. The best solution for this, in my opinion, is to\n> > > have a magic value \"off\" which the error code lookup translates to some\n> > > number > PANIC.\n> > \n> > What do people think? I thought we needed a way to turn this off,\n> > especially if the queries can be large. Because ERROR is above LOG in\n> > server_min_messages, I don't think that is a way to fix it.\n> > \n> > \n> > > Secondly, there is a flaw in the patch. I merged all the\n> > > assign_server_min_messages() and assign_client_min_messages() code to make\n> > > things pretty. Perhaps I shouldn't have (since I left off FATAL and PANIC\n> > > from the list, which I shouldn't have for the prior but should have for\n> > > the latter). So there are a few ways to fix it: allow both functions (+\n> > > the log_min_error_state function) to accept all possible error codes +\n> > > \"off\" (which does nothing for the first two functions); pass a unique\n> > > number for each function to assign_msglvl() so that we can determine the\n> > > a legal error code for that GUC variable is being assigned; or, just have\n> > > different lists.\n> > \n> > \n> > I thought it was good you could merge them, but now I remember why I\n> > didn't --- they take different args.\n> > \n> > \n> > > \n> > > Now, the first solution is a hack, but it shouldn't actually break\n> > > anything. The second is overkill. 
The third is the best way to do it but\n> > \n> > You can't do the hack.\n> > \n> > > as we add more of these kinds of functions (log_min_parse,\n> > > log_min_rewritten? -- I can a use for that) the amount of assign_ code\n> > > will grow linearly and be pretty similar.\n> > \n> > I think the second, passing an arg to say whether it is server or\n> > client, will do the trick, though now you need an error one too. I\n> > guess you have to use #define and set it, or pass a string down with the\n> > GUC variable and test that with strcmp.\n> > \n> > \n> \n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 02:55:03 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> Does anyone else have an opinion on this? If not, I will implement it per\n> Bruce's commentary.\n\n> On Mon, 2 Sep 2002, Bruce Momjian wrote:\n>> I think the second, passing an arg to say whether it is server or\n>> client, will do the trick, though now you need an error one too. I\n>> guess you have to use #define and set it, or pass a string down with the\n>> GUC variable and test that with strcmp.\n\nI think you're going to end up un-merging the routines. There is no way\nto pass an extra parameter to the set/check routines (at least not\nwithout uglifying all the rest of the GUC code). 
The design premise is\nthat the per-variable hook routines know what they're supposed to do,\nand in that case this means one hook for each variable.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 09:35:59 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, I would like to know if it should be enabled by default, and\n> whether we need a way to turn it off.\n\nI feel it should be off by default --- if enough people say \"hey, this\nis great\" then maybe we could turn it on by default, but we don't yet\nhave that market testing to prove the demand is there. I'm also worried\nabout log bloat.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 09:38:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "On Wed, 4 Sep 2002, Tom Lane wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > Does anyone else have an opinion on this? If not, I will implement it per\n> > Bruce's commentary.\n> \n> > On Mon, 2 Sep 2002, Bruce Momjian wrote:\n> >> I think the second, passing an arg to say whether it is server or\n> >> client, will do the trick, though now you need an error one too. I\n> >> guess you have to use #define and set it, or pass a string down with the\n> >> GUC variable and test that with strcmp.\n> \n> I think you're going to end up un-merging the routines. 
There is no way\n> to pass an extra parameter to the set/check routines (at least not\n\nThere is a wrapper around the generic function:\n\nconst char *\nassign_min_error_statement(const char *newval, bool doit, bool\n\t\t\tinteractive)\n{\n\treturn(assign_msglvl(&log_min_error_statement,newval,doit,interactive));\n}\n\nI would simply define some macros:\n\n#define MSGLVL_MIN_ERR_STMT (1<<0)\n#define MSGLVL_MIN_CLI_MSGS (1<<1)\n#define MSGLVL_MIN_SVR_MSGS (1<<2)\n\nAnd assign_msglvl(), having been passed the variable 'caller', determined\nby the calling function, will do something like this:\n\n /* everyone likes debug */\nif (strcasecmp(newval, \"debug\") == 0 &&\n caller & (MSGLVL_MIN_ERR_STMT | MSGLVL_MIN_CLI_MSGS | MSGLVL_MIN_SVR_MSGS))\n { if (doit) (*var) = DEBUG1; }\n\n\t/* ... */\n\nelse if (strcasecmp(newval, \"fatal\") == 0 &&\n caller & (MSGLVL_MIN_ERR_STMT | MSGLVL_MIN_SVR_MSGS))\n { if (doit) (*var) = FATAL; }\nelse if (strcasecmp(newval, \"off\") == 0 &&\n caller & MSGLVL_MIN_ERR_STMT)\n { if (doit) (*var) = MIN_ERR_STMT_OFF; }\n\nPersonally, I've never liked coding like this. The reason I merged the\nbase code for each function was so that the GUC code didn't get uglier as\nmore minimum-level-of-logging parameters were added. But with the code\nabove, the bitwise operations and appearance of the\nassign_msglvl() routine will suffer the same fate, I'm afraid.\n\nSince the flawed code is now in beta, it will need to be fixed. Do people\nlike the above solution or should I just revert to having a separate\nfunction for each GUC variable affected?\n\nGavin\n\n", "msg_date": "Thu, 5 Sep 2002 16:31:14 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Gavin Sherry <swm@linuxworld.com.au> writes:\n> Since the flawed code is now in beta, it will need to be fixed. 
Do people\n> like the above solution or should I just revert to having a separate\n> function for each GUC variable affected?\n\nI do not see a good reason why \"fatal\" and \"off\" shouldn't be allowed\nvalues for all three message variables. If we just did that, then you'd\nbe back to sharable code.\n\nBTW, is it a good idea for server_min_messages and\nlog_min_error_statement to be PGC_USERSET? I could see an argument that\nthey should be PGC_SIGHUP, ie, settable only by the admin. As it is,\nany user can hide his activity from the logs. OTOH, in the past we've\nallowed anyone to change the debug level, and there haven't been\ncomplaints about it.\n\nThere's some value in being able to kick the log level up a notch for\na specific session, but knocking it down from the admin's default could\nbe considered a bad thing. I suppose we could invent a PGC_SIGHUP\n\"min_server_min_messages\" variable that sets a minimum value below which\nthe user can't set server_min_messages. Does that seem like a good\nidea, or overkill?\n\nA compromise position would be to make these two variables PG_SUSET,\nie settable per-session but only if you're superuser.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 09:12:41 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Tom Lane wrote:\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > Since the flawed code is now in beta, it will need to be fixed. Do people\n> > like the above solution or should I just revert to having a separate\n> > function for each GUC variable affected?\n> \n> I do not see a good reason why \"fatal\" and \"off\" shouldn't be allowed\n> values for all three message variables. If we just did that, then you'd\n> be back to sharable code.\n\nI recommended he only allow valid values for each variable. 
I think if\nwe say we only support values X,Y,Z we had better throw an error if it is\nanything else.\n\n> BTW, is it a good idea for server_min_messages and\n> log_min_error_statement to be PGC_USERSET? I could see an argument that\n> they should be PGC_SIGHUP, ie, settable only by the admin. As it is,\n> any user can hide his activity from the logs. OTOH, in the past we've\n> allowed anyone to change the debug level, and there haven't been\n> complaints about it.\n> \n> There's some value in being able to kick the log level up a notch for\n> a specific session, but knocking it down from the admin's default could\n> be considered a bad thing. I suppose we could invent a PGC_SIGHUP\n> \"min_server_min_messages\" variable that sets a minimum value below which\n> the user can't set server_min_messages. Does that seem like a good\n> idea, or overkill?\n\nA new GUC variable seems like overkill to me, and I think we need\nto allow it to be raised. I think we can make server_min_messages\nPGC_SUSET so only the admin can change it.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 11:21:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "Tom Lane wrote:\n> There's some value in being able to kick the log level up a notch for\n> a specific session, but knocking it down from the admin's default could\n> be considered a bad thing. I suppose we could invent a PGC_SIGHUP\n> \"min_server_min_messages\" variable that sets a minimum value below which\n> the user can't set server_min_messages. 
Does that seem like a good\n> idea, or overkill?\n> \n> A compromise position would be to make these two variables PG_SUSET,\n> ie settable per-session but only if you're superuser.\n\nOh, I just saw your compromise position. Yes, I think that is the way\nto go.\n\nIn fact, aside from the security issue, allowing users to throw\nvoluminous debug info into the server logs doesn't seem like a good idea\nanyway.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 11:25:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n>> I do not see a good reason why \"fatal\" and \"off\" shouldn't be allowed\n>> values for all three message variables. If we just did that, then you'd\n>> be back to sharable code.\n\n> I recommended he only allow valid values for each variable. I think if\n> we say we only support values X,Y,Z we had better throw an error if it\n> anything else.\n\nThat's not what I said: I said allow all the values for each variable.\nAnd document it. Why shouldn't we let people turn off error logging\nif they want to?\n\n> Seems a new GUC variable seems like overkill to me, and I think we need\n> to allow it to be raised. I think we can make server_min_messages\n> PGC_SUSET so only the admin can change it.\n\nOkay, and log_min_error_statement too.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 11:28:14 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> >> I do not see a good reason why \"fatal\" and \"off\" shouldn't be allowed\n> >> values for all three message variables. 
If we just did that, then you'd\n> >> be back to sharable code.\n> \n> > I recommended he only allow valid values for each variable. I think if\n> > we say we only support values X,Y,Z we had better throw an error if it\n> > anything else.\n> \n> That's not what I said: I said allow all the values for each variable.\n> And document it. Why shouldn't we let people turn off error logging\n> if they want to?\n\nBut the client side doesn't make any sense to support FATAL. Am I\nmissing something?\n\n> > Seems a new GUC variable seems like overkill to me, and I think we need\n> > to allow it to be raised. I think we can make server_min_messages\n> > PGC_SUSET so only the admin can change it.\n> \n> Okay, and log_min_error_statement too.\n\nYes.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 11:30:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> But the client side doesn't make any sense to support FATAL. Am I\n> missing something?\n\nHm. I suppose a client setting above ERROR might break some application\nprograms that expect either ERROR or a command-complete response ...\nbut do we need to go out of our way to prohibit people from choosing\nsettings that break their clients? If so, I've got a long list of\nthings we'd better worry about ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 11:40:35 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > But the client side doesn't make any sense to support FATAL. Am I\n> > missing something?\n> \n> Hm. 
I suppose a client setting above ERROR might break some application\n> programs that expect either ERROR or a command-complete response ...\n> but do we need to go out of our way to prohibit people from choosing\n> settings that break their clients? If so, I've got a long list of\n> things we'd better worry about ...\n\nclient_min_messages currently shows:\n\n #client_min_messages = notice # Values, in order of decreasing detail:\n # debug5, debug4, debug3, debug2, debug1,\n # log, info, notice, warning, error\n\nso it is only fatal and panic that are not allowed for clients. If you\nwant to allow them, that is fine with me. It would make it more\nconsistent, but of course I don't think a fatal or panic ever makes it\nto the client side.\n\nYour point that there should be a way of eliminating even ERROR coming\nto a client seems valid to me. Let's make the change.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 11:51:28 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> of course I don't think a fatal or panic ever makes it\n> to the client side.\n\nOf course it does. Try entering a bad password as a trivial example.\nWe punt *after* we send the elog.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 12:16:09 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > of course I don't think a fatal or panic ever makes it\n> > to the client side.\n> \n> Of course it does. Try entering a bad password as a trivial example.\n> We punt *after* we send the elog.\n\nOh, that's good. 
I guess it was PANIC I assumed never made it to the\nclient. Well, anyway, client should support the same values as server.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 12:43:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" }, { "msg_contents": "On Thu, 5 Sep 2002, Tom Lane wrote:\n\n> Gavin Sherry <swm@linuxworld.com.au> writes:\n> > Since the flawed code is now in beta, it will need to be fixed. Do people\n> > like the above solution or should I just revert to having a separate\n> > function for each GUC variable affected?\n> \n> I do not see a good reason why \"fatal\" and \"off\" shouldn't be allowed\n> values for all three message variables. If we just did that, then you'd\n> be back to sharable code.\n\nThis was one of my other suggestions: does it matter if people can set\nclient_min_messages to, say, PANIC -- since they won't get it anyway.\n\n> BTW, is it a good idea for server_min_messages and\n> log_min_error_statement to be PGC_USERSET? I could see an argument that\n> they should be PGC_SIGHUP, ie, settable only by the admin. As it is,\n> any user can hide his activity from the logs. OTOH, in the past we've\n> allowed anyone to change the debug level, and there haven't been\n> complaints about it.\n> \n> There's some value in being able to kick the log level up a notch for\n> a specific session, but knocking it down from the admin's default could\n> be considered a bad thing. I suppose we could invent a PGC_SIGHUP\n> \"min_server_min_messages\" variable that sets a minimum value below which\n> the user can't set server_min_messages. Does that seem like a good\n> idea, or overkill?\n\nI think it would be important to implement it this way. I'm surprised this\nhasn't come up before. 
Still, it'd be a 7.4 item.\n\n> \n> A compromise position would be to make these two variables PG_SUSET,\n> ie settable per-session but only if you're superuser.\n\nSounds like a reasonable compromise. I cannot think of a reason why\npeople would be setting server_min_messages per session in\nproduction. Perhaps this should be changed for 7.3?\n\n> \n> \t\t\tregards, tom lane\n> \n\nGavin\n\n", "msg_date": "Fri, 6 Sep 2002 10:17:37 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: I am done " }, { "msg_contents": "Gavin Sherry wrote:\n> On Thu, 5 Sep 2002, Tom Lane wrote:\n> \n> > Gavin Sherry <swm@linuxworld.com.au> writes:\n> > > Since the flawed code is now in beta, it will need to be fixed. Do people\n> > > like the above solution or should I just revert to having a separate\n> > > function for each GUC variable affected?\n> > \n> > I do not see a good reason why \"fatal\" and \"off\" shouldn't be allowed\n> > values for all three message variables. If we just did that, then you'd\n> > be back to sharable code.\n> \n> This was one of my other suggestions: does it matter if people can set\n> client_min_messages to, say, PANIC -- since they won't get it anyway.\n\n\nSeems it is OK and is equivalent to off.\n\n\n> > BTW, is it a good idea for server_min_messages and\n> > log_min_error_statement to be PGC_USERSET? I could see an argument that\n> > they should be PGC_SIGHUP, ie, settable only by the admin. As it is,\n> > any user can hide his activity from the logs. OTOH, in the past we've\n> > allowed anyone to change the debug level, and there haven't been\n> > complaints about it.\n> > \n> > There's some value in being able to kick the log level up a notch for\n> > a specific session, but knocking it down from the admin's default could\n> > be considered a bad thing. 
I suppose we could invent a PGC_SIGHUP\n> > \"min_server_min_messages\" variable that sets a minimum value below which\n> > the user can't set server_min_messages. Does that seem like a good\n> > idea, or overkill?\n> \n> I think it would be important to implement it this way. I'm surprised this\n> hasn't come up before. Still, it'd be a 7.4 item.\n\nI think restricting it to super-user is better.\n\n> > A compromise position would be to make these two variables PG_SUSET,\n> > ie settable per-session but only if you're superuser.\n> \n> Sounds like a reasonable compromise. I cannot think of a reason why\n> people would be setting server_min_messages per session in\n> production. Perhaps this should be changed for 7.3?\n\nI can imagine doing it so you can log something and look at it later.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Thu, 5 Sep 2002 20:48:04 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: I am done" } ]
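The caller-bitmask dispatch that Gavin sketches in C earlier in this thread can be restated compactly. The following Python sketch is illustrative only: the `MSGLVL_*` flag names follow his C sketch, but the accepted-value table is an assumption for demonstration, not the code that was committed to PostgreSQL. The idea is one shared validator, with each GUC variable passing a flag saying which values it accepts:

```python
# Illustrative model of the shared-validator idea from the thread above.
# The MSGLVL_* flags mirror Gavin's C sketch; the ACCEPTED table is assumed.
MSGLVL_MIN_ERR_STMT = 1 << 0   # log_min_error_statement
MSGLVL_MIN_CLI_MSGS = 1 << 1   # client_min_messages
MSGLVL_MIN_SVR_MSGS = 1 << 2   # server_min_messages

ANY = MSGLVL_MIN_ERR_STMT | MSGLVL_MIN_CLI_MSGS | MSGLVL_MIN_SVR_MSGS

ACCEPTED = {
    "debug":   ANY,
    "info":    ANY,
    "notice":  ANY,
    "warning": ANY,
    "error":   ANY,
    "fatal":   MSGLVL_MIN_ERR_STMT | MSGLVL_MIN_SVR_MSGS,
    "off":     MSGLVL_MIN_ERR_STMT,
}

def assign_msglvl(newval, caller):
    """Accept newval for this caller, or raise ValueError (one shared routine)."""
    if not ACCEPTED.get(newval.lower(), 0) & caller:
        raise ValueError("invalid message level: %r" % newval)
    return newval.lower()
```

The alternative Tom argues for, allowing "fatal" and "off" for all three variables, removes the per-caller table entirely, which is what lets the code stay shared.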
[ { "msg_contents": "Whatever happened to Rod's work on the BETWEEN command? I remember he got to\nthe stage of realising a lot of executor changes had to be made...?\n\nChris\n\n", "msg_date": "Mon, 2 Sep 2002 14:36:06 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "BETWEEN SYMMETRIC" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> Whatever happened to Rod's work on the BETWEEN command? I remember he got to\n> the stage of realising a lot of executor changes had to be made...?\n\nThe problem was that the optimizer wouldn't recognize it as an\noptimizable/indexable case so it would be worse than what we have now,\nand I don't think we wanted or could continue.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 02:41:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC" }, { "msg_contents": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au> writes:\n> Whatever happened to Rod's work on the BETWEEN command?\n\nIt got applied, some people reported failures, it was hastily backed\nout, and no one seems to have followed up on it.\n\nAt this point I'd say it's not happening for 7.3 ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 02:44:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC " }, { "msg_contents": "On Mon, 2002-09-02 at 02:36, Christopher Kings-Lynne wrote:\n> Whatever happened to Rod's work on the BETWEEN command?
I remember he got to\n> the stage of realising a lot of executor changes had to be made...?\n\nI've not had time to implement the optimizer portion.\n\n", "msg_date": "02 Sep 2002 08:46:44 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: BETWEEN SYMMETRIC" } ]
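For context on the feature this thread discusses: plain SQL `BETWEEN` is directional, while `BETWEEN SYMMETRIC` (the SQL-standard variant named in the subject line) also matches when the bounds are given in reverse order. A rough Python illustration of the two predicates, not PostgreSQL source:

```python
# Semantics of the two SQL predicates, illustrated outside SQL.
def between(x, a, b):
    # plain BETWEEN: the range is empty when a > b
    return a <= x <= b

def between_symmetric(x, a, b):
    # BETWEEN SYMMETRIC: the bounds are sorted first
    lo, hi = (a, b) if a <= b else (b, a)
    return lo <= x <= hi
```

The optimizer work mentioned above is what would let an indexable pair of comparisons be derived from such an expression instead of evaluating it row by row.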
[ { "msg_contents": "Hi,\n\nI've tried to evaluate the compatibility of the current CVS version with our current 7.2.1 version, and noticed that 7.2.x dumps are not compatible with 7.3, not even in cleartext format. \nTimestamps in 7.2.x are in this format: 2002-07-04 15:19:11.363562+02\n7.3 expects a timestamp by default in this format: 2002-09-02 08:51:27,223455+02\n\nI found no way to import this database, experimenting with \"set datestyle ...\" did not help. Will 7.3 really be that incompatible regarding timestamps? Or is this simply a problem with my current locale?\n\nBest regards,\n\tMario Weilguni\n\n\n\n", "msg_date": "Mon, 2 Sep 2002 08:55:08 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": true, "msg_subject": "pg_dump compatibility between 7.3 and 7.2?" }, { "msg_contents": "Looks like a locale problem to me, Mario...\n\nThat comma is a non-USA (or oz, uk, canada) decimal separator...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Mario Weilguni\n> Sent: Monday, 2 September 2002 2:55 PM\n> To: Hackers\n> Subject: [HACKERS] pg_dump compatibility between 7.3 and 7.2?\n> \n> \n> Hi,\n> \n> I've tried to evaluate the compatibility of the current CVS \n> version with our current 7.2.1 version, and noticed that 7.2.x \n> dumps are not compatible with 7.3, not even in cleartext format. \n> Timestamps in 7.2.x are in this format: 2002-07-04 15:19:11.363562+02\n> 7.3 expects a timestamp by default in this format: 2002-09-02 \n> 08:51:27,223455+02\n> \n> I found no way to import this database, experimenting with \"set \n> datestyle ...\" did not help. Will 7.3 really be that incompatible \n> regarding timestamps? 
Or is this simply a problem with my current locale?\n> \n> Best regards,\n> \tMario Weilguni\n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n", "msg_date": "Mon, 2 Sep 2002 14:57:30 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: pg_dump compatibility between 7.3 and 7.2?" }, { "msg_contents": "Mario Weilguni writes:\n\n> Timestamps in 7.2.x are in this format: 2002-07-04 15:19:11.363562+02\n> 7.3 expects a timestamp by default in this format: 2002-09-02 08:51:27,223455+02\n\nIf you're not running the really latest 7.3 tip, update and try again.\nSomething related to this was fixed recently.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 3 Sep 2002 00:26:31 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: pg_dump compatibility between 7.3 and 7.2?" }, { "msg_contents": "On Tuesday, 3 September 2002 00:26, Peter Eisentraut wrote:\n> Mario Weilguni writes:\n> > Timestamps in 7.2.x are in this format: 2002-07-04 15:19:11.363562+02\n> > 7.3 expects a timestamp by default in this format: 2002-09-02\n> > 08:51:27,223455+02\n>\n> If you're not running the really latest 7.3 tip, update and try again.\n> Something related to this was fixed recently.\n\nIt seems to work now, thanks. My last update was yesterday morning, so I thought it was fresh enough.\n\nThanks!\n", "msg_date": "Tue, 3 Sep 2002 09:00:22 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": true, "msg_subject": "Re: pg_dump compatibility between 7.3 and 7.2?" } ]
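The incompatibility in this thread comes down to the fractional-seconds separator: the 7.2 dump prints a decimal point (`15:19:11.363562`) while the locale-affected 7.3 build of the time produced and expected a decimal comma (`08:51:27,223455`). As a standalone illustration only (the real fix, per Peter's reply, went into the server itself), the two forms can be made interchangeable by normalizing the separator before parsing:

```python
import re

# Illustration: rewrite a locale-specific decimal comma in the seconds
# field of a timestamp to a dot. Not the actual 7.3 fix.
def normalize_fraction(ts):
    return re.sub(r'(:\d{2}),(\d+)', r'\1.\2', ts)
```

A timestamp such as `2002-09-02 08:51:27,223455+02` becomes the dot form, while timestamps already using a dot pass through unchanged.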
[ { "msg_contents": "I see:\n\n/usr/jakarta-ant-1.4.1/bin/ant -buildfile ./build.xml all \\\n -Dmajor=7 -Dminor=3 -Dfullversion=7.3devel -Ddef_pgport=5432 -Denable_debug=yes\nBuildfile: ./build.xml\n\nBUILD FAILED\n\n/usr/local/src/pgsql/current/pgsql/src/interfaces/jdbc/./build.xml:51: Class org.apache.tools.ant.taskdefs.condition.And doesn't support the nested \"isset\" element.\n\nIs my ant 1.4.1 too old to build PostgreSQL?\n--\nTatsuo Ishii\n", "msg_date": "Mon, 02 Sep 2002 16:30:56 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "current build fail" }, { "msg_contents": "Yeah, it seems to require Ant 1.5 now. This should probably go in the\nconfigure script, ie a test for Ant 1.5 if java enabled?\n\nTom.\n\nOn Mon, 2002-09-02 at 16:30, Tatsuo Ishii wrote:\n> I see:\n> \n> /usr/jakarta-ant-1.4.1/bin/ant -buildfile ./build.xml all \\\n> -Dmajor=7 -Dminor=3 -Dfullversion=7.3devel -Ddef_pgport=5432 -Denable_debug=yes\n> Buildfile: ./build.xml\n> \n> BUILD FAILED\n> \n> /usr/local/src/pgsql/current/pgsql/src/interfaces/jdbc/./build.xml:51: Class org.apache.tools.ant.taskdefs.condition.And doesn't support the nested \"isset\" element.\n> \n> Is my ant 1.4.1 too old to build PostgreSQL?\n> --\n> Tatsuo Ishii\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n-- \nThomas O'Dowd. 
- Nooping - http://nooper.com\ntom@nooper.com - Testing - http://nooper.co.jp/labs\n\n", "msg_date": "02 Sep 2002 16:49:56 +0900", "msg_from": "Thomas O'Dowd <tom@nooper.com>", "msg_from_op": false, "msg_subject": "Re: current build fail" }, { "msg_contents": "> Yeah, it seems to require Ant 1.5 now.\n\nThanks I got Ant 1.5 and now PostgreSQL builds fine.\n\n> This should probably go in the\n> configure script, ie a test for Ant 1.5 if java enabled?\n\nSounds nice idea.\n--\nTatsuo Ishii\n", "msg_date": "Mon, 02 Sep 2002 18:06:10 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: current build fail" }, { "msg_contents": "Tatsuo Ishii wrote:\n> > Yeah, it seems to require Ant 1.5 now.\n> \n> Thanks I got Ant 1.5 and now PostgreSQL builds fine.\n> \n> > This should probably go in the\n> > configure script, ie a test for Ant 1.5 if java enabled?\n> \n> Sounds nice idea.\n\nYes, I am going to add a test for >= Ant 1.5 to configure.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 09:43:59 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: current build fail" }, { "msg_contents": "Patch applied to test for Ant >= 1.5. 
Autoconf run.\n\n---------------------------------------------------------------------------\n\nBruce Momjian wrote:\n> Tatsuo Ishii wrote:\n> > > Yeah, it seems to require Ant 1.5 now.\n> > \n> > Thanks I got Ant 1.5 and now PostgreSQL builds fine.\n> > \n> > > This should probably go in the\n> > > configure script, ie a test for Ant 1.5 if java enabled?\n> > \n> > Sounds nice idea.\n> \n> Yes, I am going to add a test for >= Ant 1.5 to configure.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n\nIndex: configure.in\n===================================================================\nRCS file: /cvsroot/pgsql-server/configure.in,v\nretrieving revision 1.200\ndiff -c -c -r1.200 configure.in\n*** configure.in\t30 Aug 2002 17:14:30 -0000\t1.200\n--- configure.in\t2 Sep 2002 16:09:11 -0000\n***************\n*** 381,386 ****\n--- 381,389 ----\n PGAC_PATH_ANT\n if test -z \"$ANT\"; then\n AC_MSG_ERROR([Ant is required to build Java components])\n+ fi\n+ if \"$ANT\" -version | sed q | egrep -v ' 1\\.[[5-9]]| [[2-9]]\\.' >/dev/null ; then\n+ AC_MSG_ERROR([Ant version >= 1.5 is required to build Java components])\n fi],\n [AC_MSG_RESULT(no)])\n AC_SUBST(with_java)\n***************\n*** 835,841 ****\n HPUXMATHLIB=\"\"\n case $host_cpu in\n hppa1.1) \n!
\tif [[ -r /lib/pa1.1/libm.a ]] ; then\n \t HPUXMATHLIB=\"-L /lib/pa1.1 -lm\"\n \tfi ;;\n esac\n--- 838,844 ----\n HPUXMATHLIB=\"\"\n case $host_cpu in\n hppa1.1) \n! \tif test -r /lib/pa1.1/libm.a ; then\n \t HPUXMATHLIB=\"-L /lib/pa1.1 -lm\"\n \tfi ;;\n esac\n***************\n*** 931,937 ****\n \n dnl If we need to use \"long long int\", figure out whether nnnLL notation works.\n \n! if [[ x\"$HAVE_LONG_LONG_INT_64\" = xyes ]] ; then\n AC_TRY_COMPILE([\n #define INT64CONST(x) x##LL\n long long int foo = INT64CONST(0x1234567890123456);\n--- 934,940 ----\n \n dnl If we need to use \"long long int\", figure out whether nnnLL notation works.\n \n! if test x\"$HAVE_LONG_LONG_INT_64\" = xyes ; then\n AC_TRY_COMPILE([\n #define INT64CONST(x) x##LL\n long long int foo = INT64CONST(0x1234567890123456);", "msg_date": "Mon, 2 Sep 2002 12:13:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: current build fail" } ]
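The version gate added to configure.in in this thread boils down to one regular expression applied to the first line of `ant -version` output: accept ` 1.5` through ` 1.9`, or any ` 2.`-and-up major version, and reject everything else (the `[[...]]` doubling in the patch is Autoconf/M4 quoting for plain `[...]`). A standalone re-statement of the same check, useful for trying sample banners; this is an illustration only, not part of the build:

```python
import re

# Same pattern as the configure.in test, minus the m4 quote doubling.
_ANT_OK = re.compile(r' 1\.[5-9]| [2-9]\.')

def ant_is_new_enough(banner):
    """True if an `ant -version` banner line reports Ant >= 1.5."""
    return bool(_ANT_OK.search(banner))
```

One caveat worth noting: the pattern keys off single digits, so a hypothetical `1.10` release would be rejected; for 2002-era Ant that was not a concern.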
[ { "msg_contents": "Hi,\n\nI'm planning to experiment with a new index access method. More\nspecifically I want to plug an external index program into postgres as an\nindex.\n\nDoes anybody have some hints on how I can find some info on where to\nbegin? I have looked a little bit at the GIST project, and in the\nprogrammer's guide, but they don't go into many details about\nimplementation. I would like to get an overview on how much/what I need to\nimplement.\n\nRegards,\n\nHåkon\n", "msg_date": "Mon, 02 Sep 2002 14:34:18 +0200", "msg_from": "Håkon Hagen Clausen <hakonhc@ifi.uio.no>", "msg_from_op": true, "msg_subject": "new index access methods" } ]
[ { "msg_contents": "Hi,\n\nI'm playing around with the CVS version and noticed a change from 7.2 in \nregards to serial datatypes - they no longer automatically have an \nindex. Is this a deliberate thing? I did a search in the archives but \ndidn't come across mention of the change. A pointer to discussion on \nthis would be great.\n\nCREATE TABLE author (\nauthorid SERIAL,\nfirstname VARCHAR(255),\nsurname VARCHAR(255),\ndateofbirth DATE,\ngender CHAR(1)\n);\n\nNOTICE: CREATE TABLE will create implicit sequence \n'author_authorid_seq' for SERIAL column 'author.authorid'\nCREATE TABLE\n\nlibrary=# \\d author\n Table \"public.author\"\n Column | Type | Modifiers\n-------------+------------------------+--------------------------------------------------------------\n authorid | integer | not null default \nnextval('public.author_authorid_seq'::text)\n firstname | character varying(255) |\n surname | character varying(255) |\n dateofbirth | date |\n gender | character(1) |\n\nlibrary=# \\di\nNo relations found.\n\n", "msg_date": "Tue, 03 Sep 2002 00:00:43 +1000", "msg_from": "Chris <pghackers@designmagick.com>", "msg_from_op": true, "msg_subject": "serial datatype changes for v7.3?" }, { "msg_contents": "Chris <pghackers@designmagick.com> writes:\n> I'm playing around with the CVS version and noticed a change from 7.2 in \n> regards to serial datatypes - they no longer automatically have an \n> index. Is this a deliberate thing?\n\nYes. The release notes mention:\n\nSERIAL no longer implies UNIQUE; specify explicitly if index is wanted\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 10:56:46 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: serial datatype changes for v7.3? 
" }, { "msg_contents": "Tom Lane wrote:\n> Chris <pghackers@designmagick.com> writes:\n> > I'm playing around with the CVS version and noticed a change from 7.2 in \n> > regards to serial datatypes - they no longer automatically have an \n> > index. Is this a deliberate thing?\n> \n> Yes. The release notes mention:\n> \n> SERIAL no longer implies UNIQUE; specify explicitly if index is wanted\n\nWhat was the logic for this change?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 11:23:13 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: serial datatype changes for v7.3?" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom Lane wrote:\n>> Yes. The release notes mention:\n>> SERIAL no longer implies UNIQUE; specify explicitly if index is wanted\n\n> What was the logic for this change?\n\nSee the thread back around 17 Aug:\nhttp://archives.postgresql.org/pgsql-hackers/2002-08/msg01336.php\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 12:12:17 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: serial datatype changes for v7.3? " }, { "msg_contents": "\nOK, we just need to make that clear in the release notes. I had\nforgotten about the change.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Tom Lane wrote:\n> >> Yes. 
The release notes mention:\n> >> SERIAL no longer implies UNIQUE; specify explicitly if index is wanted\n> \n> > What was the logic for this change?\n> \n> See the thread back around 17 Aug:\n> http://archives.postgresql.org/pgsql-hackers/2002-08/msg01336.php\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 12:34:38 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: serial datatype changes for v7.3?" } ]
[ { "msg_contents": "Tom, when you loosened the restriction on reindexing toast tables, did\nyou continue to restrict indexing of system tables to the superuser? Is\nthat required?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 16:07:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "reindex of toast tables" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Tom, when you loosened the restriction on reindexing toast tables, did\n> you continue to restrict indexing of system tables to the superuser? Is\n> that required?\n\nHm. Now that I look at it, the forms REINDEX TABLE and REINDEX INDEX\ndon't seem to have any permissions checks at all :-(. One would think\nthat they need to require ownership rights on the target table.\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 16:29:45 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: reindex of toast tables " }, { "msg_contents": "Tom Lane dijo: \n\n> Hm. Now that I look at it, the forms REINDEX TABLE and REINDEX INDEX\n> don't seem to have any permissions checks at all :-(. One would think\n> that they need to require ownership rights on the target table.\n\nThey are done in tcop/utility.c AFAICS.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Cuando ma�ana llegue pelearemos segun lo que ma�ana exija\" (Mowgli)\n\n", "msg_date": "Mon, 2 Sep 2002 16:37:55 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: reindex of toast tables " }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> Tom Lane dijo: \n>> Hm. 
Now that I look at it, the forms REINDEX TABLE and REINDEX INDEX\n>> don't seem to have any permissions checks at all :-(. One would think\n>> that they need to require ownership rights on the target table.\n\n> They are done in tcop/utility.c AFAICS.\n\nOh, right. I was just looking at the catalog/index routines. Nevermind...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 19:50:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: reindex of toast tables " } ]
[ { "msg_contents": "Now that the \"multibyte\"-based character set recoding is a fixed part of\nthe feature set, is there any need to keep the \"Cyrillic\" recode support?\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 3 Sep 2002 00:25:54 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Future of --enable-recode?" }, { "msg_contents": "Peter Eisentraut <peter_e@gmx.net> writes:\n> Now that the \"multibyte\"-based character set recoding is a fixed part of\n> the feature set, is there any need to keep the \"Cyrillic\" recode support?\n\nIt's probably not worth maintaining anymore ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Mon, 02 Sep 2002 20:00:40 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Future of --enable-recode? " }, { "msg_contents": "Tom Lane wrote:\n> Peter Eisentraut <peter_e@gmx.net> writes:\n> > Now that the \"multibyte\"-based character set recoding is a fixed part of\n> > the feature set, is there any need to keep the \"Cyrillic\" recode support?\n> \n> It's probably not worth maintaining anymore ...\n\nAdded to TODO:\n\n\t> * Remove Cyrillic recode support\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 18:09:47 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Future of --enable-recode?" } ]
[ { "msg_contents": "Gettext cannot handle compile-time string concatenation with macros. This\nis a made-up example:\n\n printf(gettext(\"now at file position \" INT64_FORMAT), (int64) offset);\n\nAt the time when the message catalogs are extracted, INT64_FORMAT is\nunknown.\n\nThe solution in the Gettext manual is to rewrite the code like this:\n\n char buf[100];\n sprintf(INT64_FORMAT, (int64) offset);\n printf(gettext(\"now at file position %s\"), buf);\n\nSince the only affected cases are a few low-probability error messages in\nthe sequence code and in pg_dump this isn't an aesthetic disaster, so I\nplan to fix it along those lines.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 3 Sep 2002 00:30:15 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": true, "msg_subject": "Gettext and INT64_FORMAT" } ]
[ { "msg_contents": "\nJust a quick one before I package up the wrong thing ... where should I be\npulling docs from? :)\n\n\n", "msg_date": "Mon, 2 Sep 2002 20:27:44 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Docs for v7.3 ..." }, { "msg_contents": "Marc G. Fournier writes:\n\n> Just a quick one before I package up the wrong thing ... where should I be\n> pulling docs from? :)\n\ncd doc/src\ngmake postgres.tar.gz\n\nYou can take the man pages from an old release until we figure them out.\n\n(Any news on repackaging 7.2.2?)\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Tue, 3 Sep 2002 18:56:44 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Docs for v7.3 ..." }, { "msg_contents": "Peter Eisentraut wrote:\n> Marc G. Fournier writes:\n> \n> > Just a quick one before I package up the wrong thing ... where should I be\n> > pulling docs from? :)\n> \n> cd doc/src\n> gmake postgres.tar.gz\n> \n> You can take the man pages from an old release until we figure them out.\n\nWoh. Better to ship no manual pages rather than ship the ones from\n7.2.2.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 3 Sep 2002 15:05:45 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Docs for v7.3 ..." } ]
[ { "msg_contents": "Looks like we got an honourable mention *sigh*:\n\nServer Appliance: SnapGear for Lite/Lite+ SOHO Firewall/VPN Client\nHonorable Mention: Sun Microsystems for Cobalt Qube\nSecurity Tool: GPG\nWeb Server: IBM for xSeries\nHonorable Mention: Sun Microsystems for Cobalt RaQ XRT\nEnterprise Application Server: Zope\nTechnical Workstation: HP for x4000\nWeb Client (Tie): Mozilla and Galeon\nHonorable Mention: Konqueror\nGraphics Application: The GIMP\nConsumer Software: KDE 3.0\nCommunication Tool: Ximian for Evolution\nDevelopment Tool: Emacs\nHonorable Mentions: Borland for Kylix, and Kdevelop\nDatabase: MySQL Honorable Mention: PostgresSQL\nBackup Software: Sistina Software for Logical Volume Manager\nOffice Application: Sun Microsystems for OpenOffice 1.0\nMobile Device: Sharp for Zaurus\nTraining and Certification Program: Linux Professional Institute\nGame: Sunspire Studios for TuxRacer\nHonorable Mention: Pysol\nTechnical Book: Linux Device Drivers 2nd Edition by Alessandro Rubini and\nJonathan Corbet (O'Reilly & Associates)\nNon-Technical Book: The Future of Ideas: The Fate of the Commons in a\nConnected World by Lawrence Lessig (Random House)\nWeb Site: Google\nProduct of the Year: Sharp for Zaurus\n\nhttp://www.linuxjournal.com/edchoice/\n\nChris\n\n", "msg_date": "Tue, 3 Sep 2002 09:49:38 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Linux Journal Editors Choice Awards" }, { "msg_contents": "On Tue, 3 Sep 2002, Christopher Kings-Lynne wrote:\n\n> Database: MySQL Honorable Mention: PostgresSQL\n\nNothing wrong with that. From your list it seemed that in the categories\nwhere there were competing open source and open source/commercial backed\nsoftware then the latter seemed to win over. 
\n\nThis makes sense if their judging criteria included things like\n'commercial support contracts', 'service level agreements', 'warranties',\netc.\n\nGavin\n\n", "msg_date": "Tue, 3 Sep 2002 12:12:34 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Linux Journal Editors Choice Awards" }, { "msg_contents": "> On Tue, 3 Sep 2002, Christopher Kings-Lynne wrote:\n>\n> > Database: MySQL Honorable Mention: PostgresSQL\n>\n> Nothing wrong with that. From your list it seemed that in the categories\n> where there were competing open source and open source/commercial backed\n> software then the latter seemed to win over.\n>\n> This makes sense if their judging criteria included things like\n> 'commercial support contracts', 'service level agreements', 'warranties',\n> etc.\n\nI think the whole thing's pretty biased anyway. I mean the open source\ndatabase market now includes SapDB for crying out loud - how can MySQL (and\neven postgres really) compete with that? And what about Firebird? I think\nthe nominations were put forward by a bunch of people who've only ever heard\nof MySQL and PostgreSQL...\n\n(Not that I'd switch to SapDB ;) )\n\nChris\n\n", "msg_date": "Tue, 3 Sep 2002 10:17:57 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Linux Journal Editors Choice Awards" }, { "msg_contents": "Christopher Kings-Lynne wrote:\n> > On Tue, 3 Sep 2002, Christopher Kings-Lynne wrote:\n> >\n> > > Database: MySQL Honorable Mention: PostgresSQL\n> >\n> > Nothing wrong with that. 
From your list it seemed that in the categories\n> > where there were competing open source and open source/commercial backed\n> > software then the latter seemed to win over.\n> >\n> > This makes sense if their judging criteria included things like\n> > 'commercial support contracts', 'service level agreements', 'warranties',\n> > etc.\n> \n> I think the whole thing's pretty biased anyway. I mean the open source\n> database market now includes SapDB for crying out loud - how can MySQL (and\n> even postgres really) compete with that? And what about Firebird? I think\n> the nominations were put forward by a bunch of people who've only ever heard\n> of MySQL and PostgreSQL...\n> \n> (Not that I'd switch to SapDB ;) )\n\nNo question there is bias. 50% of the awards racket is just to generate\ntraffic of people who want to see who you picked. Red Hat DB won for\n\"Productivity Application\" last year at LinuxWorld. I think they\njust applied for everything.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 22:23:54 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Linux Journal Editors Choice Awards" }, { "msg_contents": "On Tue, 3 Sep 2002, Christopher Kings-Lynne wrote:\n\n> database market now includes SapDB for crying out loud - how can MySQL (and\n> even postgres really) compete with that? And what about Firebird? I think\n\nAnd berkeley db. *Easily* the most widely used open source database and\nthe most profitable. 
:)\n\nGavin\n\n\n", "msg_date": "Tue, 3 Sep 2002 12:28:22 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Linux Journal Editors Choice Awards" }, { "msg_contents": "* Christopher Kings-Lynne <chriskl@familyhealth.com.au> [2002-09-03 10:17 +0800]:\n> > On Tue, 3 Sep 2002, Christopher Kings-Lynne wrote:\n> >\n> > > Database: MySQL Honorable Mention: PostgresSQL\n> >\n> > Nothing wrong with that. From your list it seemed that in the categories\n> > where there were competing open source and open source/commercial backed\n> > software then the latter seemed to win over.\n> >\n> > This makes sense if their judging criteria included things like\n> > 'commercial support contracts', 'service level agreements', 'warranties',\n> > etc.\n> \n> I think the whole thing's pretty biased anyway. I mean the open source\n> database market now includes SapDB for crying out loud - how can MySQL (and\n> even postgres really) compete with that?\n\nPostgreSQL code and build process is maintainable. Besides, I don't\nthink that PostgreSQL is no match for SAPdb, as PostgreSQL will have a\nnative win32 port, replication, schemas and prepared statements in the\nforseeable future. What else is missing? Cross-database queries? I\nsuspect that at the current pace, PostgreSQL will match SAPdb's features\nreasonably soon.\n\nBtw. SAPdb has a win32 port, but still doesn't run on most Unixen (not\neven on FreeBSD), which brings me back the the \"maintainable code and\nbuild process\" point ;-)\n\n> And what about Firebird?\n\nYou can get commercial support for it, too. 
Just as for PostgreSQL and\nSAPdb.\n\n-- Gerhard\n", "msg_date": "Tue, 3 Sep 2002 04:28:55 +0200", "msg_from": "Gerhard =?iso-8859-1?Q?H=E4ring?= <haering_postgresql@gmx.de>", "msg_from_op": false, "msg_subject": "Re: Linux Journal Editors Choice Awards" }, { "msg_contents": "Actually, Linux Journal (and their editors) are fans of PostgreSQL.\n\nThis year, MySQL may actually have clued in to transactions and a few\nother big database features. I don't know that they actually *have*\nthese features polished up, but LJ is giving them credit for trying...\n\n - Thomas\n", "msg_date": "Mon, 02 Sep 2002 19:46:27 -0700", "msg_from": "Thomas Lockhart <lockhart@fourpalms.org>", "msg_from_op": false, "msg_subject": "Re: Linux Journal Editors Choice Awards" }, { "msg_contents": "Thomas Lockhart wrote:\n> Actually, Linux Journal (and their editors) are fans of PostgreSQL.\n> \n> This year, MySQL may actually have clued in to transactions and a few\n> other big database features. I don't know that they actually *have*\n> these features polished up, but LJ is giving them credit for trying...\n\nYea, but that assume we are sitting here doing nothing. We are\nadvancing at light speed compared to the other open source databases. I\ndon't think anyone disputes that.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 22:48:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Linux Journal Editors Choice Awards" }, { "msg_contents": "> Actually, Linux Journal (and their editors) are fans of PostgreSQL.\n>\n> This year, MySQL may actually have clued in to transactions and a few\n> other big database features. 
I don't know that they actually *have*\n> these features polished up, but LJ is giving them credit for trying...\n\nIt still disturbs me that you have to use a non-standard table type to\nsupport transactions, plus the hijinks that will occur when you attempt to\nperform a transaction that involves changes to transactional and\nnon-transactional tables...\n\n\"If you do a ROLLBACK when you have updated a non-transactional table you\nwill get an error (ER_WARNING_NOT_COMPLETE_ROLLBACK) as a warning. All\ntransactional safe tables will be restored but any non-transactional table\nwill not change.\"\n\nChris\n\n", "msg_date": "Tue, 3 Sep 2002 11:09:17 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Re: Linux Journal Editors Choice Awards" }, { "msg_contents": "On Tuesday 3 September 2002 04:28, Gerhard Häring wrote:\n> PostgreSQL will have a\n> native win32 port,\n\nJust out of interest, what is the status of the Windows port?\nBest regards, Jean-Michel\n", "msg_date": "Tue, 3 Sep 2002 08:16:01 +0200", "msg_from": "Jean-Michel POURE <jm.poure@freesurf.fr>", "msg_from_op": false, "msg_subject": "Re: Linux Journal Editors Choice Awards" } ]
[ { "msg_contents": "Bruce suggested that we need a porting guide to help people look for\napplication and client-library code that will be broken by the changes\nin PG 7.3. Here is a first cut at documenting the issues.\nComments welcome --- in particular, what have I missed?\n\n\t\t\tregards, tom lane\n\n\nRevising client-side code for PG 7.3 system catalogs\n\n\nHere are some notes about things to look out for in updating client-side\ncode for PG 7.3. Almost anything that looks at the system catalogs is\nprobably going to need work, if you want it to behave reasonably when you\nstart using 7.3's new features such as schemas and DROP COLUMN.\n\nAs an example, consider the task of listing the names and datatypes for\na table named \"foo\". In the past you may have done this with a query like\n\n\tSELECT a.attname, format_type(a.atttypid, a.atttypmod)\n\tFROM pg_class c, pg_attribute a\n\tWHERE c.relname = 'foo'\n\t AND a.attnum > 0 AND a.attrelid = c.oid\n\tORDER BY a.attnum\n\n(this in fact is exactly what 7.2 psql uses to implement \"\\d foo\").\nThis query will work perfectly well in 7.2 or 7.1, but it's broken in half\na dozen ways for 7.3.\n\nThe biggest problem is that with the addition of schemas, there might be\nseveral tables named \"foo\" listed in pg_class. The old query will produce\na list of all of their attributes mixed together. For example, after\n\tcreate schema a;\n\tcreate schema b;\n\tcreate table a.foo (f1 int, f2 text);\n\tcreate table b.foo (f1 text, f2 numeric(10,1));\nwe'd get:\n\n attname | format_type\n---------+---------------\n f1 | text\n f1 | integer\n f2 | text\n f2 | numeric(10,1)\n(4 rows)\n\nNot good. We need to decide exactly which foo we want, and restrict the\nquery to find only that row in pg_class. 
There are a couple of ways to\ndo this, depending on how fancy you want to get.\n\nIf you just want to handle an unqualified table name \"foo\", and find the\nsame foo that would be found if you said \"select * from foo\", then one way\nto do it is to restrict the query to \"visible\" rows of pg_class:\n\n\tSELECT ...\n\tFROM ...\n\tWHERE c.relname = 'foo' AND pg_table_is_visible(c.oid)\n\t AND ...\n\npg_table_is_visible() will only return true for pg_class rows that are in\nyour current search path and are not hidden by similarly-named tables that\nare in earlier schemas of the search path.\n\nAn alternative way is to eliminate the explicit join to pg_class, and\ninstead use the new datatype \"regclass\" to look up the correct pg_class\nOID:\n\n\tSELECT ...\n\tFROM pg_attribute a\n\tWHERE a.attrelid = 'foo'::regclass\n\t AND a.attnum > 0\n\tORDER BY a.attnum\n\nThe regclass input converter looks up the given string as a table name\n(obeying schema visibility rules) and produces an OID constant that you\ncan compare directly to attrelid. This is more efficient than doing\nthe join, but there are a couple of things to note about it. One is\nthat if there isn't any \"foo\" table, you'll get an ERROR message from\nthe regclass input converter, whereas with the old query you got zero\nrows out and no error message. You might or might not prefer the old\nbehavior. 
Another limitation is that there isn't any way to adapt\nthis approach to search for a partially-specified table name;\nwhereas in the original query you could use a LIKE or regex pattern to\nmatch the table name, not only a simple equality test.\n\nNow, what if you'd like to be able to specify a qualified table name\n--- that is, show the attributes of \"a.foo\" or \"b.foo\" on demand?\nIt will not work to say\n\tWHERE c.relname = 'a.foo'\nso this is another way in which the original query fails for 7.3.\n\nIt turns out that the regclass method will work for this: if you say\n\tWHERE a.attrelid = 'a.foo'::regclass\nthen the right things happen.\n\nIf you don't want to use regclass then you're going to have to do an\nexplicit join against pg_namespace to find out which foo you want:\n\n\tSELECT a.attname, format_type(a.atttypid, a.atttypmod)\n\tFROM pg_namespace n, pg_class c, pg_attribute a\n\tWHERE n.nspname = 'a' AND c.relname = 'foo'\n\t AND c.relnamespace = n.oid\n\t AND a.attnum > 0 AND a.attrelid = c.oid\n\tORDER BY a.attnum\n\nThis is somewhat tedious because you have to be prepared to split the\nqualified name into its components on the client side. An advantage\nis that once you've done that, you can again consider using LIKE or\nregex patterns instead of simple name equality. This is essentially\nwhat 7.3 psql does to support wildcard patterns like \"\\dt a*.f*\".\n\nOkay, I think we've about beaten the issue of \"which foo do you want\"\nto death. But what other ways are there for the schema feature to cause\ntrouble in this apparently now well-fixed-up query?\n\nOne way is that the system catalogs themselves live in a schema, and\nif that schema isn't frontmost in your search path then your references\nto pg_class and so forth might find the wrong tables. (It's legal now\nfor ordinary users to create tables named like \"pg_xxx\", so long as they\ndon't try to put 'em in the system's schema.) 
This is probably not a\nbig issue for standalone applications, which can assume they know what the\nsearch path is. But in client-side libraries, psql, and similar code\nthat has to be able to deal with someone else's choice of search path,\nwe really ought to make the references to system catalogs be fully\nqualified:\n\n\tSELECT ...\n\tFROM pg_catalog.pg_namespace n, pg_catalog.pg_class c,\n\t pg_catalog.pg_attribute a\n\tWHERE ...\n\n(If you weren't using table aliases in your queries before, here is a good\nplace to start...)\n\nIn fact it's worse than that: function names, type names, etc also live in\nschemas. So you really ought to qualify references to built-in functions\nand types too:\n\n\tSELECT ..., pg_catalog.format_type(...) ...\n\n\tWHERE a.attrelid = 'foo'::pg_catalog.regclass ...\n\nThe truly paranoid might want to qualify their operator names too,\nthough I draw the line at this because of the horribly ugly syntax needed:\n\n\tWHERE a.attrelid OPERATOR(pg_catalog.=) 'foo'::pg_catalog.regclass\n\nThere's another, non-schema-related, gotcha in this apparently simple task\nof showing attribute names: in 7.3 you need to exclude dropped attributes\nfrom your display. ALTER TABLE DROP COLUMN doesn't remove the\npg_attribute entry for the dropped column, it only changes it to have\nattisdropped = true. So you will typically want to add\n\n\tWHERE NOT attisdropped\n\nwhen looking at pg_attribute.\n\nNote however that excluding dropped columns like this means there may be\ngaps in the series of attnum values you see. That doesn't bother this\nparticular query, but it could easily confuse applications that expect the\nattributes to have consecutive attnums 1 to N. 
For example, pg_dump makes\nan array of attributes and wants to index into the array with attnums.\nIt proved easier to make pg_dump include dropped attributes in its array\n(and filter them out later) than to change the indexing logic.\n\nIf you have client-side code that looks in pg_proc, pg_type, or\npg_operator then exactly the same sorts of schema-related issues appear:\nthe name alone is no longer unique, you have to think about identifying\nthe function, type, or operator within the schema you want.\n\nThat's about all the mileage I can get out of the \"show a table's\nattributes\" example, but there are still more schema-related trouble\nitems to check for.\n\nOne problem to look for is code that scans pg_class (or another system\ntable, but most commonly pg_class) to make a list of things to operate\non. For example, various people have built scripts to automatically\nreindex every table in a database. Such code will fail as soon as you\nstart using schemas, because it will find tables that aren't in your\ncurrent schema search path and try to operate on them. Depending on what\nyou want to do, you could change the code to emit fully qualified names\nof tables it wants to operate on (so it will work no matter which schema\nthey are in), or you could restrict the pg_class search to find only\nvisible tables.\n\nIf you want to use qualified names, the straightforward way to do it\nis to join against pg_namespace to get the schema name:\n\n\tSELECT nspname, relname FROM pg_class c, pg_namespace n\n\tWHERE relnamespace = n.oid AND relname LIKE 'foo%' AND ...\n\nA less obvious way is to use the regclass output converter:\n\n\tSELECT c.oid::regclass FROM pg_class c\n\tWHERE relname LIKE 'foo%' AND ...\n\nThis will give you back a table name that is qualified only if it needs to\nbe (i.e., the table is not visible in your search path), so you can use\nit directly in the command you want to give next. 
Another interesting\nproperty of the regclass converter is that it will double-quote the name\ncorrectly if necessary --- for example, you'll get \"TEST\" (with the\nquotes) if the table is named TEST. So you can splice the name directly\ninto a SQL command without any special pushups and be confident that it\nwill produce the right results.\n\nBTW, there are similar output converters for type, function, and operator\nnames, if you need them.\n\nAnother thing to look for is code that tries to exclude system tables by\nexcluding tablenames beginning with \"pg_\"; typical code is like\n\tWHERE relname NOT LIKE 'pg\\\\_%'\nor\n\tWHERE relname !~ '^pg_'\nThis is not the preferred method anymore: the right way to do this is to\njoin against pg_namespace and exclude tables that live in schemas whose\nnames begin with \"pg_\".\n\nA related point is that temporary tables no longer have names (in the\ncatalogs) of the form \"pg_temp_NNN\"; rather they have exactly the name\nthat the creating user gave them. They are kept separate from other\ntables by placing them in schemas named like \"pg_temp_NNN\" (where now\nNNN identifies an active backend, not a single table). So if you wanted\nyour scan to exclude temp tables then you'd definitely better change to\nexcluding on the basis of schema name not table name. On the upside,\nif you do want your scan to show temp tables then it's much easier than\nbefore. (BTW, the pg_table_is_visible function is the best way of\ndistinguishing your own session's temp tables from other people's. 
Yours\nwill be visible, other people's won't.)\n\nOther things that are less likely to concern most applications, but could\nbreak some:\n\nAggregate functions now have entries in pg_proc; pg_aggregate has lost\nmost of its columns and now is just an extension of a pg_proc entry.\nIf you have code that knows the difference between a plain function and\nan aggregate function then it will surely need work.\n\npg_class.relkind has a new possible value, 'c' for a composite type.\n\npg_type.typtype has two new possible values, 'd' for a domain and 'p' for\na pseudo-type.\n", "msg_date": "Mon, 02 Sep 2002 21:54:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "7.3 gotchas for applications and client libraries" }, { "msg_contents": "Tom, do you think there is millage in adding functions (at least to\ncontrib) to PostgreSQL to avoid some of the common tasks applications\nlook into pg_* for?\n\nFor example I recently audited our code here for pg_* access, and\nmanaged to create two plpgsql functions to replace all\noccurrences. 
They were relatively simple queries to check if a table\nexisted and to check if a column existed, functions for 7.2.x:\n\n \\echo creating function: column_exists\n CREATE OR REPLACE FUNCTION column_exists(NAME, NAME) RETURNS BOOLEAN AS '\n\tDECLARE\n\t\ttab ALIAS FOR $1;\n\t\tcol ALIAS FOR $2;\n\t\trec RECORD;\n\tBEGIN\n\t\tSELECT INTO rec *\n\t\t\tFROM pg_class c, pg_attribute a\n\t\t\tWHERE c.relname = tab\n\t\t\tAND c.oid = a.attrelid\n\t\t\tAND a.attnum > 0\n\t\t\tAND a.attname = col;\n\t\tIF NOT FOUND THEN\n\t\t\tRETURN false;\n\t\tELSE\n\t\t\tRETURN true;\n\t\tEND IF;\n\tEND;\n ' LANGUAGE 'plpgsql';\n\n \\echo creating function: table_exists\n CREATE OR REPLACE FUNCTION table_exists(NAME) RETURNS BOOLEAN AS '\n\tDECLARE\n\t\ttab ALIAS FOR $1;\n\t\trec RECORD;\n\tBEGIN\n\t\tSELECT INTO rec *\n\t\t\tFROM pg_class c\n\t\t\tWHERE c.relname = tab;\n\t\tIF NOT FOUND THEN\n\t\t\tRETURN false;\n\t\tELSE\n\t\t\tRETURN true;\n\t\tEND IF;\n\tEND;\n ' LANGUAGE 'plpgsql';\n\nObviously these need attention when our application targets 7.3 (and\nthanks for the heads-up), but all changes are localised. Surely these\nmust be fairly common tests and maybe better added to the database\nserver so applications are less dependant on internal catalogues?\n\nAny desire for me to polish these two functions up for contrib in 7.3?\nActually the Cookbook at http://www.brasileiro.net/postgres/ has\nsimilar function which will need attention for 7.3 too, is the\neventual plan for this to be folded into the core release?\n\nThanks, Lee.\n\nTom Lane writes:\n > Bruce suggested that we need a porting guide to help people look for\n > application and client-library code that will be broken by the changes\n > in PG 7.3. 
Here is a first cut at documenting the issues.\n > Comments welcome --- in particular, what have I missed?\n > [snip ]\n", "msg_date": "Tue, 3 Sep 2002 11:25:36 +0100", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "7.3 gotchas for applications and client libraries" }, { "msg_contents": "Lee Kindness <lkindness@csl.co.uk> writes:\n> CREATE OR REPLACE FUNCTION column_exists(NAME, NAME) RETURNS BOOLEAN AS '\n\n> CREATE OR REPLACE FUNCTION table_exists(NAME) RETURNS BOOLEAN AS '\n\n> Obviously these need attention when our application targets 7.3 (and\n> thanks for the heads-up), but all changes are localised.\n\nThey are? What will your policy be about schema names --- won't you\nhave to touch every caller to add a schema name parameter?\n\nI'm not averse to trying to push logic over to the backend, but I think\nthe space of application requirements is wide enough that designing\ngeneral-purpose functions will be quite difficult.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 09:16:11 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": true, "msg_subject": "Re: 7.3 gotchas for applications and client libraries " }, { "msg_contents": "Was this going to make it into the release notes or something?\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Tuesday, 3 September 2002 9:54 AM\n> To: pgsql-hackers@postgresql.org; pgsql-interfaces@postgresql.org\n> Subject: [HACKERS] 7.3 gotchas for applications and client libraries\n>\n>\n> Bruce suggested that we need a porting guide to help people look for\n> application and client-library code that will be broken by the changes\n> in PG 7.3. 
Here is a first cut at documenting the issues.\n> Comments welcome --- in particular, what have I missed?\n>\n> \t\t\tregards, tom lane\n>\n>\n> Revising client-side code for PG 7.3 system catalogs\n>\n>\n> Here are some notes about things to look out for in updating client-side\n> code for PG 7.3. Almost anything that looks at the system catalogs is\n> probably going to need work, if you want it to behave reasonably when you\n> start using 7.3's new features such as schemas and DROP COLUMN.\n>\n> As an example, consider the task of listing the names and datatypes for\n> a table named \"foo\". In the past you may have done this with a query like\n>\n> \tSELECT a.attname, format_type(a.atttypid, a.atttypmod)\n> \tFROM pg_class c, pg_attribute a\n> \tWHERE c.relname = 'foo'\n> \t AND a.attnum > 0 AND a.attrelid = c.oid\n> \tORDER BY a.attnum\n\n...\n\n", "msg_date": "Thu, 5 Sep 2002 16:45:18 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: 7.3 gotchas for applications and client libraries" }, { "msg_contents": "\nI have copied Tom's fine email to:\n\n\thttp://www.ca.postgresql.org/docs/momjian/upgrade_7.3\n\nand have added a mention of it in the HISTORY file:\n\n A dump/restore using \"pg_dump\" is required for those wishing to migrate\n data from any previous release. A summary of changes needed in client\n applications is at http://www.ca.postgresql.org/docs/momjian/upgrade_7.3.\n\n---------------------------------------------------------------------------\n\nTom Lane wrote:\n> Bruce suggested that we need a porting guide to help people look for\n> application and client-library code that will be broken by the changes\n> in PG 7.3. 
Here is a first cut at documenting the issues.\n> Comments welcome --- in particular, what have I missed?\n> \n> \t\t\tregards, tom lane\n> \n> \n> Revising client-side code for PG 7.3 system catalogs\n> \n> \n> Here are some notes about things to look out for in updating client-side\n> code for PG 7.3. Almost anything that looks at the system catalogs is\n> probably going to need work, if you want it to behave reasonably when you\n> start using 7.3's new features such as schemas and DROP COLUMN.\n> \n> As an example, consider the task of listing the names and datatypes for\n> a table named \"foo\". In the past you may have done this with a query like\n> \n> \tSELECT a.attname, format_type(a.atttypid, a.atttypmod)\n> \tFROM pg_class c, pg_attribute a\n> \tWHERE c.relname = 'foo'\n> \t AND a.attnum > 0 AND a.attrelid = c.oid\n> \tORDER BY a.attnum\n> \n> (this in fact is exactly what 7.2 psql uses to implement \"\\d foo\").\n> This query will work perfectly well in 7.2 or 7.1, but it's broken in half\n> a dozen ways for 7.3.\n> \n> The biggest problem is that with the addition of schemas, there might be\n> several tables named \"foo\" listed in pg_class. The old query will produce\n> a list of all of their attributes mixed together. For example, after\n> \tcreate schema a;\n> \tcreate schema b;\n> \tcreate table a.foo (f1 int, f2 text);\n> \tcreate table b.foo (f1 text, f2 numeric(10,1));\n> we'd get:\n> \n> attname | format_type\n> ---------+---------------\n> f1 | text\n> f1 | integer\n> f2 | text\n> f2 | numeric(10,1)\n> (4 rows)\n> \n> Not good. We need to decide exactly which foo we want, and restrict the\n> query to find only that row in pg_class. 
There are a couple of ways to\n> do this, depending on how fancy you want to get.\n> \n> If you just want to handle an unqualified table name \"foo\", and find the\n> same foo that would be found if you said \"select * from foo\", then one way\n> to do it is to restrict the query to \"visible\" rows of pg_class:\n> \n> \tSELECT ...\n> \tFROM ...\n> \tWHERE c.relname = 'foo' AND pg_table_is_visible(c.oid)\n> \t AND ...\n> \n> pg_table_is_visible() will only return true for pg_class rows that are in\n> your current search path and are not hidden by similarly-named tables that\n> are in earlier schemas of the search path.\n> \n> An alternative way is to eliminate the explicit join to pg_class, and\n> instead use the new datatype \"regclass\" to look up the correct pg_class\n> OID:\n> \n> \tSELECT ...\n> \tFROM pg_attribute a\n> \tWHERE a.attrelid = 'foo'::regclass\n> \t AND a.attnum > 0\n> \tORDER BY a.attnum\n> \n> The regclass input converter looks up the given string as a table name\n> (obeying schema visibility rules) and produces an OID constant that you\n> can compare directly to attrelid. This is more efficient than doing\n> the join, but there are a couple of things to note about it. One is\n> that if there isn't any \"foo\" table, you'll get an ERROR message from\n> the regclass input converter, whereas with the old query you got zero\n> rows out and no error message. You might or might not prefer the old\n> behavior. 
Another limitation is that there isn't any way to adapt\n> this approach to search for a partially-specified table name;\n> whereas in the original query you could use a LIKE or regex pattern to\n> match the table name, not only a simple equality test.\n> \n> Now, what if you'd like to be able to specify a qualified table name\n> --- that is, show the attributes of \"a.foo\" or \"b.foo\" on demand?\n> It will not work to say\n> \tWHERE c.relname = 'a.foo'\n> so this is another way in which the original query fails for 7.3.\n> \n> It turns out that the regclass method will work for this: if you say\n> \tWHERE a.attrelid = 'a.foo'::regclass\n> then the right things happen.\n> \n> If you don't want to use regclass then you're going to have to do an\n> explicit join against pg_namespace to find out which foo you want:\n> \n> \tSELECT a.attname, format_type(a.atttypid, a.atttypmod)\n> \tFROM pg_namespace n, pg_class c, pg_attribute a\n> \tWHERE n.nspname = 'a' AND c.relname = 'foo'\n> \t AND c.relnamespace = n.oid\n> \t AND a.attnum > 0 AND a.attrelid = c.oid\n> \tORDER BY a.attnum\n> \n> This is somewhat tedious because you have to be prepared to split the\n> qualified name into its components on the client side. An advantage\n> is that once you've done that, you can again consider using LIKE or\n> regex patterns instead of simple name equality. This is essentially\n> what 7.3 psql does to support wildcard patterns like \"\\dt a*.f*\".\n> \n> Okay, I think we've about beaten the issue of \"which foo do you want\"\n> to death. But what other ways are there for the schema feature to cause\n> trouble in this apparently now well-fixed-up query?\n> \n> One way is that the system catalogs themselves live in a schema, and\n> if that schema isn't frontmost in your search path then your references\n> to pg_class and so forth might find the wrong tables. 
(It's legal now\n> for ordinary users to create tables named like \"pg_xxx\", so long as they\n> don't try to put 'em in the system's schema.) This is probably not a\n> big issue for standalone applications, which can assume they know what the\n> search path is. But in client-side libraries, psql, and similar code\n> that has to be able to deal with someone else's choice of search path,\n> we really ought to make the references to system catalogs be fully\n> qualified:\n> \n> \tSELECT ...\n> \tFROM pg_catalog.pg_namespace n, pg_catalog.pg_class c,\n> \t pg_catalog.pg_attribute a\n> \tWHERE ...\n> \n> (If you weren't using table aliases in your queries before, here is a good\n> place to start...)\n> \n> In fact it's worse than that: function names, type names, etc also live in\n> schemas. So you really ought to qualify references to built-in functions\n> and types too:\n> \n> \tSELECT ..., pg_catalog.format_type(...) ...\n> \n> \tWHERE a.attrelid = 'foo'::pg_catalog.regclass ...\n> \n> The truly paranoid might want to qualify their operator names too,\n> though I draw the line at this because of the horribly ugly syntax needed:\n> \n> \tWHERE a.attrelid OPERATOR(pg_catalog.=) 'foo'::pg_catalog.regclass\n> \n> There's another, non-schema-related, gotcha in this apparently simple task\n> of showing attribute names: in 7.3 you need to exclude dropped attributes\n> from your display. ALTER TABLE DROP COLUMN doesn't remove the\n> pg_attribute entry for the dropped column, it only changes it to have\n> attisdropped = true. So you will typically want to add\n> \n> \tWHERE NOT attisdropped\n> \n> when looking at pg_attribute.\n> \n> Note however that excluding dropped columns like this means there may be\n> gaps in the series of attnum values you see. That doesn't bother this\n> particular query, but it could easily confuse applications that expect the\n> attributes to have consecutive attnums 1 to N. 
For example, pg_dump makes\n> an array of attributes and wants to index into the array with attnums.\n> It proved easier to make pg_dump include dropped attributes in its array\n> (and filter them out later) than to change the indexing logic.\n> \n> If you have client-side code that looks in pg_proc, pg_type, or\n> pg_operator then exactly the same sorts of schema-related issues appear:\n> the name alone is no longer unique, you have to think about identifying\n> the function, type, or operator within the schema you want.\n> \n> That's about all the mileage I can get out of the \"show a table's\n> attributes\" example, but there are still more schema-related trouble\n> items to check for.\n> \n> One problem to look for is code that scans pg_class (or another system\n> table, but most commonly pg_class) to make a list of things to operate\n> on. For example, various people have built scripts to automatically\n> reindex every table in a database. Such code will fail as soon as you\n> start using schemas, because it will find tables that aren't in your\n> current schema search path and try to operate on them. 
Depending on what\n> you want to do, you could change the code to emit fully qualified names\n> of tables it wants to operate on (so it will work no matter which schema\n> they are in), or you could restrict the pg_class search to find only\n> visible tables.\n> \n> If you want to use qualified names, the straightforward way to do it\n> is to join against pg_namespace to get the schema name:\n> \n> \tSELECT nspname, relname FROM pg_class c, pg_namespace n\n> \tWHERE relnamespace = n.oid AND relname LIKE 'foo%' AND ...\n> \n> A less obvious way is to use the regclass output converter:\n> \n> \tSELECT c.oid::regclass FROM pg_class c\n> \tWHERE relname LIKE 'foo%' AND ...\n> \n> This will give you back a table name that is qualified only if it needs to\n> be (i.e., the table is not visible in your search path), so you can use\n> it directly in the command you want to give next. Another interesting\n> property of the regclass converter is that it will double-quote the name\n> correctly if necessary --- for example, you'll get \"TEST\" (with the\n> quotes) if the table is named TEST. So you can splice the name directly\n> into a SQL command without any special pushups and be confident that it\n> will produce the right results.\n> \n> BTW, there are similar output converters for type, function, and operator\n> names, if you need them.\n> \n> Another thing to look for is code that tries to exclude system tables by\n> excluding tablenames beginning with \"pg_\"; typical code is like\n> \tWHERE relname NOT LIKE 'pg\\\\_%'\n> or\n> \tWHERE relname !~ '^pg_'\n> This is not the preferred method anymore: the right way to do this is to\n> join against pg_namespace and exclude tables that live in schemas whose\n> names begin with \"pg_\".\n> \n> A related point is that temporary tables no longer have names (in the\n> catalogs) of the form \"pg_temp_NNN\"; rather they have exactly the name\n> that the creating user gave them. 
They are kept separate from other\n> tables by placing them in schemas named like \"pg_temp_NNN\" (where now\n> NNN identifies an active backend, not a single table). So if you wanted\n> your scan to exclude temp tables then you'd definitely better change to\n> excluding on the basis of schema name not table name. On the upside,\n> if you do want your scan to show temp tables then it's much easier than\n> before. (BTW, the pg_table_is_visible function is the best way of\n> distinguishing your own session's temp tables from other people's. Yours\n> will be visible, other people's won't.)\n> \n> Other things that are less likely to concern most applications, but could\n> break some:\n> \n> Aggregate functions now have entries in pg_proc; pg_aggregate has lost\n> most of its columns and now is just an extension of a pg_proc entry.\n> If you have code that knows the difference between a plain function and\n> an aggregate function then it will surely need work.\n> \n> pg_class.relkind has a new possible value, 'c' for a composite type.\n> \n> pg_type.typtype has two new possible values, 'd' for a domain and 'p' for\n> a pseudo-type.\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 18 Sep 2002 01:18:07 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.3 gotchas for applications and client libraries" }, { "msg_contents": "Tom/Hackers,\n\nGoing back a bit, but relevant with 7.3's release...\n\nTom Lane writes on 03 Sep 2002:\n > Lee Kindness <lkindness@csl.co.uk> writes:\n > >\n > > [ original post was regarding the mileage in adding utility\n > > functions to PostgreSQL to cut-out common catalog lookups, thus\n > > making apps less fragile to catalog changes ]\n > >\n > > CREATE OR REPLACE FUNCTION column_exists(NAME, NAME) RETURNS BOOLEAN AS '\n > > CREATE OR REPLACE FUNCTION table_exists(NAME) RETURNS BOOLEAN AS'\n > >\n > > Obviously these need attention when our application targets 7.3 (and\n > > thanks for the heads-up), but all changes are localised.\n >\n > They are? What will your policy be about schema names --- won't you\n > have to touch every caller to add a schema name parameter?\n\nAs it turns out, no. And thinking about i'm sure this is right\napproach too, assuming:\n\n CREATE SCHEMA a;\n CREATE SCHEMA b;\n CREATE TABLE a.foo(f1 INT, f2 TEXT);\n CREATE TABLE b.foo(f1 TEXT, f2 NUMERIC(10,1));\n\nthen:\n\n SELECT column_exists('foo', 'f1');\n\nshould return 'f', however:\n\n SELECT column_exists('a.foo', 'f1');\n\nshould return 't', likewise with:\n\n SET SEARCH_PATH TO \"a\",\"public\";\n SELECT column_exists('foo', 'f1');\n\nI can't see any use in a separate parameter - the user will want the\ncurrent - in scope - table, or explicitly specify the schema with the\ntable name.\n\n > I'm not averse to trying to push logic over to the backend, but I think\n > the space of application requirements is wide enough that designing\n > general-purpose functions will be quite difficult.\n\nOn the whole I'd agree, but I think determining if a table/column\nexists has quite a high usage... 
More so with things like\ncurrent_database() added to 7.3. Anyway, for reference here are\ncolumn_exists(table, column) and table_exists(table) functions for\nPostgreSQL 7.3, changes from the 7.2 version marked by ' -- PG7.3':\n\n\\echo creating function: column_exists\nCREATE OR REPLACE FUNCTION column_exists(NAME, NAME) RETURNS BOOLEAN AS '\n\tDECLARE\n\t\ttab ALIAS FOR $1;\n\t\tcol ALIAS FOR $2;\n\t\trec RECORD;\n\tBEGIN\n\t\tSELECT INTO rec *\n\t\t\tFROM pg_class c, pg_attribute a\n\t\t\tWHERE c.relname = tab\n\t\t\tAND pg_table_is_visible(c.oid) -- PG7.3\n\t\t\tAND c.oid = a.attrelid\n\t\t\tAND a.attnum > 0\n\t\t\tAND a.attname = col;\n\t\tIF NOT FOUND THEN\n\t\t\tRETURN false;\n\t\tELSE\n\t\t\tRETURN true;\n\t\tEND IF;\n\tEND;\n' LANGUAGE 'plpgsql';\n\n\\echo creating function: table_exists\nCREATE OR REPLACE FUNCTION table_exists(NAME) RETURNS BOOLEAN AS '\n\tDECLARE\n\t\ttab ALIAS FOR $1;\n\t\trec RECORD;\n\tBEGIN\n\t\tSELECT INTO rec *\n\t\t\tFROM pg_class c\n\t\t\tWHERE c.relname = tab\n\t\t\tAND pg_table_is_visible(c.oid); -- PG7.3\n\t\tIF NOT FOUND THEN\n\t\t\tRETURN false;\n\t\tELSE\n\t\t\tRETURN true;\n\t\tEND IF;\n\tEND;\n' LANGUAGE 'plpgsql';\n\nOf course, thanks for the original email in this thread:\n\n http://www.ca.postgresql.org/docs/momjian/upgrade_tips_7.3\n\nThanks, Lee Kindness.\n", "msg_date": "Mon, 2 Dec 2002 16:30:15 +0000", "msg_from": "Lee Kindness <lkindness@csl.co.uk>", "msg_from_op": false, "msg_subject": "Re: 7.3 gotchas for applications and client libraries " } ]
[ { "msg_contents": "If we got rid of the other NOT NULL != CHECK (a IS NOT NULL) instance, may\nas well get rid of the one on this page:\n\nhttp://developer.postgresql.org/docs/postgres/sql-createdomain.html\n\nChris\n\n", "msg_date": "Tue, 3 Sep 2002 11:13:51 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "Another use of check (a is not null)" } ]
[ { "msg_contents": "Marc, do you want to start trimming /contrib? I know the MySQL and\nOracle tools each have their own websites.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Mon, 2 Sep 2002 23:43:18 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "/contrib port modules" } ]
[ { "msg_contents": "I am still working on the 7.3 HISTORY file. I have extracted the items,\nbut I have to worksmith them and write an introduction.\n\nIt is midnight here now. I don't think I can finish before ~3am and at\nthat point, I am not sure I will know what I am writing.\n\nBasically, one day of feature freeze wasn't enough time for me to get\nthis together. I could have worked on it earlier, but I didn't have time\nthen either --- only the feature freeze has given me time to work on\nthis.\n\nShould we ship beta1 without a HISTORY or delay one day? People should\nprobably review this HISTORY file too, and there isn't time for that\neither unless we delay.\n\nIf all our beta testers are on Hackers, I can post the list when I am\ndone so you can package.\n\nI don't know of any other items holding up the packaging.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 3 Sep 2002 00:09:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "HISTORY file" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't know of any other items holding up the packaging.\n\nGotta brand the thing as 7.3beta1 not 7.3devel, no?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 00:24:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY file " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I don't know of any other items holding up the packaging.\n> \n> Gotta brand the thing as 7.3beta1 not 7.3devel, no?\n\nYes, I haven't gotten to the release checklist yet. Let's delay a day.\n\nI have a 17k line log file down to 3.5k lines, but it will end up around\n~250 lines. 
That will take some time.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 3 Sep 2002 00:33:48 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: HISTORY file" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, I haven't gotten to the release checklist yet. Let's delay a day.\n\nOr at least late in the day tomorrow. I have some loose ends to clean\nup yet as well, but I'm beat and am going to bed.\n\nBut I assume we are now officially in feature freeze, right?\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 00:45:06 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY file " }, { "msg_contents": "\nS'alright, I can do the package together tomorrow morning to let you wrap\nup the loose ends :)\n\nOn Tue, 3 Sep 2002, Bruce Momjian wrote:\n\n> I am still working on the 7.3 HISTORY file. I have extracted the items,\n> but I have to worksmith them and write an introduction.\n>\n> It is midnight here now. I don't think I can finish before ~3am and at\n> that point, I am not sure I will know what I am writing.\n>\n> Basically, one day of feature freeze wasn't enough time for me to get\n> this together. I could have worked on it earlier, but I didn't have time\n> then either --- only the feature freeze has given me time to work on\n> this.\n>\n> Should we ship beta1 without a HISTORY or delay one day? 
People should\n> probably review this HISTORY file too, and there isn't time for that\n> either unless we delay.\n>\n> If all our beta testers are on Hackers, I can post the list when I am\n> done so you can package.\n>\n> I don't know of any other items holding up the packaging.\n>\n> --\n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n>\n> http://www.postgresql.org/users-lounge/docs/faq.html\n>\n\n", "msg_date": "Tue, 3 Sep 2002 09:23:47 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: HISTORY file" }, { "msg_contents": "On Tue, 3 Sep 2002, Tom Lane wrote:\n\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Yes, I haven't gotten to the release checklist yet. Let's delay a day.\n>\n> Or at least late in the day tomorrow. I have some loose ends to clean\n> up yet as well, but I'm beat and am going to bed.\n>\n> But I assume we are now officially in feature freeze, right?\n\nYes, definitely ... and unless any of you have any showstoppers, I'll do\nthe beta1 packaging first thing Wed morning instead of today ;);\n\n", "msg_date": "Tue, 3 Sep 2002 09:24:53 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: HISTORY file " } ]
[ { "msg_contents": "findoidjoins doens't seem to compile:\n\ngmake[1]: Entering directory `/home/chriskl/pgsql-head/contrib/findoidjoins'\ngcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../sr\nc/interfaces/libpgeasy -I../../src/interfaces/libpq -I. -I../../src/include \n -c -o findoidjoins.o findoidjoins.c -MMD\nfindoidjoins.c:8: halt.h: No such file or directory\nfindoidjoins.c:9: libpgeasy.h: No such file or directory\nfindoidjoins.c: In function `main':\nfindoidjoins.c:26: warning: implicit declaration of function `halt'\nfindoidjoins.c:29: warning: implicit declaration of function `connectdb'\nfindoidjoins.c:31: warning: implicit declaration of function\n`on_error_continue'\nfindoidjoins.c:32: warning: implicit declaration of function `on_error_stop'\nfindoidjoins.c:34: warning: implicit declaration of function `doquery'\nfindoidjoins.c:50: warning: implicit declaration of function `get_result'\nfindoidjoins.c:50: warning: assignment makes pointer from integer without a\ncast\nfindoidjoins.c:60: warning: assignment makes pointer from integer without a\ncast\nfindoidjoins.c:62: warning: implicit declaration of function `set_result'\nfindoidjoins.c:63: warning: implicit declaration of function `fetch'\nfindoidjoins.c:63: `END_OF_TUPLES' undeclared (first use in this function)\nfindoidjoins.c:63: (Each undeclared identifier is reported only once\nfindoidjoins.c:63: for each function it appears in.)\nfindoidjoins.c:66: warning: implicit declaration of function `reset_fetch'\nfindoidjoins.c:69: warning: implicit declaration of function `unset_result'\nfindoidjoins.c:83: warning: passing arg 2 of `sprintf' makes pointer from\ninteger without a cast\nfindoidjoins.c:107: warning: implicit declaration of function `disconnectdb'\ngmake[1]: *** [findoidjoins.o] Error 1\ngmake[1]: Leaving directory `/home/chriskl/pgsql-head/contrib/findoidjoins'\ngmake: *** [install] Error 2\n\n", "msg_date": "Tue, 3 Sep 2002 14:41:50 +0800", "msg_from": 
"\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "findoidjoins" }, { "msg_contents": "Christopher Kings-Lynne dijo: \n\n> findoidjoins doens't seem to compile:\n> \n> gmake[1]: Entering directory `/home/chriskl/pgsql-head/contrib/findoidjoins'\n> gcc -pipe -O -g -Wall -Wmissing-prototypes -Wmissing-declarations -I../../sr\n> c/interfaces/libpgeasy -I../../src/interfaces/libpq -I. -I../../src/include \n> -c -o findoidjoins.o findoidjoins.c -MMD\n> findoidjoins.c:8: halt.h: No such file or directory\n> findoidjoins.c:9: libpgeasy.h: No such file or directory\n\nSeems related to the ripping of libpgeasy out of the main\ndistribution...\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Uno puede defenderse de los ataques; contra los elogios se esta indefenso\"\n\n", "msg_date": "Tue, 3 Sep 2002 02:46:46 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: findoidjoins" }, { "msg_contents": "Alvaro Herrera <alvherre@atentus.com> writes:\n> Christopher Kings-Lynne dijo: \n>> findoidjoins doens't seem to compile:\n\n> Seems related to the ripping of libpgeasy out of the main\n> distribution...\n\nI believe it's been broken for some time (disremember just why, maybe a\nschema issue?). I had a TODO item to resurrect it so that we could\nupdate the oidjoins regression test, which is sadly out of date for\nthe current system catalogs. 
If anyone wants to work on that ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 09:21:33 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: findoidjoins " }, { "msg_contents": "Tom Lane wrote:\n> Alvaro Herrera <alvherre@atentus.com> writes:\n>>Christopher Kings-Lynne dijo: \n>>>findoidjoins doens't seem to compile:\n>>Seems related to the ripping of libpgeasy out of the main\n>>distribution...\n> \n> I believe it's been broken for some time (disremember just why, maybe a\n> schema issue?). I had a TODO item to resurrect it so that we could\n> update the oidjoins regression test, which is sadly out of date for\n> the current system catalogs. If anyone wants to work on that ...\n\nI'm not sure I interpreted the intent of findoidjoins just right, but \nhere it is updated for schemas, new reg* types, using SPI instead of \nlibpgeasy, and returning the results as a table function. Any \ncorrections/comments? If there is any interest, I'll polish this up a \nbit more and submit to patches. Just let me know.\n\n(Should qualify as a fix, right?)\n\nThanks,\n\nJoe", "msg_date": "Tue, 03 Sep 2002 23:06:11 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: findoidjoins" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://207.106.42.251/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Alvaro Herrera <alvherre@atentus.com> writes:\n> >>Christopher Kings-Lynne dijo: \n> >>>findoidjoins doens't seem to compile:\n> >>Seems related to the ripping of libpgeasy out of the main\n> >>distribution...\n> > \n> > I believe it's been broken for some time (disremember just why, maybe a\n> > schema issue?). 
I had a TODO item to resurrect it so that we could\n> > update the oidjoins regression test, which is sadly out of date for\n> > the current system catalogs. If anyone wants to work on that ...\n> \n> I'm not sure I interpreted the intent of findoidjoins just right, but \n> here it is updated for schemas, new reg* types, using SPI instead of \n> libpgeasy, and returning the results as a table function. Any \n> corrections/comments? If there is any interest, I'll polish this up a \n> bit more and submit to patches. Just let me know.\n> \n> (Should qualify as a fix, right?)\n> \n> Thanks,\n> \n> Joe\n> \n\n[ application/x-gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 02:20:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: findoidjoins" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> I'm not sure I interpreted the intent of findoidjoins just right, but \n> here it is updated for schemas, new reg* types, using SPI instead of \n> libpgeasy, and returning the results as a table function. Any \n> corrections/comments?\n\nFor what we want it for (viz, regenerating the oidjoins test every so\noften), this is really a step backwards. It requires more work to run\nthan the original program, and it modifies the database under test,\nwhich is undesirable because it's commonly run against template1.\n\nI was thinking of keeping it as a client program, but recasting it to\nuse libpq since libpgeasy isn't in the standard distribution anymore.\n\nI've looked through my notes and I can't find why I thought findoidjoins\nwas broken for 7.3. 
Did you come across anything obviously wrong with\nits queries?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 10:55:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: findoidjoins " }, { "msg_contents": "Tom Lane wrote:\n> For what we want it for (viz, regenerating the oidjoins test every so\n> often), this is really a step backwards. It requires more work to run\n> than the original program, and it modifies the database under test,\n> which is undesirable because it's commonly run against template1.\n> \n> I was thinking of keeping it as a client program, but recasting it to\n> use libpq since libpgeasy isn't in the standard distribution anymore.\n\nOK. I'll take another shot using that approach. A couple questions:\n\nIs it useful to have the reference count and unreferenced counts like \ncurrently written, or should I just faithfully reproduce the original \nbehavior (except schema aware queries) using libpq in place of libpgeasy?\n\nIs the oidjoins.sql test just the output of the make_oidjoins_check \nscript? It is probably easier to produce that output while looping \nthrough the first time versus running a script -- should I do that?\n\n\n> I've looked through my notes and I can't find why I thought findoidjoins\n> was broken for 7.3. Did you come across anything obviously wrong with\n> its queries?\n\nAs written the queries did not know anything about schemas or the newer \nreg* data types, e.g. 
this:\n\nSELECT typname, relname, a.attname\nFROM pg_class c, pg_attribute a, pg_type t\nWHERE a.attnum > 0 AND\n\t relkind = 'r' AND\n\t (typname = 'oid' OR\n\t typname = 'regproc' OR\n\t typname = 'regclass' OR\n\t typname = 'regtype') AND\n\t a.attrelid = c.oid AND\n\t a.atttypid = t.oid\nORDER BY 2, a.attnum ;\n\nbecame this:\n\nSELECT c.relname,\n(SELECT nspname FROM pg_catalog.pg_namespace n\n WHERE n.oid = c.relnamespace) AS nspname,\na.attname,\nt.typname\nFROM pg_catalog.pg_class c,\n pg_catalog.pg_attribute a,\n pg_catalog.pg_type t\nWHERE a.attnum > 0 AND c.relkind = 'r'\nAND t.typnamespace IN\n (SELECT n.oid FROM pg_catalog.pg_namespace n\n WHERE nspname LIKE 'pg\\\\_%')\nAND (t.typname = 'oid' OR t.typname LIKE 'reg%')\nAND a.attrelid = c.oid\nAND a.atttypid = t.oid\nORDER BY nspname, c.relname, a.attnum\n\nDoes the latter produce the desired result?\n\nJoe\n\n", "msg_date": "Wed, 04 Sep 2002 09:21:39 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: findoidjoins" }, { "msg_contents": "\nPatch withdrawn by author.\n\n---------------------------------------------------------------------------\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Alvaro Herrera <alvherre@atentus.com> writes:\n> >>Christopher Kings-Lynne dijo: \n> >>>findoidjoins doens't seem to compile:\n> >>Seems related to the ripping of libpgeasy out of the main\n> >>distribution...\n> > \n> > I believe it's been broken for some time (disremember just why, maybe a\n> > schema issue?). I had a TODO item to resurrect it so that we could\n> > update the oidjoins regression test, which is sadly out of date for\n> > the current system catalogs. If anyone wants to work on that ...\n> \n> I'm not sure I interpreted the intent of findoidjoins just right, but \n> here it is updated for schemas, new reg* types, using SPI instead of \n> libpgeasy, and returning the results as a table function. Any \n> corrections/comments? 
If there is any interest, I'll polish this up a \n> bit more and submit to patches. Just let me know.\n> \n> (Should qualify as a fix, right?)\n> \n> Thanks,\n> \n> Joe\n> \n\n[ application/x-gzip is not supported, skipping... ]\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 12:43:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: findoidjoins" }, { "msg_contents": "Joe Conway <mail@joeconway.com> writes:\n> Is it useful to have the reference count and unreferenced counts like \n> currently written, or should I just faithfully reproduce the original \n> behavior (except schema aware queries) using libpq in place of libpgeasy?\n\nI'd be inclined to reproduce the original behavior. findoidjoins is\npretty slow already, and I don't much want to slow it down more in order\nto provide info that's useless for the primary purpose.\n\n> Is the oidjoins.sql test just the output of the make_oidjoins_check \n> script?\n\nYes.\n\n> It is probably easier to produce that output while looping \n> through the first time versus running a script -- should I do that?\n\nThe separation between findoidjoins and make_oidjoins_check is\ndeliberate --- this allows for easy hand-editing of the join list to\nremove unwanted joins before preparing the regression test script\n(cf the notes in the README about bogus joins). Even though this is\nan extra manual step, I think it's a better answer than trying to make\nfindoidjoins smart enough to suppress the bogus joins itself. Part of\nthe reason for running findoidjoins is to detect any unexpected\nlinkages, so it should not be too eager to hide things. 
Also, because\nthe output of findoidjoins *should* be manually reviewed, it's better\nto put it out in an easy-to-read one-line-per-join format than to put\nout the finished regression script directly.\n\n>> I've looked through my notes and I can't find why I thought findoidjoins\n>> was broken for 7.3. Did you come across anything obviously wrong with\n>> its queries?\n\n> As written the queries did not know anything about schemas or the newer \n> reg* data types, e.g. this:\n> Does the latter produce the desired result?\n\nNot sure. My oldest note saying it was busted predates the invention of\nthe new reg* types, I think. And while schema awareness is nice, it's\nnot critical to the usefulness of the script: we're only really going to\nuse it for checking the stuff in pg_catalog. So I'm not at all sure why\nI made that note. Do you get a plausible set of joins out of your\nversion?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 12:57:53 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: findoidjoins " }, { "msg_contents": "Tom Lane wrote:\n> Joe Conway <mail@joeconway.com> writes:\n>>Is it useful to have the reference count and unreferenced counts like \n>>currently written, or should I just faithfully reproduce the original \n>>behavior (except schema aware queries) using libpq in place of libpgeasy?\n >\n> I'd be inclined to reproduce the original behavior. 
findoidjoins is\n> pretty slow already, and I don't much want to slow it down more in order\n> to provide info that's useless for the primary purpose.\n\nIt was only taking about 7 seconds for me on an empty database, but if \nit's not useful I'll go back to the original output format.\n\n\n>>It is probably easier to produce that output while looping \n>>through the first time versus running a script -- should I do that?\n> \n> The separation between findoidjoins and make_oidjoins_check is\n> deliberate --- this allows for easy hand-editing of the join list to\n> remove unwanted joins before preparing the regression test script\n> (cf the notes in the README about bogus joins). Even though this is\n> an extra manual step, I think it's a better answer than trying to make\n> findoidjoins smart enough to suppress the bogus joins itself. Part of\n> the reason for running findoidjoins is to detect any unexpected\n> linkages, so it should not be too eager to hide things. Also, because\n> the output of findoidjoins *should* be manually reviewed, it's better\n> to put it out in an easy-to-read one-line-per-join format than to put\n> out the finished regression script directly.\n\nOK. I'll leave as is.\n\n>>As written the queries did not know anything about schemas or the newer \n>>reg* data types, e.g. this:\n>>Does the latter produce the desired result?\n> \n> Not sure. My oldest note saying it was busted predates the invention of\n> the new reg* types, I think. And while schema awareness is nice, it's\n> not critical to the usefulness of the script: we're only really going to\n> use it for checking the stuff in pg_catalog. So I'm not at all sure why\n> I made that note. Do you get a plausible set of joins out of your\n> version?\n\nLooks plausible. But I guess it will be easier to tell once it produces \nresults in the same format as before. 
I'll make the changes and send it \nin to patches.\n\nThanks,\n\nJoe\n\n", "msg_date": "Wed, 04 Sep 2002 10:15:19 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: findoidjoins" }, { "msg_contents": "Tom Lane wrote:\n> I'd be inclined to reproduce the original behavior. findoidjoins is\n> pretty slow already, and I don't much want to slow it down more in order\n> to provide info that's useless for the primary purpose.\n\nHere's take two. It produces results similar to the previous version, \nbut using libpq and schema aware queries.\n\n\n> use it for checking the stuff in pg_catalog. So I'm not at all sure why\n> I made that note. Do you get a plausible set of joins out of your\n> version?\n\nLooks reasonable to me. I attached the outputs of findoidjoins and \nmake_oidjoins_check for review as well.\n\nPlease review and commit, or kick back to me if more work is needed.\n\nThanks,\n\nJoe", "msg_date": "Wed, 04 Sep 2002 22:25:19 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "findoidjoins patch (was Re: [HACKERS] findoidjoins)" }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > I'd be inclined to reproduce the original behavior. findoidjoins is\n> > pretty slow already, and I don't much want to slow it down more in order\n> > to provide info that's useless for the primary purpose.\n> \n> Here's take two. It produces results similar to the previous version, \n> but using libpq and schema aware queries.\n> \n> \n> > use it for checking the stuff in pg_catalog. So I'm not at all sure why\n> > I made that note. Do you get a plausible set of joins out of your\n> > version?\n> \n> Looks reasonable to me. 
I attached the outputs of findoidjoins and \n> make_oidjoins_check for review as well.\n> \n> Please review and commit, or kick back to me if more work is needed.\n> \n> Thanks,\n> \n> Joe\n> \n\n> Index: contrib/findoidjoins/Makefile\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/findoidjoins/Makefile,v\n> retrieving revision 1.13\n> diff -c -r1.13 Makefile\n> *** contrib/findoidjoins/Makefile\t6 Sep 2001 10:49:29 -0000\t1.13\n> --- contrib/findoidjoins/Makefile\t4 Sep 2002 23:36:27 -0000\n> ***************\n> *** 1,5 ****\n> - # $Header: /opt/src/cvs/pgsql-server/contrib/findoidjoins/Makefile,v 1.13 2001/09/06 10:49:29 petere Exp $\n> - \n> subdir = contrib/findoidjoins\n> top_builddir = ../..\n> include $(top_builddir)/src/Makefile.global\n> --- 1,3 ----\n> ***************\n> *** 7,17 ****\n> PROGRAM = findoidjoins\n> OBJS\t= findoidjoins.o\n> \n> ! libpgeasy_srcdir = $(top_srcdir)/src/interfaces/libpgeasy\n> ! libpgeasy_builddir = $(top_builddir)/src/interfaces/libpgeasy\n> ! \n> ! PG_CPPFLAGS = -I$(libpgeasy_srcdir) -I$(libpq_srcdir)\n> ! PG_LIBS = -L$(libpgeasy_builddir) -lpgeasy $(libpq)\n> \n> SCRIPTS = make_oidjoins_check\n> DOCS = README.findoidjoins\n> --- 5,12 ----\n> PROGRAM = findoidjoins\n> OBJS\t= findoidjoins.o\n> \n> ! PG_CPPFLAGS = -I$(libpq_srcdir)\n> ! PG_LIBS = $(libpq)\n> \n> SCRIPTS = make_oidjoins_check\n> DOCS = README.findoidjoins\n> Index: contrib/findoidjoins/README.findoidjoins\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/findoidjoins/README.findoidjoins,v\n> retrieving revision 1.5\n> diff -c -r1.5 README.findoidjoins\n> *** contrib/findoidjoins/README.findoidjoins\t25 Apr 2002 02:56:55 -0000\t1.5\n> --- contrib/findoidjoins/README.findoidjoins\t5 Sep 2002 04:42:21 -0000\n> ***************\n> *** 1,24 ****\n> \n> \t\t\t findoidjoins\n> \n> ! 
This program scans a database, and prints oid fields (also regproc, regclass\n> ! and regtype fields) and the tables they join to. CAUTION: it is ver-r-r-y\n> ! slow on a large database, or even a not-so-large one. We don't really\n> ! recommend running it on anything but an empty database, such as template1.\n> ! \n> ! Uses pgeasy library.\n> \n> Run on an empty database, it returns the system join relationships (shown\n> ! below for 7.2). Note that unexpected matches may indicate bogus entries\n> in system tables --- don't accept a peculiar match without question.\n> In particular, a field shown as joining to more than one target table is\n> ! probably messed up. In 7.2, the *only* field that should join to more\n> ! than one target is pg_description.objoid. (Running make_oidjoins_check\n> ! is an easy way to spot fields joining to more than one table, BTW.)\n> \n> The shell script make_oidjoins_check converts findoidjoins' output\n> into an SQL script that checks for dangling links (entries in an\n> ! OID or REGPROC column that don't match any row in the expected table).\n> Note that fields joining to more than one table are NOT processed.\n> \n> The result of make_oidjoins_check should be installed as the \"oidjoins\"\n> --- 1,22 ----\n> \n> \t\t\t findoidjoins\n> \n> ! This program scans a database, and prints oid fields (also reg* fields)\n> ! and the tables they join to. We don't really recommend running it on\n> ! anything but an empty database, such as template1.\n> \n> Run on an empty database, it returns the system join relationships (shown\n> ! below for 7.3). Note that unexpected matches may indicate bogus entries\n> in system tables --- don't accept a peculiar match without question.\n> In particular, a field shown as joining to more than one target table is\n> ! probably messed up. In 7.3, the *only* fields that should join to more\n> ! than one target are pg_description.objoid, pg_depend.objid, and\n> ! pg_depend.refobjid. 
(Running make_oidjoins_check is an easy way to spot\n> ! fields joining to more than one table, BTW.)\n> \n> The shell script make_oidjoins_check converts findoidjoins' output\n> into an SQL script that checks for dangling links (entries in an\n> ! OID or REG* columns that don't match any row in the expected table).\n> Note that fields joining to more than one table are NOT processed.\n> \n> The result of make_oidjoins_check should be installed as the \"oidjoins\"\n> ***************\n> *** 27,43 ****\n> (Ideally we'd just regenerate the script as part of the regression\n> tests themselves, but that seems too slow...)\n> \n> ! NOTE: in 7.2, make_oidjoins_check produces one bogus join check, for\n> pg_class.relfilenode => pg_class.oid. This is an artifact and should not\n> be added to the oidjoins regress test.\n> \n> ---------------------------------------------------------------------------\n> ! \n> Join pg_aggregate.aggtransfn => pg_proc.oid\n> Join pg_aggregate.aggfinalfn => pg_proc.oid\n> - Join pg_aggregate.aggbasetype => pg_type.oid\n> Join pg_aggregate.aggtranstype => pg_type.oid\n> - Join pg_aggregate.aggfinaltype => pg_type.oid\n> Join pg_am.amgettuple => pg_proc.oid\n> Join pg_am.aminsert => pg_proc.oid\n> Join pg_am.ambeginscan => pg_proc.oid\n> --- 25,39 ----\n> (Ideally we'd just regenerate the script as part of the regression\n> tests themselves, but that seems too slow...)\n> \n> ! NOTE: in 7.3, make_oidjoins_check produces one bogus join check, for\n> pg_class.relfilenode => pg_class.oid. This is an artifact and should not\n> be added to the oidjoins regress test.\n> \n> ---------------------------------------------------------------------------\n> ! 
Join pg_aggregate.aggfnoid => pg_proc.oid\n> Join pg_aggregate.aggtransfn => pg_proc.oid\n> Join pg_aggregate.aggfinalfn => pg_proc.oid\n> Join pg_aggregate.aggtranstype => pg_type.oid\n> Join pg_am.amgettuple => pg_proc.oid\n> Join pg_am.aminsert => pg_proc.oid\n> Join pg_am.ambeginscan => pg_proc.oid\n> ***************\n> *** 54,68 ****\n> --- 50,95 ----\n> Join pg_amproc.amproc => pg_proc.oid\n> Join pg_attribute.attrelid => pg_class.oid\n> Join pg_attribute.atttypid => pg_type.oid\n> + Join pg_cast.castsource => pg_type.oid\n> + Join pg_cast.casttarget => pg_type.oid\n> + Join pg_cast.castfunc => pg_proc.oid\n> + Join pg_class.relnamespace => pg_namespace.oid\n> Join pg_class.reltype => pg_type.oid\n> Join pg_class.relam => pg_am.oid\n> + Join pg_class.relfilenode => pg_class.oid\n> Join pg_class.reltoastrelid => pg_class.oid\n> Join pg_class.reltoastidxid => pg_class.oid\n> + Join pg_conversion.connamespace => pg_namespace.oid\n> + Join pg_conversion.conproc => pg_proc.oid\n> + Join pg_database.datlastsysoid => pg_conversion.oid\n> + Join pg_depend.classid => pg_class.oid\n> + Join pg_depend.objid => pg_conversion.oid\n> + Join pg_depend.objid => pg_rewrite.oid\n> + Join pg_depend.objid => pg_type.oid\n> + Join pg_depend.refclassid => pg_class.oid\n> + Join pg_depend.refobjid => pg_cast.oid\n> + Join pg_depend.refobjid => pg_class.oid\n> + Join pg_depend.refobjid => pg_language.oid\n> + Join pg_depend.refobjid => pg_namespace.oid\n> + Join pg_depend.refobjid => pg_opclass.oid\n> + Join pg_depend.refobjid => pg_operator.oid\n> + Join pg_depend.refobjid => pg_proc.oid\n> + Join pg_depend.refobjid => pg_trigger.oid\n> + Join pg_depend.refobjid => pg_type.oid\n> + Join pg_description.objoid => pg_am.oid\n> + Join pg_description.objoid => pg_database.oid\n> + Join pg_description.objoid => pg_language.oid\n> + Join pg_description.objoid => pg_namespace.oid\n> + Join pg_description.objoid => pg_proc.oid\n> + Join pg_description.objoid => pg_type.oid\n> Join 
pg_description.classoid => pg_class.oid\n> Join pg_index.indexrelid => pg_class.oid\n> Join pg_index.indrelid => pg_class.oid\n> + Join pg_language.lanvalidator => pg_proc.oid\n> Join pg_opclass.opcamid => pg_am.oid\n> + Join pg_opclass.opcnamespace => pg_namespace.oid\n> Join pg_opclass.opcintype => pg_type.oid\n> + Join pg_operator.oprnamespace => pg_namespace.oid\n> Join pg_operator.oprleft => pg_type.oid\n> Join pg_operator.oprright => pg_type.oid\n> Join pg_operator.oprresult => pg_type.oid\n> ***************\n> *** 70,94 ****\n> Join pg_operator.oprnegate => pg_operator.oid\n> Join pg_operator.oprlsortop => pg_operator.oid\n> Join pg_operator.oprrsortop => pg_operator.oid\n> Join pg_operator.oprcode => pg_proc.oid\n> Join pg_operator.oprrest => pg_proc.oid\n> Join pg_operator.oprjoin => pg_proc.oid\n> Join pg_proc.prolang => pg_language.oid\n> Join pg_proc.prorettype => pg_type.oid\n> Join pg_rewrite.ev_class => pg_class.oid\n> - Join pg_statistic.starelid => pg_class.oid\n> - Join pg_statistic.staop1 => pg_operator.oid\n> - Join pg_statistic.staop2 => pg_operator.oid\n> - Join pg_statistic.staop3 => pg_operator.oid\n> Join pg_trigger.tgrelid => pg_class.oid\n> Join pg_trigger.tgfoid => pg_proc.oid\n> Join pg_type.typrelid => pg_class.oid\n> Join pg_type.typelem => pg_type.oid\n> Join pg_type.typinput => pg_proc.oid\n> Join pg_type.typoutput => pg_proc.oid\n> - Join pg_type.typreceive => pg_proc.oid\n> - Join pg_type.typsend => pg_proc.oid\n> - \n> ---------------------------------------------------------------------------\n> \n> Bruce Momjian (root@candle.pha.pa.us)\n> --- 97,119 ----\n> Join pg_operator.oprnegate => pg_operator.oid\n> Join pg_operator.oprlsortop => pg_operator.oid\n> Join pg_operator.oprrsortop => pg_operator.oid\n> + Join pg_operator.oprltcmpop => pg_operator.oid\n> + Join pg_operator.oprgtcmpop => pg_operator.oid\n> Join pg_operator.oprcode => pg_proc.oid\n> Join pg_operator.oprrest => pg_proc.oid\n> Join pg_operator.oprjoin => 
pg_proc.oid\n> + Join pg_proc.pronamespace => pg_namespace.oid\n> Join pg_proc.prolang => pg_language.oid\n> Join pg_proc.prorettype => pg_type.oid\n> Join pg_rewrite.ev_class => pg_class.oid\n> Join pg_trigger.tgrelid => pg_class.oid\n> Join pg_trigger.tgfoid => pg_proc.oid\n> + Join pg_type.typnamespace => pg_namespace.oid\n> Join pg_type.typrelid => pg_class.oid\n> Join pg_type.typelem => pg_type.oid\n> Join pg_type.typinput => pg_proc.oid\n> Join pg_type.typoutput => pg_proc.oid\n> ---------------------------------------------------------------------------\n> \n> Bruce Momjian (root@candle.pha.pa.us)\n> + Updated for 7.3 by Joe Conway (mail@joeconway.com)\n> Index: contrib/findoidjoins/findoidjoins.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/findoidjoins/findoidjoins.c,v\n> retrieving revision 1.17\n> diff -c -r1.17 findoidjoins.c\n> *** contrib/findoidjoins/findoidjoins.c\t4 Sep 2002 20:31:06 -0000\t1.17\n> --- contrib/findoidjoins/findoidjoins.c\t5 Sep 2002 04:51:16 -0000\n> ***************\n> *** 1,109 ****\n> /*\n> ! * findoidjoins.c, requires src/interfaces/libpgeasy\n> *\n> */\n> - #include \"postgres_fe.h\"\n> \n> ! #include \"libpq-fe.h\"\n> ! #include \"halt.h\"\n> ! #include \"libpgeasy.h\"\n> \n> ! PGresult *attres,\n> ! \t\t *relres;\n> \n> int\n> main(int argc, char **argv)\n> {\n> ! \tchar\t\tquery[4000];\n> ! \tchar\t\trelname[256];\n> ! \tchar\t\trelname2[256];\n> ! \tchar\t\tattname[256];\n> ! \tchar\t\ttypname[256];\n> ! \tint\t\t\tcount;\n> ! \tchar\t\toptstr[256];\n> \n> \tif (argc != 2)\n> ! \t\thalt(\"Usage: %s database\\n\", argv[0]);\n> \n> ! \tsnprintf(optstr, 256, \"dbname=%s\", argv[1]);\n> ! \tconnectdb(optstr);\n> \n> ! \ton_error_continue();\n> ! \ton_error_stop();\n> \n> ! \tdoquery(\"BEGIN WORK\");\n> ! \tdoquery(\"\\\n> ! \t\tDECLARE c_attributes BINARY CURSOR FOR \\\n> ! \t\tSELECT typname, relname, a.attname \\\n> ! 
\t\tFROM pg_class c, pg_attribute a, pg_type t \\\n> ! \t\tWHERE a.attnum > 0 AND \\\n> ! \t\t\t relkind = 'r' AND \\\n> ! \t\t\t (typname = 'oid' OR \\\n> ! \t\t\t typname = 'regproc' OR \\\n> ! \t\t\t typname = 'regclass' OR \\\n> ! \t\t\t typname = 'regtype') AND \\\n> ! \t\t\t a.attrelid = c.oid AND \\\n> ! \t\t\t a.atttypid = t.oid \\\n> ! \t\tORDER BY 2, a.attnum ; \\\n> ! \t\t\");\n> ! \tdoquery(\"FETCH ALL IN c_attributes\");\n> ! \tattres = get_result();\n> ! \n> ! \tdoquery(\"\\\n> ! \t\tDECLARE c_relations BINARY CURSOR FOR \\\n> ! \t\tSELECT relname \\\n> ! \t\tFROM pg_class c \\\n> ! \t\tWHERE relkind = 'r' AND relhasoids \\\n> ! \t\tORDER BY 1; \\\n> ! \t\t\");\n> ! \tdoquery(\"FETCH ALL IN c_relations\");\n> ! \trelres = get_result();\n> \n> ! \tset_result(attres);\n> ! \twhile (fetch(typname, relname, attname) != END_OF_TUPLES)\n> \t{\n> ! \t\tset_result(relres);\n> ! \t\treset_fetch();\n> ! \t\twhile (fetch(relname2) != END_OF_TUPLES)\n> ! \t\t{\n> ! \t\t\tunset_result(relres);\n> ! \t\t\tif (strcmp(typname, \"oid\") == 0)\n> ! \t\t\t\tsnprintf(query, 4000, \"\\\n> ! \t\t\t\t\tDECLARE c_matches BINARY CURSOR FOR \\\n> ! \t\t\t\t\tSELECT\tcount(*)::int4 \\\n> ! \t\t\t\t\t\tFROM \\\"%s\\\" t1, \\\"%s\\\" t2 \\\n> ! \t\t\t\t\tWHERE t1.\\\"%s\\\" = t2.oid \",\n> ! \t\t\t\t\t\t relname, relname2, attname);\n> ! \t\t\telse\n> ! \t\t\t\tsprintf(query, 4000, \"\\\n> ! \t\t\t\t\tDECLARE c_matches BINARY CURSOR FOR \\\n> ! \t\t\t\t\tSELECT\tcount(*)::int4 \\\n> ! \t\t\t\t\t\tFROM \\\"%s\\\" t1, \\\"%s\\\" t2 \\\n> ! \t\t\t\t\tWHERE t1.\\\"%s\\\"::oid = t2.oid \",\n> ! \t\t\t\t\t\trelname, relname2, attname);\n> ! \n> ! \t\t\tdoquery(query);\n> ! \t\t\tdoquery(\"FETCH ALL IN c_matches\");\n> ! \t\t\tfetch(&count);\n> ! \t\t\tif (count != 0)\n> ! \t\t\t\tprintf(\"Join %s.%s => %s.oid\\n\", relname, attname, relname2);\n> ! \t\t\tdoquery(\"CLOSE c_matches\");\n> ! \t\t\tset_result(relres);\n> ! \t\t}\n> ! \t\tset_result(attres);\n> \t}\n> \n> ! 
\tset_result(relres);\n> ! \tdoquery(\"CLOSE c_relations\");\n> ! \tPQclear(relres);\n> ! \n> ! \tset_result(attres);\n> ! \tdoquery(\"CLOSE c_attributes\");\n> ! \tPQclear(attres);\n> ! \tunset_result(attres);\n> \n> ! \tdoquery(\"COMMIT WORK\");\n> \n> ! \tdisconnectdb();\n> ! \treturn 0;\n> }\n> --- 1,152 ----\n> /*\n> ! * findoidjoins\n> ! *\n> ! * Copyright 2002 by PostgreSQL Global Development Group\n> ! *\n> ! * Permission to use, copy, modify, and distribute this software and its\n> ! * documentation for any purpose, without fee, and without a written agreement\n> ! * is hereby granted, provided that the above copyright notice and this\n> ! * paragraph and the following two paragraphs appear in all copies.\n> ! * \n> ! * IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR\n> ! * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> ! * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> ! * DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE\n> ! * POSSIBILITY OF SUCH DAMAGE.\n> ! * \n> ! * THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,\n> ! * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> ! * AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS\n> ! * ON AN \"AS IS\" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO\n> ! * PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> *\n> */\n> \n> ! #include <stdlib.h>\n> \n> ! #include \"postgres_fe.h\"\n> ! #include \"libpq-fe.h\"\n> ! #include \"pqexpbuffer.h\"\n> \n> int\n> main(int argc, char **argv)\n> {\n> ! \tPGconn\t\t\t *conn;\n> ! \tPQExpBufferData\t\tsql;\n> ! \tPGresult\t\t *res;\n> ! \tPGresult\t\t *pkrel_res;\n> ! \tPGresult\t\t *fkrel_res;\n> ! \tchar\t\t\t *fk_relname;\n> ! \tchar\t\t\t *fk_nspname;\n> ! \tchar\t\t\t *fk_attname;\n> ! \tchar\t\t\t *fk_typname;\n> ! \tchar\t\t\t *pk_relname;\n> ! 
\tchar\t\t\t *pk_nspname;\n> ! \tint\t\t\t\t\tfk, pk;\t\t/* loop counters */\n> \n> \tif (argc != 2)\n> ! \t{\n> ! \t\tfprintf(stderr, \"Usage: %s database\\n\", argv[0]);\n> ! \t\texit(EXIT_FAILURE);\n> ! \t}\t\t\n> \n> ! \tinitPQExpBuffer(&sql);\n> ! \tappendPQExpBuffer(&sql, \"dbname=%s\", argv[1]);\n> \n> ! \tconn = PQconnectdb(sql.data);\n> ! \tif (PQstatus(conn) == CONNECTION_BAD)\n> ! \t{\n> ! \t\tfprintf(stderr, \"connection error: %s\\n\", PQerrorMessage(conn));\n> ! \t\texit(EXIT_FAILURE);\n> ! \t}\n> ! \n> ! \ttermPQExpBuffer(&sql);\n> ! \tinitPQExpBuffer(&sql);\n> \n> ! \tappendPQExpBuffer(&sql, \"%s\",\n> ! \t\t\"SELECT c.relname, (SELECT nspname FROM \"\n> ! \t\t\"pg_catalog.pg_namespace n WHERE n.oid = c.relnamespace) AS nspname \"\n> ! \t\t\"FROM pg_catalog.pg_class c \"\n> ! \t\t\"WHERE c.relkind = 'r' \"\n> ! \t\t\"AND c.relhasoids \"\n> ! \t\t\"ORDER BY nspname, c.relname\"\n> ! \t\t);\n> \n> ! \tres = PQexec(conn, sql.data);\n> ! \tif (!res || PQresultStatus(res) != PGRES_TUPLES_OK)\n> \t{\n> ! \t\tfprintf(stderr, \"sql error: %s\\n\", PQerrorMessage(conn));\n> ! \t\texit(EXIT_FAILURE);\n> \t}\n> + \tpkrel_res = res;\n> + \n> + \ttermPQExpBuffer(&sql);\n> + \tinitPQExpBuffer(&sql);\n> \n> ! \tappendPQExpBuffer(&sql, \"%s\",\n> ! \t\t\"SELECT c.relname, \"\n> ! \t\t\"(SELECT nspname FROM pg_catalog.pg_namespace n WHERE n.oid = c.relnamespace) AS nspname, \"\n> ! \t\t\"a.attname, \"\n> ! \t\t\"t.typname \"\n> ! \t\t\"FROM pg_catalog.pg_class c, pg_catalog.pg_attribute a, pg_catalog.pg_type t \"\n> ! \t\t\"WHERE a.attnum > 0 AND c.relkind = 'r' \"\n> ! \t\t\"AND t.typnamespace IN (SELECT n.oid FROM pg_catalog.pg_namespace n WHERE nspname LIKE 'pg\\\\_%') \"\n> ! \t\t\"AND (t.typname = 'oid' OR t.typname LIKE 'reg%') \"\n> ! \t\t\"AND a.attrelid = c.oid \"\n> ! \t\t\"AND a.atttypid = t.oid \"\n> ! \t\t\"ORDER BY nspname, c.relname, a.attnum\"\n> ! \t\t);\n> \n> ! \tres = PQexec(conn, sql.data);\n> ! 
\tif (!res || PQresultStatus(res) != PGRES_TUPLES_OK)\n> ! \t{\n> ! \t\tfprintf(stderr, \"sql error: %s\\n\", PQerrorMessage(conn));\n> ! \t\texit(EXIT_FAILURE);\n> ! \t}\n> ! \tfkrel_res = res;\n> ! \n> ! \ttermPQExpBuffer(&sql);\n> ! \tinitPQExpBuffer(&sql);\n> ! \n> ! \tfor (fk = 0; fk < PQntuples(fkrel_res); fk++)\n> ! \t{\n> ! \t\tfk_relname = PQgetvalue(fkrel_res, fk, 0);\n> ! \t\tfk_nspname = PQgetvalue(fkrel_res, fk, 1);\n> ! \t\tfk_attname = PQgetvalue(fkrel_res, fk, 2);\n> ! \t\tfk_typname = PQgetvalue(fkrel_res, fk, 3);\n> ! \n> ! \t\tfor (pk = 0; pk < PQntuples(pkrel_res); pk++)\n> ! \t\t{\n> ! \t\t\tpk_relname = PQgetvalue(pkrel_res, pk, 0);\n> ! \t\t\tpk_nspname = PQgetvalue(pkrel_res, pk, 1);\n> ! \n> ! \t\t\tappendPQExpBuffer(&sql,\n> ! \t\t\t\t\"SELECT\t1 \"\n> ! \t\t\t\t\"FROM \\\"%s\\\".\\\"%s\\\" t1, \"\n> ! \t\t\t\t\"\\\"%s\\\".\\\"%s\\\" t2 \"\n> ! \t\t\t\t\"WHERE t1.\\\"%s\\\"::oid = t2.oid\",\n> ! \t\t\t\tfk_nspname, fk_relname, pk_nspname, pk_relname, fk_attname);\n> ! \n> ! \t\t\tres = PQexec(conn, sql.data);\n> ! \t\t\tif (!res || PQresultStatus(res) != PGRES_TUPLES_OK)\n> ! \t\t\t{\n> ! \t\t\t\tfprintf(stderr, \"sql error: %s\\n\", PQerrorMessage(conn));\n> ! \t\t\t\texit(EXIT_FAILURE);\n> ! \t\t\t}\n> ! \n> ! \t\t\tif (PQntuples(res) != 0)\n> ! \t\t\t\tprintf(\"Join %s.%s => %s.oid\\n\",\n> ! \t\t\t\t\t\tfk_relname, fk_attname, pk_relname);\n> ! \n> ! \t\t\tPQclear(res);\n> ! \n> ! \t\t\ttermPQExpBuffer(&sql);\n> ! \t\t\tinitPQExpBuffer(&sql);\n> ! \t\t}\n> ! \t}\n> ! \tPQclear(pkrel_res);\n> ! \tPQclear(fkrel_res);\n> ! \tPQfinish(conn);\n> \n> ! 
\texit(EXIT_SUCCESS);\n> }\n\n> Join pg_aggregate.aggfnoid => pg_proc.oid\n> Join pg_aggregate.aggtransfn => pg_proc.oid\n> Join pg_aggregate.aggfinalfn => pg_proc.oid\n> Join pg_aggregate.aggtranstype => pg_type.oid\n> Join pg_am.amgettuple => pg_proc.oid\n> Join pg_am.aminsert => pg_proc.oid\n> Join pg_am.ambeginscan => pg_proc.oid\n> Join pg_am.amrescan => pg_proc.oid\n> Join pg_am.amendscan => pg_proc.oid\n> Join pg_am.ammarkpos => pg_proc.oid\n> Join pg_am.amrestrpos => pg_proc.oid\n> Join pg_am.ambuild => pg_proc.oid\n> Join pg_am.ambulkdelete => pg_proc.oid\n> Join pg_am.amcostestimate => pg_proc.oid\n> Join pg_amop.amopclaid => pg_opclass.oid\n> Join pg_amop.amopopr => pg_operator.oid\n> Join pg_amproc.amopclaid => pg_opclass.oid\n> Join pg_amproc.amproc => pg_proc.oid\n> Join pg_attribute.attrelid => pg_class.oid\n> Join pg_attribute.atttypid => pg_type.oid\n> Join pg_cast.castsource => pg_type.oid\n> Join pg_cast.casttarget => pg_type.oid\n> Join pg_cast.castfunc => pg_proc.oid\n> Join pg_class.relnamespace => pg_namespace.oid\n> Join pg_class.reltype => pg_type.oid\n> Join pg_class.relam => pg_am.oid\n> Join pg_class.relfilenode => pg_class.oid\n> Join pg_class.reltoastrelid => pg_class.oid\n> Join pg_class.reltoastidxid => pg_class.oid\n> Join pg_conversion.connamespace => pg_namespace.oid\n> Join pg_conversion.conproc => pg_proc.oid\n> Join pg_database.datlastsysoid => pg_conversion.oid\n> Join pg_depend.classid => pg_class.oid\n> Join pg_depend.objid => pg_conversion.oid\n> Join pg_depend.objid => pg_rewrite.oid\n> Join pg_depend.objid => pg_type.oid\n> Join pg_depend.refclassid => pg_class.oid\n> Join pg_depend.refobjid => pg_cast.oid\n> Join pg_depend.refobjid => pg_class.oid\n> Join pg_depend.refobjid => pg_language.oid\n> Join pg_depend.refobjid => pg_namespace.oid\n> Join pg_depend.refobjid => pg_opclass.oid\n> Join pg_depend.refobjid => pg_operator.oid\n> Join pg_depend.refobjid => pg_proc.oid\n> Join pg_depend.refobjid => pg_trigger.oid\n> Join 
pg_depend.refobjid => pg_type.oid\n> Join pg_description.objoid => pg_am.oid\n> Join pg_description.objoid => pg_database.oid\n> Join pg_description.objoid => pg_language.oid\n> Join pg_description.objoid => pg_namespace.oid\n> Join pg_description.objoid => pg_proc.oid\n> Join pg_description.objoid => pg_type.oid\n> Join pg_description.classoid => pg_class.oid\n> Join pg_index.indexrelid => pg_class.oid\n> Join pg_index.indrelid => pg_class.oid\n> Join pg_language.lanvalidator => pg_proc.oid\n> Join pg_opclass.opcamid => pg_am.oid\n> Join pg_opclass.opcnamespace => pg_namespace.oid\n> Join pg_opclass.opcintype => pg_type.oid\n> Join pg_operator.oprnamespace => pg_namespace.oid\n> Join pg_operator.oprleft => pg_type.oid\n> Join pg_operator.oprright => pg_type.oid\n> Join pg_operator.oprresult => pg_type.oid\n> Join pg_operator.oprcom => pg_operator.oid\n> Join pg_operator.oprnegate => pg_operator.oid\n> Join pg_operator.oprlsortop => pg_operator.oid\n> Join pg_operator.oprrsortop => pg_operator.oid\n> Join pg_operator.oprltcmpop => pg_operator.oid\n> Join pg_operator.oprgtcmpop => pg_operator.oid\n> Join pg_operator.oprcode => pg_proc.oid\n> Join pg_operator.oprrest => pg_proc.oid\n> Join pg_operator.oprjoin => pg_proc.oid\n> Join pg_proc.pronamespace => pg_namespace.oid\n> Join pg_proc.prolang => pg_language.oid\n> Join pg_proc.prorettype => pg_type.oid\n> Join pg_rewrite.ev_class => pg_class.oid\n> Join pg_trigger.tgrelid => pg_class.oid\n> Join pg_trigger.tgfoid => pg_proc.oid\n> Join pg_type.typnamespace => pg_namespace.oid\n> Join pg_type.typrelid => pg_class.oid\n> Join pg_type.typelem => pg_type.oid\n> Join pg_type.typinput => pg_proc.oid\n> Join pg_type.typoutput => pg_proc.oid\n\n> --\n> -- This is created by pgsql/contrib/findoidjoins/make_oidjoin_check\n> --\n> SELECT\tctid, pg_aggregate.aggfnoid \n> FROM\tpg_aggregate \n> WHERE\tpg_aggregate.aggfnoid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_aggregate.aggfnoid);\n> 
SELECT\tctid, pg_aggregate.aggtransfn \n> FROM\tpg_aggregate \n> WHERE\tpg_aggregate.aggtransfn != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_aggregate.aggtransfn);\n> SELECT\tctid, pg_aggregate.aggfinalfn \n> FROM\tpg_aggregate \n> WHERE\tpg_aggregate.aggfinalfn != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_aggregate.aggfinalfn);\n> SELECT\tctid, pg_aggregate.aggtranstype \n> FROM\tpg_aggregate \n> WHERE\tpg_aggregate.aggtranstype != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_aggregate.aggtranstype);\n> SELECT\tctid, pg_am.amgettuple \n> FROM\tpg_am \n> WHERE\tpg_am.amgettuple != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.amgettuple);\n> SELECT\tctid, pg_am.aminsert \n> FROM\tpg_am \n> WHERE\tpg_am.aminsert != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.aminsert);\n> SELECT\tctid, pg_am.ambeginscan \n> FROM\tpg_am \n> WHERE\tpg_am.ambeginscan != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.ambeginscan);\n> SELECT\tctid, pg_am.amrescan \n> FROM\tpg_am \n> WHERE\tpg_am.amrescan != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.amrescan);\n> SELECT\tctid, pg_am.amendscan \n> FROM\tpg_am \n> WHERE\tpg_am.amendscan != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.amendscan);\n> SELECT\tctid, pg_am.ammarkpos \n> FROM\tpg_am \n> WHERE\tpg_am.ammarkpos != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.ammarkpos);\n> SELECT\tctid, pg_am.amrestrpos \n> FROM\tpg_am \n> WHERE\tpg_am.amrestrpos != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.amrestrpos);\n> SELECT\tctid, pg_am.ambuild \n> FROM\tpg_am \n> WHERE\tpg_am.ambuild != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.ambuild);\n> SELECT\tctid, pg_am.ambulkdelete \n> FROM\tpg_am \n> WHERE\tpg_am.ambulkdelete != 0 AND \n> \tNOT 
EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.ambulkdelete);\n> SELECT\tctid, pg_am.amcostestimate \n> FROM\tpg_am \n> WHERE\tpg_am.amcostestimate != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.amcostestimate);\n> SELECT\tctid, pg_amop.amopclaid \n> FROM\tpg_amop \n> WHERE\tpg_amop.amopclaid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_opclass AS t1 WHERE t1.oid = pg_amop.amopclaid);\n> SELECT\tctid, pg_amop.amopopr \n> FROM\tpg_amop \n> WHERE\tpg_amop.amopopr != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_amop.amopopr);\n> SELECT\tctid, pg_amproc.amopclaid \n> FROM\tpg_amproc \n> WHERE\tpg_amproc.amopclaid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_opclass AS t1 WHERE t1.oid = pg_amproc.amopclaid);\n> SELECT\tctid, pg_amproc.amproc \n> FROM\tpg_amproc \n> WHERE\tpg_amproc.amproc != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_amproc.amproc);\n> SELECT\tctid, pg_attribute.attrelid \n> FROM\tpg_attribute \n> WHERE\tpg_attribute.attrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_attribute.attrelid);\n> SELECT\tctid, pg_attribute.atttypid \n> FROM\tpg_attribute \n> WHERE\tpg_attribute.atttypid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_attribute.atttypid);\n> SELECT\tctid, pg_cast.castsource \n> FROM\tpg_cast \n> WHERE\tpg_cast.castsource != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_cast.castsource);\n> SELECT\tctid, pg_cast.casttarget \n> FROM\tpg_cast \n> WHERE\tpg_cast.casttarget != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_cast.casttarget);\n> SELECT\tctid, pg_cast.castfunc \n> FROM\tpg_cast \n> WHERE\tpg_cast.castfunc != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_cast.castfunc);\n> SELECT\tctid, pg_class.relnamespace \n> FROM\tpg_class \n> WHERE\tpg_class.relnamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid 
= pg_class.relnamespace);\n> SELECT\tctid, pg_class.reltype \n> FROM\tpg_class \n> WHERE\tpg_class.reltype != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_class.reltype);\n> SELECT\tctid, pg_class.relam \n> FROM\tpg_class \n> WHERE\tpg_class.relam != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_am AS t1 WHERE t1.oid = pg_class.relam);\n> SELECT\tctid, pg_class.relfilenode \n> FROM\tpg_class \n> WHERE\tpg_class.relfilenode != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_class.relfilenode);\n> SELECT\tctid, pg_class.reltoastrelid \n> FROM\tpg_class \n> WHERE\tpg_class.reltoastrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_class.reltoastrelid);\n> SELECT\tctid, pg_class.reltoastidxid \n> FROM\tpg_class \n> WHERE\tpg_class.reltoastidxid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_class.reltoastidxid);\n> SELECT\tctid, pg_conversion.connamespace \n> FROM\tpg_conversion \n> WHERE\tpg_conversion.connamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid = pg_conversion.connamespace);\n> SELECT\tctid, pg_conversion.conproc \n> FROM\tpg_conversion \n> WHERE\tpg_conversion.conproc != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_conversion.conproc);\n> SELECT\tctid, pg_database.datlastsysoid \n> FROM\tpg_database \n> WHERE\tpg_database.datlastsysoid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_conversion AS t1 WHERE t1.oid = pg_database.datlastsysoid);\n> SELECT\tctid, pg_depend.classid \n> FROM\tpg_depend \n> WHERE\tpg_depend.classid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_depend.classid);\n> SELECT\tctid, pg_depend.refclassid \n> FROM\tpg_depend \n> WHERE\tpg_depend.refclassid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_depend.refclassid);\n> SELECT\tctid, pg_description.classoid \n> FROM\tpg_description \n> WHERE\tpg_description.classoid != 0 AND \n> \tNOT 
EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_description.classoid);\n> SELECT\tctid, pg_index.indexrelid \n> FROM\tpg_index \n> WHERE\tpg_index.indexrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_index.indexrelid);\n> SELECT\tctid, pg_index.indrelid \n> FROM\tpg_index \n> WHERE\tpg_index.indrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_index.indrelid);\n> SELECT\tctid, pg_language.lanvalidator \n> FROM\tpg_language \n> WHERE\tpg_language.lanvalidator != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_language.lanvalidator);\n> SELECT\tctid, pg_opclass.opcamid \n> FROM\tpg_opclass \n> WHERE\tpg_opclass.opcamid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_am AS t1 WHERE t1.oid = pg_opclass.opcamid);\n> SELECT\tctid, pg_opclass.opcnamespace \n> FROM\tpg_opclass \n> WHERE\tpg_opclass.opcnamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid = pg_opclass.opcnamespace);\n> SELECT\tctid, pg_opclass.opcintype \n> FROM\tpg_opclass \n> WHERE\tpg_opclass.opcintype != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_opclass.opcintype);\n> SELECT\tctid, pg_operator.oprnamespace \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprnamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid = pg_operator.oprnamespace);\n> SELECT\tctid, pg_operator.oprleft \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprleft != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_operator.oprleft);\n> SELECT\tctid, pg_operator.oprright \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprright != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_operator.oprright);\n> SELECT\tctid, pg_operator.oprresult \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprresult != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_operator.oprresult);\n> SELECT\tctid, pg_operator.oprcom \n> FROM\tpg_operator \n> 
WHERE\tpg_operator.oprcom != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprcom);\n> SELECT\tctid, pg_operator.oprnegate \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprnegate != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprnegate);\n> SELECT\tctid, pg_operator.oprlsortop \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprlsortop != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprlsortop);\n> SELECT\tctid, pg_operator.oprrsortop \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprrsortop != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprrsortop);\n> SELECT\tctid, pg_operator.oprltcmpop \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprltcmpop != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprltcmpop);\n> SELECT\tctid, pg_operator.oprgtcmpop \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprgtcmpop != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprgtcmpop);\n> SELECT\tctid, pg_operator.oprcode \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprcode != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_operator.oprcode);\n> SELECT\tctid, pg_operator.oprrest \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprrest != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_operator.oprrest);\n> SELECT\tctid, pg_operator.oprjoin \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprjoin != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_operator.oprjoin);\n> SELECT\tctid, pg_proc.pronamespace \n> FROM\tpg_proc \n> WHERE\tpg_proc.pronamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid = pg_proc.pronamespace);\n> SELECT\tctid, pg_proc.prolang \n> FROM\tpg_proc \n> WHERE\tpg_proc.prolang != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_language AS t1 WHERE t1.oid = pg_proc.prolang);\n> 
SELECT\tctid, pg_proc.prorettype \n> FROM\tpg_proc \n> WHERE\tpg_proc.prorettype != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_proc.prorettype);\n> SELECT\tctid, pg_rewrite.ev_class \n> FROM\tpg_rewrite \n> WHERE\tpg_rewrite.ev_class != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_rewrite.ev_class);\n> SELECT\tctid, pg_trigger.tgrelid \n> FROM\tpg_trigger \n> WHERE\tpg_trigger.tgrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_trigger.tgrelid);\n> SELECT\tctid, pg_trigger.tgfoid \n> FROM\tpg_trigger \n> WHERE\tpg_trigger.tgfoid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_trigger.tgfoid);\n> SELECT\tctid, pg_type.typnamespace \n> FROM\tpg_type \n> WHERE\tpg_type.typnamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid = pg_type.typnamespace);\n> SELECT\tctid, pg_type.typrelid \n> FROM\tpg_type \n> WHERE\tpg_type.typrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_type.typrelid);\n> SELECT\tctid, pg_type.typelem \n> FROM\tpg_type \n> WHERE\tpg_type.typelem != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_type.typelem);\n> SELECT\tctid, pg_type.typinput \n> FROM\tpg_type \n> WHERE\tpg_type.typinput != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_type.typinput);\n> SELECT\tctid, pg_type.typoutput \n> FROM\tpg_type \n> WHERE\tpg_type.typoutput != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_type.typoutput);\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 10 Sep 2002 22:32:35 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: findoidjoins patch (was Re: [HACKERS] findoidjoins)" }, { "msg_contents": "Bruce Momjian wrote:\n> Your patch has been added to the PostgreSQL unapplied patches list at:\n> \n> \thttp://candle.pha.pa.us/cgi-bin/pgpatches\n> \n> I will try to apply it within the next 48 hours.\n> \n\nI think I saw a commit message from Tom applying this already...yup:\n\nhttp://developer.postgresql.org/cvsweb.cgi/pgsql-server/contrib/findoidjoins/findoidjoins.c\n\nJoe\n\n", "msg_date": "Tue, 10 Sep 2002 19:54:26 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: findoidjoins patch (was Re: [HACKERS] findoidjoins)" }, { "msg_contents": "\nPatch already applied by Tom.\n\n---------------------------------------------------------------------------\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > I'd be inclined to reproduce the original behavior. findoidjoins is\n> > pretty slow already, and I don't much want to slow it down more in order\n> > to provide info that's useless for the primary purpose.\n> \n> Here's take two. It produces results similar to the previous version, \n> but using libpq and schema aware queries.\n> \n> \n> > use it for checking the stuff in pg_catalog. So I'm not at all sure why\n> > I made that note. Do you get a plausible set of joins out of your\n> > version?\n> \n> Looks reasonable to me. 
I attached the outputs of findoidjoins and \n> make_oidjoins_check for review as well.\n> \n> Please review and commit, or kick back to me if more work is needed.\n> \n> Thanks,\n> \n> Joe\n> \n\n> Index: contrib/findoidjoins/Makefile\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/findoidjoins/Makefile,v\n> retrieving revision 1.13\n> diff -c -r1.13 Makefile\n> *** contrib/findoidjoins/Makefile\t6 Sep 2001 10:49:29 -0000\t1.13\n> --- contrib/findoidjoins/Makefile\t4 Sep 2002 23:36:27 -0000\n> ***************\n> *** 1,5 ****\n> - # $Header: /opt/src/cvs/pgsql-server/contrib/findoidjoins/Makefile,v 1.13 2001/09/06 10:49:29 petere Exp $\n> - \n> subdir = contrib/findoidjoins\n> top_builddir = ../..\n> include $(top_builddir)/src/Makefile.global\n> --- 1,3 ----\n> ***************\n> *** 7,17 ****\n> PROGRAM = findoidjoins\n> OBJS\t= findoidjoins.o\n> \n> ! libpgeasy_srcdir = $(top_srcdir)/src/interfaces/libpgeasy\n> ! libpgeasy_builddir = $(top_builddir)/src/interfaces/libpgeasy\n> ! \n> ! PG_CPPFLAGS = -I$(libpgeasy_srcdir) -I$(libpq_srcdir)\n> ! PG_LIBS = -L$(libpgeasy_builddir) -lpgeasy $(libpq)\n> \n> SCRIPTS = make_oidjoins_check\n> DOCS = README.findoidjoins\n> --- 5,12 ----\n> PROGRAM = findoidjoins\n> OBJS\t= findoidjoins.o\n> \n> ! PG_CPPFLAGS = -I$(libpq_srcdir)\n> ! PG_LIBS = $(libpq)\n> \n> SCRIPTS = make_oidjoins_check\n> DOCS = README.findoidjoins\n> Index: contrib/findoidjoins/README.findoidjoins\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/findoidjoins/README.findoidjoins,v\n> retrieving revision 1.5\n> diff -c -r1.5 README.findoidjoins\n> *** contrib/findoidjoins/README.findoidjoins\t25 Apr 2002 02:56:55 -0000\t1.5\n> --- contrib/findoidjoins/README.findoidjoins\t5 Sep 2002 04:42:21 -0000\n> ***************\n> *** 1,24 ****\n> \n> \t\t\t findoidjoins\n> \n> ! 
This program scans a database, and prints oid fields (also regproc, regclass\n> ! and regtype fields) and the tables they join to. CAUTION: it is ver-r-r-y\n> ! slow on a large database, or even a not-so-large one. We don't really\n> ! recommend running it on anything but an empty database, such as template1.\n> ! \n> ! Uses pgeasy library.\n> \n> Run on an empty database, it returns the system join relationships (shown\n> ! below for 7.2). Note that unexpected matches may indicate bogus entries\n> in system tables --- don't accept a peculiar match without question.\n> In particular, a field shown as joining to more than one target table is\n> ! probably messed up. In 7.2, the *only* field that should join to more\n> ! than one target is pg_description.objoid. (Running make_oidjoins_check\n> ! is an easy way to spot fields joining to more than one table, BTW.)\n> \n> The shell script make_oidjoins_check converts findoidjoins' output\n> into an SQL script that checks for dangling links (entries in an\n> ! OID or REGPROC column that don't match any row in the expected table).\n> Note that fields joining to more than one table are NOT processed.\n> \n> The result of make_oidjoins_check should be installed as the \"oidjoins\"\n> --- 1,22 ----\n> \n> \t\t\t findoidjoins\n> \n> ! This program scans a database, and prints oid fields (also reg* fields)\n> ! and the tables they join to. We don't really recommend running it on\n> ! anything but an empty database, such as template1.\n> \n> Run on an empty database, it returns the system join relationships (shown\n> ! below for 7.3). Note that unexpected matches may indicate bogus entries\n> in system tables --- don't accept a peculiar match without question.\n> In particular, a field shown as joining to more than one target table is\n> ! probably messed up. In 7.3, the *only* fields that should join to more\n> ! than one target are pg_description.objoid, pg_depend.objid, and\n> ! pg_depend.refobjid. 
(Running make_oidjoins_check is an easy way to spot\n> ! fields joining to more than one table, BTW.)\n> \n> The shell script make_oidjoins_check converts findoidjoins' output\n> into an SQL script that checks for dangling links (entries in an\n> ! OID or REG* columns that don't match any row in the expected table).\n> Note that fields joining to more than one table are NOT processed.\n> \n> The result of make_oidjoins_check should be installed as the \"oidjoins\"\n> ***************\n> *** 27,43 ****\n> (Ideally we'd just regenerate the script as part of the regression\n> tests themselves, but that seems too slow...)\n> \n> ! NOTE: in 7.2, make_oidjoins_check produces one bogus join check, for\n> pg_class.relfilenode => pg_class.oid. This is an artifact and should not\n> be added to the oidjoins regress test.\n> \n> ---------------------------------------------------------------------------\n> ! \n> Join pg_aggregate.aggtransfn => pg_proc.oid\n> Join pg_aggregate.aggfinalfn => pg_proc.oid\n> - Join pg_aggregate.aggbasetype => pg_type.oid\n> Join pg_aggregate.aggtranstype => pg_type.oid\n> - Join pg_aggregate.aggfinaltype => pg_type.oid\n> Join pg_am.amgettuple => pg_proc.oid\n> Join pg_am.aminsert => pg_proc.oid\n> Join pg_am.ambeginscan => pg_proc.oid\n> --- 25,39 ----\n> (Ideally we'd just regenerate the script as part of the regression\n> tests themselves, but that seems too slow...)\n> \n> ! NOTE: in 7.3, make_oidjoins_check produces one bogus join check, for\n> pg_class.relfilenode => pg_class.oid. This is an artifact and should not\n> be added to the oidjoins regress test.\n> \n> ---------------------------------------------------------------------------\n> ! 
Join pg_aggregate.aggfnoid => pg_proc.oid\n> Join pg_aggregate.aggtransfn => pg_proc.oid\n> Join pg_aggregate.aggfinalfn => pg_proc.oid\n> Join pg_aggregate.aggtranstype => pg_type.oid\n> Join pg_am.amgettuple => pg_proc.oid\n> Join pg_am.aminsert => pg_proc.oid\n> Join pg_am.ambeginscan => pg_proc.oid\n> ***************\n> *** 54,68 ****\n> --- 50,95 ----\n> Join pg_amproc.amproc => pg_proc.oid\n> Join pg_attribute.attrelid => pg_class.oid\n> Join pg_attribute.atttypid => pg_type.oid\n> + Join pg_cast.castsource => pg_type.oid\n> + Join pg_cast.casttarget => pg_type.oid\n> + Join pg_cast.castfunc => pg_proc.oid\n> + Join pg_class.relnamespace => pg_namespace.oid\n> Join pg_class.reltype => pg_type.oid\n> Join pg_class.relam => pg_am.oid\n> + Join pg_class.relfilenode => pg_class.oid\n> Join pg_class.reltoastrelid => pg_class.oid\n> Join pg_class.reltoastidxid => pg_class.oid\n> + Join pg_conversion.connamespace => pg_namespace.oid\n> + Join pg_conversion.conproc => pg_proc.oid\n> + Join pg_database.datlastsysoid => pg_conversion.oid\n> + Join pg_depend.classid => pg_class.oid\n> + Join pg_depend.objid => pg_conversion.oid\n> + Join pg_depend.objid => pg_rewrite.oid\n> + Join pg_depend.objid => pg_type.oid\n> + Join pg_depend.refclassid => pg_class.oid\n> + Join pg_depend.refobjid => pg_cast.oid\n> + Join pg_depend.refobjid => pg_class.oid\n> + Join pg_depend.refobjid => pg_language.oid\n> + Join pg_depend.refobjid => pg_namespace.oid\n> + Join pg_depend.refobjid => pg_opclass.oid\n> + Join pg_depend.refobjid => pg_operator.oid\n> + Join pg_depend.refobjid => pg_proc.oid\n> + Join pg_depend.refobjid => pg_trigger.oid\n> + Join pg_depend.refobjid => pg_type.oid\n> + Join pg_description.objoid => pg_am.oid\n> + Join pg_description.objoid => pg_database.oid\n> + Join pg_description.objoid => pg_language.oid\n> + Join pg_description.objoid => pg_namespace.oid\n> + Join pg_description.objoid => pg_proc.oid\n> + Join pg_description.objoid => pg_type.oid\n> Join 
pg_description.classoid => pg_class.oid\n> Join pg_index.indexrelid => pg_class.oid\n> Join pg_index.indrelid => pg_class.oid\n> + Join pg_language.lanvalidator => pg_proc.oid\n> Join pg_opclass.opcamid => pg_am.oid\n> + Join pg_opclass.opcnamespace => pg_namespace.oid\n> Join pg_opclass.opcintype => pg_type.oid\n> + Join pg_operator.oprnamespace => pg_namespace.oid\n> Join pg_operator.oprleft => pg_type.oid\n> Join pg_operator.oprright => pg_type.oid\n> Join pg_operator.oprresult => pg_type.oid\n> ***************\n> *** 70,94 ****\n> Join pg_operator.oprnegate => pg_operator.oid\n> Join pg_operator.oprlsortop => pg_operator.oid\n> Join pg_operator.oprrsortop => pg_operator.oid\n> Join pg_operator.oprcode => pg_proc.oid\n> Join pg_operator.oprrest => pg_proc.oid\n> Join pg_operator.oprjoin => pg_proc.oid\n> Join pg_proc.prolang => pg_language.oid\n> Join pg_proc.prorettype => pg_type.oid\n> Join pg_rewrite.ev_class => pg_class.oid\n> - Join pg_statistic.starelid => pg_class.oid\n> - Join pg_statistic.staop1 => pg_operator.oid\n> - Join pg_statistic.staop2 => pg_operator.oid\n> - Join pg_statistic.staop3 => pg_operator.oid\n> Join pg_trigger.tgrelid => pg_class.oid\n> Join pg_trigger.tgfoid => pg_proc.oid\n> Join pg_type.typrelid => pg_class.oid\n> Join pg_type.typelem => pg_type.oid\n> Join pg_type.typinput => pg_proc.oid\n> Join pg_type.typoutput => pg_proc.oid\n> - Join pg_type.typreceive => pg_proc.oid\n> - Join pg_type.typsend => pg_proc.oid\n> - \n> ---------------------------------------------------------------------------\n> \n> Bruce Momjian (root@candle.pha.pa.us)\n> --- 97,119 ----\n> Join pg_operator.oprnegate => pg_operator.oid\n> Join pg_operator.oprlsortop => pg_operator.oid\n> Join pg_operator.oprrsortop => pg_operator.oid\n> + Join pg_operator.oprltcmpop => pg_operator.oid\n> + Join pg_operator.oprgtcmpop => pg_operator.oid\n> Join pg_operator.oprcode => pg_proc.oid\n> Join pg_operator.oprrest => pg_proc.oid\n> Join pg_operator.oprjoin => 
pg_proc.oid\n> + Join pg_proc.pronamespace => pg_namespace.oid\n> Join pg_proc.prolang => pg_language.oid\n> Join pg_proc.prorettype => pg_type.oid\n> Join pg_rewrite.ev_class => pg_class.oid\n> Join pg_trigger.tgrelid => pg_class.oid\n> Join pg_trigger.tgfoid => pg_proc.oid\n> + Join pg_type.typnamespace => pg_namespace.oid\n> Join pg_type.typrelid => pg_class.oid\n> Join pg_type.typelem => pg_type.oid\n> Join pg_type.typinput => pg_proc.oid\n> Join pg_type.typoutput => pg_proc.oid\n> ---------------------------------------------------------------------------\n> \n> Bruce Momjian (root@candle.pha.pa.us)\n> + Updated for 7.3 by Joe Conway (mail@joeconway.com)\n> Index: contrib/findoidjoins/findoidjoins.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/contrib/findoidjoins/findoidjoins.c,v\n> retrieving revision 1.17\n> diff -c -r1.17 findoidjoins.c\n> *** contrib/findoidjoins/findoidjoins.c\t4 Sep 2002 20:31:06 -0000\t1.17\n> --- contrib/findoidjoins/findoidjoins.c\t5 Sep 2002 04:51:16 -0000\n> ***************\n> *** 1,109 ****\n> /*\n> ! * findoidjoins.c, requires src/interfaces/libpgeasy\n> *\n> */\n> - #include \"postgres_fe.h\"\n> \n> ! #include \"libpq-fe.h\"\n> ! #include \"halt.h\"\n> ! #include \"libpgeasy.h\"\n> \n> ! PGresult *attres,\n> ! \t\t *relres;\n> \n> int\n> main(int argc, char **argv)\n> {\n> ! \tchar\t\tquery[4000];\n> ! \tchar\t\trelname[256];\n> ! \tchar\t\trelname2[256];\n> ! \tchar\t\tattname[256];\n> ! \tchar\t\ttypname[256];\n> ! \tint\t\t\tcount;\n> ! \tchar\t\toptstr[256];\n> \n> \tif (argc != 2)\n> ! \t\thalt(\"Usage: %s database\\n\", argv[0]);\n> \n> ! \tsnprintf(optstr, 256, \"dbname=%s\", argv[1]);\n> ! \tconnectdb(optstr);\n> \n> ! \ton_error_continue();\n> ! \ton_error_stop();\n> \n> ! \tdoquery(\"BEGIN WORK\");\n> ! \tdoquery(\"\\\n> ! \t\tDECLARE c_attributes BINARY CURSOR FOR \\\n> ! \t\tSELECT typname, relname, a.attname \\\n> ! 
\t\tFROM pg_class c, pg_attribute a, pg_type t \\\n> ! \t\tWHERE a.attnum > 0 AND \\\n> ! \t\t\t relkind = 'r' AND \\\n> ! \t\t\t (typname = 'oid' OR \\\n> ! \t\t\t typname = 'regproc' OR \\\n> ! \t\t\t typname = 'regclass' OR \\\n> ! \t\t\t typname = 'regtype') AND \\\n> ! \t\t\t a.attrelid = c.oid AND \\\n> ! \t\t\t a.atttypid = t.oid \\\n> ! \t\tORDER BY 2, a.attnum ; \\\n> ! \t\t\");\n> ! \tdoquery(\"FETCH ALL IN c_attributes\");\n> ! \tattres = get_result();\n> ! \n> ! \tdoquery(\"\\\n> ! \t\tDECLARE c_relations BINARY CURSOR FOR \\\n> ! \t\tSELECT relname \\\n> ! \t\tFROM pg_class c \\\n> ! \t\tWHERE relkind = 'r' AND relhasoids \\\n> ! \t\tORDER BY 1; \\\n> ! \t\t\");\n> ! \tdoquery(\"FETCH ALL IN c_relations\");\n> ! \trelres = get_result();\n> \n> ! \tset_result(attres);\n> ! \twhile (fetch(typname, relname, attname) != END_OF_TUPLES)\n> \t{\n> ! \t\tset_result(relres);\n> ! \t\treset_fetch();\n> ! \t\twhile (fetch(relname2) != END_OF_TUPLES)\n> ! \t\t{\n> ! \t\t\tunset_result(relres);\n> ! \t\t\tif (strcmp(typname, \"oid\") == 0)\n> ! \t\t\t\tsnprintf(query, 4000, \"\\\n> ! \t\t\t\t\tDECLARE c_matches BINARY CURSOR FOR \\\n> ! \t\t\t\t\tSELECT\tcount(*)::int4 \\\n> ! \t\t\t\t\t\tFROM \\\"%s\\\" t1, \\\"%s\\\" t2 \\\n> ! \t\t\t\t\tWHERE t1.\\\"%s\\\" = t2.oid \",\n> ! \t\t\t\t\t\t relname, relname2, attname);\n> ! \t\t\telse\n> ! \t\t\t\tsprintf(query, 4000, \"\\\n> ! \t\t\t\t\tDECLARE c_matches BINARY CURSOR FOR \\\n> ! \t\t\t\t\tSELECT\tcount(*)::int4 \\\n> ! \t\t\t\t\t\tFROM \\\"%s\\\" t1, \\\"%s\\\" t2 \\\n> ! \t\t\t\t\tWHERE t1.\\\"%s\\\"::oid = t2.oid \",\n> ! \t\t\t\t\t\trelname, relname2, attname);\n> ! \n> ! \t\t\tdoquery(query);\n> ! \t\t\tdoquery(\"FETCH ALL IN c_matches\");\n> ! \t\t\tfetch(&count);\n> ! \t\t\tif (count != 0)\n> ! \t\t\t\tprintf(\"Join %s.%s => %s.oid\\n\", relname, attname, relname2);\n> ! \t\t\tdoquery(\"CLOSE c_matches\");\n> ! \t\t\tset_result(relres);\n> ! \t\t}\n> ! \t\tset_result(attres);\n> \t}\n> \n> ! 
\tset_result(relres);\n> ! \tdoquery(\"CLOSE c_relations\");\n> ! \tPQclear(relres);\n> ! \n> ! \tset_result(attres);\n> ! \tdoquery(\"CLOSE c_attributes\");\n> ! \tPQclear(attres);\n> ! \tunset_result(attres);\n> \n> ! \tdoquery(\"COMMIT WORK\");\n> \n> ! \tdisconnectdb();\n> ! \treturn 0;\n> }\n> --- 1,152 ----\n> /*\n> ! * findoidjoins\n> ! *\n> ! * Copyright 2002 by PostgreSQL Global Development Group\n> ! *\n> ! * Permission to use, copy, modify, and distribute this software and its\n> ! * documentation for any purpose, without fee, and without a written agreement\n> ! * is hereby granted, provided that the above copyright notice and this\n> ! * paragraph and the following two paragraphs appear in all copies.\n> ! * \n> ! * IN NO EVENT SHALL THE AUTHORS OR DISTRIBUTORS BE LIABLE TO ANY PARTY FOR\n> ! * DIRECT, INDIRECT, SPECIAL, INCIDENTAL, OR CONSEQUENTIAL DAMAGES, INCLUDING\n> ! * LOST PROFITS, ARISING OUT OF THE USE OF THIS SOFTWARE AND ITS\n> ! * DOCUMENTATION, EVEN IF THE AUTHOR OR DISTRIBUTORS HAVE BEEN ADVISED OF THE\n> ! * POSSIBILITY OF SUCH DAMAGE.\n> ! * \n> ! * THE AUTHORS AND DISTRIBUTORS SPECIFICALLY DISCLAIM ANY WARRANTIES,\n> ! * INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY\n> ! * AND FITNESS FOR A PARTICULAR PURPOSE. THE SOFTWARE PROVIDED HEREUNDER IS\n> ! * ON AN \"AS IS\" BASIS, AND THE AUTHOR AND DISTRIBUTORS HAS NO OBLIGATIONS TO\n> ! * PROVIDE MAINTENANCE, SUPPORT, UPDATES, ENHANCEMENTS, OR MODIFICATIONS.\n> *\n> */\n> \n> ! #include <stdlib.h>\n> \n> ! #include \"postgres_fe.h\"\n> ! #include \"libpq-fe.h\"\n> ! #include \"pqexpbuffer.h\"\n> \n> int\n> main(int argc, char **argv)\n> {\n> ! \tPGconn\t\t\t *conn;\n> ! \tPQExpBufferData\t\tsql;\n> ! \tPGresult\t\t *res;\n> ! \tPGresult\t\t *pkrel_res;\n> ! \tPGresult\t\t *fkrel_res;\n> ! \tchar\t\t\t *fk_relname;\n> ! \tchar\t\t\t *fk_nspname;\n> ! \tchar\t\t\t *fk_attname;\n> ! \tchar\t\t\t *fk_typname;\n> ! \tchar\t\t\t *pk_relname;\n> ! 
\tchar\t\t\t *pk_nspname;\n> ! \tint\t\t\t\t\tfk, pk;\t\t/* loop counters */\n> \n> \tif (argc != 2)\n> ! \t{\n> ! \t\tfprintf(stderr, \"Usage: %s database\\n\", argv[0]);\n> ! \t\texit(EXIT_FAILURE);\n> ! \t}\t\t\n> \n> ! \tinitPQExpBuffer(&sql);\n> ! \tappendPQExpBuffer(&sql, \"dbname=%s\", argv[1]);\n> \n> ! \tconn = PQconnectdb(sql.data);\n> ! \tif (PQstatus(conn) == CONNECTION_BAD)\n> ! \t{\n> ! \t\tfprintf(stderr, \"connection error: %s\\n\", PQerrorMessage(conn));\n> ! \t\texit(EXIT_FAILURE);\n> ! \t}\n> ! \n> ! \ttermPQExpBuffer(&sql);\n> ! \tinitPQExpBuffer(&sql);\n> \n> ! \tappendPQExpBuffer(&sql, \"%s\",\n> ! \t\t\"SELECT c.relname, (SELECT nspname FROM \"\n> ! \t\t\"pg_catalog.pg_namespace n WHERE n.oid = c.relnamespace) AS nspname \"\n> ! \t\t\"FROM pg_catalog.pg_class c \"\n> ! \t\t\"WHERE c.relkind = 'r' \"\n> ! \t\t\"AND c.relhasoids \"\n> ! \t\t\"ORDER BY nspname, c.relname\"\n> ! \t\t);\n> \n> ! \tres = PQexec(conn, sql.data);\n> ! \tif (!res || PQresultStatus(res) != PGRES_TUPLES_OK)\n> \t{\n> ! \t\tfprintf(stderr, \"sql error: %s\\n\", PQerrorMessage(conn));\n> ! \t\texit(EXIT_FAILURE);\n> \t}\n> + \tpkrel_res = res;\n> + \n> + \ttermPQExpBuffer(&sql);\n> + \tinitPQExpBuffer(&sql);\n> \n> ! \tappendPQExpBuffer(&sql, \"%s\",\n> ! \t\t\"SELECT c.relname, \"\n> ! \t\t\"(SELECT nspname FROM pg_catalog.pg_namespace n WHERE n.oid = c.relnamespace) AS nspname, \"\n> ! \t\t\"a.attname, \"\n> ! \t\t\"t.typname \"\n> ! \t\t\"FROM pg_catalog.pg_class c, pg_catalog.pg_attribute a, pg_catalog.pg_type t \"\n> ! \t\t\"WHERE a.attnum > 0 AND c.relkind = 'r' \"\n> ! \t\t\"AND t.typnamespace IN (SELECT n.oid FROM pg_catalog.pg_namespace n WHERE nspname LIKE 'pg\\\\_%') \"\n> ! \t\t\"AND (t.typname = 'oid' OR t.typname LIKE 'reg%') \"\n> ! \t\t\"AND a.attrelid = c.oid \"\n> ! \t\t\"AND a.atttypid = t.oid \"\n> ! \t\t\"ORDER BY nspname, c.relname, a.attnum\"\n> ! \t\t);\n> \n> ! \tres = PQexec(conn, sql.data);\n> ! 
\tif (!res || PQresultStatus(res) != PGRES_TUPLES_OK)\n> ! \t{\n> ! \t\tfprintf(stderr, \"sql error: %s\\n\", PQerrorMessage(conn));\n> ! \t\texit(EXIT_FAILURE);\n> ! \t}\n> ! \tfkrel_res = res;\n> ! \n> ! \ttermPQExpBuffer(&sql);\n> ! \tinitPQExpBuffer(&sql);\n> ! \n> ! \tfor (fk = 0; fk < PQntuples(fkrel_res); fk++)\n> ! \t{\n> ! \t\tfk_relname = PQgetvalue(fkrel_res, fk, 0);\n> ! \t\tfk_nspname = PQgetvalue(fkrel_res, fk, 1);\n> ! \t\tfk_attname = PQgetvalue(fkrel_res, fk, 2);\n> ! \t\tfk_typname = PQgetvalue(fkrel_res, fk, 3);\n> ! \n> ! \t\tfor (pk = 0; pk < PQntuples(pkrel_res); pk++)\n> ! \t\t{\n> ! \t\t\tpk_relname = PQgetvalue(pkrel_res, pk, 0);\n> ! \t\t\tpk_nspname = PQgetvalue(pkrel_res, pk, 1);\n> ! \n> ! \t\t\tappendPQExpBuffer(&sql,\n> ! \t\t\t\t\"SELECT\t1 \"\n> ! \t\t\t\t\"FROM \\\"%s\\\".\\\"%s\\\" t1, \"\n> ! \t\t\t\t\"\\\"%s\\\".\\\"%s\\\" t2 \"\n> ! \t\t\t\t\"WHERE t1.\\\"%s\\\"::oid = t2.oid\",\n> ! \t\t\t\tfk_nspname, fk_relname, pk_nspname, pk_relname, fk_attname);\n> ! \n> ! \t\t\tres = PQexec(conn, sql.data);\n> ! \t\t\tif (!res || PQresultStatus(res) != PGRES_TUPLES_OK)\n> ! \t\t\t{\n> ! \t\t\t\tfprintf(stderr, \"sql error: %s\\n\", PQerrorMessage(conn));\n> ! \t\t\t\texit(EXIT_FAILURE);\n> ! \t\t\t}\n> ! \n> ! \t\t\tif (PQntuples(res) != 0)\n> ! \t\t\t\tprintf(\"Join %s.%s => %s.oid\\n\",\n> ! \t\t\t\t\t\tfk_relname, fk_attname, pk_relname);\n> ! \n> ! \t\t\tPQclear(res);\n> ! \n> ! \t\t\ttermPQExpBuffer(&sql);\n> ! \t\t\tinitPQExpBuffer(&sql);\n> ! \t\t}\n> ! \t}\n> ! \tPQclear(pkrel_res);\n> ! \tPQclear(fkrel_res);\n> ! \tPQfinish(conn);\n> \n> ! 
\texit(EXIT_SUCCESS);\n> }\n\n> Join pg_aggregate.aggfnoid => pg_proc.oid\n> Join pg_aggregate.aggtransfn => pg_proc.oid\n> Join pg_aggregate.aggfinalfn => pg_proc.oid\n> Join pg_aggregate.aggtranstype => pg_type.oid\n> Join pg_am.amgettuple => pg_proc.oid\n> Join pg_am.aminsert => pg_proc.oid\n> Join pg_am.ambeginscan => pg_proc.oid\n> Join pg_am.amrescan => pg_proc.oid\n> Join pg_am.amendscan => pg_proc.oid\n> Join pg_am.ammarkpos => pg_proc.oid\n> Join pg_am.amrestrpos => pg_proc.oid\n> Join pg_am.ambuild => pg_proc.oid\n> Join pg_am.ambulkdelete => pg_proc.oid\n> Join pg_am.amcostestimate => pg_proc.oid\n> Join pg_amop.amopclaid => pg_opclass.oid\n> Join pg_amop.amopopr => pg_operator.oid\n> Join pg_amproc.amopclaid => pg_opclass.oid\n> Join pg_amproc.amproc => pg_proc.oid\n> Join pg_attribute.attrelid => pg_class.oid\n> Join pg_attribute.atttypid => pg_type.oid\n> Join pg_cast.castsource => pg_type.oid\n> Join pg_cast.casttarget => pg_type.oid\n> Join pg_cast.castfunc => pg_proc.oid\n> Join pg_class.relnamespace => pg_namespace.oid\n> Join pg_class.reltype => pg_type.oid\n> Join pg_class.relam => pg_am.oid\n> Join pg_class.relfilenode => pg_class.oid\n> Join pg_class.reltoastrelid => pg_class.oid\n> Join pg_class.reltoastidxid => pg_class.oid\n> Join pg_conversion.connamespace => pg_namespace.oid\n> Join pg_conversion.conproc => pg_proc.oid\n> Join pg_database.datlastsysoid => pg_conversion.oid\n> Join pg_depend.classid => pg_class.oid\n> Join pg_depend.objid => pg_conversion.oid\n> Join pg_depend.objid => pg_rewrite.oid\n> Join pg_depend.objid => pg_type.oid\n> Join pg_depend.refclassid => pg_class.oid\n> Join pg_depend.refobjid => pg_cast.oid\n> Join pg_depend.refobjid => pg_class.oid\n> Join pg_depend.refobjid => pg_language.oid\n> Join pg_depend.refobjid => pg_namespace.oid\n> Join pg_depend.refobjid => pg_opclass.oid\n> Join pg_depend.refobjid => pg_operator.oid\n> Join pg_depend.refobjid => pg_proc.oid\n> Join pg_depend.refobjid => pg_trigger.oid\n> Join 
pg_depend.refobjid => pg_type.oid\n> Join pg_description.objoid => pg_am.oid\n> Join pg_description.objoid => pg_database.oid\n> Join pg_description.objoid => pg_language.oid\n> Join pg_description.objoid => pg_namespace.oid\n> Join pg_description.objoid => pg_proc.oid\n> Join pg_description.objoid => pg_type.oid\n> Join pg_description.classoid => pg_class.oid\n> Join pg_index.indexrelid => pg_class.oid\n> Join pg_index.indrelid => pg_class.oid\n> Join pg_language.lanvalidator => pg_proc.oid\n> Join pg_opclass.opcamid => pg_am.oid\n> Join pg_opclass.opcnamespace => pg_namespace.oid\n> Join pg_opclass.opcintype => pg_type.oid\n> Join pg_operator.oprnamespace => pg_namespace.oid\n> Join pg_operator.oprleft => pg_type.oid\n> Join pg_operator.oprright => pg_type.oid\n> Join pg_operator.oprresult => pg_type.oid\n> Join pg_operator.oprcom => pg_operator.oid\n> Join pg_operator.oprnegate => pg_operator.oid\n> Join pg_operator.oprlsortop => pg_operator.oid\n> Join pg_operator.oprrsortop => pg_operator.oid\n> Join pg_operator.oprltcmpop => pg_operator.oid\n> Join pg_operator.oprgtcmpop => pg_operator.oid\n> Join pg_operator.oprcode => pg_proc.oid\n> Join pg_operator.oprrest => pg_proc.oid\n> Join pg_operator.oprjoin => pg_proc.oid\n> Join pg_proc.pronamespace => pg_namespace.oid\n> Join pg_proc.prolang => pg_language.oid\n> Join pg_proc.prorettype => pg_type.oid\n> Join pg_rewrite.ev_class => pg_class.oid\n> Join pg_trigger.tgrelid => pg_class.oid\n> Join pg_trigger.tgfoid => pg_proc.oid\n> Join pg_type.typnamespace => pg_namespace.oid\n> Join pg_type.typrelid => pg_class.oid\n> Join pg_type.typelem => pg_type.oid\n> Join pg_type.typinput => pg_proc.oid\n> Join pg_type.typoutput => pg_proc.oid\n\n> --\n> -- This is created by pgsql/contrib/findoidjoins/make_oidjoin_check\n> --\n> SELECT\tctid, pg_aggregate.aggfnoid \n> FROM\tpg_aggregate \n> WHERE\tpg_aggregate.aggfnoid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_aggregate.aggfnoid);\n> 
SELECT\tctid, pg_aggregate.aggtransfn \n> FROM\tpg_aggregate \n> WHERE\tpg_aggregate.aggtransfn != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_aggregate.aggtransfn);\n> SELECT\tctid, pg_aggregate.aggfinalfn \n> FROM\tpg_aggregate \n> WHERE\tpg_aggregate.aggfinalfn != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_aggregate.aggfinalfn);\n> SELECT\tctid, pg_aggregate.aggtranstype \n> FROM\tpg_aggregate \n> WHERE\tpg_aggregate.aggtranstype != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_aggregate.aggtranstype);\n> SELECT\tctid, pg_am.amgettuple \n> FROM\tpg_am \n> WHERE\tpg_am.amgettuple != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.amgettuple);\n> SELECT\tctid, pg_am.aminsert \n> FROM\tpg_am \n> WHERE\tpg_am.aminsert != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.aminsert);\n> SELECT\tctid, pg_am.ambeginscan \n> FROM\tpg_am \n> WHERE\tpg_am.ambeginscan != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.ambeginscan);\n> SELECT\tctid, pg_am.amrescan \n> FROM\tpg_am \n> WHERE\tpg_am.amrescan != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.amrescan);\n> SELECT\tctid, pg_am.amendscan \n> FROM\tpg_am \n> WHERE\tpg_am.amendscan != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.amendscan);\n> SELECT\tctid, pg_am.ammarkpos \n> FROM\tpg_am \n> WHERE\tpg_am.ammarkpos != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.ammarkpos);\n> SELECT\tctid, pg_am.amrestrpos \n> FROM\tpg_am \n> WHERE\tpg_am.amrestrpos != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.amrestrpos);\n> SELECT\tctid, pg_am.ambuild \n> FROM\tpg_am \n> WHERE\tpg_am.ambuild != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.ambuild);\n> SELECT\tctid, pg_am.ambulkdelete \n> FROM\tpg_am \n> WHERE\tpg_am.ambulkdelete != 0 AND \n> \tNOT 
EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.ambulkdelete);\n> SELECT\tctid, pg_am.amcostestimate \n> FROM\tpg_am \n> WHERE\tpg_am.amcostestimate != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_am.amcostestimate);\n> SELECT\tctid, pg_amop.amopclaid \n> FROM\tpg_amop \n> WHERE\tpg_amop.amopclaid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_opclass AS t1 WHERE t1.oid = pg_amop.amopclaid);\n> SELECT\tctid, pg_amop.amopopr \n> FROM\tpg_amop \n> WHERE\tpg_amop.amopopr != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_amop.amopopr);\n> SELECT\tctid, pg_amproc.amopclaid \n> FROM\tpg_amproc \n> WHERE\tpg_amproc.amopclaid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_opclass AS t1 WHERE t1.oid = pg_amproc.amopclaid);\n> SELECT\tctid, pg_amproc.amproc \n> FROM\tpg_amproc \n> WHERE\tpg_amproc.amproc != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_amproc.amproc);\n> SELECT\tctid, pg_attribute.attrelid \n> FROM\tpg_attribute \n> WHERE\tpg_attribute.attrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_attribute.attrelid);\n> SELECT\tctid, pg_attribute.atttypid \n> FROM\tpg_attribute \n> WHERE\tpg_attribute.atttypid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_attribute.atttypid);\n> SELECT\tctid, pg_cast.castsource \n> FROM\tpg_cast \n> WHERE\tpg_cast.castsource != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_cast.castsource);\n> SELECT\tctid, pg_cast.casttarget \n> FROM\tpg_cast \n> WHERE\tpg_cast.casttarget != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_cast.casttarget);\n> SELECT\tctid, pg_cast.castfunc \n> FROM\tpg_cast \n> WHERE\tpg_cast.castfunc != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_cast.castfunc);\n> SELECT\tctid, pg_class.relnamespace \n> FROM\tpg_class \n> WHERE\tpg_class.relnamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid 
= pg_class.relnamespace);\n> SELECT\tctid, pg_class.reltype \n> FROM\tpg_class \n> WHERE\tpg_class.reltype != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_class.reltype);\n> SELECT\tctid, pg_class.relam \n> FROM\tpg_class \n> WHERE\tpg_class.relam != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_am AS t1 WHERE t1.oid = pg_class.relam);\n> SELECT\tctid, pg_class.relfilenode \n> FROM\tpg_class \n> WHERE\tpg_class.relfilenode != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_class.relfilenode);\n> SELECT\tctid, pg_class.reltoastrelid \n> FROM\tpg_class \n> WHERE\tpg_class.reltoastrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_class.reltoastrelid);\n> SELECT\tctid, pg_class.reltoastidxid \n> FROM\tpg_class \n> WHERE\tpg_class.reltoastidxid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_class.reltoastidxid);\n> SELECT\tctid, pg_conversion.connamespace \n> FROM\tpg_conversion \n> WHERE\tpg_conversion.connamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid = pg_conversion.connamespace);\n> SELECT\tctid, pg_conversion.conproc \n> FROM\tpg_conversion \n> WHERE\tpg_conversion.conproc != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_conversion.conproc);\n> SELECT\tctid, pg_database.datlastsysoid \n> FROM\tpg_database \n> WHERE\tpg_database.datlastsysoid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_conversion AS t1 WHERE t1.oid = pg_database.datlastsysoid);\n> SELECT\tctid, pg_depend.classid \n> FROM\tpg_depend \n> WHERE\tpg_depend.classid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_depend.classid);\n> SELECT\tctid, pg_depend.refclassid \n> FROM\tpg_depend \n> WHERE\tpg_depend.refclassid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_depend.refclassid);\n> SELECT\tctid, pg_description.classoid \n> FROM\tpg_description \n> WHERE\tpg_description.classoid != 0 AND \n> \tNOT 
EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_description.classoid);\n> SELECT\tctid, pg_index.indexrelid \n> FROM\tpg_index \n> WHERE\tpg_index.indexrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_index.indexrelid);\n> SELECT\tctid, pg_index.indrelid \n> FROM\tpg_index \n> WHERE\tpg_index.indrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_index.indrelid);\n> SELECT\tctid, pg_language.lanvalidator \n> FROM\tpg_language \n> WHERE\tpg_language.lanvalidator != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_language.lanvalidator);\n> SELECT\tctid, pg_opclass.opcamid \n> FROM\tpg_opclass \n> WHERE\tpg_opclass.opcamid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_am AS t1 WHERE t1.oid = pg_opclass.opcamid);\n> SELECT\tctid, pg_opclass.opcnamespace \n> FROM\tpg_opclass \n> WHERE\tpg_opclass.opcnamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid = pg_opclass.opcnamespace);\n> SELECT\tctid, pg_opclass.opcintype \n> FROM\tpg_opclass \n> WHERE\tpg_opclass.opcintype != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_opclass.opcintype);\n> SELECT\tctid, pg_operator.oprnamespace \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprnamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid = pg_operator.oprnamespace);\n> SELECT\tctid, pg_operator.oprleft \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprleft != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_operator.oprleft);\n> SELECT\tctid, pg_operator.oprright \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprright != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_operator.oprright);\n> SELECT\tctid, pg_operator.oprresult \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprresult != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_operator.oprresult);\n> SELECT\tctid, pg_operator.oprcom \n> FROM\tpg_operator \n> 
WHERE\tpg_operator.oprcom != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprcom);\n> SELECT\tctid, pg_operator.oprnegate \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprnegate != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprnegate);\n> SELECT\tctid, pg_operator.oprlsortop \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprlsortop != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprlsortop);\n> SELECT\tctid, pg_operator.oprrsortop \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprrsortop != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprrsortop);\n> SELECT\tctid, pg_operator.oprltcmpop \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprltcmpop != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprltcmpop);\n> SELECT\tctid, pg_operator.oprgtcmpop \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprgtcmpop != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_operator AS t1 WHERE t1.oid = pg_operator.oprgtcmpop);\n> SELECT\tctid, pg_operator.oprcode \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprcode != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_operator.oprcode);\n> SELECT\tctid, pg_operator.oprrest \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprrest != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_operator.oprrest);\n> SELECT\tctid, pg_operator.oprjoin \n> FROM\tpg_operator \n> WHERE\tpg_operator.oprjoin != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_operator.oprjoin);\n> SELECT\tctid, pg_proc.pronamespace \n> FROM\tpg_proc \n> WHERE\tpg_proc.pronamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid = pg_proc.pronamespace);\n> SELECT\tctid, pg_proc.prolang \n> FROM\tpg_proc \n> WHERE\tpg_proc.prolang != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_language AS t1 WHERE t1.oid = pg_proc.prolang);\n> 
SELECT\tctid, pg_proc.prorettype \n> FROM\tpg_proc \n> WHERE\tpg_proc.prorettype != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_proc.prorettype);\n> SELECT\tctid, pg_rewrite.ev_class \n> FROM\tpg_rewrite \n> WHERE\tpg_rewrite.ev_class != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_rewrite.ev_class);\n> SELECT\tctid, pg_trigger.tgrelid \n> FROM\tpg_trigger \n> WHERE\tpg_trigger.tgrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_trigger.tgrelid);\n> SELECT\tctid, pg_trigger.tgfoid \n> FROM\tpg_trigger \n> WHERE\tpg_trigger.tgfoid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_trigger.tgfoid);\n> SELECT\tctid, pg_type.typnamespace \n> FROM\tpg_type \n> WHERE\tpg_type.typnamespace != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_namespace AS t1 WHERE t1.oid = pg_type.typnamespace);\n> SELECT\tctid, pg_type.typrelid \n> FROM\tpg_type \n> WHERE\tpg_type.typrelid != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_class AS t1 WHERE t1.oid = pg_type.typrelid);\n> SELECT\tctid, pg_type.typelem \n> FROM\tpg_type \n> WHERE\tpg_type.typelem != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_type AS t1 WHERE t1.oid = pg_type.typelem);\n> SELECT\tctid, pg_type.typinput \n> FROM\tpg_type \n> WHERE\tpg_type.typinput != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_type.typinput);\n> SELECT\tctid, pg_type.typoutput \n> FROM\tpg_type \n> WHERE\tpg_type.typoutput != 0 AND \n> \tNOT EXISTS(SELECT * FROM pg_proc AS t1 WHERE t1.oid = pg_type.typoutput);\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 10 Sep 2002 22:57:32 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: findoidjoins patch (was Re: [HACKERS] findoidjoins)" } ]
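The checks in the dump above are all instances of a single template: for each "table.column => reftable.oid" join, probe for rows whose OID value has no matching row in the referenced catalog. The generation step can be sketched as follows (illustrative only — the actual generator is contrib/findoidjoins/make_oidjoin_check, and this Python rendering is not part of the tree):

```python
# Sketch of how the oidjoin checks above are derived from the
# "Join table.column => reftable.oid" list.  Illustrative only; the
# real generator lives in pgsql/contrib/findoidjoins.

def oidjoin_check(join_line):
    """Turn 'Join tab.col => ref.oid' into a dangling-OID probe query."""
    src, ref = join_line.replace("Join ", "").split(" => ")
    table, column = src.split(".")
    reftable = ref.split(".")[0]
    return (
        "SELECT\tctid, {t}.{c} \n"
        "FROM\t{t} \n"
        "WHERE\t{t}.{c} != 0 AND \n"
        "\tNOT EXISTS(SELECT * FROM {r} AS t1 WHERE t1.oid = {t}.{c});"
    ).format(t=table, c=column, r=reftable)

if __name__ == "__main__":
    print(oidjoin_check("Join pg_aggregate.aggfnoid => pg_proc.oid"))
```

Any row such a query returns holds an OID that joins to nothing — exactly the dangling references the oidjoins regression check is meant to catch.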
[ { "msg_contents": "I know everyone is busy with the 7.3beta, but maybe this is something to think of before releasing the beta. Currently VACUUM will vacuum every table, but sometimes\nit's desireable to leave tables untouched because the're mostly static or protocol tables. In my case this would be the pg_largeobject which is around 4GB of data, while the\nother tables are ~40MB. Vacuuming the data is important, the large object table however rarely changes. The same goes for a protocol table which is around 1GB and never is\nchanged beside INSERTS, so it's just growing, but never needs vacuum. VACUUM on the 4GB table needs a long long time and no improvements, it just hurts performance and\nfills OS buffers.\n\nIf pg_class would have a field for storing misc flags (e.g. a bitfield). This would allow to set a flag like NO_AUTO_VACUUM and modify the vacuum code to leave that tables untouched\nif not specified by hand. Maybe there are other uses for such a bitfield too, and will help prevent an initdb for simple improvements.\n\nAny comments?\n\nBest regards,\n\tMario Weilguni\n\n\n\n", "msg_date": "Tue, 3 Sep 2002 08:55:15 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": true, "msg_subject": "possible vacuum improvement?" }, { "msg_contents": "On 3 Sep 2002 at 8:55, Mario Weilguni wrote:\n\n> I know everyone is busy with the 7.3beta, but maybe this is something to think of before releasing the beta. Currently VACUUM will vacuum every table, but sometimes\n> it's desireable to leave tables untouched because the're mostly static or protocol tables. In my case this would be the pg_largeobject which is around 4GB of data, while the\n> other tables are ~40MB. Vacuuming the data is important, the large object table however rarely changes. The same goes for a protocol table which is around 1GB and never is\n> changed beside INSERTS, so it's just growing, but never needs vacuum. 
VACUUM on the 4GB table takes a long, long time and brings no improvement; it just hurts performance and\n> fills OS buffers.\n> \n> If pg_class had a field for storing misc flags (e.g. a bitfield), this would allow setting a flag like NO_AUTO_VACUUM and modifying the vacuum code to leave those tables untouched\n> if not specified by hand. Maybe there are other uses for such a bitfield too, and it would help avoid an initdb for simple improvements.\n> \n> Any comments?\n\nI suggest vacuuming only the tables that change. Further, I believe \nupdates/deletes should be watched for performance, as they cause dead tuples. Of \ncourse inserts impact statistics and should be monitored, but something like a \nlog table does not need vacuuming that often.\n\nIn short, knowing the application load can help a lot in tuning the DB.\n\nI was running a banking simulation for benchmarking. I know that the accounts table \ngets updated for each transaction but the log table only gets inserts. So rather \nthan vacuuming the entire db, just doing 'vacuum analyze accounts' gives me almost the \nsame results. \n\nPerformance was far better in that case. Without any vacuum I got something \nlike 50 tps for 80K transactions. With 'vacuum analyze accounts' every 5K \ntransactions I got 200 tps.\n\nPersonally I would prefer to have a trigger on a metadata table where I could \ntrigger vacuuming of a particular table every n transactions (oh, it would \nbe great if that vacuum ran in the background without blocking the metadata table.. just \na wishlist...). Can anybody tell me on which table I could write such a trigger? I \nwent through pg_* for some time but didn't find what I was looking for.\n\nBye\n Shridhar\n\n--\nReisner's Rule of Conceptual Inertia:\tIf you think big enough, you'll never \nhave to do it.\n\n", "msg_date": "Tue, 03 Sep 2002 12:37:15 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" 
}, { "msg_contents": "> Personally I would prefer to have a trigger on a metadata table\n> where I could\n> trigger vacuuming a particular table each n number of\n> transactions(Oh it would\n> be great if that vacuum runs in background not blocking meta data\n> table.. just\n> a wishlist...). Can anybody tell me which table I could write\n> such a trigger? I\n> went thr. pg_* for some time but didn't find what I was looking for..\n\nActually, if you wrote it in C and kept some static data on each table, you\ncould probably write a vacuum trigger pretty easily. You could even keep\nthe info in a table.\n\nChris\n\n", "msg_date": "Tue, 3 Sep 2002 15:14:23 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "On 3 Sep 2002 at 15:14, Christopher Kings-Lynne wrote:\n\n> > Personally I would prefer to have a trigger on a metadata table\n> > where I could\n> > trigger vacuuming a particular table each n number of\n> > transactions(Oh it would\n> > be great if that vacuum runs in background not blocking meta data\n> > table.. just\n> > a wishlist...). Can anybody tell me which table I could write\n> > such a trigger? I\n> > went thr. pg_* for some time but didn't find what I was looking for..\n> \n> Actually, if you wrote it in C and kept some static data on each table, you\n> could probably write a vacuum trigger pretty easily. You could even keep\n> the info in a table.\n\nActually that's what I did. Update global transaction counter than trigger the \nvacuum from a spare thread.\n\nbut having it in DB has advantages of centralisation. 
It's just a good-to-have \nkind of thing.\n\nBye\n Shridhar\n\n--\n\"I don't know why, but first C programs tend to look a lot worse than first \nprograms in any other language (maybe except for fortran, but then I suspect all \nfortran programs look like `firsts')\" (By Olaf Kirch)\n\n", "msg_date": "Tue, 03 Sep 2002 13:01:37 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "> gets updated for each transaction but log table is just an insert. So\nrather\n> than vacuuming the entire db, just doing 'vacuum analyze accounts' gives me\nalmost the\n> same results.\n>\n\nThat is not really practicable: one database has 107 tables, and making a\ncron job\nwith 107 vacuum calls is completely out of the question and very error prone\nanyway.\n\nRegards,\n Mario Weilguni\n\n\n", "msg_date": "Tue, 3 Sep 2002 09:36:23 +0200", "msg_from": "\"Mario Weilguni\" <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "> Actually that's what I did. Update a global transaction counter,\n> then trigger the\n> vacuum from a spare thread.\n>\n> But having it in the DB has the advantage of centralisation. It's just a\n> good-to-have\n> kind of thing..\n\nCare to submit it as a BSD-licensed contrib module then? Or at least create\na project for it on http://gborg.postgresql.org/ ?\n\nChris\n\n", "msg_date": "Tue, 3 Sep 2002 15:39:11 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "On 3 Sep 2002 at 9:36, Mario Weilguni wrote:\n> That is not really practicable: one database has 107 tables, and making a\n> cron job\n> with 107 vacuum calls is completely out of the question and very error prone\n> anyway.\n\nThat's correct. What are the possible alternatives? 
Either the backend has to \nsupport something or the DBA has to script something.\n\n1) If the number of tables that need vacuum is far greater than the number that don't, then \na simple vacuum of everything would do. But again, the sizes of individual tables will affect \nthat judgement as well.\n\n2) As the OP suggested, vacuum could pick up only those tables marked by \nbitfields, say via an additional option like 'vacuum analyse frequent_ones'. \nThis is going to need a backend change.\n\n3) I guess scripting a cron job for vacuum is a one-time job. If it's desperately \nneeded, say 60 tables out of 107 require vacuum, personally I would spend some \ntime making that script. Depends upon the requirement, actually.\n\nOn a sidenote, does anybody have some statistics, from a benchmark maybe, as in \nwhat's a rule of thumb for vacuuming? I found that a vacuum every 5K-10K \ntransactions increases the tps like anything, but below 1K transactions it's \nnot as effective. Maybe one should consider this factor as well.\n\nBye\n Shridhar\n\n--\nPascal:\tA programming language named after a man who would turn over\tin his \ngrave if he knew about it.\t\t-- Datamation, January 15, 1984\n\n", "msg_date": "Tue, 03 Sep 2002 13:14:49 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "On 3 Sep 2002 at 15:39, Christopher Kings-Lynne wrote:\n\n> > Actually that's what I did. Update a global transaction counter,\n> > then trigger the\n> > vacuum from a spare thread.\n> >\n> > But having it in the DB has the advantage of centralisation. It's just a\n> > good-to-have\n> > kind of thing..\n> \n> Care to submit it as a BSD-licensed contrib module then? Or at least create\n> a project for it on http://gborg.postgresql.org/ ?\n\nSounds like a nice idea. I would do that by this weekend, once I finalise the \ndetails about it.\n\nGive me a couple of days to finish it. 
Will come back soon with that.\n\nBye\n Shridhar\n\n--\nReporter, n.:\tA writer who guesses his way to the truth and dispels it with a\t\ntempest of words.\t\t-- Ambrose Bierce, \"The Devil's Dictionary\"\n\n", "msg_date": "Tue, 03 Sep 2002 13:29:03 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "On Tue, 2002-09-03 at 03:36, Mario Weilguni wrote:\n> > gets updated for each transaction but log table is just an insert. So\n> rather\n> > than vacuuming the entire db, just doing 'vacuum analyze accounts' gives me\n> almost the\n> > same results.\n> >\n> \n> That is not really practicable: one database has 107 tables, and making a\n> cron job\n> with 107 vacuum calls is completely out of the question and very error prone\n> anyway.\n\nSo... Write a script which does something like:\n\nskiptables=\"'skipme', 'andme'\"\ntables=`psql -t -A -c \"SELECT relname FROM pg_class WHERE relname NOT IN (${skiptables})\" template1`\n\nfor tab in ${tables} ; do\n  vacuumdb -t ${tab}\ndone\n\n\nFill in the holes and you're done -- get the right pg_class type (e.g. relkind = 'r'), handle\nschemas appropriately, etc.\n\n", "msg_date": "03 Sep 2002 07:57:54 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "Mario Weilguni <mweilguni@sime.com> writes:\n> I know everyone is busy with the 7.3beta, but maybe this is something\n> to think of before releasing the beta.\n\nWe are already in feature freeze.\n\nIn terms of what might happen for 7.4 or beyond, what I'd personally\nlike to see is some \"auto vacuum\" facility that would launch background\nvacuums automatically every so often. 
This could (eventually) be made\nself-tuning so that it would vacuum heavily-updated tables more often\nthan seldom-updated ones --- while not forgetting the\nevery-billion-transactions rule...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 09:49:54 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement? " }, { "msg_contents": "On 3 Sep 2002 at 9:49, Tom Lane wrote:\n\n> In terms of what might happen for 7.4 or beyond, what I'd personally\n> like to see is some \"auto vacuum\" facility that would launch background\n> vacuums automatically every so often. This could (eventually) be made\n> self-tuning so that it would vacuum heavily-updated tables more often\n> than seldom-updated ones --- while not forgetting the\n> every-billion-transactions rule...\n\nOK, I plan to work on this. Here is my brief idea:\n\n1) Create a table (vacuum_info) that stores table names and auto-vacuum defaults. \nSince I am planning this in contrib, I would not touch pg_class.\n\nThe table will store\n\t- table names\n\t- number of transactions to trigger vacuum analyze (default 1K)\n\t- number of transactions to trigger full vacuum (default 10K)\t\n\nA trigger on pg_class, i.e. on table creation, should add a row in this table as \nwell.\n\n2) Write a trigger on tables that updates statistics on table activity. I see \n\n-pg_stat_all_tables\n-pg_stat_sys_tables\n-pg_stat_user_tables. \n\nThe columns are \n\n-n_tup_ins \n-n_tup_upd \n-n_tup_del \n\nOf course it will ignore its own updates and inserts to avoid infinite loops. \nThis will update the pseudo-statistics in the vacuum_info table.\n\nAnother trigger on vacuum_info will trigger vacuum if required. 
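The bookkeeping proposed above — per-table write counters checked against an analyze threshold and a full-vacuum threshold — can be modeled in a few lines. This is only a toy sketch of the idea: the class name, defaults, and returned command strings are inventions for illustration, not part of the actual proposal's code.

```python
# Toy model of the proposed vacuum_info bookkeeping: per-table write
# counters checked against two thresholds, one for VACUUM ANALYZE and
# one for a full vacuum.  All names and defaults here are illustrative.

class VacuumInfo:
    def __init__(self, analyze_every=1000, full_every=10000):
        self.analyze_every = analyze_every
        self.full_every = full_every
        self.writes = 0  # inserts + updates + deletes since last full vacuum

    def record_write(self):
        """Count one write; return the maintenance action now due, if any."""
        self.writes += 1
        if self.writes >= self.full_every:
            self.writes = 0
            return "VACUUM FULL"
        if self.writes % self.analyze_every == 0:
            return "VACUUM ANALYZE"
        return None

info = VacuumInfo()
actions = [info.record_write() for _ in range(10000)]
```

With the defaults above, ten thousand single-row writes trip a "VACUUM ANALYZE" at each of 1K through 9K writes and a "VACUUM FULL" at 10K, after which the counter resets.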
Ideally I would \nwrite it in an external multithreaded library to trigger vacuum in the background \nwithout blocking operations on the vacuum_info table.\n\nI need to know the following:\n\n1) Does this sound like a workable solution?\n\n2) Is this as simple as I have put it here, or am I missing some vital components?\n\n3) Is there some kind of rework involved?\n\n4) Does the use of threads sound portable enough? I just need to trigger a thread in \nthe background and return. No locking or anything is required. Will there be any \nproblem for postgres invoking such an external trigger?\n\n5) When I create a function in a .so, is it possible to invoke init/startup \nroutines? I can create and destroy the thread in these routines to avoid thread \ncreation overhead. If postgres is using dlopen, I can use _init, _fini. \n\n6) Such a 'daemon' would be on a per-backend basis if I am guessing correctly. \nWould locking things in transactions for vacuum_info be sufficient?\n\nI hope I am making a sensible proposal/design (my first attempt to contribute to \npostgres). Please let me know your comments. \n\n\nBye\n Shridhar\n\n--\nBlast medicine anyway! We've learned to tie into every organ in the human body \nbut one. The brain! The brain is what life is all about.\t\t-- McCoy, \"The \nMenagerie\", stardate 3012.4\n\n", "msg_date": "Tue, 03 Sep 2002 19:55:05 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement? " }, { "msg_contents": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> 1) Does this sound like a workable solution?\n\nAdding a trigger to every tuple update won't do at all. 
Storing the\ncounts in a table won't do either, as the updates on that table will\ngenerate a huge amount of wasted space themselves (not to mention\nenough contention to destroy concurrent performance).\n\n> 4) Does the use of threads sound portable enough?\n\nThreads are completely out of the question, at least if you have any\nhope of seeing this code get accepted into the core distro.\n\n\nFor vacuum's purposes all that we really care to know about is the\nnumber of obsoleted tuples in each table: committed deletes and updates,\nand aborted inserts and updates all count. Furthermore, we do not need\nor want a 100% reliable solution; approximate counts would be plenty\ngood enough.\n\nWhat I had in the back of my mind was: each backend counts attempted\ninsertions and deletions in its relcache entries (an update adds to both\ncounts). At transaction commit or abort, we know which of these two\ncounts represents the number of dead tuples added to each relation, so\nwhile we scan the relcache for post-xact cleanup (which we will be doing\nanyway) we can transfer the correct count into the shared FSM entry for\nthe relation. This gives us a reasonably accurate count in shared\nmemory of all the tuple obsoletions since bootup, at least for\nheavily-used tables. (The FSM might choose to forget about lightly-used\ntables.) The auto vacuumer could look at the FSM numbers to decide\nwhich tables are highest priority to vacuum.\n\nThis scheme would lose the count info on a database restart, but that\ndoesn't bother me. In typical scenarios the same tables will soon get\nenough new counts to be highly ranked for vacuuming. In any case the\nauto vacuumer must be designed so that it vacuums every table every so\noften anyhow, so the possibility of forgetting that there were some dead\ntuples in a given table isn't catastrophic.\n\nI do not think we need or want a control table for this; certainly I see\nno need for per-table manual control over this process. 
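The commit/abort arithmetic in the scheme described above — attempted insertions and deletions tallied per relation, with an update adding to both — reduces to picking one of the two counters based on the transaction outcome. A toy illustration of just that arithmetic (plain Python, not backend code; the function name is invented):

```python
# Toy model of the accounting sketched above: an UPDATE counts as one
# attempted insertion plus one attempted deletion.  On COMMIT the old
# (deleted) row versions become dead tuples; on ABORT the new
# (inserted) versions do.  Not PostgreSQL internals -- only the idea.

def dead_tuples_added(attempted_inserts, attempted_deletes, committed):
    return attempted_deletes if committed else attempted_inserts

# A transaction that updates 5 rows and deletes 2 more:
ins = 5        # 5 new row versions from the updates
dels = 5 + 2   # 5 old versions from the updates + 2 plain deletes

committed_dead = dead_tuples_added(ins, dels, committed=True)   # 7 dead tuples
aborted_dead = dead_tuples_added(ins, dels, committed=False)    # 5 dead tuples
```

Either way the count is approximate bookkeeping, which matches the stated goal: the auto vacuumer only needs a rough ranking of tables, not exact numbers.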
There should\nprobably be a few knobs in the form of GUC parameters so that the admin\ncan control how much overall work the auto-vacuumer does. For instance\nyou'd probably like to turn it off when under peak interactive load.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 11:01:19 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement? " }, { "msg_contents": "On Tue, 2002-09-03 at 11:01, Tom Lane wrote:\n> \"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> > 1) Does this sound like a workable solution?\n> \n> Adding a trigger to every tuple update won't do at all. Storing the\n> counts in a table won't do either, as the updates on that table will\n> generate a huge amount of wasted space themselves (not to mention\n> enough contention to destroy concurrent performance).\n> \n> > 4) Does the use of threads sound portable enough?\n> \n> Threads are completely out of the question, at least if you have any\n> hope of seeing this code get accepted into the core distro.\n> \n> \n> For vacuum's purposes all that we really care to know about is the\n> number of obsoleted tuples in each table: committed deletes and updates,\n> and aborted inserts and updates all count. Furthermore, we do not need\n> or want a 100% reliable solution; approximate counts would be plenty\n> good enough.\n\nIt would be nice if it could track successful inserts, and fire off an\nanalyze run when it changes more than 20% from what the stats say.\n\n", "msg_date": "03 Sep 2002 11:09:55 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> On Tue, 2002-09-03 at 11:01, Tom Lane wrote:\n>> For vacuum's purposes all that we really care to know about is the\n>> number of obsoleted tuples in each table: committed deletes and updates,\n>> and aborted inserts and updates all count. 
Furthermore, we do not need\n>> or want a 100% reliable solution; approximate counts would be plenty\n>> good enough.\n\n> It would be nice if it could track successful inserts, and fire off an\n> analyze run when it changes more than 20% from what stats says.\n\nThat's a thought too. I was only thinking of space reclamation, but\nit'd be easy to extend the scheme to keep track of the number of tuples\nsuccessfully inserted, changed, or deleted (all three events would\naffect stats) as well as the number of dead tuples. Then you could fire\nauto-analyze every so often, along with auto-vacuum.\n\nAuto-analyze might need more tuning controls than auto-vacuum, though.\nVacuum doesn't have any question about when it needs to run: a dead\ntuple is a dead tuple. But for analyze you might have plenty of update\ntraffic and yet no meaningful change in the interesting stats for a\ntable. An admin who knows the behavior of his tables would like to be\nable to configure the frequency of analyze runs, rather than trust to\na necessarily-not-too-bright auto-analyze routine. (Not sure whether\nthis is important enough to warrant the complications of making it\nconfigurable though. You can always do it the old-fashioned way with\ncron scripts if you want that kind of control, I suppose.)\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 11:19:20 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement? " }, { "msg_contents": "Wouldn't it make sense to implement autovacuum information in a struture \nlike the FSM, a Dirty Space Map (DSM)? As blocks are dirtied by \ntransactions they can be added to the DSM. Then vacuum can give \npriority processing to those blocks only. The reason I suggest this is \nthat in many usage senerios it will be more efficient to only vacuum \npart of a table than the entire table. 
Given a large table that grows \nover time, it tends to be the case that older data in the table becomes \nmore static as it ages (a lot of financial data is like this, when it is \ninitially created it may get a lot of updates done early in it's life \nand may even be deleted, but once the data gets older (for example a \nyear old), it is unlikely to change). This would imply that over time \nthe first blocks in a table will change less and most activity will \noccur towards the end of the table. If you have a multigig table, where \nmost of the activity occurs near the end, a lot of cpu cycles can be \nwasted going over the mostly static begining of the table.\n\nthanks,\n--Barry\n\nTom Lane wrote:\n\n>\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in> writes:\n> \n>\n>>1)Is this sounds like a workable solution?\n>> \n>>\n>\n>Adding a trigger to every tuple update won't do at all. Storing the\n>counts in a table won't do either, as the updates on that table will\n>generate a huge amount of wasted space themselves (not to mention\n>enough contention to destroy concurrent performance).\n>\n> \n>\n>>4)Is use of threads sounds portable enough?\n>> \n>>\n>\n>Threads are completely out of the question, at least if you have any\n>hope of seeing this code get accepted into the core distro.\n>\n>\n>For vacuum's purposes all that we really care to know about is the\n>number of obsoleted tuples in each table: committed deletes and updates,\n>and aborted inserts and updates all count. Furthermore, we do not need\n>or want a 100% reliable solution; approximate counts would be plenty\n>good enough.\n>\n>What I had in the back of my mind was: each backend counts attempted\n>insertions and deletions in its relcache entries (an update adds to both\n>counts). 
At transaction commit or abort, we know which of these two\n>counts represents the number of dead tuples added to each relation, so\n>while we scan the relcache for post-xact cleanup (which we will be doing\n>anyway) we can transfer the correct count into the shared FSM entry for\n>the relation. This gives us a reasonably accurate count in shared\n>memory of all the tuple obsoletions since bootup, at least for\n>heavily-used tables. (The FSM might choose to forget about lightly-used\n>tables.) The auto vacuumer could look at the FSM numbers to decide\n>which tables are highest priority to vacuum.\n>\n>This scheme would lose the count info on a database restart, but that\n>doesn't bother me. In typical scenarios the same tables will soon get\n>enough new counts to be highly ranked for vacuuming. In any case the\n>auto vacuumer must be designed so that it vacuums every table every so\n>often anyhow, so the possibility of forgetting that there were some dead\n>tuples in a given table isn't catastrophic.\n>\n>I do not think we need or want a control table for this; certainly I see\n>no need for per-table manual control over this process. There should\n>probably be a few knobs in the form of GUC parameters so that the admin\n>can control how much overall work the auto-vacuumer does. For instance\n>you'd probably like to turn it off when under peak interactive load.\n>\n>\t\t\tregards, tom lane\n>\n>---------------------------(end of broadcast)---------------------------\n>TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n> \n>\n\n\n", "msg_date": "Tue, 03 Sep 2002 11:15:32 -0700", "msg_from": "Barry Lind <barry@xythos.com>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" } ]
[ { "msg_contents": "\n\nIt's probably a pretty basic question explained in some document I haven't seen\nbut...if I do something like a CreateTupleDescCopy() how do I know my memory\ncontext owns everything allocated without following the code all the way\nthrough until it returns to me?\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Tue, 3 Sep 2002 12:28:37 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": true, "msg_subject": "Memory management question" }, { "msg_contents": "On Tue, Sep 03, 2002 at 12:28:37PM +0100, Nigel J. Andrews wrote:\n> \n> \n> It's probably a pretty basic question explained in some document I haven't seen\n> but...if I do something like a CreateTupleDescCopy() how do I know my memory\n> context owns everything allocated without following the code all the way\n> through until it returns to me?\n\n If some code doesn't call MemoryContextSwitchTo() all is allocated in\ncurrent memory context. You can check if CurrentMemoryContext is same\nbefore and after call that is important for you - but this check say\nnothing, bacuse some code can switch to other context and after usage\nswitch back to your context. IMHO is not common way how check it.\n(Ok, maybe check all contexts size before/after call...)\n\n Suggestion: add to memory managment counter that handle number\n of MemoryContextSwitchTo() calls. IMHO it can be compile\n only if MEMORY_CONTEXT_CHECKING is define.\n\n But I think there is not to much places which switching between\ncontexts and all are good commented (I hope, I wish :-)\n\n Karel\n\n-- \n Karel Zak <zakkr@zf.jcu.cz>\n http://home.zf.jcu.cz/~zakkr/\n \n C, PostgreSQL, PHP, WWW, http://docs.linux.cz, http://mape.jcu.cz\n", "msg_date": "Tue, 3 Sep 2002 13:52:09 +0200", "msg_from": "Karel Zak <zakkr@zf.jcu.cz>", "msg_from_op": false, "msg_subject": "Re: Memory management question" }, { "msg_contents": "On Tue, 3 Sep 2002, Nigel J. 
Andrews wrote:\n\n> \n> \n> It's probably a pretty basic question explained in some document I haven't seen\n> but...if I do something like a CreateTupleDescCopy() how do I know my memory\n> context owns everything allocated without following the code all the way\n> through until it returns to me?\n\nUmm.. how else could you *really* know unless you read the\nsource? Basically, all convenience routines off this nature allow memory\nin the current memory context.\n\nAs for CreateTupleDescCopy() you don't have to look too far to see what it\ndoes:\n\n--\n\nCreateTupleDescCopy(TupleDesc tupdesc)\n{\n TupleDesc desc;\n int i,\n size;\n\n desc = (TupleDesc) palloc(sizeof(struct tupleDesc));\n\n--\n\nGavin\n\n", "msg_date": "Tue, 3 Sep 2002 21:53:27 +1000 (EST)", "msg_from": "Gavin Sherry <swm@linuxworld.com.au>", "msg_from_op": false, "msg_subject": "Re: Memory management question" }, { "msg_contents": "On Tue, 3 Sep 2002, Karel Zak wrote:\n\n> On Tue, Sep 03, 2002 at 12:28:37PM +0100, Nigel J. Andrews wrote:\n> > \n> > \n> > It's probably a pretty basic question explained in some document I haven't seen\n> > but...if I do something like a CreateTupleDescCopy() how do I know my memory\n> > context owns everything allocated without following the code all the way\n> > through until it returns to me?\n> \n> If some code doesn't call MemoryContextSwitchTo() all is allocated in\n> current memory context. You can check if CurrentMemoryContext is same\n> before and after call that is important for you - but this check say\n> nothing, bacuse some code can switch to other context and after usage\n> switch back to your context. IMHO is not common way how check it.\n> (Ok, maybe check all contexts size before/after call...)\n> \n> Suggestion: add to memory managment counter that handle number\n> of MemoryContextSwitchTo() calls. IMHO it can be compile\n> only if MEMORY_CONTEXT_CHECKING is define.\n\n\nI quite like that idea. 
Only thing is it doesn't full address the issue of\nidentifying if my context owns memory allocated by other functions I've\nused. For example:\n\nA called procedure could be doing (psuedo code obviously):\n\nSwitchContext()\nmem=palloc(anumber)\n/* use mem */\npfree(mem)\nSwitchContectBack()\nretmem=palloc(anothersize)\n\nThere, net effect is that I do own retmem but the test on context switch\ncounters would indicate that I may not.\n\nI think the problem is that I don't fully understand why [and when] is context\nswitch is or should be done. \n\n> But I think there is not to much places which switching between\n> contexts and all are good commented (I hope, I wish :-)\n\nAs someone pointed out my example wasn't very complex so checking the source\nwasn't onerous. Checking something like heap_modifytuple() is more time\nconsuming.\n\nI was hoping there was some sort of 'rule of thumb'. In general I can't see how\nit could be sensibly known without such a rule and without tracing through the\nsource.\n\n\n-- \nNigel J. Andrews\n\n", "msg_date": "Tue, 3 Sep 2002 14:14:31 +0100 (BST)", "msg_from": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk>", "msg_from_op": true, "msg_subject": "Re: Memory management question" }, { "msg_contents": "\"Nigel J. Andrews\" <nandrews@investsystems.co.uk> writes:\n> It's probably a pretty basic question explained in some document I\n> haven't seen but...if I do something like a CreateTupleDescCopy() how\n> do I know my memory context owns everything allocated without\n> following the code all the way through until it returns to me?\n\nIf it doesn't, then it's broken. A general rule of the system is that\nstructures being allocated for return to a routine's caller must be\nallocated in the caller's CurrentMemoryContext. 
The only exceptions are\nfor cases where the routine in question is taking responsibility for the\nlong-term management of the object (for example, a syscache) --- in\nwhich case, it isn't your problem.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 09:41:12 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Memory management question " }, { "msg_contents": "\nMaybe when this thread is over, some parts of it can be\nadded to the dev. FAQ?\n\n-s\n\nOn Tue, 3 Sep 2002, Karel Zak wrote:\n\n> Date: Tue, 3 Sep 2002 13:52:09 +0200\n> From: Karel Zak <zakkr@zf.jcu.cz>\n> To: Nigel J. Andrews <nandrews@investsystems.co.uk>\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] Memory management question\n>\n> On Tue, Sep 03, 2002 at 12:28:37PM +0100, Nigel J. Andrews wrote:\n> >\n> >\n> > It's probably a pretty basic question explained in some document I haven't seen\n> > but...if I do something like a CreateTupleDescCopy() how do I know my memory\n> > context owns everything allocated without following the code all the way\n> > through until it returns to me?\n>\n> If some code doesn't call MemoryContextSwitchTo() all is allocated in\n> current memory context. You can check if CurrentMemoryContext is same\n> before and after call that is important for you - but this check say\n> nothing, bacuse some code can switch to other context and after usage\n> switch back to your context. IMHO is not common way how check it.\n> (Ok, maybe check all contexts size before/after call...)\n>\n> Suggestion: add to memory managment counter that handle number\n> of MemoryContextSwitchTo() calls. IMHO it can be compile\n> only if MEMORY_CONTEXT_CHECKING is define.\n>\n> But I think there is not to much places which switching between\n> contexts and all are good commented (I hope, I wish :-)\n>\n> Karel\n\n", "msg_date": "Tue, 3 Sep 2002 11:07:31 -0400 (EDT)", "msg_from": "\"Serguei A. 
Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Memory management question" }, { "msg_contents": "Nigel J. Andrews wrote:\n> \n> It's probably a pretty basic question explained in some document I haven't seen\n> but...if I do something like a CreateTupleDescCopy() how do I know my memory\n> context owns everything allocated without following the code all the way\n> through until it returns to me?\n\nI asked a related question recently. Here it is with Tom's response:\n\nTom Lane wrote:\n > Joe Conway wrote:\n >>Does a good primer on proper backend memory-context handling exist?\n >\n > The original design document is in src/backend/utils/mmgr/README;\n > somebody needs to recast that into present tense and put it into the\n > Developer's Guide SGML docs.\n >\n > If you read that and feel you understand it, next read\n > executor/nodeAgg.c and see if you follow the memory management\n > there...\n > AFAIR that's the most complex use of short-term contexts in the\n > system.\n\nYou might want to read through those to get a better understanding.\n\nHTH,\n\nJoe\n\n", "msg_date": "Tue, 03 Sep 2002 11:14:02 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Memory management question" } ]
[ { "msg_contents": "\nignore if you see this ...\n\n", "msg_date": "Tue, 3 Sep 2002 11:10:40 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Just testing tighgter UCE controls ..." }, { "msg_contents": "On Tue, 3 Sep 2002, Marc G. Fournier wrote:\n\n>\n> ignore if you see this ...\n\nWhat if we don't see it?\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Tue, 3 Sep 2002 10:52:08 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: Just testing tighgter UCE controls ..." }, { "msg_contents": "On Tue, 3 Sep 2002, Vince Vielhaber wrote:\n\n> On Tue, 3 Sep 2002, Marc G. Fournier wrote:\n>\n> >\n> > ignore if you see this ...\n>\n> What if we don't see it?\n\nlet me know? :)\n\n\n", "msg_date": "Tue, 3 Sep 2002 13:30:18 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": true, "msg_subject": "Re: Just testing tighgter UCE controls ..." } ]
[ { "msg_contents": ">I do not think we need or want a control table for this; certainly I see\n>no need for per-table manual control over this process. There should\n>probably be a few knobs in the form of GUC parameters so that the admin\n>can control how much overall work the auto-vacuumer does. For instance\n>you'd probably like to turn it off when under peak interactive load.\n\nIf (auto)vacuum is clever to check that some tables do not need vacuum\nthere's really no need for that. That brings me to another point, can't the\nstatistics collector used for that?\n\nFor my database I wrote a statistic display program for web-access, and all\nthe info autovacuum would need is here.\nhttp://mw.sime.com/pgsql.htm\n\nThat brings me to another point, is there interest for this\nweb-statistics-frontend, maybe for /contrib? I found it extremly useful\nbecause it showed up the weak points in my applications.\n\nBest regards,\n\tMario Weilguni\n", "msg_date": "Tue, 3 Sep 2002 17:26:11 +0200", "msg_from": "\"Mario Weilguni\" <mario.weilguni@icomedias.com>", "msg_from_op": true, "msg_subject": "Re: possible vacuum improvement? " }, { "msg_contents": "\"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> That brings me to another point, can't the\n> statistics collector used for that?\n\nHmm, that would be a different way of attacking the problem. Not sure\noffhand which is better, but it'd surely be worth considering both.\n\nNote that collecting of dead-tuple counts requires input from aborted\ntransactions as well as successful ones. I don't recall whether the\nstats collector currently collects anything from aborted xacts; that\nmight or might not be a sticky point.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 16:24:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement? 
" }, { "msg_contents": "> That brings me to another point, is there interest for this\n> web-statistics-frontend, maybe for /contrib? I found it extremly useful\n> because it showed up the weak points in my applications.\n\nWhy not create a project here for it: http://gborg.postgresql.org/\n\nChris\n\n", "msg_date": "Wed, 4 Sep 2002 10:03:37 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement? " }, { "msg_contents": "On Tuesday 03 September 2002 16:24, Tom Lane wrote:\n> \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > That brings me to another point, can't the\n> > statistics collector used for that?\n>\n> Hmm, that would be a different way of attacking the problem. Not sure\n> offhand which is better, but it'd surely be worth considering both.\n>\n> Note that collecting of dead-tuple counts requires input from aborted\n> transactions as well as successful ones. I don't recall whether the\n> stats collector currently collects anything from aborted xacts; that\n> might or might not be a sticky point.\n\nI have been doing some poking around with this item, and I was planning on \nusing the stats collector to do \"intelligent\" auto-vacuuming. I was planning \non adding some new columns that account for activity that has taken place \nsince the last vacuum. The current stats collector shows n_tup_ins, \nn_tup_upd and n_tup_del for any given rel, but those numbers have nothing to \ndo with what has happened since the last vacuum, hence nothing to do with \ncurrent status or need for vacuum.\n\nI hope to have something worth showing soon (a week or two). 
I know that is a \nbit slow, but I am new at pg internals and since we are in beta I know this \nis a 7.4 item.\n\nFYI, the current stats collector does keep track of inserts, updates and \ndeletes that are part of a rolled back transaction, as shown in the example \nbelow:\n\nmatthew=# create TABLE foo (id serial, name text);\nNOTICE: CREATE TABLE will create implicit sequence 'foo_id_seq' for SERIAL \ncolumn 'foo.id'\nCREATE TABLE\nmatthew=# select relname,n_tup_ins, n_tup_upd, n_tup_del from \npg_stat_all_tables where relname = 'foo';\n relname | n_tup_ins | n_tup_upd | n_tup_del\n---------+-----------+-----------+-----------\n foo | 0 | 0 | 0\n(1 row)\n\nmatthew=# INSERT INTO foo (name) VALUES ('asdf');\nINSERT 17075 1\nmatthew=# UPDATE foo SET name='qwert';\nUPDATE 1\nmatthew=# DELETE FROM foo;\nDELETE 1\nmatthew=# select relname,n_tup_ins, n_tup_upd, n_tup_del from \npg_stat_all_tables where relname = 'foo';\n relname | n_tup_ins | n_tup_upd | n_tup_del\n---------+-----------+-----------+-----------\n foo | 1 | 1 | 1\n(1 row)\n\nmatthew=# begin;\nBEGIN\nmatthew=# INSERT INTO foo (name) VALUES ('asdf');\nINSERT 17076 1\nmatthew=# UPDATE foo SET name='qwert';\nUPDATE 1\nmatthew=# DELETE FROM foo;\nDELETE 1\nmatthew=# rollback;\nROLLBACK\nmatthew=# select relname,n_tup_ins, n_tup_upd, n_tup_del from \npg_stat_all_tables where relname = 'foo';\n relname | n_tup_ins | n_tup_upd | n_tup_del\n---------+-----------+-----------+-----------\n foo | 2 | 2 | 2\n(1 row)\n\n\n", "msg_date": "Tue, 3 Sep 2002 23:44:58 -0400", "msg_from": "\"Matthew T. OConnor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "> I have been doing some poking around with this item, and I was \n> planning on \n> using the stats collector to do \"intelligent\" auto-vacuuming. I \n> was planning \n> on adding some new columns that account for activity that has taken place \n> since the last vacuum. 
The current stats collector shows n_tup_ins, \n> n_tup_upd and n_tup_del for any given rel, but those numbers have \n> nothing to \n> do with what has happened since the last vacuum, hence nothing to do with \n> current status or need for vacuum.\n\nPostgres 7.3-beta has a new function 'pg_stat_reset()' that you can call to reset the stats collector after a vacuum...\n\nChris\n\n", "msg_date": "Wed, 4 Sep 2002 11:47:49 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "On Tuesday 03 September 2002 23:47, Christopher Kings-Lynne wrote:\n> > I have been doing some poking around with this item, and I was\n> > planning on\n> > using the stats collector to do \"intelligent\" auto-vacuuming. I\n> > was planning\n> > on adding some new columns that account for activity that has taken place\n> > since the last vacuum. The current stats collector shows n_tup_ins,\n> > n_tup_upd and n_tup_del for any given rel, but those numbers have\n> > nothing to\n> > do with what has happened since the last vacuum, hence nothing to do with\n> > current status or need for vacuum.\n>\n> Postgres 7.3-beta has a new function 'pg_stat_reset()' that you can call to\n> reset the stats collector after a vacuum...\n\nJust my opinion here, but I don't think having autovac constantly resetting \nthe stats is a good idea, it means that you lose the current stat \nfunctionality when using autovacuum, and also implies that the stats mean \ndiffernet things if autovac is turned on or off. \n", "msg_date": "Wed, 4 Sep 2002 00:00:03 -0400", "msg_from": "\"Matthew T. OConnor\" <matthew@zeut.net>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "Am Mittwoch, 4. September 2002 05:44 schrieb Matthew T. 
OConnor:\n> I have been doing some poking around with this item, and I was planning on\n> using the stats collector to do \"intelligent\" auto-vacuuming. I was\n> planning on adding some new columns that account for activity that has\n> taken place since the last vacuum. The current stats collector shows\n> n_tup_ins, n_tup_upd and n_tup_del for any given rel, but those numbers\n> have nothing to do with what has happened since the last vacuum, hence\n> nothing to do with current status or need for vacuum.\n\nThis should be no real problem, extending the table pg_stat_all_tables with 3 fields\n\"av_n_tup_ins\", \"av_n_tup_upd\", \"av_n_tup_del\" should do it IMO.\n", "msg_date": "Wed, 4 Sep 2002 07:50:53 +0200", "msg_from": "Mario Weilguni <mweilguni@sime.com>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" }, { "msg_contents": "How about counting the number of dead tuples examined and the number of live\ntuples returned. As the ratio of dead tuples over live tuples visited\nincreases the table becomes a candidate for vacuuming.\n-regards\nricht\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of Tom Lane\n> Sent: Tuesday, September 03, 2002 4:25 PM\n> To: Mario Weilguni\n> Cc: pgsql-hackers@postgresql.org\n> Subject: Re: [HACKERS] possible vacuum improvement?\n>\n>\n> \"Mario Weilguni\" <mario.weilguni@icomedias.com> writes:\n> > That brings me to another point, can't the\n> > statistics collector used for that?\n>\n> Hmm, that would be a different way of attacking the problem. Not sure\n> offhand which is better, but it'd surely be worth considering both.\n>\n> Note that collecting of dead-tuple counts requires input from aborted\n> transactions as well as successful ones. 
I don't recall whether the\n> stats collector currently collects anything from aborted xacts; that\n> might or might not be a sticky point.\n>\n> \t\t\tregards, tom lane\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n>\n\n", "msg_date": "Wed, 04 Sep 2002 11:04:51 -0400", "msg_from": "Richard Tucker <richt@peerdirect.com>", "msg_from_op": false, "msg_subject": "Re: possible vacuum improvement?" } ]
[ { "msg_contents": "On Tue, 3 Sep 2002, Kaare Rasmussen wrote:\n\n> > Are you guys competing for the modesty award? ;-)\n> > I heard Stallman is trying to win it this year. :-)\n> \n> Hah, that's a good one.\n> \n> For doing what - telling you not to call it GNU/Linux, only Linux/GNU ?\n> :-)\n\n SELECT FreeProject FROM History ORDER BY FreeProject\n\n Seems quite logical : GNU/Linux ;-)\n\n-- \n\t\t\t Alexandre Dulaunoy -- http://www.foo.be/\n 3B12 DCC2 82FA 2931 2F5B 709A 09E2 CD49 44E6 CBCD --- AD993-6BONE\n\"People who fight may lose.People who do not fight have already lost.\"\n\t\t\t\t\t\t\tBertolt Brecht\n\n\n\n\n\n", "msg_date": "Tue, 3 Sep 2002 18:54:28 +0200 (CEST)", "msg_from": "Alexandre Dulaunoy <adulau@conostix.com>", "msg_from_op": true, "msg_subject": "Re: I am done " } ]
[ { "msg_contents": "Seems it wants to run a redo entry that doesn't exist.\n\nNot a big deal as it's a test environment only. It was recently\nupgraded from 7.2.1 to 7.2.2. I'm wondering whether the person who did\nthe upgrade shutdown the daemon before installing.\n\n\n\nFATAL 1: The database system is starting up\nFATAL 1: The database system is starting up\nDEBUG: database system is ready\nDEBUG: server process (pid 9084) was terminated by signal 10\nDEBUG: terminating any other active server processes\nDEBUG: all server processes terminated; reinitializing shared memory\nand semaphores\nDEBUG: database system was interrupted at 2002-09-03 13:54:33 EDT\nDEBUG: checkpoint record is at 0/1E1F1D90\nDEBUG: redo record is at 0/1E1F1D90; undo record is at 0/0; shutdown\nTRUE\nDEBUG: next transaction id: 320415; next oid: 488052\nDEBUG: database system was not properly shut down; automatic recovery\nin progress\nDEBUG: ReadRecord: record with zero length at 0/1E1F1DD0\nDEBUG: redo is not required\nFATAL 1: The database system is starting up\nFATAL 1: The database system is starting up\nDEBUG: database system is ready\nDEBUG: server process (pid 9097) was terminated by signal 10\nDEBUG: terminating any other active server processes\nDEBUG: all server processes terminated; reinitializing shared memory\nand semaphores\nDEBUG: database system was interrupted at 2002-09-03 13:54:37 EDT\nDEBUG: checkpoint record is at 0/1E1F1DD0\nDEBUG: redo record is at 0/1E1F1DD0; undo record is at 0/0; shutdown\nTRUE\nDEBUG: next transaction id: 320415; next oid: 488052\nDEBUG: database system was not properly shut down; automatic recovery\nin progress\nDEBUG: ReadRecord: record with zero length at 0/1E1F1E10\nDEBUG: redo is not required\nFATAL 1: The database system is starting up\n\n\n\n\n", "msg_date": "03 Sep 2002 14:03:58 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "7.2.2 bug?" 
}, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> DEBUG: server process (pid 9097) was terminated by signal 10\n\nCould we have a backtrace from that core dump?\n\nAFAICT it's getting through the WAL redo just fine, so the problem\nis (probably) not what you think.\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 16:42:49 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2.2 bug? " }, { "msg_contents": "On Tue, 2002-09-03 at 16:42, Tom Lane wrote:\n> Rod Taylor <rbt@zort.ca> writes:\n> > DEBUG: server process (pid 9097) was terminated by signal 10\n> \n> Could we have a backtrace from that core dump?\n> \n> AFAICT it's getting through the WAL redo just fine, so the problem\n> is (probably) not what you think.\n\nTook me a while, but I eventually figured out that they changed the\nNAMEDATALEN in the old version, and didn't match it in the new one.\n\nSo the error is exactly what is expected -- memory allocation errors.\n\n\n", "msg_date": "03 Sep 2002 17:16:28 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": true, "msg_subject": "Re: 7.2.2 bug?" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> Took me a while, but I eventually figured out that they changed the\n> NAMEDATALEN in the old version, and didn't match it in the new one.\n\nGrumble. It occured to us to store NAMEDATALEN in pg_control in 7.3,\nbut 7.2 doesn't have that defense. Sorry bout that...\n\n\t\t\tregards, tom lane\n", "msg_date": "Tue, 03 Sep 2002 17:19:24 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: 7.2.2 bug? " } ]
[ { "msg_contents": "i've hacked out a webdav (apache2) interface to pgsql. i'd love\nto replicate the browsing/managing interfaces of the bigname\nRDBMS's using webdav for pgsql but i'm not sure how far i'll\nget. right now the 0.1.15 release supports a fair number of\nbrowsing options (tables, columns, functions, sequences, etc)\n\nits only ready for hackers i think so i'm posting here - hope\nthat's ok.\n\nyou can see some screenshots at:\n\thttp://home.attbi.com/~joelwreed/\n\nand download the source from the above URL or this one:\n\thttp://sourceforge.net/project/showfiles.php?group_id=60618\n\njr\n\n-- \n------------------------------------------------------------\nJoel W. Reed 412-257-3881\n--------All the simple programs have been written.----------", "msg_date": "Tue, 3 Sep 2002 23:11:49 -0400", "msg_from": "\"joel w. reed\" <jreed@ddiworld.com>", "msg_from_op": true, "msg_subject": "webdav interface to pgsql" }, { "msg_contents": "Cool. Is it worth putting it on greatbridge? gborg.postgresql.org\n\nWith the new tightening of the postgres source tree, it's unlikely this\nwould make it into our CVS methinks, however people are working on setting\nup greatbridge as a one-stop-shop for postgres add-ons...\n\nChris\n\n> -----Original Message-----\n> From: pgsql-hackers-owner@postgresql.org\n> [mailto:pgsql-hackers-owner@postgresql.org]On Behalf Of joel w. reed\n> Sent: Wednesday, 4 September 2002 11:12 AM\n> To: pgsql-hackers@postgresql.org\n> Subject: [HACKERS] webdav interface to pgsql\n>\n>\n> i've hacked out a webdav (apache2) interface to pgsql. i'd love\n> to replicate the browsing/managing interfaces of the bigname\n> RDBMS's using webdav for pgsql but i'm not sure how far i'll\n> get. 
right now the 0.1.15 release supports a fair number of\n> browsing options (tables, columns, functions, sequences, etc)\n>\n> its only ready for hackers i think so i'm posting here - hope\n> that's ok.\n>\n> you can see some screenshots at:\n> \thttp://home.attbi.com/~joelwreed/\n>\n> and download the source from the above URL or this one:\n> \thttp://sourceforge.net/project/showfiles.php?group_id=60618\n>\n> jr\n>\n> --\n> ------------------------------------------------------------\n> Joel W. Reed 412-257-3881\n> --------All the simple programs have been written.----------\n>\n>\n\n", "msg_date": "Wed, 4 Sep 2002 14:01:34 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: webdav interface to pgsql" }, { "msg_contents": "On Wed, 4 Sep 2002, Christopher Kings-Lynne wrote:\n\n> Cool. Is it worth putting it on greatbridge? gborg.postgresql.org\n\ngreatbridge is back again? *raised eyebrow*\n\n\n", "msg_date": "Wed, 4 Sep 2002 11:14:34 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: webdav interface to pgsql" } ]
[ { "msg_contents": "Does anyone else get this rubbish when they post to -php ?\n\nOur domain isn't on any blacklists AFAIK...\n\nChris\n\n> -----Original Message-----\n> From: GWAVA [mailto:Postmaster@akf.dk]\n> Sent: Wednesday, 4 September 2002 11:24 AM\n> To: chriskl@familyhealth.com.au\n> Subject: [GWAVA:fku1fb18] Source block message notification\n>\n>\n> Den postmeddelelse du prøvede at sende til [No To Addresses] blev\n> ikke afleveret.\n> Meddelelsen kom fra en adresse som ikke tillades i postsystemet\n> akf.dk, og blev derfor afvist.\n>\n> Kontakt venligst din systemadministrator for at få flere\n> oplysninger om problemet.\n>\n> Information om den afviste postmeddelelse:\n>\n> FRA: chriskl@familyhealth.com.au\n> TIL: [No To Addresses]\n> Emne: Re: [PHP] fastest way to retrieve data\n>\n> Vedhæftet fil:\n>\n> The message you tried to send to [No To Addresses] was not delivered.\n> The message was sent from an address which is not permitted in\n> the akf.dk mail system and was rejected.\n>\n> Please contact your system administrator for more information.\n>\n> Information about the problem message:\n>\n> FROM: chriskl@familyhealth.com.au\n> TO: [No To Addresses]\n> Subject: Re: [PHP] fastest way to retrieve data\n>\n> Attachment Name:\n>\n\n", "msg_date": "Wed, 4 Sep 2002 11:26:54 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "FW: [GWAVA:fku1fb18] Source block message notification" }, { "msg_contents": "\nYes, we have told Marc to remove it several times. 
He may be having\ntrouble figuring out which email address is generating it.\n\n---------------------------------------------------------------------------\n\nChristopher Kings-Lynne wrote:\n> Does anyone else get this rubbish when they post to -php ?\n> \n> Our domain isn't on any blacklists AFAIK...\n> \n> Chris\n> \n> > -----Original Message-----\n> > From: GWAVA [mailto:Postmaster@akf.dk]\n> > Sent: Wednesday, 4 September 2002 11:24 AM\n> > To: chriskl@familyhealth.com.au\n> > Subject: [GWAVA:fku1fb18] Source block message notification\n> >\n> >\n> > Den postmeddelelse du pr?vede at sende til [No To Addresses] blev\n> > ikke afleveret.\n> > Meddelelsen kom fra en adresse som ikke tillades i postsystemet\n> > akf.dk, og blev derfor afvist.\n> >\n> > Kontakt venligst din systemadministrator for at f? flere\n> > oplysninger om problemet.\n> >\n> > Information om den afviste postmeddelelse:\n> >\n> > FRA: chriskl@familyhealth.com.au\n> > TIL: [No To Addresses]\n> > Emne: Re: [PHP] fastest way to retrieve data\n> >\n> > Vedh?ftet fil:\n> >\n> > The message you tried to send to [No To Addresses] was not delivered.\n> > The message was sent from an address which is not permitted in\n> > the akf.dk mail system and was rejected.\n> >\n> > Please contact your system administrator for more information.\n> >\n> > Information about the problem message:\n> >\n> > FROM: chriskl@familyhealth.com.au\n> > TO: [No To Addresses]\n> > Subject: Re: [PHP] fastest way to retrieve data\n> >\n> > Attachment Name:\n> >\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Tue, 3 Sep 2002 23:41:08 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: FW: [GWAVA:fku1fb18] Source block message notification" }, { "msg_contents": "\n\nCan anyone find an email in all of that? I did a search of 'afk' ... oops\n:) got it and removed ...\n\nOn Wed, 4 Sep 2002, Christopher Kings-Lynne wrote:\n\n> Does anyone else get this rubbish when they post to -php ?\n>\n> Our domain isn't on any blacklists AFAIK...\n>\n> Chris\n>\n> > -----Original Message-----\n> > From: GWAVA [mailto:Postmaster@akf.dk]\n> > Sent: Wednesday, 4 September 2002 11:24 AM\n> > To: chriskl@familyhealth.com.au\n> > Subject: [GWAVA:fku1fb18] Source block message notification\n> >\n> >\n> > Den postmeddelelse du prøvede at sende til [No To Addresses] blev\n> > ikke afleveret.\n> > Meddelelsen kom fra en adresse som ikke tillades i postsystemet\n> > akf.dk, og blev derfor afvist.\n> >\n> > Kontakt venligst din systemadministrator for at få flere\n> > oplysninger om problemet.\n> >\n> > Information om den afviste postmeddelelse:\n> >\n> > FRA: chriskl@familyhealth.com.au\n> > TIL: [No To Addresses]\n> > Emne: Re: [PHP] fastest way to retrieve data\n> >\n> > Vedhæftet fil:\n> >\n> > The message you tried to send to [No To Addresses] was not delivered.\n> > The message was sent from an address which is not permitted in\n> > the akf.dk mail system and was rejected.\n> >\n> > Please contact your system administrator for more information.\n> >\n> > Information about the problem message:\n> >\n> > FROM: chriskl@familyhealth.com.au\n> > TO: [No To Addresses]\n> > Subject: Re: [PHP] fastest way to retrieve data\n> >\n> > Attachment Name:\n> >\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 1: subscribe and unsubscribe commands go to majordomo@postgresql.org\n>\n\n", "msg_date": "Wed, 4 Sep 2002 10:04:51 -0300 (ADT)", "msg_from": "\"Marc G. 
Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: FW: [GWAVA:fku1fb18] Source block message notification" } ]
[ { "msg_contents": "OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n\nI used the same HISTORY categories Peter made in 7.2. I liked them.\n\nPlease review the HISTORY file. I am sure there are improvements that\ncan be made.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 03:24:10 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "HISTORY updated, 7.3 branded" }, { "msg_contents": "On 4 Sep 2002 at 3:24, Bruce Momjian wrote:\n\n> OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> \n> I used the same HISTORY categories Peter made in 7.2. I liked them.\n> \n> Please review the HISTORY file. I am sure there are improvements that\n> can be made.\n\nSome minor stuff,\n\n1) Line 74/Line 20 are same. Since they are in notes for different releases, I \nsuspect one of them has to move.\n\n2)Line 61\n cash I/O improvements (Tom)\n\nIs that 'cash' is correct(cache?)?\n\nSorry, if I have missed earlier threads on this. The file I am looking at is \nlast updated on Aug. 25. (anoncvs.postgresql.org).\n\nI will update once again in an hour and check again..\n\nBye\n Shridhar\n\n--\nThere's nothing disgusting about it [the Companion]. It's just anotherlife \nform, that's all. You get used to those things.\t\t-- McCoy, \"Metamorphosis\", \nstardate 3219.8\n\n", "msg_date": "Wed, 04 Sep 2002 13:08:56 +0530", "msg_from": "\"Shridhar Daithankar\" <shridhar_daithankar@persistent.co.in>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "\nI assume you are not looking at the 7.3 release notes. 
It does take a\nwhile for anon to get the changes.\n\n\n---------------------------------------------------------------------------\n\nShridhar Daithankar wrote:\n> On 4 Sep 2002 at 3:24, Bruce Momjian wrote:\n> \n> > OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> > \n> > I used the same HISTORY categories Peter made in 7.2. I liked them.\n> > \n> > Please review the HISTORY file. I am sure there are improvements that\n> > can be made.\n> \n> Some minor stuff,\n> \n> 1) Line 74/Line 20 are same. Since they are in notes for different releases, I \n> suspect one of them has to move.\n> \n> 2)Line 61\n> cash I/O improvements (Tom)\n> \n> Is that 'cash' is correct(cache?)?\n> \n> Sorry, if I have missed earlier threads on this. The file I am looking at is \n> last updated on Aug. 25. (anoncvs.postgresql.org).\n> \n> I will update once again in an hour and check again..\n> \n> Bye\n> Shridhar\n> \n> --\n> There's nothing disgusting about it [the Companion]. It's just anotherlife \n> form, that's all. You get used to those things.\t\t-- McCoy, \"Metamorphosis\", \n> stardate 3219.8\n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 03:43:17 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "> OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> \n> I used the same HISTORY categories Peter made in 7.2. I liked them.\n> \n> Please review the HISTORY file. 
I am sure there are improvements that\n> can be made.\n\nPlease change:\n\n> Add CREATE/DROP CONVERSION, allowing loadable encodings (Tatsuo)\n\nTo:\n\nAdd CREATE/DROP CONVERSION, allowing loadable encodings (Tatsuo, Kaori)\n\nShe provided lots of encodings for CONVERSION.\n--\nTatsuo Ishii\n", "msg_date": "Wed, 04 Sep 2002 20:40:31 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Found this line without a name:\n\nPropagate column or table renaming to foreign key constraints\n\nIs that item complete? pg_constraint follows (as such dump / restore\nwill work) but the triggers themselves still break, don't they?\n\nOn Wed, 2002-09-04 at 03:24, Bruce Momjian wrote:\n> OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> \n> I used the same HISTORY categories Peter made in 7.2. I liked them.\n> \n> Please review the HISTORY file. I am sure there are improvements that\n> can be made.\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n\n\n", "msg_date": "04 Sep 2002 08:17:19 -0400", "msg_from": "Rod Taylor <rbt@zort.ca>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Rod Taylor <rbt@zort.ca> writes:\n> Found this line without a name:\n> Propagate column or table renaming to foreign key constraints\n> Is that item complete? pg_constraint follows (as such dump / restore\n> will work) but the triggers themselves still break, don't they?\n\nYes, no. 
There's hackery in tablecmds.c to fix the triggers.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 09:49:29 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded " }, { "msg_contents": "Shridhar Daithankar dijo: \n\n> On 4 Sep 2002 at 3:24, Bruce Momjian wrote:\n> \n> > OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> \n> Some minor stuff,\n\nIn the schema changes description:\n\n\"Schemas allow users to create objects in their own namespace\nso two people can have the same table with the same name.\"\n\nShouldn't it read \"so two people can have tables with the same name\" ?\nMy point is that the tables are not the same, they just have the same\nname.\n\n-- \nAlvaro Herrera (<alvherre[a]atentus.com>)\n\"Tiene valor aquel que admite que es un cobarde\" (Fernandel)\n\n", "msg_date": "Wed, 4 Sep 2002 10:52:32 -0400 (CLT)", "msg_from": "Alvaro Herrera <alvherre@atentus.com>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Tatsuo Ishii wrote:\n> > OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> > \n> > I used the same HISTORY categories Peter made in 7.2. I liked them.\n> > \n> > Please review the HISTORY file. I am sure there are improvements that\n> > can be made.\n> \n> Please change:\n> \n> > Add CREATE/DROP CONVERSION, allowing loadable encodings (Tatsuo)\n> \n> To:\n> \n> Add CREATE/DROP CONVERSION, allowing loadable encodings (Tatsuo, Kaori)\n> \n> She provided lots of encodings for CONVERSION.\n\nDone:\n\n\tAdd CREATE/DROP CONVERSION, allowing loadable encodings (Tatsuo, Kaori)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 12:56:51 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Rod Taylor wrote:\n> Found this line without a name:\n> \n> Propagate column or table renaming to foreign key constraints\n> \n> Is that item complete? pg_constraint follows (as such dump / restore\n> will work) but the triggers themselves still break, don't they?\n\nNo idea. The item only talks about the contraint, not the trigger.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 12:58:25 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Alvaro Herrera wrote:\n> Shridhar Daithankar dijo: \n> \n> > On 4 Sep 2002 at 3:24, Bruce Momjian wrote:\n> > \n> > > OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> > \n> > Some minor stuff,\n> \n> In the schema changes description:\n> \n> \"Schemas allow users to create objects in their own namespace\n> so two people can have the same table with the same name.\"\n> \n> Shouldn't it read \"so two people can have tables with the same name\" ?\n> My point is that the tables are not the same, they just have the same\n> name.\n\nGood point. Updated:\n\n Schemas allow users to create objects in their own namespace\n so two people can have tables with the same name. There is \n also a public schema for shared tables. Table/index creation\n can be restricted by removing permissions on the public schema.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 12:59:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Bruce Momjian wrote:\n> OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> \n> I used the same HISTORY categories Peter made in 7.2. I liked them.\n> \n> Please review the HISTORY file. I am sure there are improvements that\n> can be made.\n> \n\nA few minor comments:\n\n1. suggested rewording:\n\nTable Functions\n\n Functions can now return sets, with multiple rows\n and multiple columns. You specify these functions in\n the SELECT FROM clause, similar to a table or view.\n\n2. couldn't find mention of:\n\nData Types and Functions\n========================\nAdd named composite type creation - CREATE TYPE typename AS (column \ndefinition list)\n\nAllow anonymous composite type specification at query runtime in the \ntable alias clause - FROM tablename AS aliasname(column definition list)\n\nAdd new API to simplify creation of C language table functions\n\nJoe\n\n", "msg_date": "Wed, 04 Sep 2002 10:08:11 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Joe Conway wrote:\n> Bruce Momjian wrote:\n> > OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> > \n> > I used the same HISTORY categories Peter made in 7.2. I liked them.\n> > \n> > Please review the HISTORY file. I am sure there are improvements that\n> > can be made.\n> > \n> \n> A few minor comments:\n> \n> 1. suggested rewording:\n> \n> Table Functions\n> \n> Functions can now return sets, with multiple rows\n> and multiple columns. You specify these functions in\n> the SELECT FROM clause, similar to a table or view.\n\nDone.\n\n> 2. 
couldn't find mention of:\n> \n> Data Types and Functions\n> ========================\n> Add named composite type creation - CREATE TYPE typename AS (column \n> definition list)\n> \n> Allow anonymous composite type specification at query runtime in the \n> table alias clause - FROM tablename AS aliasname(column definition list)\n> \n> Add new API to simplify creation of C language table functions\n\nAnd done:\n\nAdd named composite types using CREATE TYPE typename AS (column) (Joe)\nAllow composite type definition in the table alias clause (Joe)\nAdd new API to simplify creation of C language table functions (Joe)\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 13:24:16 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Please review the HISTORY file.\n\n PostgreSQL now support ALTER TABLE ... DROP COLUMN functionality.\n\ns/support/supports/\n\n Functions can now return sets, with multiple rows\n and multiple columns. You specify these functions in\n the SELECT FROM clause, similar to a table or view.\n\nI don't like this description: it's always been possible for functions\nto return sets, it was just hard to use the feature. Try to explain\nwhat we really added. Maybe:\n\nFunctions returning sets (multiple rows) and/or tuples (multiple\ncolumns) are now much easier to use than before. You can call\nsuch a function in the SELECT FROM clause, treating its output\nlike a table. Such a function can be declared to return RECORD,\nwith the actual output column set varying from one query to the\nnext. 
Also, plpgsql functions can now return sets.\n\n Both multibyte and locale are now enabled by default.\n\ns/enabled by default/always enabled/ --- AFAIK it is impossible to\ndisable them, so \"by default\" is pretty misleading.\n\n By default, functions can now take up to 32 parameters, and \n identifiers can be up to 64 bytes long.\n\ns/64/63/\n\nAdd pg_locks table to show locks (Neil)\n\ns/table/view/\n\nEXPLAIN now outputs as a query (Tom)\n\nThis doesn't seem to belong under the Performance heading.\n\nDisplay sort keys in EXPLAIN (Tom)\n\nLikewise.\n\nRestrict comments to the current database\n\nShould probably say \"comments on databases\".\n\nIncrease maximum number of function parameters to 32 (Bruce) momjian\n\nThis line seems to need editing?\n\nModify a few error messages for consistency (Bruce) momjian\n\nThis too.\n\nCleanups in array internal handling (Tom)\n\nJoe should get credit on that one.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 14:37:27 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Please review the HISTORY file.\n> \n> PostgreSQL now support ALTER TABLE ... DROP COLUMN functionality.\n> \n> s/support/supports/\n> \n> Functions can now return sets, with multiple rows\n> and multiple columns. You specify these functions in\n> the SELECT FROM clause, similar to a table or view.\n> \n> I don't like this description: it's always been possible for functions\n> to return sets, it was just hard to use the feature. Try to explain\n> what we really added. Maybe:\n> \n> Functions returning sets (multiple rows) and/or tuples (multiple\n> columns) are now much easier to use than before. You can call\n> such a function in the SELECT FROM clause, treating its output\n> like a table. 
Such a function can be declared to return RECORD,\n> with the actual output column set varying from one query to the\n> next. Also, plpgsql functions can now return sets.\n\n\nWell, this is a summary section. That seems like too much detail. I\ndon't remember every seeing a function returning sets before. Can you\ngive an example? I can add the word \"'easily' return sets\" but I don't\nthink it is that easy.\n\n\n> Both multibyte and locale are now enabled by default.\n> \n> s/enabled by default/always enabled/ --- AFAIK it is impossible to\n> disable them, so \"by default\" is pretty misleading.\n\n\n\nDone.\n\n> \n> By default, functions can now take up to 32 parameters, and \n> identifiers can be up to 64 bytes long.\n> \n> s/64/63/\n\nOops, got it.\n\n> Add pg_locks table to show locks (Neil)\n> \n> s/table/view/\n\n\nYep.\n\n> \n> EXPLAIN now outputs as a query (Tom)\n> \n> This doesn't seem to belong under the Performance heading.\n\nI had it there because EXPLAIN is a performance tool, though I wondered\nabout that logic too. Move to utilities.\n\n> \n> Display sort keys in EXPLAIN (Tom)\n> \n> Likewise.\n\nMoved.\n\n> \n> Restrict comments to the current database\n> \n> Should probably say \"comments on databases\".\n\nChanged to:\n\nRestrict comment to the current database \n\n> \n> Increase maximum number of function parameters to 32 (Bruce) momjian\n> \n> This line seems to need editing?\n\nFixed.\n\n> \n> Modify a few error messages for consistency (Bruce) momjian\n> \n> This too.\n\nFixed.\n\n> \n> Cleanups in array internal handling (Tom)\n> \n> Joe should get credit on that one.\n\nDone.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 14:45:02 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> I don't remember every seeing a function returning sets before. Can you\n> give an example?\n\nhttp://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/xfunc-sql.html#AEN26392\n\nAlso, the preceding subsection shows SQL functions returning rows. So\nthese features have been there, but they were messy and awkward to use.\nRecall that the TODO item was\n\t* -Functions returning sets do not totally work\nand not \"we don't have functions returning sets\".\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 14:51:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded " }, { "msg_contents": "Tom Lane wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > I don't remember every seeing a function returning sets before. Can you\n> > give an example?\n> \n> http://www.ca.postgresql.org/users-lounge/docs/7.2/postgres/xfunc-sql.html#AEN26392\n> \n> Also, the preceding subsection shows SQL functions returning rows. So\n> these features have been there, but they were messy and awkward to use.\n> Recall that the TODO item was\n> \t* -Functions returning sets do not totally work\n> and not \"we don't have functions returning sets\".\n\nYes, now I remember, only SQL functions could return sets. How about\nthis:\n\n PL/PgSQL and C functions can now return sets, with multiple\n rows and multiple columns. You specify these functions in the\n SELECT FROM clause, similar to a table or view.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 15:01:39 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Yes, now I remember, only SQL functions could return sets. How about\n> this:\n\n> PL/PgSQL and C functions can now return sets, with multiple\n> rows and multiple columns. You specify these functions in the\n> SELECT FROM clause, similar to a table or view.\n\nC functions have always been able to return sets too; you don't honestly\nthink that a SQL function can do something a C function can't, do you?\n\nThere are really two independent improvements here: one is the ability\nfor plpgsql functions to return sets, and the other is a group of\nimprovements that make it easier to use a function-returning-set,\nindependently of what language it's written in.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 15:13:18 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded " }, { "msg_contents": "Tom Lane wrote:\n> C functions have always been able to return sets too; you don't honestly\n> think that a SQL function can do something a C function can't, do you?\n\nThe original dblink is an example.\n\n> \n> There are really two independent improvements here: one is the ability\n> for plpgsql functions to return sets, and the other is a group of\n> improvements that make it easier to use a function-returning-set,\n> independently of what language it's written in.\n> \n\nAs an example, although you *could* return a composite type before, it \nwas almost useless, because what you actually got returned to you was a \npointer:\n\ntest=# create function get_foo() returns setof foo as 'select * from \nfoo' language sql;\nCREATE\ntest=# select get_foo();\n get_foo\n-----------\n 137867648\n 137867648\n 137867648\n(3 
rows)\n\nIn order to get the individual columns, you had to do:\n\ntest=# select f1(get_foo()), f2(get_foo()), f3(get_foo());\n f1 | f2 | f3\n----+----+-----\n 1 | 1 | abc\n 1 | 2 | def\n 2 | 1 | ghi\n(3 rows)\n\nPretty ugly, but it did work.\n\nWhat about this:\n\nFunctions returning multiple rows and/or multiple columns are\nnow much easier to use than before. You can call such a\n\"table function\" in the SELECT FROM clause, treating its output\nlike a table. Also, plpgsql functions can now return sets.\n\n\nJoe\n\n\n", "msg_date": "Wed, 04 Sep 2002 12:53:20 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded" }, { "msg_contents": "Joe Conway wrote:\n> What about this:\n> \n> Functions returning multiple rows and/or multiple columns are\n> now much easier to use than before. You can call such a\n> \"table function\" in the SELECT FROM clause, treating its output\n> like a table. Also, plpgsql functions can now return sets.\n\nAdded.\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 16:04:27 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: HISTORY updated, 7.3 branded" } ]
[ { "msg_contents": "Hello everyone.\n I can't understand Tid Scan, Who can tell me what it's that and where I could find document on the Web. Thanks for your reponse.\n\n Guo longjiang Harbin China\n______________________________________\n", "msg_date": "Wed, 04 Sep 2002 18:43:36 +0800", "msg_from": "ljguo_1234 <ljguo_1234@sina.com>", "msg_from_op": true, "msg_subject": "What is Tid Scan" }, { "msg_contents": "On Wed, 2002-09-04 at 12:43, ljguo_1234 wrote:\n> Hello everyone.\n> I can't understand Tid Scan, Who can tell me what it's that and where I could find document on the Web. Thanks for your reponse.\n\nIt is scanning table by TupleID's. A tuple id is a 6-byte entity which\nconsists of 4-byte page number and 2-byte tuple index inside page.\n\nSo if you know the TID you can directly get the corresponding tuple.\n\n--------------\nHannu\n\n", "msg_date": "04 Sep 2002 15:50:57 +0200", "msg_from": "Hannu Krosing <hannu@tm.ee>", "msg_from_op": false, "msg_subject": "Re: What is Tid Scan" } ]
[ { "msg_contents": "Hello, Mr. Tom Lane\n I am a chinese student studied in Harbin institute of technology. I want join to PostgreSQL Global Development Group and I want work on the planner/optimizer. I have been reading the source code for 2 months. There many data strucutres I can't understand. Can you tell me what document I must read first. If you have documents about planner/optimizer of PostgreSQL, send me please.\n a doctor student English name is Mohan. Chinese name is Guo long jiang.\n Thank you very much! 04/09/2002 \n______________________________________\n", "msg_date": "Wed, 04 Sep 2002 18:50:17 +0800", "msg_from": "ljguo_1234 <ljguo_1234@sina.com>", "msg_from_op": true, "msg_subject": "" }, { "msg_contents": "\nI assume you have read everything on the developers web page:\n\n\thttp://developer.postgresql.org/index.php\n\n---------------------------------------------------------------------------\n\nljguo_1234 wrote:\n> Hello, Mr. Tom Lane\n> I am a chinese student studied in Harbin institute of technology. I want join to PostgreSQL Global Development Group and I want work on the planner/optimizer. I have been reading the source code for 2 months. There many data strucutres I can't understand. Can you tell me what document I must read first. If you have documents about planner/optimizer of PostgreSQL, send me please.\n> a doctor student English name is Mohan. Chinese name is Guo long jiang.\n> Thank you very much! 
04/09/2002 \n> ______________________________________\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 4: Don't 'kill -9' the postmaster\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. 
[ { "msg_contents": "> I found one bug in file src/backend/utils/adt/oracle_compat.c and there were your name, related with Multibyte enhancement, so i write to you.\n> Functions upper,lower and initcap doesn't work with utf-8 data which is not of Latin letters.At my work i do databases for Russian users and when i tried to use unicode encoding for database and Russsian alphabet than these functions didn't work. So i wrote some patches, because i don't think that problem is in that or other shell variable like LANG or LC_CTYPE. As i don't know any other \n> languages except Russian and English, i wrote small test(test.tar.gz) only for them.Execute it befor and after patching and feel the difference:). And by the way, do encodings(and appropriative languages) EUC_JP,EUC_CN,EUC_KR and EUC_TW have logical operations upper,lower and initcap? \n> \t\t\t\t\t\tregards,Eugene.\n\nFor EUC_JP, there is no upper,lower and initcap. I'm not sure about\nother languages.\n\n> P.S.It doesn't seem bad for me to use lib unicode instead of functions like mbtowc,wctomb from stdlib and towupper,towlower from wctype, but may be somebody will find decision based on them or other lib?\n\nI'm not sure. What do you think, Peter or other guys who is familiar\nwith Unicode?\n\nBTW, I don't like your patches. If there's no unicode.h, configure\naborts with:\n\nconfigure: error: header file <unicode.h> is required for unicode support\n\nwhich seems not acceptable to me. 
I suggest you #ifdef out the unicode\nupper,lower and initcap support if libunicode and/or unicode.h are not\nfound in the system.\n--\nTatsuo Ishii\n\n(I have included patches for review purpose)", "msg_date": "Wed, 04 Sep 2002 21:01:25 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Multibyte support in oracle_compat.c" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> > Functions upper,lower and initcap doesn't work with utf-8 data\n\nThe backend routines use the host OS locales, so look there. On my\nmachine I have several Russian locales, which seem to address the issue of\ncharacter sets:\n\nru_RU\nru_RU.koi8r\nru_RU.utf8\nru_UA\nrussian\n\nThis is bogus, because the LC_CTYPE choice is cluster-wide and the\nencoding choice is database-specific (in other words: it's broken), but\nthere's nothing we can do about that right now.\n\n> > P.S.It doesn't seem bad for me to use lib unicode instead of functions like mbtowc,wctomb from stdlib and towupper,towlower from wctype\n>\n> I'm not sure. What do you think, Peter or other guys who is familiar\n> with Unicode?\n\nI don't know that that libunicode is, but that shouldn't prevent us from\npossibly evaluating it. :-)\n\nBtw., I just happened to think about this very issue over the last few\ndays. What I would like to attack for the next release is to implement\ncharacter classification and conversion using the Unicode tables so we can\ncut the LC_CTYPE system locale out of the picture. 
Perhaps this is what\nthe poster was thinking of, too.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 5 Sep 2002 00:46:39 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Multibyte support in oracle_compat.c" }, { "msg_contents": "\n\nOn Thu, 5 Sep 2002, Peter Eisentraut wrote:\n\n> Date: Thu, 5 Sep 2002 00:46:39 +0200 (CEST)\n> From: Peter Eisentraut <peter_e@gmx.net>\n> To: Tatsuo Ishii <t-ishii@sra.co.jp>\n> Cc: pgsql-hackers@postgresql.org, eutm@yandex.ru\n> Subject: Re: [HACKERS] Multibyte support in oracle_compat.c\n>\n> Tatsuo Ishii writes:\n>\n> > > Functions upper,lower and initcap doesn't work with utf-8 data\n>\n> The backend routines use the host OS locales, so look there. On my\n> machine I have several Russian locales, which seem to address the issue of\n> character sets:\n>\n> ru_RU\n> ru_RU.koi8r\n> ru_RU.utf8\n> ru_UA\n> russian\n\nYeah, our character sets is a major pain for internatianlization. And the\nabove list is not exhaustive. I guess you are right, for the time being\nyou'll have to bear with it.\n\n-s\n\n", "msg_date": "Wed, 4 Sep 2002 18:54:59 -0400 (EDT)", "msg_from": "\"Serguei A. Mokhov\" <sa_mokho@alcor.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Multibyte support in oracle_compat.c" }, { "msg_contents": "> The backend routines use the host OS locales, so look there. On my\n> machine I have several Russian locales, which seem to address the issue of\n> character sets:\n> \n> ru_RU\n> ru_RU.koi8r\n> ru_RU.utf8\n> ru_UA\n> russian\n> \n> This is bogus, because the LC_CTYPE choice is cluster-wide and the\n> encoding choice is database-specific (in other words: it's broken), but\n> there's nothing we can do about that right now.\n\nI thought his idea was using UTF-8 locale and Unicode (UTF-8) encoded\ndatabase.\n\n> Btw., I just happened to think about this very issue over the last few\n> days. 
What I would like to attack for the next release is to implement\n> character classification and conversion using the Unicode tables so we can\n> cut the LC_CTYPE system locale out of the picture. Perhaps this is what\n> the poster was thinking of, too.\n\nInteresting idea. If you are saying that you are going to remove the\ndependecy on system locale, I will agree with your idea.\n\nBTW, nls has same problem as above, no? I guess nls depeneds on locale\nand it may conflict with the database-specific encoding and/or the\nautomatic FE/BE encoding conversion.\n--\nTatsuo Ishii\n", "msg_date": "Thu, 05 Sep 2002 10:09:06 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Multibyte support in oracle_compat.c" }, { "msg_contents": "Tatsuo Ishii writes:\n\n> BTW, nls has same problem as above, no? I guess nls depeneds on locale\n> and it may conflict with the database-specific encoding and/or the\n> automatic FE/BE encoding conversion.\n\nGNU gettext does its own encoding conversion. It reads the program's\ncharacter encoding from the LC_CTYPE locale and converts the material in\nthe translation catalogs on the fly for output. This is great in general,\nreally, but for the postmaster it's a problem. If LC_CTYPE is fixed for\nthe cluster and you later on change your mind about the message language\nthe it will be recoded into the character set that LC_CTYPE says. And if\nthat character set does not match the one that is set as the backend\nencoding internally then who knows what will happen when this stuff is\nrecoded again as it's sent to the client. Big, big mess.\n\n-- \nPeter Eisentraut peter_e@gmx.net\n\n", "msg_date": "Thu, 5 Sep 2002 23:38:10 +0200 (CEST)", "msg_from": "Peter Eisentraut <peter_e@gmx.net>", "msg_from_op": false, "msg_subject": "Re: Multibyte support in oracle_compat.c" }, { "msg_contents": "> GNU gettext does its own encoding conversion. 
It reads the program's\n> character encoding from the LC_CTYPE locale and converts the material in\n> the translation catalogs on the fly for output. This is great in general,\n> really, but for the postmaster it's a problem. If LC_CTYPE is fixed for\n> the cluster and you later on change your mind about the message language\n> the it will be recoded into the character set that LC_CTYPE says. And if\n> that character set does not match the one that is set as the backend\n> encoding internally then who knows what will happen when this stuff is\n> recoded again as it's sent to the client. Big, big mess.\n\nIn other words, it's completely broken. Sigh.\n--\nTatsuo Ishii\n", "msg_date": "Fri, 06 Sep 2002 10:21:39 +0900 (JST)", "msg_from": "Tatsuo Ishii <t-ishii@sra.co.jp>", "msg_from_op": true, "msg_subject": "Re: Multibyte support in oracle_compat.c" } ]
[ { "msg_contents": "\nWarning: Supplied argument is not a valid PostgreSQL link resource in\n/usr/local/www/gborg3/html/index.php on line 52\n\nWarning: Supplied argument is not a valid PostgreSQL link resource in\n/usr/local/www/gborg3/html/include/project.php on line 196\n\nWarning: Supplied argument is not a valid PostgreSQL result resource in\n/usr/local/www/gborg3/html/include/project.php on line 205\n\nWarning: Supplied argument is not a valid PostgreSQL link resource in\n/usr/local/www/gborg3/html/include/project.php on line 274\n\nWarning: Supplied argument is not a valid PostgreSQL result resource in\n/usr/local/www/gborg3/html/include/project.php on line 286\n\nChris\n\n", "msg_date": "Wed, 4 Sep 2002 21:01:21 +0800 (WST)", "msg_from": "Christopher Kings-Lynne <chriskl@familyhealth.com.au>", "msg_from_op": true, "msg_subject": "GBorg is down" }, { "msg_contents": "On Wed, 4 Sep 2002, Christopher Kings-Lynne wrote:\n\n>\n> Warning: Supplied argument is not a valid PostgreSQL link resource in\n> /usr/local/www/gborg3/html/index.php on line 52\n\nLooks like the machine with the database on it. 
The mirror update\ncronjob is failing too.\n\nVince.\n-- \n==========================================================================\nVince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n http://www.camping-usa.com http://www.cloudninegifts.com\n http://www.meanstreamradio.com http://www.unknown-artists.com\n==========================================================================\n\n\n\n", "msg_date": "Wed, 4 Sep 2002 09:04:45 -0400 (EDT)", "msg_from": "Vince Vielhaber <vev@michvhf.com>", "msg_from_op": false, "msg_subject": "Re: GBorg is down" }, { "msg_contents": "\nshould already be fixed ...\n\nOn Wed, 4 Sep 2002, Vince Vielhaber wrote:\n\n> On Wed, 4 Sep 2002, Christopher Kings-Lynne wrote:\n>\n> >\n> > Warning: Supplied argument is not a valid PostgreSQL link resource in\n> > /usr/local/www/gborg3/html/index.php on line 52\n>\n> Looks like the machine with the database on it. The mirror update\n> cronjob is failing too.\n>\n> Vince.\n> --\n> ==========================================================================\n> Vince Vielhaber -- KA8CSH email: vev@michvhf.com http://www.pop4.net\n> 56K Nationwide Dialup from $16.00/mo at Pop4 Networking\n> http://www.camping-usa.com http://www.cloudninegifts.com\n> http://www.meanstreamradio.com http://www.unknown-artists.com\n> ==========================================================================\n>\n>\n>\n>\n> ---------------------------(end of broadcast)---------------------------\n> TIP 6: Have you searched our list archives?\n>\n> http://archives.postgresql.org\n>\n\n", "msg_date": "Wed, 4 Sep 2002 11:13:50 -0300 (ADT)", "msg_from": "\"Marc G. Fournier\" <scrappy@hub.org>", "msg_from_op": false, "msg_subject": "Re: GBorg is down" } ]
[ { "msg_contents": "> Shridhar Daithankar dijo: \n> \n> > On 4 Sep 2002 at 3:24, Bruce Momjian wrote:\n> > \n> > > OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> > \n> > Some minor stuff,\n> \n> In the schema changes description:\n> \n> \"Schemas allow users to create objects in their own namespace\n> so two people can have the same table with the same name.\"\n\n> Shouldn't it read \"so two people can have tables with the same name\"\n> ? My point is that the tables are not the same, they just have the\n> same name.\n\nHow about this for a wording:\n\n \"Schemas allow users or applications to have their own namespaces in\n which to create objects. \n\n A typical application of this is to allow creation of tables that\n _appear_ to have the same name. For instance, if some GNOME\n applications were using PostgreSQL to store their configuration, a\n \"GNUMERIC\" namespace might have a table PREFERENCES to store\n preferences for that application, while a \"POWERSHELL\" namespace\n would allow _that_ application to store configuration in a\n PREFERENCES table that is quite distinct from the \"GNUMERIC\" one.\n\n The \"true\" table names may be GNUMERIC.PREFERENCES and\n POWERSHELL.PREFERENCES, but by using Schemas, applications do not\n need to be speckled with gratuitious added prefixes of GNUMERIC or\n POWERSHELL.\"\n\nNote that I'm pointing at \"applications\" as the primary purpose for\nthis, as opposed to \"users.\"\n\nIn the long run, are not applications more likely to be the driving\nforce encouraging the use of schemas?\n--\n(reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\nhttp://www3.sympatico.ca/cbbrowne/unix.html\n\"The most precisely-explained and voluminously-documented user\ninterface \"rule\" can and will be shot to pieces with the introduction\nof a single new priority consideration.\" -- Michael Peck\n\n\n\n\n", "msg_date": "Wed, 04 Sep 2002 12:15:28 -0400", "msg_from": "cbbrowne@cbbrowne.com", 
"msg_from_op": true, "msg_subject": "Re: HISTORY updated, 7.3 branded " }, { "msg_contents": "\nOK, wording updated to add 'applications':\n\n Schemas allow users to create objects in their own namespace\n so two people or applications can have tables with the same\n name. There is also a public schema for shared tables.\n Table/index creation can be restricted by removing\n permissions on the public schema.\n\n\n---------------------------------------------------------------------------\n\ncbbrowne@cbbrowne.com wrote:\n> > Shridhar Daithankar dijo: \n> > \n> > > On 4 Sep 2002 at 3:24, Bruce Momjian wrote:\n> > > \n> > > > OK, the HISTORY file is updated, and 7.3 is branded and ready for beta1.\n> > > \n> > > Some minor stuff,\n> > \n> > In the schema changes description:\n> > \n> > \"Schemas allow users to create objects in their own namespace\n> > so two people can have the same table with the same name.\"\n> \n> > Shouldn't it read \"so two people can have tables with the same name\"\n> > ? My point is that the tables are not the same, they just have the\n> > same name.\n> \n> How about this for a wording:\n> \n> \"Schemas allow users or applications to have their own namespaces in\n> which to create objects. \n> \n> A typical application of this is to allow creation of tables that\n> _appear_ to have the same name. 
For instance, if some GNOME\n> applications were using PostgreSQL to store their configuration, a\n> \"GNUMERIC\" namespace might have a table PREFERENCES to store\n> preferences for that application, while a \"POWERSHELL\" namespace\n> would allow _that_ application to store configuration in a\n> PREFERENCES table that is quite distinct from the \"GNUMERIC\" one.\n> \n> The \"true\" table names may be GNUMERIC.PREFERENCES and\n> POWERSHELL.PREFERENCES, but by using Schemas, applications do not\n> need to be speckled with gratuitious added prefixes of GNUMERIC or\n> POWERSHELL.\"\n> \n> Note that I'm pointing at \"applications\" as the primary purpose for\n> this, as opposed to \"users.\"\n> \n> In the long run, are not applications more likely to be the driving\n> force encouraging the use of schemas?\n> --\n> (reverse (concatenate 'string \"gro.gultn@\" \"enworbbc\"))\n> http://www3.sympatico.ca/cbbrowne/unix.html\n> \"The most precisely-explained and voluminously-documented user\n> interface \"rule\" can and will be shot to pieces with the introduction\n> of a single new priority consideration.\" -- Michael Peck\n> \n> \n> \n> \n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n> \n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 13:02:44 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: HISTORY updated, 7.3 branded" } ]
[ { "msg_contents": "Hi,\n\nI think I figured out why I can't build plperl on unixware 711/OpenUnix 800.\n\nIt seems Makefile.shlib has changed between 722 and 73 and -z text has\nbeen added. However with this on, it fails to build libperl.so\n\nMaybe I'm wrong, but could someone consider this patch.\n\nRegards,\n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)", "msg_date": "Wed, 4 Sep 2002 18:18:19 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Bug in Makefile.shlib" }, { "msg_contents": "----- Original Message ----- \nFrom: \"Olivier PRENANT\" <ohp@pyrenet.fr>\nSent: September 04, 2002 12:18 PM\n\n> I think I figured out why I can't build plperl on unixware 711/OpenUnix 800.\n> \n> It seems Makefile.shlib has changed between 722 and 73 and -z text has\n> been added. 
However with this on, it fails to build libperl.so\n> \n> Maybe I'm wrong, but could someone consider this patch.\n\nYour patch got it backwards :)\n\n-s\n", "msg_date": "Wed, 4 Sep 2002 12:23:11 -0400", "msg_from": "\"Serguei Mokhov\" <mokhov@cs.concordia.ca>", "msg_from_op": false, "msg_subject": "Re: Bug in Makefile.shlib" }, { "msg_contents": "Oops...\n\nThis one should be all right!!\n\nSorry\n\nRegards\nOn Wed, 4 Sep 2002, Serguei Mokhov wrote:\n\n> Date: Wed, 4 Sep 2002 12:23:11 -0400\n> From: Serguei Mokhov <mokhov@cs.concordia.ca>\n> To: ohp@pyrenet.fr, pgsql-hackers list <pgsql-hackers@postgresql.org>\n> Subject: Re: [HACKERS] Bug in Makefile.shlib\n> \n> ----- Original Message ----- \n> From: \"Olivier PRENANT\" <ohp@pyrenet.fr>\n> Sent: September 04, 2002 12:18 PM\n> \n> > I think I figured why I can't buil plperl on unixware 711/OpenUnix 800.\n> > \n> > It seems Makefile.shlib has changed between 722 and 73 and -z text has\n> > been added. However with this on, it fails to build libperl.so\n> > \n> > Maybe I'm wrong, but could someone consider this patch.\n> \n> Your patch got it backwards :)\n> \n> -s\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)", "msg_date": "Wed, 4 Sep 2002 18:51:33 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: Bug in Makefile.shlib" }, { "msg_contents": "Olivier PRENANT <ohp@pyrenet.fr> writes:\n> I think I figured why I can't buil plperl on unixware 711/OpenUnix 800.\n\n> It seems Makefile.shlib has changed between 722 and 73 and -z text has\n> been added.\n\nNot hardly. 
The \"-z text\" option has been in there since at least 6.4.\n6.4's Makefile.shlib has\n\nifeq ($(PORTNAME), unixware)\n ...\n LDFLAGS_SL := -G -z text\n ...\nendif\n\nwhich was cribbed from even older shlib support in other files. We used\nthat up through 7.0 without any revisions. In 7.1 Makefile.shlib was\nrevised pretty heavily; 7.1 has a unixware section that is identical to\ncurrent sources, in particular\n\n LINK.shared\t\t+= -Wl,-z,text -Wl,-h,$(soname)\n\nSo I think this code is pretty well tested and removing the -z option\nis more likely to break things than fix them.\n\nWhat misbehavior are you seeing exactly?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 13:28:39 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in Makefile.shlib " }, { "msg_contents": "On Wed, 2002-09-04 at 12:28, Tom Lane wrote:\n> Olivier PRENANT <ohp@pyrenet.fr> writes:\n> > I think I figured why I can't buil plperl on unixware 711/OpenUnix 800.\n> \n> > It seems Makefile.shlib has changed between 722 and 73 and -z text has\n> > been added.\n> \n> Not hardly. The \"-z text\" option has been in there since at least 6.4.\n> 6.4's Makefile.shlib has\n> \n> ifeq ($(PORTNAME), unixware)\n> ...\n> LDFLAGS_SL := -G -z text\n> ...\n> endif\n> \n> which was cribbed from even older shlib support in other files. We used\n> that up through 7.0 without any revisions. In 7.1 Makefile.shlib was\n> revised pretty heavily; 7.1 has a unixware section that is identical to\n> current sources, in particular\n> \n> LINK.shared\t\t+= -Wl,-z,text -Wl,-h,$(soname)\n> \n> So I think this code is pretty well tested and removing the -z option\n> is more likely to break things than fix them.\n> \n> What misbehavior are you seeing exactly?\nsee my post from ~2 weeks ago on -hackers with a 7.2.[12] problem. \n\nIt flat doesn't work. \n\nI can dig the post up if you want. 
\n\n\n> \n> \t\t\tregards, tom lane\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 5: Have you checked our extensive FAQ?\n> \n> http://www.postgresql.org/users-lounge/docs/faq.html\n> \n-- \nLarry Rosenman http://www.lerctr.org/~ler\nPhone: +1 972-414-9812 E-Mail: ler@lerctr.org\nUS Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749\n\n", "msg_date": "04 Sep 2002 12:37:35 -0500", "msg_from": "Larry Rosenman <ler@lerctr.org>", "msg_from_op": false, "msg_subject": "Re: Bug in Makefile.shlib" }, { "msg_contents": "Well, Tom and Larry,\n\nI've posted already on -hackers but my posts did'nt semm to get through!\n\nThe problem is that at link time, ld complains about text segment beeing\nwritten to in Dynaloader.\n\nThe only way was to remove -Wl,-z text.\n\nI agree this sounded stupid. But I can't think of something else.\nThis is with perl-5.6.1 FWIW\n\nRegards\nOn 4 Sep 2002, Larry Rosenman wrote:\n\n> Date: 04 Sep 2002 12:37:35 -0500\n> From: Larry Rosenman <ler@lerctr.org>\n> To: Tom Lane <tgl@sss.pgh.pa.us>\n> Cc: ohp@pyrenet.fr, pgsql-hackers list <pgsql-hackers@postgresql.org>\n> Subject: Re: [HACKERS] Bug in Makefile.shlib\n> \n> On Wed, 2002-09-04 at 12:28, Tom Lane wrote:\n> > Olivier PRENANT <ohp@pyrenet.fr> writes:\n> > > I think I figured why I can't buil plperl on unixware 711/OpenUnix 800.\n> > \n> > > It seems Makefile.shlib has changed between 722 and 73 and -z text has\n> > > been added.\n> > \n> > Not hardly. The \"-z text\" option has been in there since at least 6.4.\n> > 6.4's Makefile.shlib has\n> > \n> > ifeq ($(PORTNAME), unixware)\n> > ...\n> > LDFLAGS_SL := -G -z text\n> > ...\n> > endif\n> > \n> > which was cribbed from even older shlib support in other files. We used\n> > that up through 7.0 without any revisions. 
In 7.1 Makefile.shlib was\n> > revised pretty heavily; 7.1 has a unixware section that is identical to\n> > current sources, in particular\n> > \n> > LINK.shared\t\t+= -Wl,-z,text -Wl,-h,$(soname)\n> > \n> > So I think this code is pretty well tested and removing the -z option\n> > is more likely to break things than fix them.\n> > \n> > What misbehavior are you seeing exactly?\n> see my post from ~2 weeks ago on -hackers with a 7.2.[12] problem. \n> \n> It flat doesn't work. \n> \n> I can dig the post up if you want. \n> \n> \n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> > \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. 
(St Exupery)\n\n", "msg_date": "Wed, 4 Sep 2002 22:28:22 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: Bug in Makefile.shlib" }, { "msg_contents": "Me again!!\n\nThese are errors I get with orginal Makefile.shlib:\n\nWarning: No bytecode->native mapping for a bytecode\nWarning: JIT compiler failed for org/apache/crimson/parser/Parser2.maybeComment(Z)Z\nUX:ld: INFO: text relocations referenced from files:\nDynaLoader.a(DynaLoader.o)\nUX:ld: ERROR: relocations remain against non-writeable, allocatable section .text\ngmake[3]: *** [libplperl.so.0.0] Error 1\ngmake[2]: *** [all] Error 2\ngmake[1]: *** [all] Error 2\ngmake: *** [all] Error 2\nUX:make: ERROR: fatal error.\n\nRegards,\nOn 4 Sep 2002, Larry Rosenman wrote:\n\n> Date: 04 Sep 2002 12:37:35 -0500\n> From: Larry Rosenman <ler@lerctr.org>\n> To: Tom Lane <tgl@sss.pgh.pa.us>\n> Cc: ohp@pyrenet.fr, pgsql-hackers list <pgsql-hackers@postgresql.org>\n> Subject: Re: [HACKERS] Bug in Makefile.shlib\n> \n> On Wed, 2002-09-04 at 12:28, Tom Lane wrote:\n> > Olivier PRENANT <ohp@pyrenet.fr> writes:\n> > > I think I figured why I can't buil plperl on unixware 711/OpenUnix 800.\n> > \n> > > It seems Makefile.shlib has changed between 722 and 73 and -z text has\n> > > been added.\n> > \n> > Not hardly. The \"-z text\" option has been in there since at least 6.4.\n> > 6.4's Makefile.shlib has\n> > \n> > ifeq ($(PORTNAME), unixware)\n> > ...\n> > LDFLAGS_SL := -G -z text\n> > ...\n> > endif\n> > \n> > which was cribbed from even older shlib support in other files. We used\n> > that up through 7.0 without any revisions. 
In 7.1 Makefile.shlib was\n> > revised pretty heavily; 7.1 has a unixware section that is identical to\n> > current sources, in particular\n> > \n> > LINK.shared\t\t+= -Wl,-z,text -Wl,-h,$(soname)\n> > \n> > So I think this code is pretty well tested and removing the -z option\n> > is more likely to break things than fix them.\n> > \n> > What misbehavior are you seeing exactly?\n> see my post from ~2 weeks ago on -hackers with a 7.2.[12] problem. \n> \n> It flat doesn't work. \n> \n> I can dig the post up if you want. \n> \n> \n> > \n> > \t\t\tregards, tom lane\n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 5: Have you checked our extensive FAQ?\n> > \n> > http://www.postgresql.org/users-lounge/docs/faq.html\n> > \n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Wed, 4 Sep 2002 22:43:15 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: Bug in Makefile.shlib" }, { "msg_contents": "Olivier PRENANT <ohp@pyrenet.fr> writes:\n> The problem is that at link time, ld complains about text segment beeing\n> written to in Dynaloader.\n> I agree this sounded stupid. But I can't think of something else.\n> This is with perl-5.6.1 FWIW\n\nAh. This is a bug in Perl's build process: even if you request a shared\nlibrary, it builds DynaLoader as static code. My own notes about\ninstalling perl 5.6.1 on HPUX read:\n\n\tmake\n\tfix DynaLoader.o per below\n\tmake test\n\tmake install\n\nAt least in 5.6.1, even with \"build shared\" request, DynaLoader.o is not\nmade with +z, which will cause plperl to fail. To fix, simply go into\nperl-5.6.1/ext/DynaLoader, rm DynaLoader.o, and \"make\". 
I wonder why\ntoplevel makefile thinks it's okay to build DynaLoader static??\n\n\nI'm just now in the middle of installing Perl 5.8.0, and it seems that\nthe oversight has been fixed; DynaLoader is now built sharable:\n\ncc -c -Ae -D_HPUX_SOURCE -Wl,+vnocompatwarnings -DDEBUGGING -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -DVERSION=\\\"1.04\\\" -DXS_VERSION=\\\"1.04\\\" +Z \"-I../..\" -DPERL_CORE -DLIBC=\"/lib/libc.sl\" DynaLoader.c\n\nThere seem to be some other problems --- 7.2 plperl dumps core for me\neven with the above fix. Still looking into that (I'm sorta hoping that\n5.8.0 will fix it, but won't know for a little while...)\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 17:14:13 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in Makefile.shlib " }, { "msg_contents": "Hi Tom,\n\nThanks fr your reply.. Not sure I understood!\nI've tried your hack with no luck. Further more, README in\nperl/ext/Dynaloader says it has to be static to be effective.\n\nWhat concerns me more is that with same perl (5.6.1) it compiles ok with\n722.\n\nRegards\nOn Wed, 4 Sep 2002, Tom Lane wrote:\n\n> Date: Wed, 04 Sep 2002 17:14:13 -0400\n> From: Tom Lane <tgl@sss.pgh.pa.us>\n> To: ohp@pyrenet.fr\n> Cc: Larry Rosenman <ler@lerctr.org>,\n> pgsql-hackers list <pgsql-hackers@postgresql.org>\n> Subject: Re: [HACKERS] Bug in Makefile.shlib \n> \n> Olivier PRENANT <ohp@pyrenet.fr> writes:\n> > The problem is that at link time, ld complains about text segment beeing\n> > written to in Dynaloader.\n> > I agree this sounded stupid. But I can't think of something else.\n> > This is with perl-5.6.1 FWIW\n> \n> Ah. This is a bug in Perl's build process: even if you request a shared\n> library, it builds DynaLoader as static code. 
My own notes about\n> installing perl 5.6.1 on HPUX read:\n> \n> \tmake\n> \tfix DynaLoader.o per below\n> \tmake test\n> \tmake install\n> \n> At least in 5.6.1, even with \"build shared\" request, DynaLoader.o is not\n> made with +z, which will cause plperl to fail. To fix, simply go into\n> perl-5.6.1/ext/DynaLoader, rm DynaLoader.o, and \"make\". I wonder why\n> toplevel makefile thinks it's okay to build DynaLoader static??\n> \n> \n> I'm just now in the middle of installing Perl 5.8.0, and it seems that\n> the oversight has been fixed; DynaLoader is now built sharable:\n> \n> cc -c -Ae -D_HPUX_SOURCE -Wl,+vnocompatwarnings -DDEBUGGING -I/usr/local/include -D_LARGEFILE_SOURCE -D_FILE_OFFSET_BITS=64 -g -DVERSION=\\\"1.04\\\" -DXS_VERSION=\\\"1.04\\\" +Z \"-I../..\" -DPERL_CORE -DLIBC=\"/lib/libc.sl\" DynaLoader.c\n> \n> There seem to be some other problems --- 7.2 plperl dumps core for me\n> even with the above fix. Still looking into that (I'm sorta hoping that\n> 5.8.0 will fix it, but won't know for a little while...)\n> \n> \t\t\tregards, tom lane\n> \n\n-- \nOlivier PRENANT \tTel:\t+33-5-61-50-97-00 (Work)\nQuartier d'Harraud Turrou +33-5-61-50-97-01 (Fax)\n31190 AUTERIVE +33-6-07-63-80-64 (GSM)\nFRANCE Email: ohp@pyrenet.fr\n------------------------------------------------------------------------------\nMake your life a dream, make your dream a reality. (St Exupery)\n\n", "msg_date": "Wed, 4 Sep 2002 23:59:20 +0200 (MET DST)", "msg_from": "Olivier PRENANT <ohp@pyrenet.fr>", "msg_from_op": true, "msg_subject": "Re: Bug in Makefile.shlib " }, { "msg_contents": "Olivier PRENANT <ohp@pyrenet.fr> writes:\n> Thanks fr your reply.. Not sure I understood!\n> I've tried your hack with no luck. 
Further more, README in\n> perl/ext/Dynaloader says it has to be static to be effective.\n\nThat's talking about whether it's linked into perl, not whether it's\ncompiled PIC or not.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 18:17:01 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Bug in Makefile.shlib " } ]
[ { "msg_contents": "OK, I talked to Marc and he is going to package up beta1 tonight.\n\nAny more changes to HISTORY?\n\nI want to run pgindent in an hour. Does anyone have a problem with\nthat?\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 13:09:26 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Beta1 schedule" }, { "msg_contents": "Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> Any more changes to HISTORY?\n\nThis is missing:\n\n- Implemented START TRANSACTION, per SQL99 (Neil)\n\nThis was implemented by Peter, I think:\n\n- Add privileges on functions and procedural languages\n\nThis was done by Tom, IIRC:\n\n- Triggers are now fired in alphabetical order\n\nThis could probably be better phrased:\n\n- Have PL/PgSQL FOUND return proper value for PERFORM and SELECT INTO\n (Tom, Neil)\n\nAs (reword as necessary):\n\n- Overhaul the PL/PgSQL FOUND magic variable to be more\n Oracle-compatible, and generally more sane. (Tom, Neil)\n\nThis was implemented by me and Jukka Holappa:\n\n- Clean up use of sprintf in favor of snprintf()\n\nTom did some work on this as well as Chris, I believe:\n\n- Add ALTER TABLE DROP COLUMN (Christopher)\n\nI didn't see any mention of Tom's recent changes to the on-disk array\nstorage format, but I might have just missed that.\n\nCheers,\n\nNeil\n\n-- \nNeil Conway <neilc@samurai.com> || PGP Key ID: DB3C29FC\n\n", "msg_date": "04 Sep 2002 18:17:17 -0400", "msg_from": "Neil Conway <neilc@samurai.com>", "msg_from_op": false, "msg_subject": "Re: Beta1 schedule" }, { "msg_contents": "Neil Conway wrote:\n> Bruce Momjian <pgman@candle.pha.pa.us> writes:\n> > Any more changes to HISTORY?\n> \n> This is missing:\n> \n> - Implemented START TRANSACTION, per SQL99 (Neil)\n\nDone. 
I didn't mention it earlier because I was unsure of its\nsignificance.\n\n> This was implemented by Peter, I think:\n> \n> - Add privileges on functions and procedural languages\n\nYep, fixed.\n\n> \n> This was done by Tom, IIRC:\n> \n> - Triggers are now fired in alphabetical order\n\nYep, Tom.\n\n> \n> This could probably be better phrased:\n> \n> - Have PL/PgSQL FOUND return proper value for PERFORM and SELECT INTO\n> (Tom, Neil)\n\nDone.\n\n> \n> As (reword as necessary):\n> \n> - Overhaul the PL/PgSQL FOUND magic variable to be more\n> Oracle-compatible, and generally more sane. (Tom, Neil)\n> \n> This was implemented by me and Jukka Holappa:\n> \n> - Clean up use of sprintf in favor of snprintf()\n\n\nAdded.\n\n> \n> Tom did some work on this as well as Chris, I believe:\n> \n> - Add ALTER TABLE DROP COLUMN (Christopher)\n> \n\nAdded.\n\n> I didn't see any mention of Tom's recent changes to the on-disk array\n> storage format, but I might have just missed that.\n\nI already see it. I added Tom's name too:\n\nCleanups in array internal handling (Joe, Tom)\n\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 19:08:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": true, "msg_subject": "Re: Beta1 schedule" }, { "msg_contents": "Line 72 of the HISTORY file (the one in 7.3b1 that Marc just packaged) reads:\n\n* Pre-6.3 clients are no longer supported.\n\nIs that supposed to be 7.3? I assume you're referring to the catalog changes, \n&c. that make old clients that are dependent on them behave incorrectly.\n\nRegards,\n\tJeff Davis\n\nOn Wednesday 04 September 2002 10:09 am, Bruce Momjian wrote:\n> OK, I talked to Marc and he is going to package up beta1 tonight.\n>\n> Any more changes to HISTORY?\n>\n> I want to run pgindent in an hour. 
Does anyone have a problem with\n> that?\n\n", "msg_date": "Wed, 4 Sep 2002 23:11:23 -0700", "msg_from": "Jeff Davis <list-pgsql-hackers@empires.org>", "msg_from_op": false, "msg_subject": "Re: Beta1 schedule" }, { "msg_contents": "> * Pre-6.3 clients are no longer supported.\n>\n> Is that supposed to be 7.3? I assume you're referring to the\n> catalog changes,\n> &c. that make old clients that are dependent on them behave incorrectly.\n>\n> Regards,\n> \tJeff Davis\n\nNo, he's referring to the on-the-wire protocol. This means that the psql\nprogram that came with 6.2 will not work with 7.3, but the 7.2 psql will\nstill (mostly) work.\n\nChris\n\n", "msg_date": "Thu, 5 Sep 2002 14:18:13 +0800", "msg_from": "\"Christopher Kings-Lynne\" <chriskl@familyhealth.com.au>", "msg_from_op": false, "msg_subject": "Re: Beta1 schedule" } ]
[ { "msg_contents": "Hi,\n\nUnder what conditions would the following statement cause the USERS\ntable to lock out selects?\n\n\nalter table my_coupons\n add constraint FK_mc_user_id\n FOREIGN KEY (mc_frn_user_id)\n REFERENCES users(user_ID);\n\n\nss\n\nScott Shattuck\nTechnical Pursuit Inc.\n\n\n\n", "msg_date": "04 Sep 2002 15:11:42 -0600", "msg_from": "Scott Shattuck <ss@technicalpursuit.com>", "msg_from_op": true, "msg_subject": "locking of referenced table during constraint construction" }, { "msg_contents": "\nOn 4 Sep 2002, Scott Shattuck wrote:\n\n> Under what conditions would the following statement cause the USERS\n> table to lock out selects?\n>\n>\n> alter table my_coupons\n> add constraint FK_mc_user_id\n> FOREIGN KEY (mc_frn_user_id)\n> REFERENCES users(user_ID);\n\nIf I'm reading code correctly, an exclusive lock\non the pk table is grabbed which will block selects\nas well. You're effectively altering both tables\n(you need to add triggers to both tables) and\nboth get locked.\n\n\n", "msg_date": "Wed, 4 Sep 2002 14:51:30 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: locking of referenced table during constraint construction" }, { "msg_contents": "On Wed, 2002-09-04 at 15:51, Stephan Szabo wrote:\n> \n> On 4 Sep 2002, Scott Shattuck wrote:\n> \n> > Under what conditions would the following statement cause the USERS\n> > table to lock out selects?\n> >\n> >\n> > alter table my_coupons\n> > add constraint FK_mc_user_id\n> > FOREIGN KEY (mc_frn_user_id)\n> > REFERENCES users(user_ID);\n> \n> If I'm reading code correctly, an exclusive lock\n> on the pk table is grabbed which will block selects\n> as well. 
You're effectively altering both tables\n> (you need to add triggers to both tables) and\n> both get locked.\n> \n> \n\nOk, if I understand things correctly the USERS table gets a constraint\nthat says don't delete/update the USER_ID in any way that would orphan a\nrow in the MY_COUPONS table. The MY_COUPONS table gets one that says\ndon't insert/update MC_FRN_USER_ID such that it isn't found in\nUSERS.USER_ID. \n\nBut...\n\nThere are no rows in the my_coupons table so it's not possible to orphan\na row there -- were it even the case that an update or delete were\nrunning...which they aren't. Even if there were rows in the referring\ntable I don't understand why an exclusive table-level lock is being\ntaken out to add a trigger. If I add user-level triggers to do the same\ntask they go in without a hitch but cause other problems in 7.2 since I\ncan't control their order of execution yet (thanks Tom for the 7.3\npatch! :)).\n\nss\n\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)\n\n\n", "msg_date": "04 Sep 2002 22:29:24 -0600", "msg_from": "Scott Shattuck <ss@technicalpursuit.com>", "msg_from_op": true, "msg_subject": "Re: locking of referenced table during constraint" }, { "msg_contents": "Scott Shattuck <ss@technicalpursuit.com> writes:\n> ... I don't understand why an exclusive table-level lock is being\n> taken out to add a trigger.\n\nWell, that's a schema change; it makes sense to me to forbid access\nwhile we're changing a table's schema.\n\nI think this discussion may just be a miscommunication: it's not clear\nto me whether you're complaining about adding a trigger or just firing\na trigger. 
The former is not a time-critical task in my book ...\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 00:49:56 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: locking of referenced table during constraint " }, { "msg_contents": "On Wed, 2002-09-04 at 22:49, Tom Lane wrote:\n> Scott Shattuck <ss@technicalpursuit.com> writes:\n> > ... I don't understand why an exclusive table-level lock is being\n> > taken out to add a trigger.\n> \n> Well, that's a schema change; it makes sense to me to forbid access\n> while we're changing a table's schema.\n> \n\nNo. In my book a schema change would alter the data a query would see --\nas in drop column, or add column, etc. This is simply a \"don't let a\ndelete/update get past this trigger from this point forward\". That's not\na bar-the-gates kind of scenario to me. More like \"for any DML operating\nafter the current version stamp make sure this trigger runs.\" Why lock\nanything? \n\nOne scenario I can see. A delete starting at T0 doesn't see a trigger.\nThe alter occurs at T1 but, due to ACID, the delete doesn't see it. The\ndelete tries to commit at T2. Unfortunately, in that scenario you can\nenvision an issue since it would seem the delete should fail since the\nalter is done, but the delete's transaction shouldn't be able to be\naffected by things starting after it does. So, a conflict. But only for\na delete or update. Selects already have transaction isolation\nlevels...why don't they allow the selects to read through adding a\nconstraint?\n\nI have other serious issues with locking and FK constraints as it is.\nThey often cause us serious performance problems. 
Sadly, the longer I\nuse PG and get hammered by locking issues surrounding the FK constraint\nimplementation the less I find myself likely to suggest PG for similar\ncustomers in the future.\n\n> I think this discussion may just be a miscommunication: it's not clear\n> to me whether you're complaining about adding a trigger or just firing\n> a trigger. The former is not a time-critical task in my book ...\n> \n\nIt becomes time critical when the table has 3 million user account\nentries and the lock blocks people from having their login name\nverified, causing what's supposed to be a 24x7 e-commerce site to\nessentially go offline to users for 5 minutes or more just so you can\nadd a constraint to a new table with no rows. Sorry, but that sucks.\n\nss\n\n\n", "msg_date": "04 Sep 2002 23:16:21 -0600", "msg_from": "Scott Shattuck <ss@technicalpursuit.com>", "msg_from_op": true, "msg_subject": "Re: locking of referenced table during constraint" }, { "msg_contents": "\nOn 4 Sep 2002, Scott Shattuck wrote:\n\n> On Wed, 2002-09-04 at 15:51, Stephan Szabo wrote:\n> >\n> > On 4 Sep 2002, Scott Shattuck wrote:\n> >\n> > > Under what conditions would the following statement cause the USERS\n> > > table to lock out selects?\n> > >\n> > >\n> > > alter table my_coupons\n> > > add constraint FK_mc_user_id\n> > > FOREIGN KEY (mc_frn_user_id)\n> > > REFERENCES users(user_ID);\n> >\n> > If I'm reading code correctly, an exclusive lock\n> > on the pk table is grabbed which will block selects\n> > as well. You're effectively altering both tables\n> > (you need to add triggers to both tables) and\n> > both get locked.\n> >\n> >\n>\n> Ok, if I understand things correctly the USERS table gets a constraint\n> that says don't delete/update the USER_ID in any way that would orphan a\n> row in the MY_COUPONS table. 
The MY_COUPONS table gets one that says\n> don't insert/update MC_FRN_USER_ID such that it isn't found in\n> USERS.USER_ID.\n>\n> But...\n>\n> There are no rows in the my_coupons table so it's not possible to orphan\n> a row there -- were it even the case that an update or delete were\n> running...which they aren't. Even if there were rows in the referring\n> table I don't understand why an exclusive table-level lock is being\n> taken out to add a trigger. If I add user-level triggers to do the same\n> task they go in without a hitch but cause other problems in 7.2 since I\n> can't control their order of execution yet (thanks Tom for the 7.3\n> patch! :)).\n\nI see the same behavior with user triggers (on my 7.3 devel box) if\nyou don't commit the transaction that selects against the table that\nis having the trigger added to it block until the transaction that\ndid the create trigger is committed or aborted. I think I must\nbe misunderstanding the symptoms.\n\n\n", "msg_date": "Wed, 4 Sep 2002 22:19:03 -0700 (PDT)", "msg_from": "Stephan Szabo <sszabo@megazone23.bigpanda.com>", "msg_from_op": false, "msg_subject": "Re: locking of referenced table during constraint" }, { "msg_contents": "Scott Shattuck <ss@technicalpursuit.com> writes:\n> ...why don't they allow the selects to read through adding a\n> constraint?\n\nHmm. We could probably allow that --- at least for some forms of\nALTER TABLE, a ShareRowExclusive lock ought to be good enough.\n(That would allow SELECT and SELECT FOR UPDATE to run in parallel,\nbut not any actual data changes.) Offhand I think this would be okay\nfor trigger changes, since SELECT and SELECT FOR UPDATE are unaffected\nby triggers. 
I'm less sure that it's safe for any other kind of ALTER.\n\n> It becomes time critical when the table has 3 million user account\n> entries and the lock blocks people from having their login name\n> verified, causing what's supposed to be a 24x7 e-commerce site to\n> essentially go offline to users for 5 minutes or more just so you can\n> add a constraint to a new table with no rows. Sorry, but that sucks.\n\nThe only way ALTER TABLE ADD CONSTRAINT could take five minutes is if\nyou are putting a new constraint on a large existing table. I don't\nreally see how you can expect that to be a free operation --- the system\nhas to look through all the existing rows to verify the constraint is\nmet. Fooling with the schema of large production tables is not\nsomething you're going to do without downtime in *any* DB.\n\n\t\t\tregards, tom lane\n", "msg_date": "Thu, 05 Sep 2002 09:34:37 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: locking of referenced table during constraint " } ]
[ { "msg_contents": "Call me crazy, but shouldn't the following work? :~|\n\n\nCREATE FUNCTION t() RETURNS TEXT AS '\nDECLARE\n\tcol_name pg_catalog.pg_attribute.attname%TYPE;\nBEGIN\n\tcol_name := ''uga'';\n\tRETURN col_name;\nEND;\n' LANGUAGE 'plpgsql';\nCREATE FUNCTION\n\n\nSELECT t();\nWARNING: plpgsql: ERROR during compile of t near line 2\nERROR: Invalid type name 'pg_catalog.pg_attribute.attname % TYPE'\n\n\n-sc\n\n-- \nSean Chittenden", "msg_date": "Wed, 4 Sep 2002 14:53:18 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "Schemas not available for pl/pgsql %TYPE...." }, { "msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> Call me crazy, but shouldn't the following work? :~|\n\nSure should. Want to fix plpgsql's parser?\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 18:16:00 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE.... " }, { "msg_contents": "> Sean Chittenden <sean@chittenden.org> writes:\n> > Call me crazy, but shouldn't the following work? :~|\n> \n> Sure should. Want to fix plpgsql's parser?\n\nWhy not: I've never been one to avoid strapping on 4tons in rocks and\njumping into the deep end. ::sigh:: Is it me or does it look like all\nof pl/pgsql is schema un-aware (ie, all of the declarations). -sc\n\n-- \nSean Chittenden\n", "msg_date": "Wed, 4 Sep 2002 17:38:00 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE...." }, { "msg_contents": "Sean Chittenden <sean@chittenden.org> writes:\n> ::sigh:: Is it me or does it look like all\n> of pl/pgsql is schema un-aware (ie, all of the declarations). -sc\n\nYeah. The group of routines parse_word, parse_dblword, etc that are\ncalled by the lexer certainly all need work. 
There are some\ndefinitional issues to think about, too --- plpgsql presently relies on\nthe number of names to give it some idea of what to look for, and those\nrules are probably all toast now. Please come up with a sketch of what\nyou think the behavior should be before you start hacking code.\n\n\t\t\tregards, tom lane\n", "msg_date": "Wed, 04 Sep 2002 20:47:52 -0400", "msg_from": "Tom Lane <tgl@sss.pgh.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE.... " }, { "msg_contents": "Tom Lane wrote:\n> Sean Chittenden <sean@chittenden.org> writes:\n> > ::sigh:: Is it me or does it look like all\n> > of pl/pgsql is schema un-aware (ie, all of the declarations). -sc\n> \n> Yeah. The group of routines parse_word, parse_dblword, etc that are\n> called by the lexer certainly all need work. There are some\n> definitional issues to think about, too --- plpgsql presently relies on\n> the number of names to give it some idea of what to look for, and those\n> rules are probably all toast now. Please come up with a sketch of what\n> you think the behavior should be before you start hacking code.\n\nAdded to TODO:\n\n\t o Make PL/PgSQL %TYPE schema-aware\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 4 Sep 2002 21:46:36 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE...." }, { "msg_contents": "> > ::sigh:: Is it me or does it look like all\n> > of pl/pgsql is schema un-aware (ie, all of the declarations). -sc\n> \n> Yeah. The group of routines parse_word, parse_dblword, etc that are\n> called by the lexer certainly all need work. 
There are some\n> definitional issues to think about, too --- plpgsql presently relies\n> on the number of names to give it some idea of what to look for, and\n> those rules are probably all toast now. Please come up with a\n> sketch of what you think the behavior should be before you start\n> hacking code.\n\nNot a problem there. I walked around the code for a bit, made a few\nhacks to see how things are working, and I can tell you straight up that\nif you'd like this by 7.3, it won't be happening from me. <:~) I'm\nstretched kinda thin as is and don't think I'll be able to get this\nworking correctly with time to test by release. I can send you the\npatch I've got for the lexer, but that was chump. What I was going to\ndo could be totally wrong, but...\n\n* Change the lexer to recognize schema.table.column%TYPE as a token\n and was going to create parse_tripwordtype() that'd look up the\n table and column in the appropriate schema and would return the\n appropriate type.\n\nIf I were lazy, I'd just unshift the schema off of the token and\nreturn what comes back from parse_dblwordtype(), but that doesn't\nstrike me as correct for something that's performance sensitive.\nBeyond doing that, I'm at a loss. :-/ Thoughts? -sc\n\n-- \nSean Chittenden\n", "msg_date": "Thu, 5 Sep 2002 13:07:00 -0700", "msg_from": "Sean Chittenden <sean@chittenden.org>", "msg_from_op": true, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE...." }, { "msg_contents": "Sean Chittenden wrote:\n> Not a problem there. I walked around the code for a bit, made a few\n> hacks to see how things are working, and I can tell you straight up that\n> if you'd like this by 7.3, it won't be happening from me. <:~) I'm\n> stretched kinda thin as is and don't think I'll be able to get this\n> working correctly with time to test by release. 
I can send you the\n> patch I've got for the lexer, but that was chump.\n\nIf you want to send me what you've done so far, I'll take a look and see \nif I can figure it out. I think this is probably a must do item for 7.3.\n\nAny further guidance or thoughts?\n\nThanks,\n\nJoe\n\n\n", "msg_date": "Thu, 05 Sep 2002 18:33:14 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE...." }, { "msg_contents": "Tom Lane wrote:\n> Sean Chittenden <sean@chittenden.org> writes:\n> \n>>::sigh:: Is it me or does it look like all\n>>of pl/pgsql is schema un-aware (ie, all of the declarations). -sc\n> \n> \n> Yeah. The group of routines parse_word, parse_dblword, etc that are\n> called by the lexer certainly all need work. There are some\n> definitional issues to think about, too --- plpgsql presently relies on\n> the number of names to give it some idea of what to look for, and those\n> rules are probably all toast now. Please come up with a sketch of what\n> you think the behavior should be before you start hacking code.\n\nAttached is a diff -c format proposal to fix this. I've also attached a short \ntest script. Seems to work OK and passes all regression tests.\n\nHere's a breakdown of how I understand plpgsql's \"Special word rules\" -- I \nthink it illustrates the behavior reasonably well. 
New functions added by this \npatch are plpgsql_parse_tripwordtype and plpgsql_parse_dblwordrowtype:\n\n============================================================================\nIdentifiers (represents) parsing function\n----------------------------------------------------------------------------\nidentifier plpgsql_parse_word\n tg_argv\n T_LABEL (label)\n T_VARIABLE (variable)\n T_RECORD (record)\n T_ROW (row)\n----------------------------------------------------------------------------\nidentifier.identifier plpgsql_parse_dblword\n T_LABEL\n T_VARIABLE (label.variable)\n T_RECORD (label.record)\n T_ROW (label.row)\n T_RECORD\n T_VARIABLE (record.variable)\n T_ROW\n T_VARIABLE (row.variable)\n----------------------------------------------------------------------------\nidentifier.identifier.identifier plpgsql_parse_tripword\n T_LABEL\n T_RECORD\n T_VARIABLE (label.record.variable)\n T_ROW\n T_VARIABLE (label.row.variable)\n----------------------------------------------------------------------------\nidentifier%TYPE plpgsql_parse_wordtype\n T_VARIABLE\n T_DTYPE (variable%TYPE)\n T_DTYPE (typname%TYPE)\n----------------------------------------------------------------------------\nidentifier.identifier%TYPE plpgsql_parse_dblwordtype\n T_LABEL\n T_VARIABLE\n T_DTYPE (label.variable%TYPE)\n T_DTYPE (relname.attname%TYPE)\n----------------------------------------------------------------------------\n<new>\nidentifier.identifier.identifier%TYPE plpgsql_parse_tripwordtype\n T_DTYPE (nspname.relname.attname%TYPE)\n----------------------------------------------------------------------------\nidentifier%ROWTYPE plpgsql_parse_wordrowtype\n T_DTYPE (relname%ROWTYPE)\n----------------------------------------------------------------------------\n<new>\nidentifier.identifier%ROWTYPE plpgsql_parse_dblwordrowtype\n T_DTYPE (nspname.relname%ROWTYPE)\n\n============================================================================\nParameters - parallels the 
above\n----------------------------------------------------------------------------\n$# plpgsql_parse_word\n$#.identifier plpgsql_parse_dblword\n$#.identifier.identifier plpgsql_parse_tripword\n$#%TYPE plpgsql_parse_wordtype\n$#.identifier%TYPE plpgsql_parse_dblwordtype\n$#.identifier.identifier%TYPE plpgsql_parse_tripwordtype\n$#%ROWTYPE plpgsql_parse_wordrowtype\n$#.identifier%ROWTYPE plpgsql_parse_dblwordrowtype\n\nComments?\n\nThanks,\n\nJoe", "msg_date": "Sun, 08 Sep 2002 22:31:26 -0700", "msg_from": "Joe Conway <mail@joeconway.com>", "msg_from_op": false, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE...." }, { "msg_contents": "\nYour patch has been added to the PostgreSQL unapplied patches list at:\n\n\thttp://candle.pha.pa.us/cgi-bin/pgpatches\n\nI will try to apply it within the next 48 hours.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Sean Chittenden <sean@chittenden.org> writes:\n> > \n> >>::sigh:: Is it me or does it look like all\n> >>of pl/pgsql is schema un-aware (ie, all of the declarations). -sc\n> > \n> > \n> > Yeah. The group of routines parse_word, parse_dblword, etc that are\n> > called by the lexer certainly all need work. There are some\n> > definitional issues to think about, too --- plpgsql presently relies on\n> > the number of names to give it some idea of what to look for, and those\n> > rules are probably all toast now. Please come up with a sketch of what\n> > you think the behavior should be before you start hacking code.\n> \n> Attached is a diff -c format proposal to fix this. I've also attached a short \n> test script. Seems to work OK and passes all regression tests.\n> \n> Here's a breakdown of how I understand plpgsql's \"Special word rules\" -- I \n> think it illustrates the behavior reasonably well. 
New functions added by this \n> patch are plpgsql_parse_tripwordtype and plpgsql_parse_dblwordrowtype:\n> \n> ============================================================================\n> Identifiers (represents) parsing function\n> ----------------------------------------------------------------------------\n> identifier plpgsql_parse_word\n> tg_argv\n> T_LABEL (label)\n> T_VARIABLE (variable)\n> T_RECORD (record)\n> T_ROW (row)\n> ----------------------------------------------------------------------------\n> identifier.identifier plpgsql_parse_dblword\n> T_LABEL\n> T_VARIABLE (label.variable)\n> T_RECORD (label.record)\n> T_ROW (label.row)\n> T_RECORD\n> T_VARIABLE (record.variable)\n> T_ROW\n> T_VARIABLE (row.variable)\n> ----------------------------------------------------------------------------\n> identifier.identifier.identifier plpgsql_parse_tripword\n> T_LABEL\n> T_RECORD\n> T_VARIABLE (label.record.variable)\n> T_ROW\n> T_VARIABLE (label.row.variable)\n> ----------------------------------------------------------------------------\n> identifier%TYPE plpgsql_parse_wordtype\n> T_VARIABLE\n> T_DTYPE (variable%TYPE)\n> T_DTYPE (typname%TYPE)\n> ----------------------------------------------------------------------------\n> identifier.identifier%TYPE plpgsql_parse_dblwordtype\n> T_LABEL\n> T_VARIABLE\n> T_DTYPE (label.variable%TYPE)\n> T_DTYPE (relname.attname%TYPE)\n> ----------------------------------------------------------------------------\n> <new>\n> identifier.identifier.identifier%TYPE plpgsql_parse_tripwordtype\n> T_DTYPE (nspname.relname.attname%TYPE)\n> ----------------------------------------------------------------------------\n> identifier%ROWTYPE plpgsql_parse_wordrowtype\n> T_DTYPE (relname%ROWTYPE)\n> ----------------------------------------------------------------------------\n> <new>\n> identifier.identifier%ROWTYPE plpgsql_parse_dblwordrowtype\n> T_DTYPE (nspname.relname%ROWTYPE)\n> \n> 
============================================================================\n> Parameters - parallels the above\n> ----------------------------------------------------------------------------\n> $# plpgsql_parse_word\n> $#.identifier plpgsql_parse_dblword\n> $#.identifier.identifier plpgsql_parse_tripword\n> $#%TYPE plpgsql_parse_wordtype\n> $#.identifier%TYPE plpgsql_parse_dblwordtype\n> $#.identifier.identifier%TYPE plpgsql_parse_tripwordtype\n> $#%ROWTYPE plpgsql_parse_wordrowtype\n> $#.identifier%ROWTYPE plpgsql_parse_dblwordrowtype\n> \n> Comments?\n> \n> Thanks,\n> \n> Joe\n\n> Index: src/pl/plpgsql/src/pl_comp.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/pl_comp.c,v\n> retrieving revision 1.51\n> diff -c -r1.51 pl_comp.c\n> *** src/pl/plpgsql/src/pl_comp.c\t4 Sep 2002 20:31:47 -0000\t1.51\n> --- src/pl/plpgsql/src/pl_comp.c\t9 Sep 2002 04:22:24 -0000\n> ***************\n> *** 1092,1097 ****\n> --- 1092,1217 ----\n> \treturn T_DTYPE;\n> }\n> \n> + /* ----------\n> + * plpgsql_parse_tripwordtype\t\tSame lookup for word.word.word%TYPE\n> + * ----------\n> + */\n> + #define TYPE_JUNK_LEN\t5\n> + \n> + int\n> + plpgsql_parse_tripwordtype(char *word)\n> + {\n> + \tOid\t\t\tclassOid;\n> + \tHeapTuple\tclasstup;\n> + \tForm_pg_class classStruct;\n> + \tHeapTuple\tattrtup;\n> + \tForm_pg_attribute attrStruct;\n> + \tHeapTuple\ttypetup;\n> + \tForm_pg_type typeStruct;\n> + \tPLpgSQL_type *typ;\n> + \tchar\t *cp[2];\n> + \tint\t\t\tqualified_att_len;\n> + \tint\t\t\tnumdots = 0;\n> + \tint\t\t\ti;\n> + \tRangeVar *relvar;\n> + \n> + \t/* Do case conversion and word separation */\n> + \tqualified_att_len = strlen(word) - TYPE_JUNK_LEN;\n> + \tAssert(word[qualified_att_len] == '%');\n> + \n> + \tfor (i = 0; i < qualified_att_len; i++)\n> + \t{\n> + \t\tif (word[i] == '.' 
&& ++numdots == 2)\n> + \t\t{\n> + \t\t\tcp[0] = (char *) palloc((i + 1) * sizeof(char));\n> + \t\t\tmemset(cp[0], 0, (i + 1) * sizeof(char));\n> + \t\t\tmemcpy(cp[0], word, i * sizeof(char));\n> + \n> + \t\t\t/* qualified_att_len - one based position + 1 (null terminator) */\n> + \t\t\tcp[1] = (char *) palloc((qualified_att_len - i) * sizeof(char));\n> + \t\t\tmemset(cp[1], 0, (qualified_att_len - i) * sizeof(char));\n> + \t\t\tmemcpy(cp[1], &word[i + 1], (qualified_att_len - i - 1) * sizeof(char));\n> + \n> + \t\t\tbreak;\n> + \t\t}\n> + \t}\n> + \n> + \trelvar = makeRangeVarFromNameList(stringToQualifiedNameList(cp[0], \"plpgsql_parse_dblwordtype\"));\n> + \tclassOid = RangeVarGetRelid(relvar, true);\n> + \tif (!OidIsValid(classOid))\n> + \t{\n> + \t\tpfree(cp[0]);\n> + \t\tpfree(cp[1]);\n> + \t\treturn T_ERROR;\n> + \t}\n> + \tclasstup = SearchSysCache(RELOID,\n> + \t\t\t\t\t\t\t ObjectIdGetDatum(classOid),\n> + \t\t\t\t\t\t\t 0, 0, 0);\n> + \tif (!HeapTupleIsValid(classtup))\n> + \t{\n> + \t\tpfree(cp[0]);\n> + \t\tpfree(cp[1]);\n> + \t\treturn T_ERROR;\n> + \t}\n> + \n> + \t/*\n> + \t * It must be a relation, sequence, view, or type\n> + \t */\n> + \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> + \tif (classStruct->relkind != RELKIND_RELATION &&\n> + \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> + \t\tclassStruct->relkind != RELKIND_VIEW &&\n> + \t\tclassStruct->relkind != RELKIND_COMPOSITE_TYPE)\n> + \t{\n> + \t\tReleaseSysCache(classtup);\n> + \t\tpfree(cp[0]);\n> + \t\tpfree(cp[1]);\n> + \t\treturn T_ERROR;\n> + \t}\n> + \n> + \t/*\n> + \t * Fetch the named table field and it's type\n> + \t */\n> + \tattrtup = SearchSysCacheAttName(classOid, cp[1]);\n> + \tif (!HeapTupleIsValid(attrtup))\n> + \t{\n> + \t\tReleaseSysCache(classtup);\n> + \t\tpfree(cp[0]);\n> + \t\tpfree(cp[1]);\n> + \t\treturn T_ERROR;\n> + \t}\n> + \tattrStruct = (Form_pg_attribute) GETSTRUCT(attrtup);\n> + \n> + \ttypetup = SearchSysCache(TYPEOID,\n> + \t\t\t\t\t\t\t 
ObjectIdGetDatum(attrStruct->atttypid),\n> + \t\t\t\t\t\t\t 0, 0, 0);\n> + \tif (!HeapTupleIsValid(typetup))\n> + \t\telog(ERROR, \"cache lookup for type %u of %s.%s failed\",\n> + \t\t\t attrStruct->atttypid, cp[0], cp[1]);\n> + \ttypeStruct = (Form_pg_type) GETSTRUCT(typetup);\n> + \n> + \t/*\n> + \t * Found that - build a compiler type struct and return it\n> + \t */\n> + \ttyp = (PLpgSQL_type *) malloc(sizeof(PLpgSQL_type));\n> + \n> + \ttyp->typname = strdup(NameStr(typeStruct->typname));\n> + \ttyp->typoid = attrStruct->atttypid;\n> + \tperm_fmgr_info(typeStruct->typinput, &(typ->typinput));\n> + \ttyp->typelem = typeStruct->typelem;\n> + \ttyp->typbyval = typeStruct->typbyval;\n> + \ttyp->typlen = typeStruct->typlen;\n> + \ttyp->atttypmod = attrStruct->atttypmod;\n> + \n> + \tplpgsql_yylval.dtype = typ;\n> + \n> + \tReleaseSysCache(classtup);\n> + \tReleaseSysCache(attrtup);\n> + \tReleaseSysCache(typetup);\n> + \tpfree(cp[0]);\n> + \tpfree(cp[1]);\n> + \treturn T_DTYPE;\n> + }\n> \n> /* ----------\n> * plpgsql_parse_wordrowtype\t\tScanner found word%ROWTYPE.\n> ***************\n> *** 1125,1130 ****\n> --- 1245,1290 ----\n> \n> \tpfree(cp[0]);\n> \tpfree(cp[1]);\n> + \n> + \treturn T_ROW;\n> + }\n> + \n> + /* ----------\n> + * plpgsql_parse_dblwordrowtype\t\tScanner found word.word%ROWTYPE.\n> + *\t\t\tSo word must be namespace qualified a table name.\n> + * ----------\n> + */\n> + #define ROWTYPE_JUNK_LEN\t8\n> + \n> + int\n> + plpgsql_parse_dblwordrowtype(char *word)\n> + {\n> + \tOid\t\t\tclassOid;\n> + \tchar\t *cp;\n> + \tint\t\t\ti;\n> + \tRangeVar *relvar;\n> + \n> + \t/* Do case conversion and word separation */\n> + \t/* We convert %rowtype to .rowtype momentarily to keep converter happy */\n> + \ti = strlen(word) - ROWTYPE_JUNK_LEN;\n> + \tAssert(word[i] == '%');\n> + \n> + \tcp = (char *) palloc((i + 1) * sizeof(char));\n> + \tmemset(cp, 0, (i + 1) * sizeof(char));\n> + \tmemcpy(cp, word, i * sizeof(char));\n> + \n> + \t/* Lookup the relation 
*/\n> + \trelvar = makeRangeVarFromNameList(stringToQualifiedNameList(cp, \"plpgsql_parse_dblwordtype\"));\n> + \tclassOid = RangeVarGetRelid(relvar, true);\n> + \tif (!OidIsValid(classOid))\n> + \t\telog(ERROR, \"%s: no such class\", cp);\n> + \n> + \t/*\n> + \t * Build and return the complete row definition\n> + \t */\n> + \tplpgsql_yylval.row = build_rowtype(classOid);\n> + \n> + \tpfree(cp);\n> \n> \treturn T_ROW;\n> }\n> Index: src/pl/plpgsql/src/plpgsql.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/plpgsql.h,v\n> retrieving revision 1.27\n> diff -c -r1.27 plpgsql.h\n> *** src/pl/plpgsql/src/plpgsql.h\t4 Sep 2002 20:31:47 -0000\t1.27\n> --- src/pl/plpgsql/src/plpgsql.h\t9 Sep 2002 04:21:37 -0000\n> ***************\n> *** 568,574 ****\n> --- 568,576 ----\n> extern int\tplpgsql_parse_tripword(char *word);\n> extern int\tplpgsql_parse_wordtype(char *word);\n> extern int\tplpgsql_parse_dblwordtype(char *word);\n> + extern int\tplpgsql_parse_tripwordtype(char *word);\n> extern int\tplpgsql_parse_wordrowtype(char *word);\n> + extern int\tplpgsql_parse_dblwordrowtype(char *word);\n> extern PLpgSQL_type *plpgsql_parse_datatype(char *string);\n> extern void plpgsql_adddatum(PLpgSQL_datum * new);\n> extern int\tplpgsql_add_initdatums(int **varnos);\n> Index: src/pl/plpgsql/src/scan.l\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/scan.l,v\n> retrieving revision 1.22\n> diff -c -r1.22 scan.l\n> *** src/pl/plpgsql/src/scan.l\t30 Aug 2002 00:28:41 -0000\t1.22\n> --- src/pl/plpgsql/src/scan.l\t9 Sep 2002 04:23:49 -0000\n> ***************\n> *** 170,183 ****\n> --- 170,187 ----\n> {identifier}{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_tripword(yytext); }\n> {identifier}{space}*%TYPE\t\t{ return plpgsql_parse_wordtype(yytext);\t}\n> 
{identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_dblwordtype(yytext); }\n> + {identifier}{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_tripwordtype(yytext); }\n> {identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_wordrowtype(yytext);\t}\n> + {identifier}{space}*\\.{space}*{identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_dblwordrowtype(yytext);\t}\n> \n> \\${digit}+\t\t\t\t\t\t{ return plpgsql_parse_word(yytext);\t}\n> \\${digit}+{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_dblword(yytext);\t}\n> \\${digit}+{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_tripword(yytext); }\n> \\${digit}+{space}*%TYPE\t\t\t{ return plpgsql_parse_wordtype(yytext);\t}\n> \\${digit}+{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_dblwordtype(yytext); }\n> + \\${digit}+{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_tripwordtype(yytext); }\n> \\${digit}+{space}*%ROWTYPE\t\t{ return plpgsql_parse_wordrowtype(yytext);\t}\n> + \\${digit}+{space}*\\.{space}*{identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_dblwordrowtype(yytext);\t}\n> \n> {digit}+\t\t{ return T_NUMBER;\t\t\t}\n> \n\n> -- nspname.relname.attname%TYPE\n> DROP FUNCTION t();\n> CREATE OR REPLACE FUNCTION t() RETURNS TEXT AS '\n> DECLARE\n> col_name pg_catalog.pg_attribute.attname%TYPE;\n> BEGIN\n> col_name := ''uga'';\n> RETURN col_name;\n> END;\n> ' LANGUAGE 'plpgsql';\n> SELECT t();\n> \n> -- nspname.relname%ROWTYPE\n> DROP FUNCTION t();\n> CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute AS '\n> DECLARE\n> rec pg_catalog.pg_attribute%ROWTYPE;\n> BEGIN\n> SELECT INTO rec * FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> RETURN rec;\n> END;\n> ' LANGUAGE 'plpgsql';\n> SELECT * FROM t();\n> \n> -- nspname.relname.attname%TYPE\n> DROP FUNCTION t();\n> CREATE OR 
REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute.attname%TYPE AS '\n> DECLARE\n> rec pg_catalog.pg_attribute.attname%TYPE;\n> BEGIN\n> SELECT INTO rec pg_catalog.pg_attribute.attname FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> RETURN rec;\n> END;\n> ' LANGUAGE 'plpgsql';\n> SELECT t();\n> \n> -- nspname.relname%ROWTYPE\n> DROP FUNCTION t();\n> CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute AS '\n> DECLARE\n> rec pg_catalog.pg_attribute%ROWTYPE;\n> BEGIN\n> SELECT INTO rec * FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> RETURN rec;\n> END;\n> ' LANGUAGE 'plpgsql';\n> SELECT * FROM t();\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 11 Sep 2002 00:12:52 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE...." }, { "msg_contents": "\nPatch applied. Thanks.\n\n---------------------------------------------------------------------------\n\n\nJoe Conway wrote:\n> Tom Lane wrote:\n> > Sean Chittenden <sean@chittenden.org> writes:\n> > \n> >>::sigh:: Is it me or does it look like all\n> >>of pl/pgsql is schema un-aware (ie, all of the declarations). -sc\n> > \n> > \n> > Yeah. The group of routines parse_word, parse_dblword, etc that are\n> > called by the lexer certainly all need work. 
There are some\n> > definitional issues to think about, too --- plpgsql presently relies on\n> > the number of names to give it some idea of what to look for, and those\n> > rules are probably all toast now. Please come up with a sketch of what\n> > you think the behavior should be before you start hacking code.\n> \n> Attached is a diff -c format proposal to fix this. I've also attached a short \n> test script. Seems to work OK and passes all regression tests.\n> \n> Here's a breakdown of how I understand plpgsql's \"Special word rules\" -- I \n> think it illustrates the behavior reasonably well. New functions added by this \n> patch are plpgsql_parse_tripwordtype and plpgsql_parse_dblwordrowtype:\n> \n> ============================================================================\n> Identifiers (represents) parsing function\n> ----------------------------------------------------------------------------\n> identifier plpgsql_parse_word\n> tg_argv\n> T_LABEL (label)\n> T_VARIABLE (variable)\n> T_RECORD (record)\n> T_ROW (row)\n> ----------------------------------------------------------------------------\n> identifier.identifier plpgsql_parse_dblword\n> T_LABEL\n> T_VARIABLE (label.variable)\n> T_RECORD (label.record)\n> T_ROW (label.row)\n> T_RECORD\n> T_VARIABLE (record.variable)\n> T_ROW\n> T_VARIABLE (row.variable)\n> ----------------------------------------------------------------------------\n> identifier.identifier.identifier plpgsql_parse_tripword\n> T_LABEL\n> T_RECORD\n> T_VARIABLE (label.record.variable)\n> T_ROW\n> T_VARIABLE (label.row.variable)\n> ----------------------------------------------------------------------------\n> identifier%TYPE plpgsql_parse_wordtype\n> T_VARIABLE\n> T_DTYPE (variable%TYPE)\n> T_DTYPE (typname%TYPE)\n> ----------------------------------------------------------------------------\n> identifier.identifier%TYPE plpgsql_parse_dblwordtype\n> T_LABEL\n> T_VARIABLE\n> T_DTYPE (label.variable%TYPE)\n> T_DTYPE 
(relname.attname%TYPE)\n> ----------------------------------------------------------------------------\n> <new>\n> identifier.identifier.identifier%TYPE plpgsql_parse_tripwordtype\n> T_DTYPE (nspname.relname.attname%TYPE)\n> ----------------------------------------------------------------------------\n> identifier%ROWTYPE plpgsql_parse_wordrowtype\n> T_DTYPE (relname%ROWTYPE)\n> ----------------------------------------------------------------------------\n> <new>\n> identifier.identifier%ROWTYPE plpgsql_parse_dblwordrowtype\n> T_DTYPE (nspname.relname%ROWTYPE)\n> \n> ============================================================================\n> Parameters - parallels the above\n> ----------------------------------------------------------------------------\n> $# plpgsql_parse_word\n> $#.identifier plpgsql_parse_dblword\n> $#.identifier.identifier plpgsql_parse_tripword\n> $#%TYPE plpgsql_parse_wordtype\n> $#.identifier%TYPE plpgsql_parse_dblwordtype\n> $#.identifier.identifier%TYPE plpgsql_parse_tripwordtype\n> $#%ROWTYPE plpgsql_parse_wordrowtype\n> $#.identifier%ROWTYPE plpgsql_parse_dblwordrowtype\n> \n> Comments?\n> \n> Thanks,\n> \n> Joe\n\n> Index: src/pl/plpgsql/src/pl_comp.c\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/pl_comp.c,v\n> retrieving revision 1.51\n> diff -c -r1.51 pl_comp.c\n> *** src/pl/plpgsql/src/pl_comp.c\t4 Sep 2002 20:31:47 -0000\t1.51\n> --- src/pl/plpgsql/src/pl_comp.c\t9 Sep 2002 04:22:24 -0000\n> ***************\n> *** 1092,1097 ****\n> --- 1092,1217 ----\n> \treturn T_DTYPE;\n> }\n> \n> + /* ----------\n> + * plpgsql_parse_tripwordtype\t\tSame lookup for word.word.word%TYPE\n> + * ----------\n> + */\n> + #define TYPE_JUNK_LEN\t5\n> + \n> + int\n> + plpgsql_parse_tripwordtype(char *word)\n> + {\n> + \tOid\t\t\tclassOid;\n> + \tHeapTuple\tclasstup;\n> + \tForm_pg_class classStruct;\n> + \tHeapTuple\tattrtup;\n> + \tForm_pg_attribute attrStruct;\n> + 
\tHeapTuple\ttypetup;\n> + \tForm_pg_type typeStruct;\n> + \tPLpgSQL_type *typ;\n> + \tchar\t *cp[2];\n> + \tint\t\t\tqualified_att_len;\n> + \tint\t\t\tnumdots = 0;\n> + \tint\t\t\ti;\n> + \tRangeVar *relvar;\n> + \n> + \t/* Do case conversion and word separation */\n> + \tqualified_att_len = strlen(word) - TYPE_JUNK_LEN;\n> + \tAssert(word[qualified_att_len] == '%');\n> + \n> + \tfor (i = 0; i < qualified_att_len; i++)\n> + \t{\n> + \t\tif (word[i] == '.' && ++numdots == 2)\n> + \t\t{\n> + \t\t\tcp[0] = (char *) palloc((i + 1) * sizeof(char));\n> + \t\t\tmemset(cp[0], 0, (i + 1) * sizeof(char));\n> + \t\t\tmemcpy(cp[0], word, i * sizeof(char));\n> + \n> + \t\t\t/* qualified_att_len - one based position + 1 (null terminator) */\n> + \t\t\tcp[1] = (char *) palloc((qualified_att_len - i) * sizeof(char));\n> + \t\t\tmemset(cp[1], 0, (qualified_att_len - i) * sizeof(char));\n> + \t\t\tmemcpy(cp[1], &word[i + 1], (qualified_att_len - i - 1) * sizeof(char));\n> + \n> + \t\t\tbreak;\n> + \t\t}\n> + \t}\n> + \n> + \trelvar = makeRangeVarFromNameList(stringToQualifiedNameList(cp[0], \"plpgsql_parse_dblwordtype\"));\n> + \tclassOid = RangeVarGetRelid(relvar, true);\n> + \tif (!OidIsValid(classOid))\n> + \t{\n> + \t\tpfree(cp[0]);\n> + \t\tpfree(cp[1]);\n> + \t\treturn T_ERROR;\n> + \t}\n> + \tclasstup = SearchSysCache(RELOID,\n> + \t\t\t\t\t\t\t ObjectIdGetDatum(classOid),\n> + \t\t\t\t\t\t\t 0, 0, 0);\n> + \tif (!HeapTupleIsValid(classtup))\n> + \t{\n> + \t\tpfree(cp[0]);\n> + \t\tpfree(cp[1]);\n> + \t\treturn T_ERROR;\n> + \t}\n> + \n> + \t/*\n> + \t * It must be a relation, sequence, view, or type\n> + \t */\n> + \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> + \tif (classStruct->relkind != RELKIND_RELATION &&\n> + \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> + \t\tclassStruct->relkind != RELKIND_VIEW &&\n> + \t\tclassStruct->relkind != RELKIND_COMPOSITE_TYPE)\n> + \t{\n> + \t\tReleaseSysCache(classtup);\n> + \t\tpfree(cp[0]);\n> + \t\tpfree(cp[1]);\n> + 
\t\treturn T_ERROR;\n> + \t}\n> + \n> + \t/*\n> + \t * Fetch the named table field and it's type\n> + \t */\n> + \tattrtup = SearchSysCacheAttName(classOid, cp[1]);\n> + \tif (!HeapTupleIsValid(attrtup))\n> + \t{\n> + \t\tReleaseSysCache(classtup);\n> + \t\tpfree(cp[0]);\n> + \t\tpfree(cp[1]);\n> + \t\treturn T_ERROR;\n> + \t}\n> + \tattrStruct = (Form_pg_attribute) GETSTRUCT(attrtup);\n> + \n> + \ttypetup = SearchSysCache(TYPEOID,\n> + \t\t\t\t\t\t\t ObjectIdGetDatum(attrStruct->atttypid),\n> + \t\t\t\t\t\t\t 0, 0, 0);\n> + \tif (!HeapTupleIsValid(typetup))\n> + \t\telog(ERROR, \"cache lookup for type %u of %s.%s failed\",\n> + \t\t\t attrStruct->atttypid, cp[0], cp[1]);\n> + \ttypeStruct = (Form_pg_type) GETSTRUCT(typetup);\n> + \n> + \t/*\n> + \t * Found that - build a compiler type struct and return it\n> + \t */\n> + \ttyp = (PLpgSQL_type *) malloc(sizeof(PLpgSQL_type));\n> + \n> + \ttyp->typname = strdup(NameStr(typeStruct->typname));\n> + \ttyp->typoid = attrStruct->atttypid;\n> + \tperm_fmgr_info(typeStruct->typinput, &(typ->typinput));\n> + \ttyp->typelem = typeStruct->typelem;\n> + \ttyp->typbyval = typeStruct->typbyval;\n> + \ttyp->typlen = typeStruct->typlen;\n> + \ttyp->atttypmod = attrStruct->atttypmod;\n> + \n> + \tplpgsql_yylval.dtype = typ;\n> + \n> + \tReleaseSysCache(classtup);\n> + \tReleaseSysCache(attrtup);\n> + \tReleaseSysCache(typetup);\n> + \tpfree(cp[0]);\n> + \tpfree(cp[1]);\n> + \treturn T_DTYPE;\n> + }\n> \n> /* ----------\n> * plpgsql_parse_wordrowtype\t\tScanner found word%ROWTYPE.\n> ***************\n> *** 1125,1130 ****\n> --- 1245,1290 ----\n> \n> \tpfree(cp[0]);\n> \tpfree(cp[1]);\n> + \n> + \treturn T_ROW;\n> + }\n> + \n> + /* ----------\n> + * plpgsql_parse_dblwordrowtype\t\tScanner found word.word%ROWTYPE.\n> + *\t\t\tSo word must be namespace qualified a table name.\n> + * ----------\n> + */\n> + #define ROWTYPE_JUNK_LEN\t8\n> + \n> + int\n> + plpgsql_parse_dblwordrowtype(char *word)\n> + {\n> + \tOid\t\t\tclassOid;\n> + 
\tchar\t *cp;\n> + \tint\t\t\ti;\n> + \tRangeVar *relvar;\n> + \n> + \t/* Do case conversion and word separation */\n> + \t/* We convert %rowtype to .rowtype momentarily to keep converter happy */\n> + \ti = strlen(word) - ROWTYPE_JUNK_LEN;\n> + \tAssert(word[i] == '%');\n> + \n> + \tcp = (char *) palloc((i + 1) * sizeof(char));\n> + \tmemset(cp, 0, (i + 1) * sizeof(char));\n> + \tmemcpy(cp, word, i * sizeof(char));\n> + \n> + \t/* Lookup the relation */\n> + \trelvar = makeRangeVarFromNameList(stringToQualifiedNameList(cp, \"plpgsql_parse_dblwordtype\"));\n> + \tclassOid = RangeVarGetRelid(relvar, true);\n> + \tif (!OidIsValid(classOid))\n> + \t\telog(ERROR, \"%s: no such class\", cp);\n> + \n> + \t/*\n> + \t * Build and return the complete row definition\n> + \t */\n> + \tplpgsql_yylval.row = build_rowtype(classOid);\n> + \n> + \tpfree(cp);\n> \n> \treturn T_ROW;\n> }\n> Index: src/pl/plpgsql/src/plpgsql.h\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/plpgsql.h,v\n> retrieving revision 1.27\n> diff -c -r1.27 plpgsql.h\n> *** src/pl/plpgsql/src/plpgsql.h\t4 Sep 2002 20:31:47 -0000\t1.27\n> --- src/pl/plpgsql/src/plpgsql.h\t9 Sep 2002 04:21:37 -0000\n> ***************\n> *** 568,574 ****\n> --- 568,576 ----\n> extern int\tplpgsql_parse_tripword(char *word);\n> extern int\tplpgsql_parse_wordtype(char *word);\n> extern int\tplpgsql_parse_dblwordtype(char *word);\n> + extern int\tplpgsql_parse_tripwordtype(char *word);\n> extern int\tplpgsql_parse_wordrowtype(char *word);\n> + extern int\tplpgsql_parse_dblwordrowtype(char *word);\n> extern PLpgSQL_type *plpgsql_parse_datatype(char *string);\n> extern void plpgsql_adddatum(PLpgSQL_datum * new);\n> extern int\tplpgsql_add_initdatums(int **varnos);\n> Index: src/pl/plpgsql/src/scan.l\n> ===================================================================\n> RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/scan.l,v\n> retrieving 
revision 1.22\n> diff -c -r1.22 scan.l\n> *** src/pl/plpgsql/src/scan.l\t30 Aug 2002 00:28:41 -0000\t1.22\n> --- src/pl/plpgsql/src/scan.l\t9 Sep 2002 04:23:49 -0000\n> ***************\n> *** 170,183 ****\n> --- 170,187 ----\n> {identifier}{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_tripword(yytext); }\n> {identifier}{space}*%TYPE\t\t{ return plpgsql_parse_wordtype(yytext);\t}\n> {identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_dblwordtype(yytext); }\n> + {identifier}{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_tripwordtype(yytext); }\n> {identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_wordrowtype(yytext);\t}\n> + {identifier}{space}*\\.{space}*{identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_dblwordrowtype(yytext);\t}\n> \n> \\${digit}+\t\t\t\t\t\t{ return plpgsql_parse_word(yytext);\t}\n> \\${digit}+{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_dblword(yytext);\t}\n> \\${digit}+{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_tripword(yytext); }\n> \\${digit}+{space}*%TYPE\t\t\t{ return plpgsql_parse_wordtype(yytext);\t}\n> \\${digit}+{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_dblwordtype(yytext); }\n> + \\${digit}+{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_tripwordtype(yytext); }\n> \\${digit}+{space}*%ROWTYPE\t\t{ return plpgsql_parse_wordrowtype(yytext);\t}\n> + \\${digit}+{space}*\\.{space}*{identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_dblwordrowtype(yytext);\t}\n> \n> {digit}+\t\t{ return T_NUMBER;\t\t\t}\n> \n\n> -- nspname.relname.attname%TYPE\n> DROP FUNCTION t();\n> CREATE OR REPLACE FUNCTION t() RETURNS TEXT AS '\n> DECLARE\n> col_name pg_catalog.pg_attribute.attname%TYPE;\n> BEGIN\n> col_name := ''uga'';\n> RETURN col_name;\n> END;\n> ' LANGUAGE 'plpgsql';\n> SELECT t();\n> \n> -- 
nspname.relname%ROWTYPE\n> DROP FUNCTION t();\n> CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute AS '\n> DECLARE\n> rec pg_catalog.pg_attribute%ROWTYPE;\n> BEGIN\n> SELECT INTO rec * FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> RETURN rec;\n> END;\n> ' LANGUAGE 'plpgsql';\n> SELECT * FROM t();\n> \n> -- nspname.relname.attname%TYPE\n> DROP FUNCTION t();\n> CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute.attname%TYPE AS '\n> DECLARE\n> rec pg_catalog.pg_attribute.attname%TYPE;\n> BEGIN\n> SELECT INTO rec pg_catalog.pg_attribute.attname FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> RETURN rec;\n> END;\n> ' LANGUAGE 'plpgsql';\n> SELECT t();\n> \n> -- nspname.relname%ROWTYPE\n> DROP FUNCTION t();\n> CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute AS '\n> DECLARE\n> rec pg_catalog.pg_attribute%ROWTYPE;\n> BEGIN\n> SELECT INTO rec * FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> RETURN rec;\n> END;\n> ' LANGUAGE 'plpgsql';\n> SELECT * FROM t();\n\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 3: if posting/reading through Usenet, please send an appropriate\n> subscribe-nomail command to majordomo@postgresql.org so that your\n> message can get through to the mailing list cleanly\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 11 Sep 2002 20:24:05 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE...." 
}, { "msg_contents": "Does anyone know if such effort is also required to pl/python to become\n\"schema aware\"?\n\nRegards,\n\n\tGreg Copeland\n\n\nOn Wed, 2002-09-11 at 19:24, Bruce Momjian wrote:\n> \n> Patch applied. Thanks.\n> \n> ---------------------------------------------------------------------------\n> \n> \n> Joe Conway wrote:\n> > Tom Lane wrote:\n> > > Sean Chittenden <sean@chittenden.org> writes:\n> > > \n> > >>::sigh:: Is it me or does it look like all\n> > >>of pl/pgsql is schema un-aware (ie, all of the declarations). -sc\n> > > \n> > > \n> > > Yeah. The group of routines parse_word, parse_dblword, etc that are\n> > > called by the lexer certainly all need work. There are some\n> > > definitional issues to think about, too --- plpgsql presently relies on\n> > > the number of names to give it some idea of what to look for, and those\n> > > rules are probably all toast now. Please come up with a sketch of what\n> > > you think the behavior should be before you start hacking code.\n> > \n> > Attached is a diff -c format proposal to fix this. I've also attached a short \n> > test script. Seems to work OK and passes all regression tests.\n> > \n> > Here's a breakdown of how I understand plpgsql's \"Special word rules\" -- I \n> > think it illustrates the behavior reasonably well. 
New functions added by this \n> > patch are plpgsql_parse_tripwordtype and plpgsql_parse_dblwordrowtype:\n> > \n> > ============================================================================\n> > Identifiers (represents) parsing function\n> > ----------------------------------------------------------------------------\n> > identifier plpgsql_parse_word\n> > tg_argv\n> > T_LABEL (label)\n> > T_VARIABLE (variable)\n> > T_RECORD (record)\n> > T_ROW (row)\n> > ----------------------------------------------------------------------------\n> > identifier.identifier plpgsql_parse_dblword\n> > T_LABEL\n> > T_VARIABLE (label.variable)\n> > T_RECORD (label.record)\n> > T_ROW (label.row)\n> > T_RECORD\n> > T_VARIABLE (record.variable)\n> > T_ROW\n> > T_VARIABLE (row.variable)\n> > ----------------------------------------------------------------------------\n> > identifier.identifier.identifier plpgsql_parse_tripword\n> > T_LABEL\n> > T_RECORD\n> > T_VARIABLE (label.record.variable)\n> > T_ROW\n> > T_VARIABLE (label.row.variable)\n> > ----------------------------------------------------------------------------\n> > identifier%TYPE plpgsql_parse_wordtype\n> > T_VARIABLE\n> > T_DTYPE (variable%TYPE)\n> > T_DTYPE (typname%TYPE)\n> > ----------------------------------------------------------------------------\n> > identifier.identifier%TYPE plpgsql_parse_dblwordtype\n> > T_LABEL\n> > T_VARIABLE\n> > T_DTYPE (label.variable%TYPE)\n> > T_DTYPE (relname.attname%TYPE)\n> > ----------------------------------------------------------------------------\n> > <new>\n> > identifier.identifier.identifier%TYPE plpgsql_parse_tripwordtype\n> > T_DTYPE (nspname.relname.attname%TYPE)\n> > ----------------------------------------------------------------------------\n> > identifier%ROWTYPE plpgsql_parse_wordrowtype\n> > T_DTYPE (relname%ROWTYPE)\n> > ----------------------------------------------------------------------------\n> > <new>\n> > identifier.identifier%ROWTYPE 
plpgsql_parse_dblwordrowtype\n> > T_DTYPE (nspname.relname%ROWTYPE)\n> > \n> > ============================================================================\n> > Parameters - parallels the above\n> > ----------------------------------------------------------------------------\n> > $# plpgsql_parse_word\n> > $#.identifier plpgsql_parse_dblword\n> > $#.identifier.identifier plpgsql_parse_tripword\n> > $#%TYPE plpgsql_parse_wordtype\n> > $#.identifier%TYPE plpgsql_parse_dblwordtype\n> > $#.identifier.identifier%TYPE plpgsql_parse_tripwordtype\n> > $#%ROWTYPE plpgsql_parse_wordrowtype\n> > $#.identifier%ROWTYPE plpgsql_parse_dblwordrowtype\n> > \n> > Comments?\n> > \n> > Thanks,\n> > \n> > Joe\n> \n> > Index: src/pl/plpgsql/src/pl_comp.c\n> > ===================================================================\n> > RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/pl_comp.c,v\n> > retrieving revision 1.51\n> > diff -c -r1.51 pl_comp.c\n> > *** src/pl/plpgsql/src/pl_comp.c\t4 Sep 2002 20:31:47 -0000\t1.51\n> > --- src/pl/plpgsql/src/pl_comp.c\t9 Sep 2002 04:22:24 -0000\n> > ***************\n> > *** 1092,1097 ****\n> > --- 1092,1217 ----\n> > \treturn T_DTYPE;\n> > }\n> > \n> > + /* ----------\n> > + * plpgsql_parse_tripwordtype\t\tSame lookup for word.word.word%TYPE\n> > + * ----------\n> > + */\n> > + #define TYPE_JUNK_LEN\t5\n> > + \n> > + int\n> > + plpgsql_parse_tripwordtype(char *word)\n> > + {\n> > + \tOid\t\t\tclassOid;\n> > + \tHeapTuple\tclasstup;\n> > + \tForm_pg_class classStruct;\n> > + \tHeapTuple\tattrtup;\n> > + \tForm_pg_attribute attrStruct;\n> > + \tHeapTuple\ttypetup;\n> > + \tForm_pg_type typeStruct;\n> > + \tPLpgSQL_type *typ;\n> > + \tchar\t *cp[2];\n> > + \tint\t\t\tqualified_att_len;\n> > + \tint\t\t\tnumdots = 0;\n> > + \tint\t\t\ti;\n> > + \tRangeVar *relvar;\n> > + \n> > + \t/* Do case conversion and word separation */\n> > + \tqualified_att_len = strlen(word) - TYPE_JUNK_LEN;\n> > + \tAssert(word[qualified_att_len] == '%');\n> > + \n> > + 
\tfor (i = 0; i < qualified_att_len; i++)\n> > + \t{\n> > + \t\tif (word[i] == '.' && ++numdots == 2)\n> > + \t\t{\n> > + \t\t\tcp[0] = (char *) palloc((i + 1) * sizeof(char));\n> > + \t\t\tmemset(cp[0], 0, (i + 1) * sizeof(char));\n> > + \t\t\tmemcpy(cp[0], word, i * sizeof(char));\n> > + \n> > + \t\t\t/* qualified_att_len - one based position + 1 (null terminator) */\n> > + \t\t\tcp[1] = (char *) palloc((qualified_att_len - i) * sizeof(char));\n> > + \t\t\tmemset(cp[1], 0, (qualified_att_len - i) * sizeof(char));\n> > + \t\t\tmemcpy(cp[1], &word[i + 1], (qualified_att_len - i - 1) * sizeof(char));\n> > + \n> > + \t\t\tbreak;\n> > + \t\t}\n> > + \t}\n> > + \n> > + \trelvar = makeRangeVarFromNameList(stringToQualifiedNameList(cp[0], \"plpgsql_parse_dblwordtype\"));\n> > + \tclassOid = RangeVarGetRelid(relvar, true);\n> > + \tif (!OidIsValid(classOid))\n> > + \t{\n> > + \t\tpfree(cp[0]);\n> > + \t\tpfree(cp[1]);\n> > + \t\treturn T_ERROR;\n> > + \t}\n> > + \tclasstup = SearchSysCache(RELOID,\n> > + \t\t\t\t\t\t\t ObjectIdGetDatum(classOid),\n> > + \t\t\t\t\t\t\t 0, 0, 0);\n> > + \tif (!HeapTupleIsValid(classtup))\n> > + \t{\n> > + \t\tpfree(cp[0]);\n> > + \t\tpfree(cp[1]);\n> > + \t\treturn T_ERROR;\n> > + \t}\n> > + \n> > + \t/*\n> > + \t * It must be a relation, sequence, view, or type\n> > + \t */\n> > + \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> > + \tif (classStruct->relkind != RELKIND_RELATION &&\n> > + \t\tclassStruct->relkind != RELKIND_SEQUENCE &&\n> > + \t\tclassStruct->relkind != RELKIND_VIEW &&\n> > + \t\tclassStruct->relkind != RELKIND_COMPOSITE_TYPE)\n> > + \t{\n> > + \t\tReleaseSysCache(classtup);\n> > + \t\tpfree(cp[0]);\n> > + \t\tpfree(cp[1]);\n> > + \t\treturn T_ERROR;\n> > + \t}\n> > + \n> > + \t/*\n> > + \t * Fetch the named table field and it's type\n> > + \t */\n> > + \tattrtup = SearchSysCacheAttName(classOid, cp[1]);\n> > + \tif (!HeapTupleIsValid(attrtup))\n> > + \t{\n> > + \t\tReleaseSysCache(classtup);\n> > + 
\t\tpfree(cp[0]);\n> > + \t\tpfree(cp[1]);\n> > + \t\treturn T_ERROR;\n> > + \t}\n> > + \tattrStruct = (Form_pg_attribute) GETSTRUCT(attrtup);\n> > + \n> > + \ttypetup = SearchSysCache(TYPEOID,\n> > + \t\t\t\t\t\t\t ObjectIdGetDatum(attrStruct->atttypid),\n> > + \t\t\t\t\t\t\t 0, 0, 0);\n> > + \tif (!HeapTupleIsValid(typetup))\n> > + \t\telog(ERROR, \"cache lookup for type %u of %s.%s failed\",\n> > + \t\t\t attrStruct->atttypid, cp[0], cp[1]);\n> > + \ttypeStruct = (Form_pg_type) GETSTRUCT(typetup);\n> > + \n> > + \t/*\n> > + \t * Found that - build a compiler type struct and return it\n> > + \t */\n> > + \ttyp = (PLpgSQL_type *) malloc(sizeof(PLpgSQL_type));\n> > + \n> > + \ttyp->typname = strdup(NameStr(typeStruct->typname));\n> > + \ttyp->typoid = attrStruct->atttypid;\n> > + \tperm_fmgr_info(typeStruct->typinput, &(typ->typinput));\n> > + \ttyp->typelem = typeStruct->typelem;\n> > + \ttyp->typbyval = typeStruct->typbyval;\n> > + \ttyp->typlen = typeStruct->typlen;\n> > + \ttyp->atttypmod = attrStruct->atttypmod;\n> > + \n> > + \tplpgsql_yylval.dtype = typ;\n> > + \n> > + \tReleaseSysCache(classtup);\n> > + \tReleaseSysCache(attrtup);\n> > + \tReleaseSysCache(typetup);\n> > + \tpfree(cp[0]);\n> > + \tpfree(cp[1]);\n> > + \treturn T_DTYPE;\n> > + }\n> > \n> > /* ----------\n> > * plpgsql_parse_wordrowtype\t\tScanner found word%ROWTYPE.\n> > ***************\n> > *** 1125,1130 ****\n> > --- 1245,1290 ----\n> > \n> > \tpfree(cp[0]);\n> > \tpfree(cp[1]);\n> > + \n> > + \treturn T_ROW;\n> > + }\n> > + \n> > + /* ----------\n> > + * plpgsql_parse_dblwordrowtype\t\tScanner found word.word%ROWTYPE.\n> > + *\t\t\tSo word must be namespace qualified a table name.\n> > + * ----------\n> > + */\n> > + #define ROWTYPE_JUNK_LEN\t8\n> > + \n> > + int\n> > + plpgsql_parse_dblwordrowtype(char *word)\n> > + {\n> > + \tOid\t\t\tclassOid;\n> > + \tchar\t *cp;\n> > + \tint\t\t\ti;\n> > + \tRangeVar *relvar;\n> > + \n> > + \t/* Do case conversion and word separation */\n> > + \t/* We 
convert %rowtype to .rowtype momentarily to keep converter happy */\n> > + \ti = strlen(word) - ROWTYPE_JUNK_LEN;\n> > + \tAssert(word[i] == '%');\n> > + \n> > + \tcp = (char *) palloc((i + 1) * sizeof(char));\n> > + \tmemset(cp, 0, (i + 1) * sizeof(char));\n> > + \tmemcpy(cp, word, i * sizeof(char));\n> > + \n> > + \t/* Lookup the relation */\n> > + \trelvar = makeRangeVarFromNameList(stringToQualifiedNameList(cp, \"plpgsql_parse_dblwordtype\"));\n> > + \tclassOid = RangeVarGetRelid(relvar, true);\n> > + \tif (!OidIsValid(classOid))\n> > + \t\telog(ERROR, \"%s: no such class\", cp);\n> > + \n> > + \t/*\n> > + \t * Build and return the complete row definition\n> > + \t */\n> > + \tplpgsql_yylval.row = build_rowtype(classOid);\n> > + \n> > + \tpfree(cp);\n> > \n> > \treturn T_ROW;\n> > }\n> > Index: src/pl/plpgsql/src/plpgsql.h\n> > ===================================================================\n> > RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/plpgsql.h,v\n> > retrieving revision 1.27\n> > diff -c -r1.27 plpgsql.h\n> > *** src/pl/plpgsql/src/plpgsql.h\t4 Sep 2002 20:31:47 -0000\t1.27\n> > --- src/pl/plpgsql/src/plpgsql.h\t9 Sep 2002 04:21:37 -0000\n> > ***************\n> > *** 568,574 ****\n> > --- 568,576 ----\n> > extern int\tplpgsql_parse_tripword(char *word);\n> > extern int\tplpgsql_parse_wordtype(char *word);\n> > extern int\tplpgsql_parse_dblwordtype(char *word);\n> > + extern int\tplpgsql_parse_tripwordtype(char *word);\n> > extern int\tplpgsql_parse_wordrowtype(char *word);\n> > + extern int\tplpgsql_parse_dblwordrowtype(char *word);\n> > extern PLpgSQL_type *plpgsql_parse_datatype(char *string);\n> > extern void plpgsql_adddatum(PLpgSQL_datum * new);\n> > extern int\tplpgsql_add_initdatums(int **varnos);\n> > Index: src/pl/plpgsql/src/scan.l\n> > ===================================================================\n> > RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/scan.l,v\n> > retrieving revision 1.22\n> > diff -c -r1.22 scan.l\n> > 
*** src/pl/plpgsql/src/scan.l\t30 Aug 2002 00:28:41 -0000\t1.22\n> > --- src/pl/plpgsql/src/scan.l\t9 Sep 2002 04:23:49 -0000\n> > ***************\n> > *** 170,183 ****\n> > --- 170,187 ----\n> > {identifier}{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_tripword(yytext); }\n> > {identifier}{space}*%TYPE\t\t{ return plpgsql_parse_wordtype(yytext);\t}\n> > {identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_dblwordtype(yytext); }\n> > + {identifier}{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_tripwordtype(yytext); }\n> > {identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_wordrowtype(yytext);\t}\n> > + {identifier}{space}*\\.{space}*{identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_dblwordrowtype(yytext);\t}\n> > \n> > \\${digit}+\t\t\t\t\t\t{ return plpgsql_parse_word(yytext);\t}\n> > \\${digit}+{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_dblword(yytext);\t}\n> > \\${digit}+{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_tripword(yytext); }\n> > \\${digit}+{space}*%TYPE\t\t\t{ return plpgsql_parse_wordtype(yytext);\t}\n> > \\${digit}+{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_dblwordtype(yytext); }\n> > + \\${digit}+{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_tripwordtype(yytext); }\n> > \\${digit}+{space}*%ROWTYPE\t\t{ return plpgsql_parse_wordrowtype(yytext);\t}\n> > + \\${digit}+{space}*\\.{space}*{identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_dblwordrowtype(yytext);\t}\n> > \n> > {digit}+\t\t{ return T_NUMBER;\t\t\t}\n> > \n> \n> > -- nspname.relname.attname%TYPE\n> > DROP FUNCTION t();\n> > CREATE OR REPLACE FUNCTION t() RETURNS TEXT AS '\n> > DECLARE\n> > col_name pg_catalog.pg_attribute.attname%TYPE;\n> > BEGIN\n> > col_name := ''uga'';\n> > RETURN col_name;\n> > END;\n> > ' LANGUAGE 
'plpgsql';\n> > SELECT t();\n> > \n> > -- nspname.relname%ROWTYPE\n> > DROP FUNCTION t();\n> > CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute AS '\n> > DECLARE\n> > rec pg_catalog.pg_attribute%ROWTYPE;\n> > BEGIN\n> > SELECT INTO rec * FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> > RETURN rec;\n> > END;\n> > ' LANGUAGE 'plpgsql';\n> > SELECT * FROM t();\n> > \n> > -- nspname.relname.attname%TYPE\n> > DROP FUNCTION t();\n> > CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute.attname%TYPE AS '\n> > DECLARE\n> > rec pg_catalog.pg_attribute.attname%TYPE;\n> > BEGIN\n> > SELECT INTO rec pg_catalog.pg_attribute.attname FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> > RETURN rec;\n> > END;\n> > ' LANGUAGE 'plpgsql';\n> > SELECT t();\n> > \n> > -- nspname.relname%ROWTYPE\n> > DROP FUNCTION t();\n> > CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute AS '\n> > DECLARE\n> > rec pg_catalog.pg_attribute%ROWTYPE;\n> > BEGIN\n> > SELECT INTO rec * FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> > RETURN rec;\n> > END;\n> > ' LANGUAGE 'plpgsql';\n> > SELECT * FROM t();\n> \n> > \n> > ---------------------------(end of broadcast)---------------------------\n> > TIP 3: if posting/reading through Usenet, please send an appropriate\n> > subscribe-nomail command to majordomo@postgresql.org so that your\n> > message can get through to the mailing list cleanly\n> \n> -- \n> Bruce Momjian | http://candle.pha.pa.us\n> pgman@candle.pha.pa.us | (610) 359-1001\n> + If your life is a hard drive, | 13 Roberts Road\n> + Christ can be your backup. 
| Newtown Square, Pennsylvania 19073\n> \n> ---------------------------(end of broadcast)---------------------------\n> TIP 2: you can get off all lists at once with the unregister command\n> (send \"unregister YourEmailAddressHere\" to majordomo@postgresql.org)", "msg_date": "12 Sep 2002 10:19:00 -0500", "msg_from": "Greg Copeland <greg@CopelandConsulting.Net>", "msg_from_op": false, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE...." }, { "msg_contents": "\nDoes pl/python even have a DECLARE section that can mimick the data type\nof an existing table column?\n\n---------------------------------------------------------------------------\n\nGreg Copeland wrote:\n-- Start of PGP signed section.\n> Does anyone know if such effort is also required to pl/python to become\n> \"schema aware\"?\n> \n> Regards,\n> \n> \tGreg Copeland\n> \n> \n> On Wed, 2002-09-11 at 19:24, Bruce Momjian wrote:\n> > \n> > Patch applied. Thanks.\n> > \n> > ---------------------------------------------------------------------------\n> > \n> > \n> > Joe Conway wrote:\n> > > Tom Lane wrote:\n> > > > Sean Chittenden <sean@chittenden.org> writes:\n> > > > \n> > > >>::sigh:: Is it me or does it look like all\n> > > >>of pl/pgsql is schema un-aware (ie, all of the declarations). -sc\n> > > > \n> > > > \n> > > > Yeah. The group of routines parse_word, parse_dblword, etc that are\n> > > > called by the lexer certainly all need work. There are some\n> > > > definitional issues to think about, too --- plpgsql presently relies on\n> > > > the number of names to give it some idea of what to look for, and those\n> > > > rules are probably all toast now. Please come up with a sketch of what\n> > > > you think the behavior should be before you start hacking code.\n> > > \n> > > Attached is a diff -c format proposal to fix this. I've also attached a short \n> > > test script. 
Seems to work OK and passes all regression tests.\n> > > \n> > > Here's a breakdown of how I understand plpgsql's \"Special word rules\" -- I \n> > > think it illustrates the behavior reasonably well. New functions added by this \n> > > patch are plpgsql_parse_tripwordtype and plpgsql_parse_dblwordrowtype:\n> > > \n> > > ============================================================================\n> > > Identifiers (represents) parsing function\n> > > ----------------------------------------------------------------------------\n> > > identifier plpgsql_parse_word\n> > > tg_argv\n> > > T_LABEL (label)\n> > > T_VARIABLE (variable)\n> > > T_RECORD (record)\n> > > T_ROW (row)\n> > > ----------------------------------------------------------------------------\n> > > identifier.identifier plpgsql_parse_dblword\n> > > T_LABEL\n> > > T_VARIABLE (label.variable)\n> > > T_RECORD (label.record)\n> > > T_ROW (label.row)\n> > > T_RECORD\n> > > T_VARIABLE (record.variable)\n> > > T_ROW\n> > > T_VARIABLE (row.variable)\n> > > ----------------------------------------------------------------------------\n> > > identifier.identifier.identifier plpgsql_parse_tripword\n> > > T_LABEL\n> > > T_RECORD\n> > > T_VARIABLE (label.record.variable)\n> > > T_ROW\n> > > T_VARIABLE (label.row.variable)\n> > > ----------------------------------------------------------------------------\n> > > identifier%TYPE plpgsql_parse_wordtype\n> > > T_VARIABLE\n> > > T_DTYPE (variable%TYPE)\n> > > T_DTYPE (typname%TYPE)\n> > > ----------------------------------------------------------------------------\n> > > identifier.identifier%TYPE plpgsql_parse_dblwordtype\n> > > T_LABEL\n> > > T_VARIABLE\n> > > T_DTYPE (label.variable%TYPE)\n> > > T_DTYPE (relname.attname%TYPE)\n> > > ----------------------------------------------------------------------------\n> > > <new>\n> > > identifier.identifier.identifier%TYPE plpgsql_parse_tripwordtype\n> > > T_DTYPE (nspname.relname.attname%TYPE)\n> > > 
----------------------------------------------------------------------------\n> > > identifier%ROWTYPE plpgsql_parse_wordrowtype\n> > > T_DTYPE (relname%ROWTYPE)\n> > > ----------------------------------------------------------------------------\n> > > <new>\n> > > identifier.identifier%ROWTYPE plpgsql_parse_dblwordrowtype\n> > > T_DTYPE (nspname.relname%ROWTYPE)\n> > > \n> > > ============================================================================\n> > > Parameters - parallels the above\n> > > ----------------------------------------------------------------------------\n> > > $# plpgsql_parse_word\n> > > $#.identifier plpgsql_parse_dblword\n> > > $#.identifier.identifier plpgsql_parse_tripword\n> > > $#%TYPE plpgsql_parse_wordtype\n> > > $#.identifier%TYPE plpgsql_parse_dblwordtype\n> > > $#.identifier.identifier%TYPE plpgsql_parse_tripwordtype\n> > > $#%ROWTYPE plpgsql_parse_wordrowtype\n> > > $#.identifier%ROWTYPE plpgsql_parse_dblwordrowtype\n> > > \n> > > Comments?\n> > > \n> > > Thanks,\n> > > \n> > > Joe\n> > \n> > > Index: src/pl/plpgsql/src/pl_comp.c\n> > > ===================================================================\n> > > RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/pl_comp.c,v\n> > > retrieving revision 1.51\n> > > diff -c -r1.51 pl_comp.c\n> > > *** src/pl/plpgsql/src/pl_comp.c\t4 Sep 2002 20:31:47 -0000\t1.51\n> > > --- src/pl/plpgsql/src/pl_comp.c\t9 Sep 2002 04:22:24 -0000\n> > > ***************\n> > > *** 1092,1097 ****\n> > > --- 1092,1217 ----\n> > > \treturn T_DTYPE;\n> > > }\n> > > \n> > > + /* ----------\n> > > + * plpgsql_parse_tripwordtype\t\tSame lookup for word.word.word%TYPE\n> > > + * ----------\n> > > + */\n> > > + #define TYPE_JUNK_LEN\t5\n> > > + \n> > > + int\n> > > + plpgsql_parse_tripwordtype(char *word)\n> > > + {\n> > > + \tOid\t\t\tclassOid;\n> > > + \tHeapTuple\tclasstup;\n> > > + \tForm_pg_class classStruct;\n> > > + \tHeapTuple\tattrtup;\n> > > + \tForm_pg_attribute attrStruct;\n> > > + 
\tHeapTuple\ttypetup;\n> > > + \tForm_pg_type typeStruct;\n> > > + \tPLpgSQL_type *typ;\n> > > + \tchar\t *cp[2];\n> > > + \tint\t\t\tqualified_att_len;\n> > > + \tint\t\t\tnumdots = 0;\n> > > + \tint\t\t\ti;\n> > > + \tRangeVar *relvar;\n> > > + \n> > > + \t/* Do case conversion and word separation */\n> > > + \tqualified_att_len = strlen(word) - TYPE_JUNK_LEN;\n> > > + \tAssert(word[qualified_att_len] == '%');\n> > > + \n> > > + \tfor (i = 0; i < qualified_att_len; i++)\n> > > + \t{\n> > > + \t\tif (word[i] == '.' && ++numdots == 2)\n> > > + \t\t{\n> > > + \t\t\tcp[0] = (char *) palloc((i + 1) * sizeof(char));\n> > > + \t\t\tmemset(cp[0], 0, (i + 1) * sizeof(char));\n> > > + \t\t\tmemcpy(cp[0], word, i * sizeof(char));\n> > > + \n> > > + \t\t\t/* qualified_att_len - one based position + 1 (null terminator) */\n> > > + \t\t\tcp[1] = (char *) palloc((qualified_att_len - i) * sizeof(char));\n> > > + \t\t\tmemset(cp[1], 0, (qualified_att_len - i) * sizeof(char));\n> > > + \t\t\tmemcpy(cp[1], &word[i + 1], (qualified_att_len - i - 1) * sizeof(char));\n> > > + \n> > > + \t\t\tbreak;\n> > > + \t\t}\n> > > + \t}\n> > > + \n> > > + \trelvar = makeRangeVarFromNameList(stringToQualifiedNameList(cp[0], \"plpgsql_parse_tripwordtype\"));\n> > > + \tclassOid = RangeVarGetRelid(relvar, true);\n> > > + \tif (!OidIsValid(classOid))\n> > > + \t{\n> > > + \t\tpfree(cp[0]);\n> > > + \t\tpfree(cp[1]);\n> > > + \t\treturn T_ERROR;\n> > > + \t}\n> > > + \tclasstup = SearchSysCache(RELOID,\n> > > + \t\t\t\t\t\t\t ObjectIdGetDatum(classOid),\n> > > + \t\t\t\t\t\t\t 0, 0, 0);\n> > > + \tif (!HeapTupleIsValid(classtup))\n> > > + \t{\n> > > + \t\tpfree(cp[0]);\n> > > + \t\tpfree(cp[1]);\n> > > + \t\treturn T_ERROR;\n> > > + \t}\n> > > + \n> > > + \t/*\n> > > + \t * It must be a relation, sequence, view, or composite type\n> > > + \t */\n> > > + \tclassStruct = (Form_pg_class) GETSTRUCT(classtup);\n> > > + \tif (classStruct->relkind != RELKIND_RELATION &&\n> > > + \t\tclassStruct->relkind != 
RELKIND_SEQUENCE &&\n> > > + \t\tclassStruct->relkind != RELKIND_VIEW &&\n> > > + \t\tclassStruct->relkind != RELKIND_COMPOSITE_TYPE)\n> > > + \t{\n> > > + \t\tReleaseSysCache(classtup);\n> > > + \t\tpfree(cp[0]);\n> > > + \t\tpfree(cp[1]);\n> > > + \t\treturn T_ERROR;\n> > > + \t}\n> > > + \n> > > + \t/*\n> > > + \t * Fetch the named table field and its type\n> > > + \t */\n> > > + \tattrtup = SearchSysCacheAttName(classOid, cp[1]);\n> > > + \tif (!HeapTupleIsValid(attrtup))\n> > > + \t{\n> > > + \t\tReleaseSysCache(classtup);\n> > > + \t\tpfree(cp[0]);\n> > > + \t\tpfree(cp[1]);\n> > > + \t\treturn T_ERROR;\n> > > + \t}\n> > > + \tattrStruct = (Form_pg_attribute) GETSTRUCT(attrtup);\n> > > + \n> > > + \ttypetup = SearchSysCache(TYPEOID,\n> > > + \t\t\t\t\t\t\t ObjectIdGetDatum(attrStruct->atttypid),\n> > > + \t\t\t\t\t\t\t 0, 0, 0);\n> > > + \tif (!HeapTupleIsValid(typetup))\n> > > + \t\telog(ERROR, \"cache lookup for type %u of %s.%s failed\",\n> > > + \t\t\t attrStruct->atttypid, cp[0], cp[1]);\n> > > + \ttypeStruct = (Form_pg_type) GETSTRUCT(typetup);\n> > > + \n> > > + \t/*\n> > > + \t * Found that - build a compiler type struct and return it\n> > > + \t */\n> > > + \ttyp = (PLpgSQL_type *) malloc(sizeof(PLpgSQL_type));\n> > > + \n> > > + \ttyp->typname = strdup(NameStr(typeStruct->typname));\n> > > + \ttyp->typoid = attrStruct->atttypid;\n> > > + \tperm_fmgr_info(typeStruct->typinput, &(typ->typinput));\n> > > + \ttyp->typelem = typeStruct->typelem;\n> > > + \ttyp->typbyval = typeStruct->typbyval;\n> > > + \ttyp->typlen = typeStruct->typlen;\n> > > + \ttyp->atttypmod = attrStruct->atttypmod;\n> > > + \n> > > + \tplpgsql_yylval.dtype = typ;\n> > > + \n> > > + \tReleaseSysCache(classtup);\n> > > + \tReleaseSysCache(attrtup);\n> > > + \tReleaseSysCache(typetup);\n> > > + \tpfree(cp[0]);\n> > > + \tpfree(cp[1]);\n> > > + \treturn T_DTYPE;\n> > > + }\n> > > \n> > > /* ----------\n> > > * plpgsql_parse_wordrowtype\t\tScanner found word%ROWTYPE.\n> > > 
***************\n> > > *** 1125,1130 ****\n> > > --- 1245,1290 ----\n> > > \n> > > \tpfree(cp[0]);\n> > > \tpfree(cp[1]);\n> > > + \n> > > + \treturn T_ROW;\n> > > + }\n> > > + \n> > > + /* ----------\n> > > + * plpgsql_parse_dblwordrowtype\t\tScanner found word.word%ROWTYPE.\n> > > + *\t\t\tSo word must be a namespace-qualified table name.\n> > > + * ----------\n> > > + */\n> > > + #define ROWTYPE_JUNK_LEN\t8\n> > > + \n> > > + int\n> > > + plpgsql_parse_dblwordrowtype(char *word)\n> > > + {\n> > > + \tOid\t\t\tclassOid;\n> > > + \tchar\t *cp;\n> > > + \tint\t\t\ti;\n> > > + \tRangeVar *relvar;\n> > > + \n> > > + \t/* Do case conversion and word separation */\n> > > + \t/* We convert %rowtype to .rowtype momentarily to keep converter happy */\n> > > + \ti = strlen(word) - ROWTYPE_JUNK_LEN;\n> > > + \tAssert(word[i] == '%');\n> > > + \n> > > + \tcp = (char *) palloc((i + 1) * sizeof(char));\n> > > + \tmemset(cp, 0, (i + 1) * sizeof(char));\n> > > + \tmemcpy(cp, word, i * sizeof(char));\n> > > + \n> > > + \t/* Lookup the relation */\n> > > + \trelvar = makeRangeVarFromNameList(stringToQualifiedNameList(cp, \"plpgsql_parse_dblwordrowtype\"));\n> > > + \tclassOid = RangeVarGetRelid(relvar, true);\n> > > + \tif (!OidIsValid(classOid))\n> > > + \t\telog(ERROR, \"%s: no such class\", cp);\n> > > + \n> > > + \t/*\n> > > + \t * Build and return the complete row definition\n> > > + \t */\n> > > + \tplpgsql_yylval.row = build_rowtype(classOid);\n> > > + \n> > > + \tpfree(cp);\n> > > \n> > > \treturn T_ROW;\n> > > }\n> > > Index: src/pl/plpgsql/src/plpgsql.h\n> > > ===================================================================\n> > > RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/plpgsql.h,v\n> > > retrieving revision 1.27\n> > > diff -c -r1.27 plpgsql.h\n> > > *** src/pl/plpgsql/src/plpgsql.h\t4 Sep 2002 20:31:47 -0000\t1.27\n> > > --- src/pl/plpgsql/src/plpgsql.h\t9 Sep 2002 04:21:37 -0000\n> > > ***************\n> > > *** 568,574 ****\n> > > --- 568,576 ----\n> > 
> extern int\tplpgsql_parse_tripword(char *word);\n> > > extern int\tplpgsql_parse_wordtype(char *word);\n> > > extern int\tplpgsql_parse_dblwordtype(char *word);\n> > > + extern int\tplpgsql_parse_tripwordtype(char *word);\n> > > extern int\tplpgsql_parse_wordrowtype(char *word);\n> > > + extern int\tplpgsql_parse_dblwordrowtype(char *word);\n> > > extern PLpgSQL_type *plpgsql_parse_datatype(char *string);\n> > > extern void plpgsql_adddatum(PLpgSQL_datum * new);\n> > > extern int\tplpgsql_add_initdatums(int **varnos);\n> > > Index: src/pl/plpgsql/src/scan.l\n> > > ===================================================================\n> > > RCS file: /opt/src/cvs/pgsql-server/src/pl/plpgsql/src/scan.l,v\n> > > retrieving revision 1.22\n> > > diff -c -r1.22 scan.l\n> > > *** src/pl/plpgsql/src/scan.l\t30 Aug 2002 00:28:41 -0000\t1.22\n> > > --- src/pl/plpgsql/src/scan.l\t9 Sep 2002 04:23:49 -0000\n> > > ***************\n> > > *** 170,183 ****\n> > > --- 170,187 ----\n> > > {identifier}{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_tripword(yytext); }\n> > > {identifier}{space}*%TYPE\t\t{ return plpgsql_parse_wordtype(yytext);\t}\n> > > {identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_dblwordtype(yytext); }\n> > > + {identifier}{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_tripwordtype(yytext); }\n> > > {identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_wordrowtype(yytext);\t}\n> > > + {identifier}{space}*\\.{space}*{identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_dblwordrowtype(yytext);\t}\n> > > \n> > > \\${digit}+\t\t\t\t\t\t{ return plpgsql_parse_word(yytext);\t}\n> > > \\${digit}+{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_dblword(yytext);\t}\n> > > \\${digit}+{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}\t{ return plpgsql_parse_tripword(yytext); }\n> > > \\${digit}+{space}*%TYPE\t\t\t{ return 
plpgsql_parse_wordtype(yytext);\t}\n> > > \\${digit}+{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_dblwordtype(yytext); }\n> > > + \\${digit}+{space}*\\.{space}*{identifier}{space}*\\.{space}*{identifier}{space}*%TYPE\t{ return plpgsql_parse_tripwordtype(yytext); }\n> > > \\${digit}+{space}*%ROWTYPE\t\t{ return plpgsql_parse_wordrowtype(yytext);\t}\n> > > + \\${digit}+{space}*\\.{space}*{identifier}{space}*%ROWTYPE\t{ return plpgsql_parse_dblwordrowtype(yytext);\t}\n> > > \n> > > {digit}+\t\t{ return T_NUMBER;\t\t\t}\n> > > \n> > \n> > > -- nspname.relname.attname%TYPE\n> > > DROP FUNCTION t();\n> > > CREATE OR REPLACE FUNCTION t() RETURNS TEXT AS '\n> > > DECLARE\n> > > col_name pg_catalog.pg_attribute.attname%TYPE;\n> > > BEGIN\n> > > col_name := ''uga'';\n> > > RETURN col_name;\n> > > END;\n> > > ' LANGUAGE 'plpgsql';\n> > > SELECT t();\n> > > \n> > > -- nspname.relname%ROWTYPE\n> > > DROP FUNCTION t();\n> > > CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute AS '\n> > > DECLARE\n> > > rec pg_catalog.pg_attribute%ROWTYPE;\n> > > BEGIN\n> > > SELECT INTO rec * FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> > > RETURN rec;\n> > > END;\n> > > ' LANGUAGE 'plpgsql';\n> > > SELECT * FROM t();\n> > > \n> > > -- nspname.relname.attname%TYPE\n> > > DROP FUNCTION t();\n> > > CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute.attname%TYPE AS '\n> > > DECLARE\n> > > rec pg_catalog.pg_attribute.attname%TYPE;\n> > > BEGIN\n> > > SELECT INTO rec pg_catalog.pg_attribute.attname FROM pg_catalog.pg_attribute WHERE attrelid = 1247 AND attname = ''typname'';\n> > > RETURN rec;\n> > > END;\n> > > ' LANGUAGE 'plpgsql';\n> > > SELECT t();\n> > > \n> > > -- nspname.relname%ROWTYPE\n> > > DROP FUNCTION t();\n> > > CREATE OR REPLACE FUNCTION t() RETURNS pg_catalog.pg_attribute AS '\n> > > DECLARE\n> > > rec pg_catalog.pg_attribute%ROWTYPE;\n> > > BEGIN\n> > > SELECT INTO rec * FROM pg_catalog.pg_attribute 
WHERE attrelid = 1247 AND attname = ''typname'';\n> > > RETURN rec;\n> > > END;\n> > > ' LANGUAGE 'plpgsql';\n> > > SELECT * FROM t();\n> \n-- End of PGP section, PGP failed!\n\n-- \n Bruce Momjian | http://candle.pha.pa.us\n pgman@candle.pha.pa.us | (610) 359-1001\n + If your life is a hard drive, | 13 Roberts Road\n + Christ can be your backup. | Newtown Square, Pennsylvania 19073\n", "msg_date": "Wed, 18 Sep 2002 00:07:33 -0400 (EDT)", "msg_from": "Bruce Momjian <pgman@candle.pha.pa.us>", "msg_from_op": false, "msg_subject": "Re: Schemas not available for pl/pgsql %TYPE...." } ]